
[SCOOP #BIF-872981]: SCOOP IDD config



Hi Gerry,

Sorry for the late reply.  Steve Emmerson asked me to reply to your request
last week, but I have been (and still am) hammered trying to get my
McIDAS-X v2006 release finished...

re:
> I've mentioned in the past that SCOOP has effectively established a
> point-to-point IDD for its data, with all LDM-enabled partners
> requesting data from each other instead of creating a hierarchy and
> using top-tier LDM servers to retrieve that data.

Yes.  This approach requires that sites pay close attention to requests,
queue sizes, etc. so that data does not cycle through the system more
than once.
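
For concreteness, point-to-point here means that each partner's ldmd.conf
carries a REQUEST line for every peer it pulls from and a matching ALLOW
for every peer that pulls from it.  A minimal sketch (the host names and
product-ID pattern below are only placeholders):

    # ldmd.conf at a SCOOP partner site
    # pull SCOOP products directly from two peers
    REQUEST EXP "^SCOOP" ldm.partner-a.example.edu
    REQUEST EXP "^SCOOP" ldm.partner-b.example.edu
    # let those same peers request from us in turn
    ALLOW   EXP ^ldm\.partner-a\.example\.edu$
    ALLOW   EXP ^ldm\.partner-b\.example\.edu$

With every site both requesting from and feeding every other site, a
product can travel more than one path through the mesh, which is why the
request patterns and queue sizes have to be watched so closely.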

> I've proposed to that group that we create a hierarchy in which the two
> recognized SCOOP archives (TAMU and LSU's CCT) would request all data
> from all the data providers and would then serve out the EXP feed,
> possibly subdivided by product-specific requests, from one of the two
> archive sites.

This is very close to the strategy adopted by the TIGGE folks.  In TIGGE,
data from the modeling centers is sent directly to the three archive
sites: ECMWF, NCAR, and CMA (the China Meteorological Administration).
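
In the layout you are proposing, the archive ldmd.confs would carry the
REQUESTs for the data providers, and each partner would point its
REQUEST (narrowed by a product-ID pattern if desired) at one of the
archives.  Roughly, again with hypothetical hosts and patterns:

    # ldmd.conf at an archive site (TAMU or LSU/CCT)
    REQUEST EXP ".*" ldm.provider-1.example.edu
    REQUEST EXP ".*" ldm.provider-2.example.edu

    # ldmd.conf at a downstream SCOOP partner: request only the
    # products it needs, with the second archive for redundancy
    REQUEST EXP "^SCOOP\.WAVE" ldm.archive-tamu.example.edu
    REQUEST EXP "^SCOOP\.WAVE" ldm.archive-lsu.example.edu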

> I envision the data providers creating ALLOW statements for TAMU and
> LSU and pqinsert-ing data into their queues for the archives to
> receive, and TAMU/LSU creating ALLOW statements for the partners.

Sounds reasonable.
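
On the provider side that would look something like the following (the
host names, queue path, and product ID are only placeholders):

    # ldmd.conf at a data provider: allow the two archives to request
    ALLOW EXP ^ldm\.archive-tamu\.example\.edu$
    ALLOW EXP ^ldm\.archive-lsu\.example\.edu$

    # insert a newly created product into the local queue so the
    # archives can pick it up (adjust the queue path to your setup)
    pqinsert -v -f EXP -q /usr/local/ldm/var/queues/ldm.pq \
        -p "SCOOP.WAVE.run-20060901" model-output.grb

The archives would then carry the corresponding ALLOWs for each partner
that requests the EXP feed from them.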

> This would, I believe, alleviate some of the multiple-copy problems
> folks are seeing... where CRC isn't being triggered and pqacts are
> triggered multiple times.  This is problematic because folks are
> using LDM data acquisition to fire off data-driven applications.

I agree.
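
Since the partners are firing data-driven applications from pqact, every
copy of a product that makes it into a queue runs the matching action
again.  A typical entry (hypothetical pattern and script; the fields are
tab-separated) looks like this:

    # pqact.conf at a partner site
    # each accepted copy of a matching product launches the script once,
    # so duplicate deliveries mean duplicate runs
    EXP	^SCOOP\.WAVE\.(.*)	EXEC	/usr/local/ldm/bin/run_model.sh \1

so eliminating the duplicate deliveries at the architecture level is the
right place to fix this.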

> Does this sound like a reasonable move, from the ad hoc point-to-point
> architecture to a more conventional hierarchical structure?

Yes.

> And, do you
> think the current structure, and especially the point-to-point nature of
> the architecture, are contributing to some of our problems?

Yes, absolutely.  We have talked on several occasions about strategies
that would cut down or possibly eliminate the "data circulation" problem
you are seeing.

> I'm really trying to make sure I've got my thoughts in order as SCOOP
> proceeds.  I talked to Tom about getting y'all into a meeting with all
> the SCOOP LDM admins to do a cleanup and fixup of the SCOOP mini-IDD.
> I'm thinking that'd be after the current hurricane system.

OK.  As you may know, the summer is one of the busiest times for folks
here at the UPC.  Some efforts will slack off in the Fall, so a meeting
in late September or later would be best -- as long as it does not conflict
with our series of training workshops.

> Thanks! gerry

No worries.  Again, I apologize for taking so long to send back this response!

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: BIF-872981
Department: Support IDD SCOOP
Priority: High
Status: Closed

