- Subject: [IDD #XEC-580619]: IDD Peering request from Unidata
- Date: Thu, 14 Aug 2014 12:37:40 -0600
Hi Mike,
Quick follow-up to the email I sent this past Monday....
I was looking through the real-time statistics that your various
wright-weather.com machines are reporting, and I see that you are
REQUESTing NEXRAD2 data from idd.tamu.edu, among others. Since the
IDD relay setup at TAMU was fashioned in the same way as the top-level
relay that we maintain (idd.unidata.ucar.edu), and since TAMU has
10 Gbps networking, the latencies you are experiencing are most likely
caused by some hop between TAMU and you, or somewhere closer to you.
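As a quick spot check (assuming the ALLOWs on idd.tamu.edu permit
NOTIFYME requests from your host), running something like the
following from your LDM account will print NEXRAD2 product IDs as
they arrive; comparing the product-creation times in that output
against your wall clock gives a rough, independent feel for the
latency on the TAMU path:

  notifyme -v -l- -h idd.tamu.edu -f NEXRAD2 -o 3600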
I am guessing that your feed REQUEST for NEXRAD2 data from TAMU is
done through a single ~ldm/etc/ldmd.conf REQUEST line. If so, you
should be able to drop your latencies significantly by:
- making sure that REQUEST(s) for NEXRAD2 are not coupled with
any other datastream
- splitting the single NEXRAD2 REQUEST into several (e.g., 5)
mutually-exclusive REQUESTs
Splitting a data feed REQUEST into several mutually-exclusive
sub-REQUESTs gets around a limitation inherent in TCP itself: the
throughput of any single connection is bounded by the TCP window size
and the round-trip time, so one high-volume stream can fall behind
even when the path has spare capacity. We have used this splitting
of REQUESTs here at the UPC for a _very_ long time, and we recommend
it to sites that are experiencing higher than expected latencies.
- the other possibility is that some sort of per-connection
throttling of the data flow to your machines is in place
The classic example of this is what we refer to as "packet shaping",
although that term comes from the name of one commercial product. The
idea is that traffic-shaping software can cap the data rate available
to any single TCP connection, while the configuration doing the
"shaping" may not cap the aggregate amount of data allowed for the
installation as a whole. Splitting a high-volume flow like the
NEXRAD2 feed into mutually-exclusive sub-REQUESTs gets around the
per-connection limitation.
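To put some purely illustrative numbers on that: if a shaper caps
each TCP connection at, say, 10 Mbps, a single NEXRAD2 REQUEST can
never exceed 10 Mbps and will fall behind whenever the feed bursts
above that rate; the same feed split five ways has 5 x 10 = 50 Mbps
of aggregate headroom, so the bursts get absorbed instead of queued.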
If you are willing to share your ~ldm/etc/ldmd.conf REQUESTs, we could
advise you on how best to reconfigure them to maximize data receipt
while minimizing product latency.
For interest, here is what a five-way split of CONDUIT data looks like
for a site REQUESTing CONDUIT from our top-level IDD relay,
idd.unidata.ucar.edu:
Requesting all of a feed in a single REQUEST looks like:

  REQUEST CONDUIT ".*" idd.unidata.ucar.edu

Splitting the single request into 5 mutually-exclusive parts looks like:

  REQUEST CONDUIT "([09]$)" idd.unidata.ucar.edu
  REQUEST CONDUIT "([18]$)" idd.unidata.ucar.edu
  REQUEST CONDUIT "([27]$)" idd.unidata.ucar.edu
  REQUEST CONDUIT "([36]$)" idd.unidata.ucar.edu
  REQUEST CONDUIT "([45]$)" idd.unidata.ucar.edu
NB:
CONDUIT is one of the easiest datastreams to split, since the LDM/IDD
product IDs used for CONDUIT products end in a sequence number.
Assuming an even distribution of the last digit of the sequence
numbers (not necessarily 100% valid, but pretty close), the number
of products in each sub-REQUEST should be about the same (although
this does not guarantee that the data volume of each sub-REQUEST is
the same).
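If you want to check that even-distribution assumption yourself, one
quick-and-dirty way (a sketch; the log file name is hypothetical, and
it assumes each captured line ends with the product ID and hence with
the sequence number) is to log an hour of CONDUIT product IDs with
notifyme and then tally the trailing digits:

  notifyme -v -l- -h idd.unidata.ucar.edu -f CONDUIT -o 3600 > conduit.log
  grep -o '[0-9]$' conduit.log | sort | uniq -c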
Comment:
We found that we need to split our CONDUIT feed REQUESTs into 10
mutually-exclusive sub-REQUESTs when moving data into the Microsoft
Azure cloud. This surprised us (me mostly) because I had assumed
that the network pipes into Azure would be huge.
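One more note specific to NEXRAD2: unlike CONDUIT, NEXRAD2 product
IDs do not end in a sequence number, so a split of that feed is
usually keyed on the radar station ID instead. Purely as a
hypothetical sketch (the patterns assume product IDs that contain the
4-character station ID preceded by a '/'; please verify against your
own notifyme output before using them), a five-way split of your TAMU
feed might look something like:

  REQUEST NEXRAD2 "/K[A-C]" idd.tamu.edu
  REQUEST NEXRAD2 "/K[D-H]" idd.tamu.edu
  REQUEST NEXRAD2 "/K[I-M]" idd.tamu.edu
  REQUEST NEXRAD2 "/K[N-S]" idd.tamu.edu
  REQUEST NEXRAD2 "/K[T-Z]" idd.tamu.edu

Note that these patterns only cover the KXXX (CONUS) sites; non-K
stations (e.g., the PXXX sites in Alaska and the Pacific) would need
their own REQUEST or wider character classes, and however you carve
things up, the patterns must remain mutually exclusive so that no
product is REQUESTed twice.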
Recap: feeding from idd.tamu.edu should be more than sufficient to
get all of the data you want with very low latencies, mainly because
the relay setup at TAMU is very good. Also, I seriously doubt that
switching your REQUESTs to us/UCAR would result in lower latencies.
Splitting your feed REQUESTs, on the other hand, should work wonders
at decreasing the latencies you are experiencing.
Cheers,
Tom
--
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: XEC-580619
Department: Support IDD
Priority: Normal
Status: Closed