[IDD #BWX-469327]: RE: 20080220: WSI request from UNCO to idd.unidata.ucar.edu (cont.)



Hi Gary,

re:
> No, I didn't fall off the edge of the map or forget about the ldm
> configuration.

:-)

> I have been playing with it since your last email and
> think I have fine tuned it as much as I know how.

Very good.

> Here is the result of the ldmadmin config command.  Please let me know
> if there is anything else I can do to improve our performance and keep
> things running smoothly for you and us.
> 
> hostname:      tornado.unco.edu
> os:            Linux
> release:       2.6.22.1-32.fc6
> ldmhome:       /usr/local/ldm
> bin path:      /usr/local/ldm/bin
> conf file:     /usr/local/ldm/etc/ldmd.conf
> log file:      /usr/local/ldm/logs/ldmd.log
> numlogs:       7
> log_rotate:    1
> data path:     /usr/local/ldm/data
> product queue: /usr/local/ldm/data/ldm.pq
> queue size:    2G bytes
> queue slots:   80000
> IP address:    all
> port:          388
> PID file:      /usr/local/ldm/ldmd.pid
> LDMHOSTNAME:   tornado.unco.edu
> PATH:
> /usr/local/ldm/bin:/bin:/usr/bin:/usr/sbin:/sbin:/usr/ucb:/usr/usb:
> /usr/etc:/etc:/usr/local/ldm/decoders:/usr/local/ldm/util:
> /usr/local/ldm/bin:/usr/kerberos/bin:/opt/jre1.5.0_08/bin:
> /usr/local/bin:/bin:/usr/bin:/opt/IDV:/usr/local/ncarg/bin:
> /home/gempak/GEMPAK5.10.3/os/linux/bin:/home/gempak/GEMPAK5.10.3/bin:
> /usr/local/ldm/bin

The 'ldmadmin config' output looks reasonable.  In particular, having a
2 GB queue matches the _average_ amount of data you are receiving per hour:

http://www.unidata.ucar.edu/cgi-bin/rtstats/rtstats_summary_volume?tornado.unco.edu

Data Volume Summary for tornado.unco.edu

Maximum hourly volume   5808.062 M bytes/hour
Average hourly volume   1861.761 M bytes/hour

Average products per hour     106364 prods/hour

Feed                    Average                Maximum       Products
                   (M byte/hour)          (M byte/hour)    number/hour
CONDUIT             991.271  [ 53.244%]      4265.343       28503.523
NGRID               463.777  [ 24.911%]      1065.521       11533.114
HDS                 202.841  [ 10.895%]       434.766       17465.705
NNEXRAD             158.163  [  8.495%]       176.268       21955.773
IDS|DDPLUS           26.533  [  1.425%]        34.393       26881.955
UNIWISC              19.176  [  1.030%]        30.915          24.182

However, the queue size is a bit small given the _maximum_ amount of data
you receive in any hour (the Maximum hourly figure above): at the average
rate a 2 GB queue holds roughly an hour of data, but at the 5808 M byte/hour
peak it holds only about 20 minutes' worth.
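
If you decide to enlarge the queue, the procedure is roughly the following
(a sketch only: it assumes an LDM 6 release of that era in which the queue
size is set via the $pq_size variable in etc/ldmadmin-pl.conf, and the "6G"
value is just an illustration; adjust both for your installation):

ldmadmin stop
# edit etc/ldmadmin-pl.conf and set, e.g.:  $pq_size = "6G";
ldmadmin delqueue    # remove the old 2 GB queue
ldmadmin mkqueue     # recreate it at the new size
ldmadmin start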

It appears that you are requesting the full volume for each feed.  Latency
plots for the various feeds seem to show that there are hours when your
Internet connection is being saturated by all of the data you receive.

Question:

- do you really need all of the data you are requesting?

Decreasing the amount of data you are requesting would result in decreased
system load and would likely result in lower overall latencies.
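
One way to see what you are actually receiving, and to test a tighter
request pattern before committing it to ldmd.conf, is the LDM 'notifyme'
utility.  A sketch (the ".*" pattern and the one-hour look-back are just
illustrations; substitute any candidate pattern):

notifyme -v -l- -h idd.unidata.ucar.edu -f CONDUIT -o 3600 -p ".*"

This lists the products the pattern would match without changing your
running LDM, so you can see exactly which products a trimmed request
would drop.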

Since I don't know what sort of loads you are seeing on tornado, nor whether
the products you are receiving are being processed out of your LDM queue
efficiently/quickly, I can't make any recommendations about further tweaks
you could make.  This leads to another question:

- are the data that are being received being processed out of your LDM queue
  without large lags, or do you feel that you are experiencing degraded
  performance?

I ask this question because we have seen reports of high latencies from a
number of users when, in fact, their LDM was receiving products with little
to no latency.  Their perception that things were not working well was
actually rooted in bottlenecks experienced when processing products out of
their LDM queue.  In more than one case, the only solution for the user was
to take a hard look at the products being requested with an eye toward
eliminating data that was rarely or never used.
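
A quick way to check for this sort of bottleneck is the LDM 'pqmon' utility,
which reports statistics on the product queue, including the age of the
oldest product.  A sketch, using the queue path from your 'ldmadmin config'
output above:

pqmon -q /usr/local/ldm/data/ldm.pq

If the age of the oldest product stays very small while data is flowing in,
new products are pushing old ones out of the queue before downstream
consumers can read them, which points to a too-small queue or too-slow
processing.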

> Thanks!

No worries.  The latencies I see for the various feeds on tornado show that
great progress has been made with your tweaking activities.  The latencies
still being experienced in your CONDUIT feed:

http://www.unidata.ucar.edu/cgi-bin/rtstats/iddstats_nc?CONDUIT+tornado.unco.edu

are telling us that you are either requesting more data than your Internet
connection can handle, or that you need to split your CONDUIT feed request
into mutually-exclusive subrequests.  For instance, you may want to try the
following:

change:

request CONDUIT ".*" idd.unidata.ucar.edu

to:

request CONDUIT "([09]$)" idd.unidata.ucar.edu
request CONDUIT "([18]$)" idd.unidata.ucar.edu
request CONDUIT "([27]$)" idd.unidata.ucar.edu
request CONDUIT "([36]$)" idd.unidata.ucar.edu
request CONDUIT "([45]$)" idd.unidata.ucar.edu

CONDUIT product IDs end in a sequence number, so these five patterns, which
match on the final digit of that number, divide the feed into five disjoint
streams, each carried over its own connection.  This 5-way split of the
CONDUIT request should result in significantly lower latencies for products
in the CONDUIT stream.
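
Remember that the LDM has to be restarted for edits to ldmd.conf to take
effect.  A sketch (assuming your 'ldmadmin' supports the 'watch' subcommand
for eyeballing arriving products):

ldmadmin restart
ldmadmin watch -f CONDUIT    # watch products arrive on the split feed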

However, please be aware that if you receive the products faster, then your
system will be faced with a higher rate of data to process.  If your system
is already bound up in any way (see my comments above), this will make the
situation worse.  In that case, your best bet would be to decide whether you
_really_ need all of the data you are requesting.

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: BWX-469327
Department: Support IDD
Priority: Normal
Status: Closed

