[LDM #VZT-383836]: LDM host status
- Subject: [LDM #VZT-383836]: LDM host status
- Date: Fri, 06 Oct 2006 16:19:04 -0600
Hi Gary,
Sorry for the tardy reply...
> We have finally transitioned over to our new LDM hardware and
> configurations. We have been monitoring our 'LDM gateway' computer,
> called awl. Now that all users have switched over to awl, we are seeing
> load that averages around 7 to 8, with peaks up to 25. We are not seeing
> any issues with swapping. Right now we have around 144 instances of
> rpc.ldmd. The age of the oldest product in the queue ranges from 1.5
> hours to 3 hours. We have a 10G queue and 12G of memory.
Interesting. The real data servers that are part of our toplevel IDD relay
cluster each have between 14 and 16 GB of RAM and smaller LDM queues. We
sized the machines this way so that they never swap. The nodes that are
running with over 130 downstream connections have load averages that range
from less than 1 to about 7 at the most. Your average load of 7-8 with
peaks of 25 "feels" high...
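For comparison, these are the kinds of quick checks behind the numbers I
quote above. This is only a sketch; pqmon ships with the LDM, but paths
and option defaults may differ on your installation:
    # load average and count of rpc.ldmd processes
    uptime
    ps -ef | grep -c '[r]pc.ldmd'
    # swap activity (the si/so columns should stay at or near zero)
    vmstat 5 3
    # product-queue usage, including the age of the oldest product
    pqmon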
> We think awl is running well, but we wanted to get your opinion. If you
> need more information, let us know. We can provide more details.
At some point when I get some time, I would love to be able to log on to awl
and do some snooping. Any chance of that?
> We have attached the ldmd.conf file.
After looking through your ldmd.conf file (but not studying it), I would
guess that your higher load averages are a result of the large number of
upstream requests (37) that the machine makes. Our experience on the old
SunFire 480R thelma.ucar.edu showed us that the load on a machine was a
strong function of the number of ingest requests and a weaker function of
the number of feeds being served to downstreams. The reason for this is
that request processes use write locks on the queue, while feed processes
use lighter-weight read locks. When we built our cluster, we paid special
attention to this situation by dedicating two machines to ingest duties
and three (now four) machines to relay duties. A schematic of the cluster
can be seen in:
http://www.unidata.ucar.edu/newsletter/2005june/clusterpiece.htm
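To make the ingest/relay split concrete, here is a rough sketch of how the
request and allow entries can be divided between node types. The hostnames,
feedtypes, and patterns below are made up for illustration and are not taken
from your ldmd.conf:
    # --- ingest node: REQUEST entries; the processes created by these
    # --- take write locks on the product queue
    REQUEST IDS|DDPLUS ".*" upstream1.example.edu
    REQUEST NEXRAD2    ".*" upstream2.example.edu
    # --- relay node: ALLOW entries authorize downstream feed requests,
    # --- which are served using lighter-weight read locks
    ALLOW ANY ^ldm[0-9]*\.downstream\.example\.edu$
On our cluster the request entries live only on the ingest nodes and the
allow entries only on the relay nodes, which keeps the write-lock traffic
off of the machines doing the bulk of the downstream feeding.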
Cheers,
Tom
****************************************************************************
Unidata User Support UCAR Unidata Program
(303) 497-8642 P.O. Box 3000
address@hidden Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage http://www.unidata.ucar.edu
****************************************************************************
Ticket Details
===================
Ticket ID: VZT-383836
Department: Support LDM
Priority: Normal
Status: Open