
[McIDAS #XRR-599672]: FW: Please remove port 23 access from shu.cup.edu, and add ports 22 & 112.



Hi Samuel,

re:
> Ya, I know it takes a while for ldm to restart.  Yes shu is IO bound.  I'm 
> hoping that if we
> get our feeds from a closer site (like PSU, or Drexel), then we might not 
> have that problem.

Getting feeds from closer sites is unlikely to have any effect on your I/O 
problems.
Determining the set of data you really want to ingest and sizing the LDM queue 
is your best bet
for tuning.

> I was also hoping to prune down some of the feeds, once we get things 
> working, and determine
> which feeds are being used most.

Excellent.

> If you think WMO is a good feed to get, then that's fine with me,

I can guarantee that you will want the data coming in the WMO feed.

> however I think we should wait until we get these mysql/adde issues ironed 
> out.

It is already done.  Please read on...

> Mysql was installed by me, using the scripts in the /usr/local/mysql 
> directory, and
> following the directions in INSTALL-BINARY.

OK, sounds good.

> I don't think the directions mention anything about registering MySQL.

When a package is installed from RPMs, the registration is taken care of for 
you.  If,
on the other hand, one copies in distributions from another system, there is no 
record
of the package having been installed.
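
For example, a quick way to see whether the RPM database knows about MySQL is
(this can be run from any account):

rpm -qa | grep -i mysql

When MySQL has been installed from a binary tarball (as was done here) rather than
from RPMs, this returns nothing, which is why there is no record of the package
having been installed.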

> I knew something was wrong when I had to mess around with LD_LIBRARY_PATH.  I 
> have to admit,
> I copied the files from /usr/local/mysql/lib to /home/mcidas/lib to try to 
> satisfy McIDAS.

Bingo!  This was not a good thing to do.

To prove my point, I did the following while still at home immediately after 
seeing your comment:

-- ssh address@hidden

<as 'mcidas'>
cd ~mcidas/lib
-- remove all of the libraries you copied here

cd ~mcidas
-- edit .bash_profile and remove the LD_LIBRARY_PATH definition
-- edit .mcenv and remove the LD_LIBRARY_PATH definition

-- logoff and log back on as 'mcidas'
cd mcidas2007/src
make clobber
make all

<as 'ldm'>
-- edit ~/decoders/xcd_run and remove the LD_LIBRARY_PATH definition
-- edit ~/decoders/batch.k and remove the LD_LIBRARY_PATH definition
-- edit ~/util/mcscour.sh and remove the LD_LIBRARY_PATH definition
ldmadmin stop

<as 'mcidas'>
cd ~/mcidas2007/src
make install.all

<as 'ldm'>
ldmadmin start

After making all of the above changes I tested remote access to the data being
ingested/processed, and things look good.

Interestingly, however, I am unable to access the ADDE server from my work 
machine,
yakov.unidata.ucar.edu (128.117.156.86)!?  I then tried accessing the ADDE 
server on shu
from a machine at another user's site, weather.admin.niu.edu (131.156.8.47), 
and the access
worked nicely.  Finally, I tried accessing the shu ADDE server from a machine 
on our
other network, zero.unidata.ucar.edu (128.117.140.56), and that works nicely.  
So, it seems
that there is some sort of a block to port 112 access for machines from our 
128.117.156 network???
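
A quick way to check this from any client machine (assuming telnet or netcat is
available there) is to try connecting to the ADDE port directly:

telnet shu.cup.edu 112

or, equivalently:

nc -vz shu.cup.edu 112

If the connection succeeds from weather.admin.niu.edu and zero but is refused or
times out from yakov, that confirms the block is specific to the 128.117.156
network.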

> Not the most germane way to satisfy the linker.  I was simply trying to get 
> things built.

I undid this so that your setup would be standard.  This will make your job 
much easier
during upgrades.

> O.K. I looked at the ADDE server.  I see it works now.  Thank you SOO much!  
> I might want to
> tweak what datasets are available, but I THINK I can do that on my own.

OK.  I already modified ~mcidas/data/LSSERVE.BAT to use the Unidata-Wisconsin 
images that
are being filed under /data/ldm/mcidas/images/...

> I just simply didn't know why it wasn't working.

The reason ADDE wasn't working was that the shared MySQL libraries were not 
being found.
After I deleted those libraries you copied to ~mcidas/lib and remade and 
reinstalled
the distribution, things are working as they should.

> I thought I had port 112 open, what part of the firewall did you have to 
> change?

I made three modifications in /etc/sysconfig/iptables:

- allow access through port 112
- comment out the allow for port 23 (telnet)
- add an allow through port 388 (LDM)
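
For reference, the corresponding lines in /etc/sysconfig/iptables look roughly
like the following (a sketch only; the RH-Firewall-1-INPUT chain name is the
Red Hat default and may be named differently on shu):

-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 112 -j ACCEPT
# -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 23 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 388 -j ACCEPT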

I made the changes active using:

<as 'root'>
/etc/init.d/iptables restart

There also appears to be a CUP firewall that controls access to shu.  For 
instance,
I am not able to do an 'ldmping' from machines on either of our subnets to shu.
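
For example, from the 'ldm' account on one of our machines:

<as 'ldm'>
ldmping shu.cup.edu

A reachable LDM answers within a second or so; from our subnets the command
simply hangs and times out, which points at a firewall upstream of shu.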

> Hey, just some general questions:
> 
> Why does the mcidas GUI interface look different when I run it from the 
> account mcidas,
> as opposed to when I run it from Zehel?

There are two different GUIs in McIDAS:  GUI and MCGUI.  I suspect that you are 
using
one in 'mcidas' and the other in 'Zehel'.

> How do I load some data from the local ADDE server using the local 
> shu.cup.edu McIDAS gui?

Access to ADDE datasets is controlled by the McIDAS DATALOC command.  This 
command can
also list where your session will go to get data.  For instance:

<as 'mcidas' on shu>
cd $MCDATA

-bash-2.05b$ dataloc.k LIST

Group Name                    Server IP Address
--------------------         ----------------------------------------
AMRC                         UWAMRC.SSEC.WISC.EDU
CIMSS                        <LOCAL-DATA>
EAST                         UNIDATA2.SSEC.WISC.EDU
EUM_AD                       193.17.10.4
GINICOMP                     SHU.CUP.EDU
GINIEAST                     SHU.CUP.EDU
GINIWEST                     SHU.CUP.EDU
GOESEAST                     UNIDATA2.SSEC.WISC.EDU
ME7                          IO.SCA.UQAM.CA
MYDATA                       <LOCAL-DATA>
NEXRCOMP                     SHU.CUP.EDU
PUB                          GP16.SSD.NESDIS.NOAA.GOV
RTGRIB2                      <LOCAL-DATA>
RTGRIBS                      <LOCAL-DATA>
RTGRIDS                      <LOCAL-DATA>
RTIMAGES                     <LOCAL-DATA>
RTNEXRAD                     <LOCAL-DATA>
RTPTSRC                      <LOCAL-DATA>
RTWXTEXT                     <LOCAL-DATA>
SHU.CUP.EDU                  158.83.1.174
SOUTH                        GOESSOUTH.UNIDATA.UCAR.EDU
WEST                         UNIDATA2.SSEC.WISC.EDU

<LOCAL-DATA> indicates that data will be accessed from the local data directory.
DATALOC -- done

The entries that list their source as LOCAL-DATA will try to access the data 
using definitions
in the local account.  The ones specifying SHU.CUP.EDU (e.g., NEXRCOMP) go 
through the remote
ADDE interface.  In either case, the data that will be accessed will be those 
ingested/processed
on the local machine.

I strongly recommend that all access be through the remote ADDE interface.  The 
reason for
this is more complex than can be explained in a brief paragraph.

Here is what the installation instructions want you to do:

<as 'mcidas'>
cd ~mcidas/data
-- create if necessary LOCDATA.BAT as a copy of DATALOC.BAT; this would have 
been
   done for you when you ran 'mcxconfig'
-- edit LOCDATA.BAT to setup your client routing table entries
-- make the LOCDATA.BAT entries active:

cd $MCDATA
batch.k LOCDATA.BAT CONTINUE=YES

Given what I have seen is being ingested on shu, here is what I suggest your 
LOCDATA.BAT
entries look like:

DATALOC ADD CIMSS     shu.cup.edu
DATALOC ADD GINICOMP  adde.cise-nsf.gov
DATALOC ADD GINIEAST  adde.cise-nsf.gov
DATALOC ADD GINIWEST  adde.cise-nsf.gov
DATALOC ADD NEXRCOMP  shu.cup.edu
DATALOC ADD RTGRIBS   shu.cup.edu
DATALOC ADD RTGRIB2   adde.cise-nsf.gov
DATALOC ADD RTGRIDS   adde.ucar.edu
DATALOC ADD RTIMAGES  shu.cup.edu
DATALOC ADD RTNEXRAD  shu.cup.edu
DATALOC ADD RTPTSRC   shu.cup.edu
DATALOC ADD RTWXTEXT  shu.cup.edu
DATALOC ADD TOPO      shu.cup.edu

DATALOC ADD AMRC      uwamrc.ssec.wisc.edu
DATALOC ADD EAST      unidata2.ssec.wisc.edu
DATALOC ADD EUM_AD    193.17.10.4
DATALOC ADD GOESEAST  unidata2.ssec.wisc.edu
DATALOC ADD ME7       io.sca.uqam.ca
DATALOC ADD PUB       gp16.ssd.nesdis.noaa.gov
DATALOC ADD SOUTH     goessouth.unidata.ucar.edu
DATALOC ADD WEST      unidata2.ssec.wisc.edu

DATALOC ADD BLIZZARD  adde.cise-nsf.gov

If/when you start ingesting the NIMAGE IDD datastream, you will be able to
serve the NOAAPORT images locally (datasets GINICOMP, GINIEAST, and GINIWEST).
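
Enabling that ingest amounts to adding a request line to ~ldm/etc/ldmd.conf along
these lines (the upstream host shown is only a placeholder; use whatever feed
site you arrange with), plus pqact entries to file the images:

request NIMAGE ".*" your.upstream.host.edu

followed by an 'ldmadmin restart' to make the new request active.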

Once your client routing table entries are set up, you should be able to run 
McIDAS with the
MCGUI and easily look at a wide variety of imagery.

> I can load data from the ADDE server on shu.cup.edu, using my IDV interface 
> on my
> windows machine.

Very good.

> THANK YOU for getting the ADDE server up.

No worries.

> I do run across some errors when using the GUI on the Zehel account, I'm not 
> sure if it's normal,
> but when I try to use Display/Imagery/GOESEAST option from the main menu, and 
> hit "Display&Close",
> I get the following error:
> expected integer but got ""
> expected integer but got ""
> while executing
> "format "%2d" $curfram"
> (procedure "curFrame" line 4)
> invoked from within
> "curFrame $bframe"
> invoked from within
> "set CurFrame [curFrame $bframe]"
> (command bound to event)

The MCGUI assumes that you have started a McIDAS session with at least 16 
frames.  The
default out of the box after a McIDAS installation is for 10 frames, so this is 
likely
what is going on.

Here is what to do:

<as 'mcidas'>
cd $MCDATA
mcidas -config

Specify the number of frames at the top of the gui widget to be at least 16.  
While
at it, change the size so that your frames are larger.  Finally, make sure to 
select
'Yes' for the 'Save configuration values to defaults file' action at the bottom 
of
the widget and then click the 'START' button.  After making this adjustment, you
should be able to display the images that failed before.

Last comments for now:

1) you set up automatic startup of the LDM in the /etc/rc.d/rc.local file.  I 
recommend
   that you set up the start of the LDM as a separate action in /etc/init.d (a
   minimal sketch of such a script follows this list)

2) I see that you are running ntpdate every 15 minutes.  I strongly recommend 
that
   you set up and use ntpd.  This will keep your clock accurate and help prevent
   data loss on LDM restarts.  While at it, I sent the output generated from
   the ntpdate runs out of 'root's cron to the bitbucket (/dev/null).  The 
reason
   I did this was that there were some 2300 mail messages in 'root's inbox, 99% of
   which were the informational messages generated each time ntpdate was run.

3) we (my system administrator and I) looked into the I1 vs I2 routing issues 
you
   have encountered (with data feeds from Penn State and with NSF/ATM).  We 
believe
   that the cause of the I1 routing resides at Drexel.  You and/or your network
   administrator should contact the networking folks at Drexel and request that
   your LDM/IDD and ADDE traffic be routed over I2.

4) we did a preliminary scan to see if you had gotten hacked into while telnet
   was open.  Our cursory look did not show anything suspicious.

5) when I was configuring the LDM yesterday morning, I decreased the size of
   the LDM queue from 4 GB to 1 GB.  I did this since shu only has 1.5 GB
   of real memory:

cat /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  1575665664 1523150848 52514816        0 54665216 1178963968
Swap: 4293509120        0 4293509120
MemTotal:      1538736 kB
MemFree:         51284 kB
MemShared:           0 kB
Buffers:         53384 kB
Cached:        1151332 kB
SwapCached:          0 kB
Active:        1089644 kB
ActiveAnon:     210884 kB
ActiveCache:    878760 kB
Inact_dirty:    239680 kB
Inact_laundry:   63352 kB
Inact_clean:     10276 kB
Inact_target:   280588 kB
HighTotal:      655168 kB
HighFree:         5704 kB
LowTotal:       883568 kB
LowFree:         45580 kB
SwapTotal:     4192880 kB
SwapFree:      4192880 kB
CommitLimit:   4962248 kB
Committed_AS:   535988 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB

  The huge overcommitment of memory was one of, if not THE, main reasons that
  your machine was so I/O bound and took _forever_ when doing an 'ldmadmin 
stop'.

  The only reason that I did not reduce the LDM queue size even more was your
  ingesting of the full CONDUIT datastream.  If you decide that you do
  not need to ingest CONDUIT, then I recommend reducing your LDM queue to
  something on the order of 500 MB.  This will result in the machine running
  much more efficiently.  Then again, if you find that you need a larger LDM
  queue (e.g., you want to relay data to others and want to keep a significant
  amount of data in your queue, or if you find that it is taking a long time
  to process data out of your queue), then you should install more memory;
  4 GB is not unreasonable nowadays.
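
Regarding item 1, a minimal /etc/init.d/ldm script could look something like the
following (a sketch only; it assumes the LDM runs as the 'ldm' user and that
'ldmadmin' is on that account's PATH):

#!/bin/sh
# chkconfig: 2345 99 01
# description: start/stop the Unidata LDM as the 'ldm' user
case "$1" in
  start)
    su - ldm -c 'ldmadmin start'
    ;;
  stop)
    su - ldm -c 'ldmadmin stop'
    ;;
  restart)
    su - ldm -c 'ldmadmin restart'
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0

Make the script executable, hook it into the runlevels (e.g., 'chkconfig --add ldm'
or symlinks under /etc/rc.d), and remove the rc.local entry so the LDM is not
started twice.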

That's enough for now...

Cheers,

Tom
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: XRR-599672
Department: Support McIDAS
Priority: Normal
Status: Closed

