Re: NetCDF Java Read API
- Subject: Re: NetCDF Java Read API
- Date: Fri, 14 Nov 2008 22:14:29 -0500
John Caron wrote:
Hi Greg:
1) what version of netcdf-java are you using?
Hi John,
Ai-Hoa was actually running the Java code for me, but I think she
told me she's running the latest netCDF-Java 4.0. Ai-Hoa can tell you for sure.
If it helps, our 'java -version' returns:
java version "1.5.0_09"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_09-b01)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_09-b01, mixed mode)
2) how much memory do you give the JVM (-Xmx option)? The default can be as low
as 32 Megs.
Running our Java reader on that sample file with a max memory of
4096M failed whereas running with 5096M succeeded. That test
was done on a Linux box with a 'uname -a' response as follows:
Linux paris 2.6.9-78.0.1.ELsmp #1 SMP Tue Jul 22 18:01:05 EDT \
2008 x86_64 x86_64 x86_64 GNU/Linux
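In case it helps, the only difference between the failing and the succeeding
run was the heap setting passed on the java command line, along the lines of
the following (the jar and class names here are just placeholders for our
actual setup):

  java -Xmx4096m -cp netcdfAll-4.0.jar:. VilReader vil_sample.h5    (fails)
  java -Xmx5096m -cp netcdfAll-4.0.jar:. VilReader vil_sample.h5    (succeeds)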
3) are you reading the entire data into memory?
I believe so. I recall seeing that the Java code invoked a read()
which returned an 'Array'. Ai-Hoa could probably give you more
details on Monday.
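From memory, the read looks roughly like the sketch below. The file name, class
name, and the assumption that the dataset appears as a variable named "VIL" are
mine, so the actual code Ai-Hoa runs may differ:

  import ucar.ma2.Array;
  import ucar.nc2.NetcdfFile;
  import ucar.nc2.Variable;

  public class VilReader {
      public static void main(String[] args) throws Exception {
          // Open the HDF5 file with netCDF-Java
          NetcdfFile ncfile = NetcdfFile.open("vil_sample.h5");
          try {
              Variable vil = ncfile.findVariable("VIL");
              // read() pulls the whole (24, 1, 3520, 5120) short array into memory:
              // 24 * 3520 * 5120 * 2 bytes is roughly 825 MB of data values alone,
              // which is presumably why the JVM heap (-Xmx) has to be so large.
              Array data = vil.read();
              System.out.println("read " + data.getSize() + " values");
          } finally {
              ncfile.close();
          }
      }
  }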
It doesn't appear that you are correctly chunking. An h5dump -h shows:
DATASET "VIL" {
DATATYPE H5T_STD_I16LE
DATASPACE SIMPLE { ( 24, 1, 3520, 5120 ) / ( 24, 1, 3520, 5120 ) }
ATTRIBUTE "DIMENSION_LIST" {
DATATYPE H5T_VLEN { H5T_REFERENCE}
DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
}
chunking on ( 1, 1, 3520, 5120 ) would probably be a reasonable thing to do,
depending on how you plan to access the data.
I'm not really familiar enough with the HDF5 file content to know what
to look for in the h5dump output regarding chunking. What part of
the h5dump output indicates the chunk size?
Regarding access to the files, as I mentioned earlier, I write the entire
set of grids in one C++ call to NcVar::put. This has always resulted in
fast write times. Likewise, I expected the data to be acquired with a single
read, but I would be willing to read each of the 24 forecast grids separately
if that is what it takes.
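If per-grid reads are the way to go, I would guess the loop looks something
like the sketch below (again just a sketch on my side, assuming
Variable.read(origin, shape) in netCDF-Java and the same placeholder names as
above):

  import ucar.ma2.Array;
  import ucar.nc2.NetcdfFile;
  import ucar.nc2.Variable;

  public class VilReaderByGrid {
      public static void main(String[] args) throws Exception {
          NetcdfFile ncfile = NetcdfFile.open("vil_sample.h5");
          try {
              Variable vil = ncfile.findVariable("VIL");
              int[] shape = vil.getShape();                  // (24, 1, 3520, 5120)
              int[] origin = new int[] {0, 0, 0, 0};
              int[] size = new int[] {1, shape[1], shape[2], shape[3]};
              for (int t = 0; t < shape[0]; t++) {
                  // Read one forecast grid (1, 1, 3520, 5120) at a time:
                  // about 34 MB per read instead of ~825 MB for the whole variable.
                  origin[0] = t;
                  Array grid = vil.read(origin, size);
                  // ... process this forecast time ...
              }
          } finally {
              ncfile.close();
          }
      }
  }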
Greg.