ROMS Large File I/O

General scientific issues regarding ROMS

Moderators: arango, robertson

lanerolle
Posts: 157
Joined: Mon Apr 28, 2003 5:12 pm
Location: NOAA

ROMS Large File I/O

#1 Post by lanerolle » Mon Jun 06, 2011 9:18 pm

I am running a large ROMS application (662 x 667 x 50 grid points) and I use the PERFECT_RESTART CPP option because the simulation needs to be run in continuous segments. When the restart file is written out, I find that it is ~3.97 GB in size; the variables up to AKv are written, but those from AKv onward are not. An ncdump of the file gives the following:

AKv =

So there are no values written for AKv.

I tried using the 3.5.0 and 4.0.1 versions of the NetCDF libraries and the outcomes are the same.

According to http://www.unidata.ucar.edu/software/ne ... 20Support0, NetCDF library version 3.6.0 onwards should automatically take care of large files and we need not worry about it. Furthermore, it says that if we type the command od -An -c -N4 ocean_rst.nc and we get C D F 001, then we are using the "classic" version of NetCDF with a file size limit of 2 GB, and if we get C D F 002, we are using the 64-bit offset format which supports much larger files. When I ran this command, I got C D F 001 with both (above) versions of the NetCDF libraries. Hence, it appears that no matter what, ROMS always invokes the "classic" NetCDF I/O option.
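For anyone who wants to script that check, here is a small Python sketch (the file name in the comment is just an example) that reads the same 4-byte signature that od shows:

```python
def netcdf_format(path):
    """Classify a netCDF file by its 4-byte magic number, mirroring
    the output of `od -An -c -N4 <file>`."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"CDF\x01":
        return "classic (32-bit offset, 2 GB limit)"
    if magic == b"CDF\x02":
        return "64-bit offset (large files supported)"
    if magic == b"\x89HDF":
        return "netCDF-4/HDF5"
    return "not a recognized netCDF file"

# e.g. netcdf_format("ocean_rst.nc")
```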

In the makefile, I use USE_LARGE ?= on, which I thought might also help with the file sizes.

Could someone please let me know:

(1) whether you have been able to write files >= 4 GB with ROMS?
(2) if (1) is affirmative, what version of the NetCDF libraries is being used? and,
(3) if (1) and (2) are affirmative, whether I need to turn on/off any special flags in the makefile and/or the appropriate *.mk file in the Compilers/ directory?

Many thanks for your help in advance.

shchepet
Posts: 185
Joined: Fri Nov 14, 2003 4:57 pm

Re: ROMS Large File I/O

#2 Post by shchepet » Tue Jun 07, 2011 1:04 am

The official netCDF limits for the 64-bit offset format are practically unconstraining, and are described here:

http://www.unidata.ucar.edu/software/ne ... tions.html

i.e., there is still a 4 GB limit on a fixed-size variable and on record length
(aggregate size of a single record of all variables with unlimited dimensions),
but the whole file can grow beyond this limit.

In practice, the above limits are not constraining at all, at least not for
the grid dimensions you have specified (662 x 667 x 50 grid points).
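A quick back-of-the-envelope check (a Python sketch, using the grid size from the original post) confirms this: even a full 3D double-precision field is far below the per-variable limit of the 64-bit offset format.

```python
# Grid dimensions from the original post
nx, ny, nz = 662, 667, 50
bytes_per_value = 8                          # double precision

one_3d_var = nx * ny * nz * bytes_per_value  # one fixed-size 3D variable
print(round(one_3d_var / 2**30, 2))          # ~0.16 GiB

# The 64-bit offset format caps each fixed-size variable (and each
# record) at roughly 4 GiB, so this grid has plenty of headroom:
assert one_3d_var < 4 * 2**30
```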


1. Both the 3.5.0 and 4.0.1 versions of the NetCDF libraries are hopelessly outdated,
and are known to have bugs, especially 3.5.0 with respect to the 64-bit offset.
I cannot point to a report on the web, but I remember it off the top of my head.

Replace it with 3.6.3, which is the latest and final release of the netCDF 3
generation.

2. As for version 4.0.1, it is virtually the first release of netCDF 4, so it is
buggy by definition, and it is also officially known to be buggy.

The current version of netCDF 4 is 4.1.2

3. Personally I use 3.6.3 and 4.1.1 (I have not bothered with 4.1.2 yet), and I prefer
3.6.3 over 4.1.1 for performance reasons. Here we do not use the HDF features
which make it necessary to use netCDF 4, and as for the native netCDF 64-bit
format, 3.6.3 is decisively superior (under some circumstances by a factor of 5
in writing speed -- I can prove it to an interested party by direct
demonstration).

4. Upgrade your libraries, and see whether it solves the problem.


Re: ROMS Large File I/O

#3 Post by shchepet » Tue Jun 07, 2011 1:23 am

Interestingly enough, the netCDF web site pointed to by Lyon, among other
things, says:

Should I start using the new 64-bit offset format for all my netCDF files?

No, we discourage users from making use of the new format unless they need it for
very large files. It may be some time until third-party software that uses the
netCDF library is upgraded to 3.6 or later versions that support the new large
file facilities, so we advise continuing to use the classic netCDF format for data
that doesn't require huge file offsets. The library makes this recommendation easy
to follow, since the default for file creation is the classic format.

Here at UCLA we did not notice this warning, and several years ago (I guess around
summer 2004) we just made all ROMS output files always 64-bit: we hardcoded it
wherever the nf_create call is made. We did not implement a CPP switch between 32-
and 64-bit.

Never had any problem with 3rd-party software.

I guess there used to be no such thing as 3rd-party software: in the past one had to
compile the mexcdf / mexnc package for Matlab, and that is based on the same netCDF
library as used by the ROMS code itself. Nowadays Matlab natively reads and writes
netCDF files, but since this came relatively late, a mature netCDF library was
used by Matlab. So there is no problem.

There is also no performance penalty for using 64-bit offset.


Re: ROMS Large File I/O

#4 Post by lanerolle » Wed Jun 08, 2011 11:38 pm

Thanks for your advice; I tried out the NetCDF 3.6.3 library. Unfortunately, the outcome is the same as with the 4.0.1 library. ROMS begins to run and writes out the history file (the initial conditions), but when it comes to writing out the restart file (for the very first time) it crashes and gives the following screen output:

STEP Day HH:MM:SS KINETIC_ENRG POTEN_ENRG TOTAL_ENRG NET_VOLUME

0 68 00:00:00 0.000000E+00 2.386324E+04 2.386324E+04 9.077648E+14
DEF_HIS - creating history file: ocean_his1.nc
WRT_HIS - wrote history fields (Index=1,1) into time record = 0000001

NETCDF_ENDDEF - unable to end definition mode for file:
ocean_rst1.nc
call from: def_rst.F

Elapsed CPU time (seconds):

So it writes out the history file successfully but crashes when writing out the restart file!

If I do an ncdump on the history file, the contents look perfectly fine, but if I do so on the restart file, I get:

>>ncdump: ocean_rst1.nc: Not a netCDF file

Could this be a bug in the restart file writing/defining routines? In the event that it is, I am also posting a summary of the above on the ROMS Bugs forum.

I am running ROMS/TOMS version 3.4 (SVN version/update 526), which I believe is relatively recent.


Re: ROMS Large File I/O

#5 Post by shchepet » Thu Jun 09, 2011 3:03 am

This thread continues at viewtopic.php?t=2289,
because the last post from Lyon was actually posted twice, here and at that link.


Re: ROMS Large File I/O

#6 Post by lanerolle » Thu Jun 30, 2011 3:56 pm

I have found a partial solution to my large restart file I/O problem. If we look at ROMS/Modules/mod_netcdf.F, we see the following code segments:

#ifdef OUT_DOUBLE
integer, parameter :: NF_FOUT = nf90_double
#else
integer, parameter :: NF_FOUT = nf90_real
#endif
#ifdef RST_SINGLE
integer, parameter :: NF_FRST = nf90_real
#else
integer, parameter :: NF_FRST = nf90_double
#endif
#ifdef DOUBLE_PRECISION
integer, parameter :: NF_TYPE = nf90_double
#else
integer, parameter :: NF_TYPE = nf90_real
#endif
!
! Netcdf file creation mode flag.
!
#if defined HDF5
integer :: CMODE = nf90_netcdf4 ! NetCDF-4/HDF5 format file
#else
integer :: CMODE = nf90_clobber ! NetCDF classic format file
#endif

So if we do not define HDF5, the NetCDF file creation mode is "classic" and, per the Unidata website, this option limits file sizes to 2 GB. Hence, if we run ROMS with the PERFECT_RESTART option, write out the fields in double precision, and our application is somewhat big, we can easily exceed this file size limit for restart files.

What I did was to continue using the PERFECT_RESTART cpp option but also use RST_SINGLE; the restart file it then writes is just under 2 GB, and I am able to run ROMS and restart the computations successfully. So this is a work-around for my problem.
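The arithmetic behind this work-around can be sketched in a few lines of Python. Note the field count below is a made-up illustration chosen to roughly match the ~4 GB file reported above, not the actual PERFECT_RESTART variable list from the ROMS source:

```python
nx, ny, nz = 662, 667, 50        # grid from the original post
values_3d = nx * ny * nz         # values in one 3D field

# Hypothetical number of 3D fields in one restart record (illustrative only)
n3d = 22

double_gib = n3d * values_3d * 8 / 2**30   # double precision
single_gib = n3d * values_3d * 4 / 2**30   # with RST_SINGLE

print(round(double_gib, 1), round(single_gib, 1))   # 3.6 1.8
assert double_gib > 2 > single_gib   # over vs. just under the classic 2 GB limit
```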

When I compared my numerical solutions from this single-precision perfect restart against solutions from a continuous model run (which did not need to restart), I found appreciable differences, which dissipated with time (~1/2 day time scale). So using RST_SINGLE does have consequences and reduces the accuracy of a perfect restart in ROMS.

Is there a plan to remove this NetCDF "classic" restriction in ROMS in the future so that the history, stations, and restart files we write can be unlimited in size? The only limitation then would be that each file record needs to be less than 4 GB, and this is easily satisfied in almost all computations.


Re: ROMS Large File I/O

#7 Post by shchepet » Thu Jun 30, 2011 5:21 pm


So if we do not define HDF5, the NetCDF file creation mode is "classic" and if you look at the unidata website, this option limits file sizes to 2Gb. Hence, if we run ROMS with a PERFECT_RESTART option and write-out the fields in double precision and our application is somewhat big, we can easily exceed this file limit for restart files.

This is not accurate. In fact, starting with v. 4.0.x, netCDF supports not two
but three types of files:

"classic" (same as 32-bit offset) which is subject to 2GB limit;
"64-bit offset" which was supported since, I believe, netCDF verstion 3.5.something
"HDF" (Hierarchical Data Format, HDF4 or HDF5)

The first two formats are products of Unidata; HDF was developed by NCSA with the
intent of parallel I/O from an MPI code (although its performance gains are basically
limited to the Lustre filesystem that NCSA develops/maintains, and it is not very
useful on more common types of filesystems); see viewtopic.php?f=29&t=1200
The major selling point of netCDF is ease of use (it is the de facto "standard" format
for data exchange among scientists). This does not apply to HDF. The "HDF5" option
within netCDF 4 is basically a netCDF look-alike wrapper around HDF5.

So undefining/disabling HDF5 does not have to mean the "classic" format, and hence the 2 GB limit.

The only change that needs to be made when switching from "classic" to 64-bit offset is to change

ierr=nf_create(ncname(1:lncn), nf_clobber, ncid)

into

ierr=nf_create(ncname(1:lncn), nf_clobber+nf_64bit_offset, ncid)

where nf_64bit_offset is a parameter defined in the netcdf.inc file.
This matter was previously discussed on this board, see
viewtopic.php?f=3&t=287
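For what it is worth, the difference that flag makes on disk is a single byte in the file signature. A Python sketch (the header layout follows my reading of the netCDF classic on-disk format; the file name is made up) that writes an empty 64-bit-offset header by hand and reads back the bytes that od -An -c -N4 would show:

```python
import struct

def write_empty_netcdf(path, version=2):
    """Write a minimal, empty netCDF-3 file by hand: 4-byte magic,
    numrecs = 0, and three ABSENT tag/size pairs (dim/gatt/var lists)."""
    with open(path, "wb") as f:
        f.write(b"CDF" + bytes([version]))            # 1 = classic, 2 = 64-bit offset
        f.write(struct.pack(">i", 0))                 # numrecs = 0
        f.write(struct.pack(">6i", 0, 0, 0, 0, 0, 0)) # ABSENT x 3

write_empty_netcdf("empty64.nc")                 # hypothetical file name
print(open("empty64.nc", "rb").read(4))          # the 64-bit offset signature
```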


Re: ROMS Large File I/O

#8 Post by shchepet » Fri Jul 01, 2011 1:31 am

All you have to do is edit the file "ROMS/Modules/mod_netcdf.F" around line 135 and change

!
!  Netcdf file creation mode flag.
!
#if defined HDF5
      integer :: CMODE = nf90_hdf5         ! NetCDF-4/HDF5 format file
#else
      integer :: CMODE = nf90_clobber      ! NetCDF classic format file
#endif

into

#if defined HDF5
      integer :: CMODE = nf90_hdf5         ! NetCDF-4/HDF5 format file
#else
      integer :: CMODE = nf90_64bit_offset ! NetCDF 64-bit offset format
#endif

Then you do not have to define HDF5 (hence you stay within the native netCDF
format) and you do not face the 2 GB limitation.

Note that the only place where CMODE is used is around line 5728 of the same
file, "mod_netcdf.F", which is

status=nf90_create(TRIM(ncname), CMODE, ncid)

In principle one can organize a 3-way CPP switch

#if defined HDF5
      integer :: CMODE = nf90_hdf5         ! NetCDF-4/HDF5 format file
#elif defined NETCDF64
      integer :: CMODE = nf90_64bit_offset ! NetCDF 64-bit offset format
#else
      integer :: CMODE = nf90_clobber      ! NetCDF classic format file
#endif
but it is actually quite silly, since the classic format can be considered
obsolete by now.

A useful further reading from Unidata would be
http://www.unidata.ucar.edu/software/ne ... q-lfs.html
or, more specifically, just jump to
http://www.unidata.ucar.edu/software/ne ... 20Support8

arango
Site Admin
Posts: 1117
Joined: Wed Feb 26, 2003 4:41 pm
Location: IMCS, Rutgers University

Re: ROMS Large File I/O

#9 Post by arango » Fri Jul 01, 2011 3:07 am

Yes, Sasha is correct. This was the first strategy that I recommended to you when we talked in Annapolis earlier this month; I didn't recall off the top of my head the parameter value for CMODE. As Sasha mentioned, there are several internal parameters in mod_netcdf.F that can be adjusted globally for various I/O strategies. ROMS has a generic layer over the NetCDF library routines to facilitate various configurations for parallel or serial I/O.
