Frequently Asked Questions


Installation and Configuration

Note: In SVN revision 933 (January 26, 2019) all ocean_*.in files were renamed to roms_*.in and all ocean* ROMS executables were renamed to roms* in order to facilitate and clarify model coupling efforts. More information can be found in the ROMS repository Trac ticket #794. If you are working with a ROMS release prior to revision 933, you will need to replace roms_upwelling.in with ocean_upwelling.in and romsS, romsM, or romsO with oceanS, oceanM, or oceanO in all commands below.

My build finished with no errors. Where is the ROMS executable?

It is either romsS (serial), romsO (shared-memory, OpenMP), romsM (distributed-memory, MPI), or romsG (debug). Check the makefile to see which options are on. The makefile definition BINDIR controls where the compiled ROMS executable is written.

What do I have to do to run an application?

There are basically two modes to run a ROMS application: serial or parallel.

  1. Most of the canned applications that come with the ROMS distribution do not need input NetCDF files. The grid, initial conditions and forcing conditions are specified with analytical expressions in Functionals. Any of these test cases can be run by editing the ROMS_APPLICATION definition in the makefile or the build Script. A list of all pre-defined model applications can be found in header file cppdefs.h. The next step is to compile and link using the make tool and, if you are lucky, the program will build. It is good practice to execute make clean first. Then, the application should be run in serial with the command:
    romsS < roms_APPLICATION.in >& log &
    where APPLICATION is the lowercase of any of the CPP options defining a distributed test case. For example, the upwelling (UPWELLING) test case uses the input script roms_upwelling.in which is located in the ROMS/External directory.
  2. A few canned applications that come with ROMS require input NetCDF files, which are located in the Data/ROMS directories. Same as above, but if you want to run in parallel on a distributed-memory computer use:
    mpirun -np 2 romsM roms_test_head.in >& log &
    or to run in parallel on a shared-memory machine:
    setenv OMP_NUM_THREADS 2
    romsO < ROMS/External/roms_test_head.in >& log &
    for the test headland case, for example. You need to be sure that the product of NtileI and NtileJ in roms_test_head.in is 2, since the commands above request two processes (-np 2) or two threads (OMP_NUM_THREADS=2). Notice that to compile the model for distributed-memory, you need to edit the makefile (or build Script) and activate USE_MPI. Also, if you use MPICH rather than a native MPI library, define USE_MPIF90 as well. The MPICH library uses a script called mpif90 for compiling.

Why is only one node created when executing with mpirun?

This usually occurs when you are not using the correct mpirun. For example, if you compiled with Open MPI and try to use the mpirun command from MPICH, you will see this error. To see which mpirun you are executing, type 'which mpirun'; the first part of the path should match what the ROMS output reports as your 'Compiler command'.

> which mpirun
/opt/pgisoft/mpich/bin/mpirun
> mpirun -np 2 ./romsM roms.in
Process Information:

Node # 0 (pid= 150433) is active.

Model Input Parameters: ROMS/TOMS version 3.4

. . .

Compiler system  : pgi
Compiler command : /opt/pgisoft/openmpi/bin/mpif90
Compiler flags  : -O3 -Mfree

. . .

ERROR: Illegal domain partition.
>

Notice that only one node is active, and the compiler command shows the model was compiled with Open MPI while mpirun is from the MPICH directory. To fix this problem, update your PATH to use the mpirun from the Open MPI directory or recompile with MPICH.
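
For instance (assuming the Open MPI installation shown in the 'Compiler command' line above), put its bin directory first in your PATH:

    setenv PATH /opt/pgisoft/openmpi/bin:$PATH    (csh/tcsh)
    export PATH=/opt/pgisoft/openmpi/bin:$PATH    (sh/bash)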

Compiling and Linking

The compiling and linking information is all in Compiling and Linking, makefile, and build Script. Please let us know if any of this is unclear.

Errors at compile time

First of all, read the error messages from the compiler carefully to see if you can figure out what they are talking about. There are several classes of errors (add more as you see them):

  • Errors from build_roms.sh (formerly build.bash):
    ./build_roms.sh: line 40: $'\r': command not found
    ./build_roms.sh: line 43: $'\r': command not found
    ./build_roms.sh: line 46: syntax error near unexpected token '$'in\r'
    ./build_roms.sh: line 46: ' case "$1" in
    You need to run the dos2unix command on build_roms.sh.
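    For example, assuming the dos2unix utility is installed:
    dos2unix build_roms.sh
    This strips the carriage returns ('\r') that Windows editors and some transfer methods add to each line.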
  • Errors from cpp: If there is a mismatch in the #ifdef/#endif pairs, cpp will be unhappy and will tell you so. There is a perl script called "ifdefs" which checks your #ifdef/#endif pairings.
  • Errors like this:
    cd Build_roms; -c -O3 mod_kinds.f90
    /bin/sh: -c: not found
    The command:
    FC := $(shell which ${FC})
    in the Compilers/xxx-xxx.mk file has failed to find the compiler you specified, setting FC to the null string. You need to find out which compiler you were asking it to look for and to make sure it is in your path. If the flag USE_MPIF90 is set, it will be looking for "mpif90". What do you get from the command "which mpif90"? If you are trying for a serial run first, what does "which gfortran" give you?
  • Failing to find netcdf.mod:
    cp -f /usr/local/include/netcdf.mod /home/kekejiayu/Projects/upwelling/Build
    cp: cannot stat ‘/usr/local/include/netcdf.mod’: No such file or directory
    You need to have both the netcdf and netcdf-fortran libraries. It is strongly recommended that these be version 4.x, not the 3 series, and that you have USE_NETCDF4=on in your build. Even so, things can go wrong. For instance, recent versions of nc-config rely on finding nf-config in your PATH to work properly. In this case there are three options:
    1. Add the full path to nf-config to your PATH environment variable.
    2. Change the appropriate NC_CONFIG declaration in your build script to the full path to your "nf-config".
    3. Change "nc-config" to "nf-config" in the appropriate file in the Compilers directory. If "nf-config" is not in your path you will need to set this to the full path of your "nf-config" command.
  • Syntax errors in the fortran: check the intermediate .f90 file for the exact code at the line numbers in the report. Be sure to edit the corresponding .F file once you determine what the trouble is.
  • Trouble with the netcdf module file:
    PGF90-F-0004-Corrupt or Old Module file ./netcdf.mod (distribute.f90: 2750)
    Netcdf-fortran must be compiled with the same version of the same compiler you are using now. If not, there will be trouble.
  • Trouble with other modules: one instance of this is due to incorrect dependency information in MakeDepend. The ROMS code must be compiled in the correct order and failure to do so might give something like:
    cd Build_roms; ifort -c bc_2d.f90
    fortcom: Error: bc_2d.f90, line 30: Error in opening the Library module file.
    [MOD_PARAM]
    USE mod_param
    ----------^
    fortcom: Error: bc_2d.f90, line 32: Error in opening the Library module file.
    Here, it failed to compile mod_param.F before bc_2d.F, leading to trouble. The '--objdir=$(BUILD_DIR)' option in MDEPFLAGS is not optional.
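    When MakeDepend is merely stale rather than misconfigured, regenerating it and rebuilding usually restores the correct compile order. A sketch, assuming the standard ROMS makefile with its depend target:
    make clean
    make depend
    make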

Errors at link time

  • One common problem is that differing options were used when compiling libraries vs. the ROMS code proper. For instance, you get many, many messages starting with:
    .../Build_roms/libUTIL.a(close_io.o):close_io.f90:(.text+0x1030): undefined reference to `__netcdf_MOD_nf90_strerror'
    Check to make sure that the link statement above the errors includes both '-lnetcdff' and '-lnetcdf' with the correct path to the libraries: '-L/usr/local/pkg/netcdf/netcdf-4.3.0.gnu-4.7.3/lib', for instance.
  • Old ROMS versions only linked to libnetcdf.a, while NetCDF 4.x also needs libnetcdff.a to be built from a separate netcdf-fortran package. The link statement needs to include both. See this thread for a discussion on it.
  • Also new in NetCDF is the option to create a library with OPeNDAP capabilities. In that case, you also need to link to libcurl; in the makefile this is enabled with the USE_DAP option. This option is needed if you are missing functions with "curl" in the name.
  • For all of these issues, we strongly recommend the use of USE_NETCDF4 and its method of running 'nc-config --flibs' to find out the correct linking bits, though there are systems on which 'nf-config --flibs' works better. Run these things on the command line to find out what works for you:
    pacman13 396% nc-config --flibs
    -L/usr/local/pkg/netcdf/netcdf-4.3.0.gnu-4.7.3/lib -lnetcdff \
    -L/usr/local/pkg/hdf5/hdf5-1.8.10-p1.gnu-4.7.3/lib -L/usr/local/pkg/szip/szip-2.1.gnu/lib \
    -L/usr/local/pkg/udunits/udunits-2.1.24.gnu-4.7.3/lib \
    -L/usr/local/pkg/netcdf/netcdf-4.3.0.gnu-4.7.3/lib -lnetcdf -lnetcdf
    or
    fish2 1487% nc-config --flibs
    -L/opt/cray/netcdf/4.3.2/PGI/141/lib -lnetcdff -L/opt/cray/hdf5/1.8.13/PGI/141/lib -lnetcdf

Basic Usage

What are the ways that ROMS can receive the wind and other surface forcing?

  1. One option is to provide the stresses and fluxes via analytic expressions in ana_smflux.h and ana_stflux.h.
  2. Another option is to provide analytic expressions for the winds and other atmospheric fields in ana_winds.h and friends, and define BULK_FLUXES.
  3. ROMS can also read fields from NetCDF files, either stresses or winds, depending on the state of BULK_FLUXES. These can be provided as:
    1. Point measurements, in which case ROMS will rotate to the grid directions from East (u) and North (v).
    2. Gridded fields on the ROMS grid, in which case the directions have to match the directions on your grid.
    3. Gridded fields on a larger, usually coarser grid. These are in East and North components, with ROMS doing the rotation. ROMS picks up its cue on whether to interpolate and rotate based on the grid dimensions of these fields: if the dimensions exactly match those of your own grid, ROMS will assume the field is already on your grid. If you plan on using global fields of dimension 180x90, say, don't build a grid with those dimensions. The CDL for a working example is shown here:
netcdf rain.1948-2006 {

dimensions:

       rain_time = UNLIMITED ; // (708 currently)
       lat = 94 ;
       lon = 192 ;

variables:

       float rain(rain_time, lat, lon) ;
               rain:_FillValue = -1.e+34f ;
               rain:units = "Kg_meter-2_sec-1\000Kg_meter-2_sec-1" ;
               rain:long_name = "rain fall rate" ;
               rain:coordinates = "lon lat" ;
       double lat(lat) ;
               lat:units = "degrees_north" ;
               lat:point_spacing = "uneven" ;
               lat:axis = "Y" ;
       double lon(lon) ;
               lon:units = "degrees_east" ;
               lon:modulo = 360. ;
               lon:point_spacing = "even" ;
               lon:axis = "X" ;
       float SNOW(rain_time, lat, lon) ;
               SNOW:missing_value = -1.e+34f ;
               SNOW:_FillValue = -1.e+34f ;
               SNOW:long_name = "SNOW_FRAC[D=2,GXYT=PRC_MOD@ASN]*PRC_MOD" ;
               SNOW:long_name_mod = "L=1:12" ;
       double rain_time(rain_time) ;
               rain_time:units = "days since 1900-01-01 00:00:00" ;
               rain_time:axis = "T" ;
               rain_time:bounds = "TIME_bnds" ;
               rain_time:time_origin = "1-JAN-1948" ;
               rain_time:calendar = "LEAP" ;

In this example, the variable names "rain" and "SNOW" would have to match what's in the varinfo.dat file.
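
To see what ROMS will find in such a file, inspect its header with ncdump (assuming the file is named rain.1948-2006.nc):

    ncdump -h rain.1948-2006.nc

Compare the lon/lat dimensions against your grid's xi/eta dimensions: if they differ, ROMS will interpolate and rotate; if they match exactly, ROMS assumes the field is already on your grid.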

How can I create the initial/boundary/forcing NetCDF files for ROMS?

  1. Please don't use the old Fortran codes unless you are a glutton for punishment.
  2. Most use the Matlab scripts on the ROMS svn server.
  3. There are Python tools on GitHub. A thorough description of how to get these running on a Mac is here. An alternate way to go is to install a VirtualBox with Linux and install them there.
  4. There are probably Ferret and other scripts out there. Ask on the ROMS forum if you have questions about a specific data source.

What are the ways that ROMS can receive tides on the boundaries?

  1. You can specify a time-dependent boundary condition file that temporally resolves the tides and skip the tides file entirely.
  2. You can specify SSH_TIDES and/or UV_TIDES and provide a tides file and skip the boundary condition file entirely - though this option might require you to tell ROMS in globaldefs.h that you really, really don't need an OBC_FILE. Usual practice is to provide both surface elevation and currents in the tidal file and to use FSCHAPMAN and M2FLATHER on the open sides of your domain.
  3. You can specify other 2-d currents and surface elevation signals as coming in, plus have tides and use the ADD_M2OBC and ADD_FSOBC options.
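
As a sketch of options 2 and 3, the relevant switches go in your application header (the header name myapp.h is hypothetical; the CPP options are the ones named above):

    #define SSH_TIDES   /* impose tidal elevation on the boundary */
    #define UV_TIDES    /* impose tidal currents on the boundary */
    #define ADD_FSOBC   /* add tidal elevation to processed OBC data */
    #define ADD_M2OBC   /* add tidal currents to processed OBC data */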

How can I set up passive tracers?

You need to #define T_PASSIVE to get passive tracers. The flag ANA_PASSIVE determines whether the initial conditions are analytic (from ana_passive.h) or from the initial condition file. As for boundary conditions, either all tracers have analytic boundary conditions or they all come from the same boundary condition file. If you need to read values from a file, the field names are dye_01, dye_02, etc.

As an example, I successfully set up a uniform dye_01 using ANA_PASSIVE with boundary conditions from a file. The boundary file has u, v, T, S for many times and (analytic) biological values for the beginning and end times. I have a script that reads one biology value, changes the attributes and the value to my uniform value, then writes it into the boundary file with the name dye_01_west, etc. I then had to edit varinfo.dat to reflect that it uses bio_time rather than ocean_time.

You probably also want ANA_SPFLUX and ANA_BPFLUX for analytic surface and bottom passive tracer fluxes.
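
A minimal application-header sketch pulling these switches together (the options are the ones discussed above; whether you want ANA_PASSIVE depends on where your initial dye comes from):

    #define T_PASSIVE    /* activate inert passive tracers, dye_01 etc. */
    #define ANA_PASSIVE  /* analytic initial conditions for the dye */
    #define ANA_SPFLUX   /* analytic surface passive tracer flux */
    #define ANA_BPFLUX   /* analytic bottom passive tracer flux */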

What is the CFL criterion for ROMS?

From Sasha Shchepetkin:

1. Barotropic mode. The easiest to predict and control -- based on the speed of the external waves, c_g=sqrt(g*h), where h is the local depth. The relevant number is

    Cg = dt * sqrt(g*h) * sqrt(1/dx^2 + 1/dy^2)

with dx and dy evaluated locally; Cg_max=max{Cg} over all horizontal grid points. This one is computed at the beginning of the run and reported by routine setup_grid2 and is not monitored by diag thereafter. Its maximum allowed value depends on the particular time stepping algorithm and should be 0.87 for generalized FB stepping or 0.9-something for LF-AM3.
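
For scale, a worked example with illustrative numbers (not from the source): for h = 4000 m and dx = dy = 10 km, c_g = sqrt(9.81*4000) ≈ 198 m/s, so Cg = dt*198*sqrt(2)/10000 ≈ 0.028*dt; the generalized FB limit Cg <= 0.87 then allows a barotropic (fast) time step of up to about 31 s.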

2. Internal wave speed. Essentially the phase speed of the first baroclinic mode multiplied by dt (the slow-mode time step) and by sqrt(1/dx^2 + 1/dy^2), same as above.

Once again, this is not monitored by diag because it is difficult to evaluate the phase speed; one has to rely on oceanographic intuition to pick a reasonable value. Internal waves (the first baroclinic mode is the fastest) are typically faster than the fastest horizontal advection speed, but not too far above it, so an internal phase speed of about 2.5 m/sec is quite realistic.

3. Horizontal advection. The relevant number is Cu=dt*u/dx; however, it should be kept in mind that for a 3D finite-volume advection algorithm, where fluxes in all three directions are applied simultaneously, it is the combination of all three sets of fluxes that controls stability. So the most natural way to interpret the Courant number for the 3D advection algorithm is as the fraction of water within the control volume which is being replaced in one time step. This CFL is monitored by diag.F (if the proper CPP switches are activated).

4. Vertical advection. This is the one which is hard to predict in advance, and typically it becomes more and more restrictive as resolution becomes finer. This is both because your solution becomes more vivid and all currents are intensified, and because of the nature of the hydrostatic equations: since hydrostatic pressure is computed individually in each vertical column, each horizontal velocity "feels" the effect of its nearest neighbors but not of points several grid intervals away, as it would in a nonhydrostatic model. As a result, any freely travelling nonlinear internal wave tends to form a sharp, shock-like front which has large vertical velocity, and this eventually becomes unrealistically large if one chooses smaller and smaller horizontal dx. The threshold to ensure that this does not happen is the aspect ratio thermocline depth / dx. As long as it is small (say less than unity, though it does not have to be vanishingly small), vertical velocity is effectively averaged over the horizontal grid-box area, and therefore cannot be large. Conversely, pushing dx smaller and smaller beyond this threshold makes the model violate the applicability of the hydrostatic approximation, and in practice results in unrealistically large vertical velocities. (Vice versa, using a nonhydrostatic code on a coarse grid such that the above ratio is vanishingly small is a plain waste of resources, because non-hydrostatic effects are essentially non-existent on the scales resolved by such a grid; in this sense it is the fact that horizontal resolution is finite that makes a hydrostatic-approximation model applicable.)

Vertical CFL is monitored by diag.F along with the total advection CFL, and the latter should be kept under some limit to ensure model stability. The actual limiting value depends on the selection of advection algorithms and the details of the time stepping for the 3D mode, but in any case the criterion is about 1 or so (0.8 or 0.9 or whatever).

Another complication comes from the fact that finer grid means less smoothed topography which translates into larger vertical velocities. Also having/not having tides typically makes major difference in this regard.

Another complication comes from the fact that some modern versions of ROMS have the capability to circumvent the vertical advection limit by monitoring the overall 3D advective CFL budget and splitting the vertical velocity into two parts, W=We+Wi, in such a way that We always stays within the CFL allowed by the explicit advection scheme, while Wi is the "excess" portion of the vertical velocity, which is treated implicitly for advection in both the momentum and tracer equations. This offers some relief from the vertical CFL limit.

Another complication comes from the fact that criteria 2 and 3 actually add to each other rather than being constrained separately, i.e., the internal wave speed must be increased by the advection speed when considering the stability budget.

5. Inertial period CFL. Because the Coriolis term is treated explicitly, it has its own CFL criterion. This is restrictive only if your time step is sufficiently large in an absolute sense, that is, 1 hour or more; the relevant nondimensional number is f*dt.

How can I deal with "patch" files?

OK, so maybe no one actually asked this yet, but I posted something in "diff -u" format:

diff --git a/ROMS/Nonlinear/output.F b/ROMS/Nonlinear/output.F
index e4fb854..42ff7a8 100644
--- a/ROMS/Nonlinear/output.F
+++ b/ROMS/Nonlinear/output.F
@@ -1,4 +1,5 @@
 #include "cppdefs.h"
+#define LONG_NUMS
 #ifdef NONLINEAR
       SUBROUTINE output (ng)
 !
@@ -111,7 +119,11 @@
             ifile=(iic(ng)-1)/ndefHIS(ng)+1
             IF (Master) THEN
               WRITE (HIS(ng)%name,10) TRIM(HIS(ng)%base), ifile
+#ifdef LONG_NUMS
+ 10          FORMAT (a,'_',i5.5,'.nc')
+#else
  10          FORMAT (a,'_',i4.4,'.nc')
+#endif
             END IF
 # ifdef DISTRIBUTE
             CALL mp_bcasts (ng, iNLM, HIS(ng)%name)

Do I honestly expect you to apply this by hand? Not at all. If you save it to a file, called say "diff.out", there's more than one way to automatically apply it:

  • Because this was generated by git, you can use "git apply diff.out" to have these changes made to your output.F file. Shown here are two "hunks" starting with the "@@" lines. Git will check all the hunks before applying any so you won't end up with the file in a half-baked state.
  • A much older tool for applying these patches is "patch". The proper invocation here is "patch -p1 < diff.out" from the top of your ROMS directory tree, where the "-p1" strips off the a/b pseudo-directories from the git output. Unlike git, patch will apply as many hunks as it can, dumping the rest into an output.F.rej file for you to deal with later.
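
For example, from the top of your ROMS source tree, with the patch saved as diff.out:

    git apply --check diff.out    (dry run: report problems, change nothing)
    git apply diff.out            (apply the changes)
    patch -p1 < diff.out          (the non-git alternative)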

Errors at Run Time

Segmentation fault

There is more than one way to get a seg fault. Ways I know of include (a) running out of memory or some other system limit, (b) accessing an array out of bounds, (c) calling a function with too few arguments, (d) compiling on one system and running on a second with incompatible MPI libraries, (e) asking get_ngfld to read a field of all land values. For me, solving these typically involves recompiling with USE_DEBUG and running in a debugger. The array out of bounds can be found by using a bounds checking compiler flag (often included in USE_DEBUG).
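
For case (a) on Linux, one common culprit is a small default stack limit, since Fortran compilers often place large automatic arrays on the stack; raising it is a quick thing to try before reaching for the debugger:

    ulimit -s unlimited           (sh/bash)
    limit stacksize unlimited     (csh/tcsh)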

One common example is shown here:

romsG              0000000000439EAC  load_s2d_            1564  inp_par.f90
romsG              0000000000481B8C  read_phypar_         1215  read_phypar.f90
romsG              000000000040F213  inp_par_               88  inp_par.f90
romsG              000000000040666D  ocean_control_mod      84  ocean_control.f90
romsG              000000000040635F  MAIN__                 95  master.f90

load_s2d is used to read a list of forcing files in the trunk code. In the COAWST/ice codes, it is also used to read lists of CLM and BRY files. In the case where there can be multiple BRY files, you need to tell it how many BRY files you have so that the structure can be allocated. If you are using COAWST/ice, be sure your roms.in includes the settings of NCLMFILES and NBCFILES, even if you have at most one of each.

Be sure your roms.in is consistent with your ROMS version!

Blowing up in diag.F

ROMS will check for extreme values of velocity and density at run time, including looking for NaN and Inf. If found, it will save a record to the restart file and die. If this happens early in the run, you might just try a shorter timestep. If it happens after many days/years of model time, you need to dig more deeply. Find out which field went bad and where in the domain it happened. The restart record can be viewed to look for this trouble. I've had runs go bad at the bottom with a bottom stress instability, at the top with a surface forcing problem, at the boundary with boundary condition issues. I've also had it go bad in the middle of the water column in no place special due to some other instability - it goes from fine to terrible in just a few timesteps. This last can often be run past with a shorter timestep, while the other problems call for more thoughtful responses.

NETCDF_OPEN - unable to open existing NetCDF file:

followed by some random junk. In the roms.in file, it is best not to comment out the lines with input filenames. For example, if ROMS wants to read a forcing file and doesn't know the name of it, it will have junk in the filename. I know there are times when one doesn't want to provide a forcing file, yet ROMS insists that it needs one. If you leave the filename intact, you will instead trigger a message about the field it can't read from file forcing.nc. You can check globaldefs.h for the conditions that trigger the need for FRC_FILE.

Also, if you tell it you have 13 forcing files, then only provide 12, you will get this message when it tries to open file #13.

Other troubles

ROMS tries to check everything and print useful error messages if something isn't right. Many problems can be solved by carefully reading the ROMS output. Otherwise, carefully check to make sure your roms.in file is appropriate for the version of ROMS you are running. Are you adding variables that you forgot to include in your varinfo.dat? Is your bathymetry too steep? If it's still not clear, I then go back to the debugger - a very useful skill for serious ROMS hacking.

Algorithm Design

Why the _r8 at the end of real constants?

Some computers use 32 bits for single precision real numbers and 64 bits for double precision, and some use 64 and 128, respectively. For consistent results on various machines, ROMS takes advantage of the intrinsic F90/95 function SELECTED_REAL_KIND( ). See ROMS/Modules/mod_kinds.F. This allows you to associate an integer parameter with a specific data type...in this case r8 is associated with 64-bit precision. RTFM (Read the Fortran90 Manual), or better yet, chapter 11 in Chapman, 2004.
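
A minimal standalone sketch of the idea (this is not the actual mod_kinds.F source):

    program kinds_demo
      ! Request at least 12 significant decimal digits and a decimal
      ! exponent range of 300, which selects a 64-bit real on typical
      ! hardware regardless of what "single precision" means locally.
      integer, parameter :: r8 = selected_real_kind(12,300)
      real(r8) :: a
      a = 1.0_r8/3.0_r8   ! the _r8 suffix keeps the constants in 64-bit
      print *, a, kind(a)
    end program kinds_demo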

Why can't we have zero bathymetry (h=0)?

In ROMS, the bathymetry, h, is positive and represents the vertical water column thickness (meters) at the ocean rest state (zeta=0). We cannot have h=0 at any horizontal grid point, even one masked by land. Therefore, during grid generation the h values are limited to a minimum value hmin. Recall that in terrain-following vertical coordinates, h is used to compute the vertical level thickness (Hz). If you examine the governing equations, we divide by Hz when computing the vertical viscosity and diffusion terms. In addition, in several places in the numerical kernel we also divide by Hz; this is usually computed via the internal array oHz=1/Hz. Division by zero is not defined, and computers will return Inf or NaN. Users need to consider carefully what value of hmin to use, taking into account the horizontal grid resolution, the number of vertical levels, and the vertical coordinate stretching function for a particular application. In ROMS, all the vertical levels are located above hmin. This will limit the vertical time step of the model due to the Courant-Friedrichs-Lewy (CFL) condition. Notice that we also need a critical water column thickness (Dcrit) in wetting and drying for the same reasons mentioned above.
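
As a rough illustration (the numbers are made up): with hmin = 5 m and N = 30 roughly uniform levels, the thinnest cells have Hz ≈ 5/30 ≈ 0.17 m, so oHz = 1/Hz ≈ 6 m^-1 remains finite. With h = 0 the division is undefined, and even a very small hmin produces tiny Hz values that make the vertical CFL limit on the time step correspondingly severe.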


Input/Output NetCDF files

How to add a new variable to ROMS output files?

This question has been asked several times. The ROMS I/O design is very flexible and easy to expand. There are several steps that need to be followed to add a new variable to any of the output files:

  1. The ROMS output NetCDF files are defined by the routines with the prefix def_ in the ROMS/Utility directory. For example, in def_his.F a variable can be defined in the history file by just adding:
    IF (Hout(idUvel,ng)) THEN
      Vinfo( 1)=Vname(1,idUvel)
      Vinfo( 2)=Vname(2,idUvel)
      Vinfo( 3)=Vname(3,idUvel)
      Vinfo(14)=Vname(4,idUvel)
      Vinfo(16)=Vname(1,idtime)
    # if defined WRITE_WATER && defined MASKING
      Vinfo(20)='mask_u'
    # endif
      Vinfo(22)='coordinates'
      Aval(5)=REAL(Iinfo(1,idUvel,ng),r8)
      status=def_var(ncHISid(ng),hisVid(idUvel,ng),NF_FOUT,       &
     &               nvd4,u3dgrd,Aval,Vinfo,ncname)
    END IF
    Correspondingly, the field time records are written in routines with the prefix wrt_. For example, a field can be written in wrt_his.F as:
    IF (Hout(idUvel,ng)) THEN
      scale=1.0_r8
      gtype=gfactor*u3dvar
      status=nf_fwrite3d(ng, iNLM, ncHISid(ng), hisVid(idUvel,ng), &
     &                   tHISindx(ng), gtype,                      &
     &                   LBi, UBi, LBj, UBj, 1, N(ng), scale,      &
    # ifdef MASKING
     &                   GRID(ng) % umask(LBi,LBj),                &
    # endif
     &                   OCEAN(ng) % u(LBi,LBj,1,NOUT))
      IF (status.ne.nf90_noerr) THEN
        IF (Master) THEN
          WRITE (stdout,10) TRIM(Vname(1,idUvel)), tHISindx(ng)
        END IF
        exit_flag=3
        ioerror=status
        RETURN
      END IF
    END IF
    As you may have noticed, some knowledge of ROMS internal structure is required to define a new field. Please follow any of the output field examples in such files to define and write the new field.
  2. Notice that ROMS state variables are defined on a horizontal, staggered Arakawa C-grid. In addition, the variables are also staggered in the vertical. Therefore, you need to choose the appropriate identifier flag:
    • p2dvar: 2D field at ψ-points
    • r2dvar: 2D field at ρ-points
    • u2dvar: 2D field at u-points
    • v2dvar: 2D field at v-points
    • p3dvar: 3D field at ψ-points
    • r3dvar: 3D field at ρ-points
    • u3dvar: 3D field at u-points
    • v3dvar: 3D field at v-points
    • w3dvar: 3D field at w-points
    • b3dvar: 3D bed-sediment field
  3. Select a unique six-character, case-sensitive field identifier, which is defined in file ROMS/Modules/mod_ncparam.F. Notice that all the identifiers are defined in alphabetical order to facilitate searching. Choose a meaningful identifier prefixed by id. This identifier needs to be assigned in routine initialize_ncparam during the reading of variable metadata from file varinfo.dat. You need to add code similar to:
    CASE ('idUvel')
      idUvel=varid
    Check the example above to see how the idUvel identifier is used in the internal field metadata arrays.
  4. Define new field metadata in file ROMS/External/varinfo.dat. Follow the examples to define the following field information:
    • Field variable name string. This is the variable name in the output NetCDF file. You need to choose a unique and meaningful variable name.
    • Field long-name string. This is the variable long_name attribute in the output NetCDF file.
    • Field units string. This is the variable units attribute in the output NetCDF file.
    • Field type string. This is the variable field attribute in the output NetCDF file.
    • Associated time-variable name string. This is the variable time attribute in the output NetCDF file.
    • Field identifier string. This is the field six-character unique index used in information arrays.
    • Field staggered C-grid variable type. Use one of flags defined above.
    • Field scale floating-point value. This value is only used in input data to scale field to model units, if necessary. Otherwise, use a unity value.
      Notice that all string information above is specified within single quotes to facilitate free-format reading in mod_ncparam.F. For example:
      'u'  ! Input/Output
      'u-momentum component'
      'meter second-1'  ! [m/s]
      'u-velocity, scalar, series'
      'ocean_time'
      'idUvel'
      'u3dvar'
      1.0d0
  5. Add logical switch Hout(id....) in input script roms.in to activate the processing of the new field. For example:
    Hout(idUvel) == T  ! 3D U-velocity
    This new switch needs to be read in file ROMS/Utility/inp_par.F, routine read_PhyPar. For example:
    ELSE IF (TRIM(KeyWord).eq.'Hout(idUvel)') THEN
      IF (idUvel.eq.0) THEN
        WRITE (out,280) 'idUvel'
        STOP
      END IF
      Npts=load_l(Nval, Cval, Ngrids, Hout(idUvel,1))
    It is also a good idea to report the new logical switch to standard output. For example:
    IF (Hout(idUvel,ng)) WRITE (out,170) Hout(idUvel,ng), &
    & 'Hout(idUvel)', &
    & 'Write out 3D U-momentum component.'
  6. Follow the same steps to process time-averaged fields. In addition, you need to define, allocate, and initialize in ROMS/Modules/mod_average.F the new ROMS variable used to accumulate the time records. For example:
    real(r8), pointer :: avgu3d(:,:,:)
    ...
    allocate ( AVERAGE(ng) % avgu3d(LBi:UBi,LBj:UBj,N(ng)) )
    ...
    AVERAGE(ng) % avgu3d(i,j,k) = IniVal
    The time-averaged fields are computed in ROMS/Nonlinear/set_avg.F. Check this file to see how this is done.
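
    The accumulation itself follows a simple pattern; here is a schematic sketch of what set_avg.F-style code does (the loop bounds and nAVG are representative ROMS names, but this is not the actual source):
    ! Accumulate the instantaneous field every time step within the
    ! averaging interval:
    DO k=1,N(ng)
      DO j=JstrR,JendR
        DO i=IstrR,IendR
          AVERAGE(ng) % avgu3d(i,j,k) = AVERAGE(ng) % avgu3d(i,j,k) + &
   &                                    OCEAN(ng) % u(i,j,k,NOUT)
        END DO
      END DO
    END DO
    ! At the end of the interval, scale by the number of accumulated
    ! records (e.g. fac = 1.0_r8/REAL(nAVG(ng),r8)) before writing.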