4DVar Normalization Tutorial
The various files in the Normalization folder are needed to run the 4-Dimensional Variational (4D-Var) data assimilation error covariance model that computes the normalization coefficients for the California Current System 1/3° resolution application (WC13).
The computation of the 4D-Var error covariance normalization coefficients is very expensive, and its cost depends on the grid size. The coefficients computed here use the expensive "exact" method; for large grids, the cheaper "randomization" approach is needed instead. The normalization coefficients need to be computed only once for a particular application, provided that the grid, land/sea masking (if any), and decorrelation scales remain the same.
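A sketch of why the exact method is expensive may help here. The notation below is assumed, not taken from this page; it follows the diffusion-operator correlation modeling of Weaver and Courtier that is widely used for ROMS error covariances. If L denotes the discretized correlation (smoothing) operator, the normalization coefficient at grid point i is chosen so that the modeled correlation matrix has a unit diagonal:

```latex
% Exact method: apply L to one unit vector e_i per grid point,
% so the cost scales with the total number of grid points.
\Lambda_{ii} = \big[(\mathbf{L}\,\mathbf{e}_i)_i\big]^{-1/2}

% Randomization approach: estimate the diagonal of L statistically
% from N applications of L^{1/2} to random Gaussian vectors.
\Lambda_{ii} \approx \left[\frac{1}{N}\sum_{k=1}^{N}
  \big((\mathbf{L}^{1/2}\mathbf{z}_k)_i\big)^2\right]^{-1/2},
\qquad \mathbf{z}_k \sim \mathcal{N}(\mathbf{0},\mathbf{I})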
Important CPP Options
NORMALIZATION    Compute error covariance normalization coefficients
Input NetCDF Files
Nonlinear Initial File: wc13_ini.nc
Forcing File 01: ../Data/coamps_wc13_lwrad_down.nc
Forcing File 02: ../Data/coamps_wc13_Pair.nc
Forcing File 03: ../Data/coamps_wc13_Qair.nc
Forcing File 04: ../Data/coamps_wc13_rain.nc
Forcing File 05: ../Data/coamps_wc13_swrad.nc
Forcing File 06: ../Data/coamps_wc13_Tair.nc
Forcing File 07: ../Data/coamps_wc13_wind.nc
Boundary File: ../Data/wc13_ecco_bry.nc
Initial Conditions STD File: ../Data/wc13_std_i.nc
Model STD File: ../Data/wc13_std_m.nc
Boundary Conditions STD File: ../Data/wc13_std_b.nc
Surface Forcing STD File: ../Data/wc13_std_f.nc
Output NetCDF Files
Model Norm File: wc13_nrm_m.nc
Boundary Conditions Norm File: wc13_nrm_b.nc
Surface Forcing Norm File: wc13_nrm_f.nc
Various Scripts and Include Files
build.sh csh Unix script to compile application
job_normalization.sh job configuration script
ocean_wc13.in ROMS standard input script for WC13
s4dvar.in 4D-Var standard input script template
wc13.h WC13 header with CPP options
Instructions
To run this application you need to take the following steps:
- Customize your preferred "build" script and provide the appropriate values for:
- Root directory, MY_ROOT_DIR
- ROMS source code, MY_ROMS_SRC
- Fortran compiler, FORT
- MPI flags, USE_MPI and USE_MPIF90
- Paths of the MPI, NetCDF, and ARPACK libraries for your compiler. Notice that you need to provide the correct locations of these libraries on your computer. If you want to skip this section, comment out the assignment of the variable USE_MY_LIBS.
- Notice that the most important CPP option for this application is specified in the "build" script instead of "wc13.h":

  setenv MY_CPP_FLAGS "-DNORMALIZATION"

  This allows flexibility with different CPP options. For this to work, however, any "#undef" directives MUST be avoided in the header file "wc13.h", since it has precedence during C-preprocessing.
- You MUST use the "build" script to compile.
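A hypothetical csh fragment of a customized "build" script might look like the following; every path and compiler name here is an illustrative placeholder, not a value taken from this page:

```csh
# Illustrative build-script settings (csh syntax); adjust for your system.
setenv MY_CPP_FLAGS "-DNORMALIZATION"        # the key CPP option for this tutorial
setenv MY_ROOT_DIR  /home/username/ocean     # hypothetical root directory
setenv MY_ROMS_SRC  ${MY_ROOT_DIR}/roms      # hypothetical ROMS source location
setenv FORT         ifort                    # Fortran compiler
setenv USE_MPI      on                       # distributed-memory parallelism
setenv USE_MPIF90   on                       # compile with the mpif90 wrapper
# setenv USE_MY_LIBS on                      # comment out to skip custom library paths
```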
- Customize the ROMS input script "ocean_wc13.in" and specify the appropriate values for the distributed-memory partition. By default it is set to:

  NtileI == 1                          ! I-direction partition
  NtileJ == 8                          ! J-direction partition

  Notice that the adjoint-based algorithms can only be run in parallel using MPI, because of the way the adjoint model is constructed.
- Customize the configuration script "job_normalization.sh" and provide the appropriate location of the "substitute" Perl script:

  set SUBSTITUTE=${ROMS_ROOT}/ROMS/Bin/substitute

  This script is distributed with ROMS and is found in the ROMS/Bin subdirectory. Alternatively, you can define the ROMS_ROOT environment variable in your .cshrc login script. For example:

  setenv ROMS_ROOT /home/arango/ocean/toms/repository/trunk
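One detail worth checking: the number of MPI processes passed to mpirun must equal NtileI × NtileJ. A minimal sh sketch of that arithmetic, using the default partition values shown in this tutorial:

```shell
# The MPI process count must match the tile partition: NtileI * NtileJ.
NtileI=1
NtileJ=8
np=$((NtileI * NtileJ))
echo "run with: mpirun -np $np oceanM ocean_wc13.in"
```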
- Execute the configuration script job_normalization.sh BEFORE running the model. It copies the required files and creates the c4dvar.in input script from the template s4dvar.in.
- Run ROMS to compute the normalization coefficients:

  mpirun -np 8 oceanM ocean_wc13.in >& log &
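After the run completes, a quick way to confirm that the normalization files listed under Output NetCDF Files were produced is a sketch like this (it assumes the run directory is the current directory):

```shell
# Check that each expected normalization output file exists in the run directory.
for f in wc13_nrm_m.nc wc13_nrm_b.nc wc13_nrm_f.nc; do
  if [ -f "$f" ]; then
    echo "$f: OK"
  else
    echo "$f: missing"
  fi
done
```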