4DVar Normalization Tutorial

Error Covariance Normalization



Introduction

In this tutorial you will compute the 4D-Var error covariance (D) normalization factors for the California Current System application WC13.

The error covariance matrix, D = diag(Bx, Bb, Bf, Q), is very large and not well known, where Bx, Bb, and Bf are the initial conditions, open boundary conditions, and surface forcing error covariances, and Q is the model error covariance. B and Q are modeled as the solution of a diffusion equation following the methodology of Weaver and Courtier (2001). Each covariance matrix is factorized as B = K Σ C Σᵀ Kᵀ, where C is a univariate correlation matrix, Σ is a diagonal matrix of error standard deviations, and K is a multivariate balance operator. The normalization coefficients are needed to ensure that the diagonal elements of the associated correlation matrix C are equal to unity.
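
Schematically, the diffusion-based correlation model has the form shown below (a sketch in the notation of Weaver and Courtier (2001); Λ, L, and W are not ROMS variable names):

    C = \Lambda \, L^{1/2} \, W^{-1} \, L^{T/2} \, \Lambda

where L^{1/2} is the square-root diffusion (smoothing) operator, W is a diagonal matrix of grid-cell areas or volumes, and Λ is the diagonal matrix of normalization coefficients, chosen so that the diagonal elements of C equal unity.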

There are two methods to compute the error covariance normalization coefficients: exact and randomization (an approximation).

The exact method is very expensive on large grids. The normalization coefficients are computed by perturbing each model grid cell with a delta function scaled by the grid-cell area (2D state variables) or volume (3D state variables), and then convolving with the square-root adjoint and tangent linear diffusion operators.
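
In the schematic notation above, the exact method evaluates each diagonal element of the un-normalized operator directly, with e_i denoting the delta function at grid cell i (again a sketch, not the ROMS discretization):

    \tilde{c}_{ii} = e_i^T \, L^{1/2} \, W^{-1} \, L^{T/2} \, e_i , \qquad \lambda_i = \tilde{c}_{ii}^{-1/2}

One application of the operators is required per grid cell, which is why the cost grows quickly with the grid size.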

In the cheaper approximate method, the normalization coefficients are computed using the randomization approach of Fisher and Courtier (1995). The computation is initialized with random fields drawn from a normal distribution with zero mean and unit variance. The fields are then scaled by the inverse square-root of the cell area (2D state variables) or volume (3D state variables) and convolved with the square-root adjoint and tangent linear diffusion operators over a specified number of iterations, Nrandom.
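
Schematically, the randomization approach replaces the cell-by-cell computation with a Monte Carlo estimate over the Nrandom random fields v_k (same caveats as above):

    \tilde{c}_{ii} \approx \frac{1}{N_{random}} \sum_{k=1}^{N_{random}} \left[ \left( L^{1/2} \, W^{-1/2} \, v_k \right)_i \right]^2 , \qquad v_k \sim N(0, I)

The total cost is Nrandom operator applications, independent of the number of grid cells, at the price of sampling noise in the estimated coefficients.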

Since the grid for WC13 is relatively small, the error covariance normalization coefficients are computed using the exact method. They need to be computed only once for a particular application provided that the grid, land/sea masking (if any), and decorrelation scales remain the same.

Model Set-up

The WC13 model domain is shown in Fig. 1; it has open boundaries along its northern, western, and southern edges.

Fig. 1: Model Bathymetry with 37°N Transect and Target Area

In this tutorial, you will perform a 4D-Var data assimilation cycle that spans the period 3-6 January 2004. The 4D-Var control vector δz comprises increments to the initial conditions, δx(t0), surface forcing, δf(t), and open boundary conditions, δb(t). The prior initial conditions, xb(t0), are taken from the sequence of 4D-Var experiments described by Moore et al. (2011b), in which data were assimilated every 7 days during the period July 2002 to December 2004. The prior surface forcing, fb(t), takes the form of surface wind stress, heat flux, and freshwater flux computed with the ROMS bulk flux formulation using near-surface air data from COAMPS (Doyle et al., 2009). Clamped open boundary conditions are imposed on (u,v) and tracers, and the prior boundary conditions, bb(t), are taken from the global ECCO product (Wunsch and Heimbach, 2007). The free-surface height and vertically integrated velocity components are subject to the usual Chapman and Flather radiation conditions at the open boundaries. The prior surface forcing and open boundary conditions are provided daily and linearly interpolated in time. Similarly, the increments δf(t) and δb(t) are also computed daily and linearly interpolated in time.

The observations assimilated into the model are satellite SST, satellite SSH in the form of a gridded product from Aviso, and hydrographic observations of temperature and salinity collected from Argo floats and during the GLOBEC/LTOP and CalCOFI cruises off the coast of Oregon and southern California, respectively. The observation locations are illustrated in Fig. 2.

Fig. 2: WC13 Observations: (a) Aviso SSH, (b) blended SST, (c) in situ temperature, (d) in situ salinity

Running 4D-Var Error Covariance Normalization

To run this tutorial, go first to the directory WC13/Normalization. Instructions for compiling and running the model are provided below or can be found in the Readme file. The ocean_wc13.in input script is configured for this exercise.
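
From the directory where you downloaded the ROMS test cases, that is:

    cd WC13/Normalization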

Important CPP Options

The following C-preprocessing options are activated in the build script:

NORMALIZATION 4D-Var error covariance normalization coefficients
WC13 Application CPP option

Input NetCDF Files

WC13 requires the following input NetCDF files:

Grid File: ../Data/wc13_grd.nc
Nonlinear Initial File: wc13_ini.nc
Forcing File 01: ../Data/coamps_wc13_lwrad_down.nc
Forcing File 02: ../Data/coamps_wc13_Pair.nc
Forcing File 03: ../Data/coamps_wc13_Qair.nc
Forcing File 04: ../Data/coamps_wc13_rain.nc
Forcing File 05: ../Data/coamps_wc13_swrad.nc
Forcing File 06: ../Data/coamps_wc13_Tair.nc
Forcing File 07: ../Data/coamps_wc13_wind.nc
Boundary File: ../Data/wc13_ecco_bry.nc

Initial Conditions STD File: ../Data/wc13_std_i.nc
Model STD File: ../Data/wc13_std_m.nc
Boundary Conditions STD File: ../Data/wc13_std_b.nc
Surface Forcing STD File: ../Data/wc13_std_f.nc

Output NetCDF Files

The following output NetCDF files will be created containing the error covariance normalization coefficients:

Initial Conditions Norm File: wc13_nrm_i.nc
Model Norm File: wc13_nrm_m.nc
Boundary Conditions Norm File: wc13_nrm_b.nc
Surface Forcing Norm File: wc13_nrm_f.nc

Various Scripts and Include Files

The following files are found in the WC13/Normalization directory after downloading the ROMS test cases from the SVN repository:

Readme instructions
build.bash bash shell script to compile application
build.sh csh script to compile application
job_normalization.sh job configuration script
ocean_wc13.in ROMS standard input script for WC13
s4dvar.in 4D-Var standard input script template
wc13.h WC13 header with CPP options

Check these files for detailed information.

Important Parameters

Check the following parameters in the 4D-Var input script s4dvar.in (see the input script for details):

Nmethod == 0  ! normalization method: 0 = exact, 1 = randomization
Nrandom == 5000  ! randomization iterations (used when Nmethod = 1)

LdefNRM == T T T T  ! create new normalization files
LwrtNRM == T T T T  ! compute and write normalization coefficients

CnormI(isFsur) = T  ! 2D variable at RHO-points
CnormI(isUbar) = T  ! 2D variable at U-points
CnormI(isVbar) = T  ! 2D variable at V-points
CnormI(isUvel) = T  ! 3D variable at U-points
CnormI(isVvel) = T  ! 3D variable at V-points
CnormI(isTvar) = T T  ! NT tracers

CnormB(isFsur) = T  ! 2D variable at RHO-points
CnormB(isUbar) = T  ! 2D variable at U-points
CnormB(isVbar) = T  ! 2D variable at V-points
CnormB(isUvel) = T  ! 3D variable at U-points
CnormB(isVvel) = T  ! 3D variable at V-points
CnormB(isTvar) = T T  ! NT tracers

CnormF(isUstr) = T  ! surface U-momentum stress
CnormF(isVstr) = T  ! surface V-momentum stress
CnormF(isTsur) = T T  ! NT surface tracers flux

In large grid applications, you can accelerate the computations by adjusting the above switches and submitting several simultaneous jobs that compute the normalization coefficients for each state variable separately. Of course, you will need a lot of processors. If you use this strategy, make sure that the LdefNRM(:) switches are T for the first job and F for the other jobs; see the sketch after this paragraph. That is, the output normalization NetCDF files for initial conditions, LdefNRM(1), model error, LdefNRM(2), open boundary conditions, LdefNRM(3), and surface forcing, LdefNRM(4), are created only once in the first job. The other jobs only compute and write the error covariance normalization coefficients for the specified state variable(s). Usually, the normalization coefficients for 2D state variables are computed quickly, whereas those for 3D state variables become much slower as the number of vertical levels increases. If the spatial decorrelation scales for all tracer variables are the same, the algorithm computes the normalization coefficients for the first tracer (temperature) and assigns the same values to the other tracers (salinity, etc.).
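
For example, a hypothetical two-job split for the initial conditions covariance might look like the following (illustrative switch settings only, using the syntax shown above). Job 1 creates all four norm files and computes the 2D state variables:

LdefNRM == T T T T
LwrtNRM == T T T T

CnormI(isFsur) = T
CnormI(isUbar) = T
CnormI(isVbar) = T
CnormI(isUvel) = F
CnormI(isVvel) = F
CnormI(isTvar) = F F

Job 2, started once the norm files exist, only computes and writes the 3D state variables:

LdefNRM == F F F F
LwrtNRM == T T T T

CnormI(isFsur) = F
CnormI(isUbar) = F
CnormI(isVbar) = F
CnormI(isUvel) = T
CnormI(isVvel) = T
CnormI(isTvar) = T T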

Since this application has a small grid (54x53x30), this tutorial computes the normalization coefficients using the exact method and creates the following files:

wc13_nrm_i.nc initial conditions
wc13_nrm_m.nc model error (weak constraint)
wc13_nrm_b.nc open boundary conditions
wc13_nrm_f.nc surface forcing (wind stress and net heat flux)

Notice that the switches LdefNRM and LwrtNRM are all true (T) so the model will compute and write all the error covariance normalization coefficients.

The normalization coefficients need to be computed only once for a particular application, provided that the grid, land/sea masking (if any), and decorrelation scales (HdecayI, VdecayI, HdecayB, VdecayB, and HdecayF) remain the same. Notice that large spatial changes in the normalization coefficient structure are observed near the open boundaries and land/sea masking regions.
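
As a purely hypothetical illustration of these parameters (the indexing syntax and the values below are guesses for orientation only; consult s4dvar.in for the actual syntax, units, and the tuned WC13 settings):

HdecayI(isTvar) == 50.0d+3  ! horizontal decorrelation scale (m), placeholder value
VdecayI(isTvar) == 30.0d0  ! vertical decorrelation scale (m), placeholder value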

Instructions

To run this application you need to take the following steps:

  1. We need to run the model application for a period that is long enough to compute meaningful circulation statistics, such as the mean and standard deviation of all prognostic state variables (zeta, u, v, T, and S). The standard deviations are written to NetCDF files and are read by the 4D-Var algorithm to convert modeled error correlations to error covariances. We need the standard deviations for the initial conditions, model error (weak constraint 4D-Var), open boundary conditions (ADJUST_BOUNDARY), and surface forcing (ADJUST_WSTRESS and ADJUST_STFLUX). The standard deviations for the initial conditions and model error are in terms of the unbalanced error covariance (K Bu Kᵀ) since the balance operator is activated (BALANCE_OPERATOR and ZETA_ELLIPTIC).
     
    The balance operator imposes a multivariate constraint on the error covariance such that information about unobserved variables is extracted from the observed data by establishing balance relationships (i.e., T-S empirical formulas, hydrostatic balance, and geostrophic balance) with the other state variables (Weaver et al., 2005).
     
    These standard deviations have already been created for you:
    ../Data/wc13_std_i.nc initial conditions
    ../Data/wc13_std_m.nc model error (weak constraint)
    ../Data/wc13_std_b.nc open boundary conditions
    ../Data/wc13_std_f.nc surface forcing (wind stress and net heat flux)
  2. Customize your preferred build script and provide the appropriate values for:
    • Root directory, MY_ROOT_DIR
    • ROMS source code, MY_ROMS_SRC
    • Fortran compiler, FORT
    • MPI flags, USE_MPI and USE_MPIF90
    • Paths of the MPI, NetCDF, and ARPACK libraries according to the compiler. Notice that you need to provide the correct locations of these libraries for your computer. If you want to ignore this section, comment out the assignment of the variable USE_MY_LIBS.
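    For example, in the csh version build.sh, these settings might look like the following (the paths and compiler shown are illustrative; adjust them for your own machine):
     
    setenv MY_ROOT_DIR    ${HOME}/ocean/repository
    setenv MY_ROMS_SRC    ${MY_ROOT_DIR}/trunk
    setenv FORT           ifort
    setenv USE_MPI        on
    setenv USE_MPIF90     on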
  3. Notice that the most important CPP option for this application is specified in the build script instead of wc13.h:
    setenv MY_CPP_FLAGS "-DNORMALIZATION"
    This is to allow flexibility with different CPP options.
     
    For this to work, however, any #undef directives MUST be avoided in the header file wc13.h since it has precedence during C-preprocessing.
  4. You MUST use the build script to compile.
  5. Customize the ROMS input script ocean_wc13.in and specify the appropriate values for the distributed-memory partition. It is set by default to:
    NtileI == 1  ! I-direction partition
    NtileJ == 8  ! J-direction partition
    The product NtileI x NtileJ must equal the number of MPI processes (8 in this tutorial). Notice that the adjoint-based algorithms can only be run in parallel using MPI because of the way the adjoint model is constructed.
  6. Customize the configuration script job_normalization.sh and provide the appropriate place for the substitute Perl script:
    set SUBSTITUTE=${ROMS_ROOT}/ROMS/Bin/substitute
    This script is distributed with ROMS and is found in the ROMS/Bin sub-directory. Alternatively, you can define the ROMS_ROOT environment variable in your .cshrc login script. For example, I have:
    setenv ROMS_ROOT /home/arango/ocean/toms/repository/trunk
  7. Execute the configuration script job_normalization.sh before running the model. It copies the required files and creates the c4dvar.in input script from the template s4dvar.in.
  8. Run ROMS to compute the normalization coefficients:
    mpirun -np 8 oceanM ocean_wc13.in >& log &