4D-Var PSAS Observation Sensitivity

The files in the PSAS_sensitivity folder are needed to run the observation sensitivity analysis for the strong/weak constraint, dual form of 4-Dimensional Variational (4D-Var) data assimilation based on the Physical-space Statistical Analysis System (PSAS) algorithm, applied to the California Current System application (WC13) at 1/3° resolution.


Warning: This page is under construction.


Important CPP Options

W4DPSAS_SENSITIVITY    4D-PSAS observation sensitivity driver
AD_IMPULSE             Force ADM with intermittent impulses
WC13                   Application CPP option

Input NetCDF Files

Grid File: ../Data/wc13_grd.nc
Nonlinear Initial File: wc13_ini.nc
Forcing File 01: ../Data/coamps_wc13_lwrad_down.nc
Forcing File 02: ../Data/coamps_wc13_Pair.nc
Forcing File 03: ../Data/coamps_wc13_Qair.nc
Forcing File 04: ../Data/coamps_wc13_rain.nc
Forcing File 05: ../Data/coamps_wc13_swrad.nc
Forcing File 06: ../Data/coamps_wc13_Tair.nc
Forcing File 07: ../Data/coamps_wc13_wind.nc
Boundary File: ../Data/wc13_ecco_bry.nc

Adjoint Sensitivity File: wc13_ads.nc
Initial Conditions STD File: ../Data/wc13_std_i.nc
Model STD File: ../Data/wc13_std_m.nc
Boundary Conditions STD File: ../Data/wc13_std_b.nc
Surface Forcing STD File: ../Data/wc13_std_f.nc
Initial Conditions Norm File: ../Data/wc13_nrm_i.nc
Model Norm File: ../Data/wc13_nrm_m.nc
Boundary Conditions Norm File: ../Data/wc13_nrm_b.nc
Surface Forcing Norm File: ../Data/wc13_nrm_f.nc
Observations File: wc13_obs.nc
Lanczos Vectors File: wc13_lcz.nc
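
You can check that these files are in place and inspect their dimensions, variables, and attributes with the standard NetCDF utility ncdump (assuming the NetCDF command-line tools are installed on your system and that you run from the PSAS_sensitivity directory), for example:
    ncdump -h ../Data/wc13_grd.nc      # print only the header of the grid file
    ncdump -h ../Data/wc13_std_i.nc    # check the initial conditions standard deviation file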

Various Scripts and Include Files

build.bash         bash shell script to compile application
build.sh           csh Unix script to compile application
job_psas_sen.sh    job configuration script
ocean_wc13.in      ROMS standard input script for WC13
s4dvar.in          4D-Var standard input script template
wc13.h             WC13 header with CPP options

Important parameters in the ocean_wc13.in standard input script

  • Notice that this driver uses the following adjoint sensitivity parameters (see input script for details):
DstrS == 0.0d0  ! starting day
DendS == 0.0d0  ! ending day

KstrS == 1  ! starting level
KendS == 30  ! ending level

Lstate(isFsur) == T  ! free-surface
Lstate(isUbar) == T  ! 2D U-momentum
Lstate(isVbar) == T  ! 2D V-momentum
Lstate(isUvel) == T  ! 3D U-momentum
Lstate(isVvel) == T  ! 3D V-momentum

Lstate(isTvar) == T T  ! tracers
  • Both FWDNAME and HISNAME must be the same:
FWDNAME == wc13_fwd.nc
HISNAME == wc13_fwd.nc

Instructions

To run this application you need to take the following steps:

  1. We need to run the model application for a period that is long enough to compute meaningful circulation statistics, such as the mean and standard deviation of all prognostic state variables (zeta, u, v, T, and S). The standard deviations are written to NetCDF files and are read by the 4D-Var algorithm to convert modeled error correlations to error covariances. The error covariance matrix, D, is very large and not well known. It is modeled as the solution of a diffusion equation, as in Weaver and Courtier (2001).
     
    In this application, we need standard deviations for the initial conditions, surface forcing (ADJUST_WSTRESS and ADJUST_STFLUX), and open boundary conditions (ADJUST_BOUNDARY). The standard deviations for the initial and open boundary conditions are in terms of the unbalanced error covariance (K D_u K^T) since the balance operator is activated (BALANCE_OPERATOR and ZETA_ELLIPTIC).
     
    The balance operator imposes a multivariate constraint on the error covariance such that information about the unobserved variables is extracted from the observed data by establishing balance relationships (i.e., T-S empirical formulas, hydrostatic balance, and geostrophic balance) with the other state variables (Weaver et al., 2005).
     
    These standard deviations have already been created for you:
    ../Data/wc13_std_i.nc initial conditions
    ../Data/wc13_std_m.nc model error (if weak constraint)
    ../Data/wc13_std_b.nc open boundary conditions
    ../Data/wc13_std_f.nc surface forcing (wind stress and net heat flux)
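    Schematically (a sketch of the standard ROMS 4D-Var formulation implied by the description above, not a formula quoted from this page), the prior error covariance assembled from these ingredients can be written as
    D = K Σ C Σ^T K^T
    where Σ is a diagonal matrix of the standard deviations read from the files above, C is the univariate correlation matrix modeled with the diffusion operator (normalized to have unit diagonal, see step 2 below), and K is the balance operator; the unbalanced covariance mentioned above is then D_u = Σ C Σ^T.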
  2. Since we are modeling the error covariance matrix, D, we need to compute the normalization coefficients to ensure that the diagonal elements of the associated correlation matrix C are equal to unity. There are two methods to compute normalization coefficients: exact and randomization (an approximation).
     
    The exact method is very expensive on large grids. The normalization coefficients are computed by perturbing each model grid cell with a delta function scaled by the area (2D state variables) or volume (3D state variables), and then convolving with the square-root adjoint and tangent linear diffusion operators.
     
    The approximate method is cheaper: the normalization coefficients are computed using the randomization approach of Fisher and Courtier (1995). The coefficients are initialized with random numbers drawn from a normal distribution with zero mean and unit variance. They are then scaled by the inverse square-root of the cell area (2D state variables) or volume (3D state variables) and convolved with the square-root adjoint and tangent linear diffusion operators over a specified number of iterations, Nrandom.
     
    Check the following parameters in the 4D-Var input script s4dvar.in (see the input script for details):
    Nmethod == 0  ! normalization method
    Nrandom == 5000  ! randomization iterations

    LdefNRM == F F F F  ! Create new normalization files
    LwrtNRM == F F F F  ! Compute and write normalization

    CnormI(isFsur) = T  ! 2D variable at RHO-points
    CnormI(isUbar) = T  ! 2D variable at U-points
    CnormI(isVbar) = T  ! 2D variable at V-points
    CnormI(isUvel) = T  ! 3D variable at U-points
    CnormI(isVvel) = T  ! 3D variable at V-points
    CnormI(isTvar) = T T  ! NT tracers

    CnormB(isFsur) = T  ! 2D variable at RHO-points
    CnormB(isUbar) = T  ! 2D variable at U-points
    CnormB(isVbar) = T  ! 2D variable at V-points
    CnormB(isUvel) = T  ! 3D variable at U-points
    CnormB(isVvel) = T  ! 3D variable at V-points
    CnormB(isTvar) = T T  ! NT tracers

    CnormF(isUstr) = T  ! surface U-momentum stress
    CnormF(isVstr) = T  ! surface V-momentum stress
    CnormF(isTsur) = T T  ! NT surface tracer fluxes
    These normalization coefficients have already been computed for you (../Normalization) using the exact method since this application has a small grid (54x53x30):
    ../Data/wc13_nrm_i.nc initial conditions
    ../Data/wc13_nrm_m.nc model error (if weak constraint)
    ../Data/wc13_nrm_b.nc open boundary conditions
    ../Data/wc13_nrm_f.nc surface forcing (wind stress and net heat flux)
    Notice that the switches LdefNRM and LwrtNRM are all false (F) since we already computed these coefficients.
     
    The normalization coefficients need to be computed only once for a particular application, provided that the grid, land/sea masking (if any), and decorrelation scales (HdecayI, VdecayI, HdecayB, VdecayB, and HdecayF) remain the same. Notice that large spatial changes in the normalization coefficient structure occur near the open boundaries and land/sea masking regions.
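    If you ever change the grid, the land/sea mask, or the decorrelation scales, the coefficients must be regenerated. As a sketch (assuming you want to recompute and write all four normalization files with the exact method used here), the relevant s4dvar.in switches would be set to true:
    Nmethod == 0  ! normalization method: exact
    LdefNRM == T T T T  ! create new normalization files
    LwrtNRM == T T T T  ! compute and write normalization
    and the new coefficients would be written to the normalization NetCDF files listed in the input script.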
  3. Before you run this application, you need to run the standard 4D-PSAS (../PSAS directory) since we need the Lanczos vectors. Notice that in job_psas_sen.sh we have the following operation:
    cp -p ${Dir}/PSAS/wc13_mod.nc wc13_lcz.nc
    In 4D-Var (observation space minimization), the Lanczos vectors are stored in the output 4D-Var NetCDF file wc13_mod.nc.
  4. In addition, to run this application you need an adjoint sensitivity functional. This is computed by the following Matlab script:
    ../Data/adsen_37N_transport.m
    which creates the NetCDF file wc13_ads.nc. This file has already been created for you.
     
    The adjoint sensitivity functional is defined as the time-averaged transport crossing 37°N in the upper 500 m.
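    Since the file is provided, regenerating it is optional; but if you ever need to rebuild wc13_ads.nc, you could run the script from the ../Data directory with a command along these lines (assuming a command-line Matlab installation and that the script takes no arguments):
    cd ../Data
    matlab -nodisplay -r "adsen_37N_transport; exit"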
  5. Customize your preferred build script and provide the appropriate values for:
    • Root directory, MY_ROOT_DIR
    • ROMS source code, MY_ROMS_SRC
    • Fortran compiler, FORT
    • MPI flags, USE_MPI and USE_MPIF90
    • Paths of the MPI, NetCDF, and ARPACK libraries according to the compiler. Notice that you need to provide the correct locations of these libraries on your computer. If you want to ignore this section, comment out the assignment of the variable USE_MY_LIBS.
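    For reference, a minimal sketch of the first few of these assignments in csh syntax (as in build.sh) is shown below; the paths and compiler name are illustrative placeholders, not values from this tutorial, and must be adapted to your system:
    setenv MY_ROOT_DIR   ${HOME}/ocean/repository    # illustrative root directory
    setenv MY_ROMS_SRC   ${MY_ROOT_DIR}/trunk        # illustrative path to the ROMS source code
    setenv FORT          ifort                       # example Fortran compiler
    setenv USE_MPI       on                          # MPI is required for the adjoint-based drivers
    setenv USE_MPIF90    on                          # compile with the mpif90 wrapper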
  6. Notice that the most important CPP options for this application are specified in the build script instead of wc13.h:
    setenv MY_CPP_FLAGS "-DW4DPSAS_SENSITIVITY"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DAD_IMPULSE"
    This is to allow flexibility with different CPP options.
     
    For this to work, however, any #undef directives MUST be avoided in the header file wc13.h since the header file takes precedence during C-preprocessing.
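    The setenv commands above use csh syntax, as in build.sh; if you compile with build.bash instead, the equivalent bash assignments would be along these lines:
    export MY_CPP_FLAGS="-DW4DPSAS_SENSITIVITY"
    export MY_CPP_FLAGS="${MY_CPP_FLAGS} -DAD_IMPULSE"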
  7. You MUST use the build script to compile.
  8. Customize the ROMS input script ocean_wc13.in and specify the appropriate values for the distributed-memory partition. It is set by default to:
    NtileI == 2  ! I-direction partition
    NtileJ == 2  ! J-direction partition
    Notice that the adjoint-based algorithms can only be run in parallel using MPI. This is because of the way that the adjoint model is constructed.
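    Keep in mind that the number of MPI processes requested at run time must equal the number of tiles, NtileI x NtileJ (2 x 2 = 4 for the default above, which matches the mpirun -np 4 command in step 11). For example, an alternative partition such as
    NtileI == 2  ! I-direction partition
    NtileJ == 4  ! J-direction partition
    would require running with mpirun -np 8 instead, provided the grid divides cleanly into that many tiles.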
  9. Customize the configuration script job_psas_sen.sh and provide the appropriate location of the substitute Perl script:
    set SUBSTITUTE=${ROMS_ROOT}/ROMS/Bin/substitute
    This script is distributed with ROMS and is found in the ROMS/Bin sub-directory. Alternatively, you can define the ROMS_ROOT environment variable in your .cshrc login script. For example, I have:
    setenv ROMS_ROOT /home/arango/ocean/toms/repository/trunk
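    If your login shell is bash rather than csh, the equivalent assignment would go in your .bashrc, for example:
    export ROMS_ROOT=/home/arango/ocean/toms/repository/trunk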
  10. Execute the configuration script job_psas_sen.sh before running the model. It copies the required files and creates the psas.in input script from the s4dvar.in template. This has to be done every time you run this application: we need a clean, fresh copy of the initial conditions and observation files since they are modified by ROMS during execution.
  11. Run ROMS with data assimilation:
    mpirun -np 4 oceanM ocean_wc13.in >& log &
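    The redirection above is csh syntax; an equivalent bash command that captures both standard output and standard error in log and runs the job in the background would be:
    mpirun -np 4 oceanM ocean_wc13.in > log 2>&1 &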

References

The algorithms and the application used in this tutorial are described in Moore et al. (2010a, b, c).