<div class="title">Physical-space Statistical Analysis System (4D-PSAS)</div>
The various files in the <span class="twilightBlue">PSAS</span> folder are needed to run the strong/weak constraint, dual form of 4-Dimensional Variational ('''4D-Var''') data assimilation based on the Physical-space Statistical Analysis System ('''PSAS''') algorithm in the California Current System, 1/3&deg; resolution, application ('''WC13''').
 
 


<div style="clear: both"></div>
==Introduction==
In this tutorial you will apply the strong/weak constraint, dual form of 4-Dimensional Variational ('''4D-Var''') data assimilation based on the Physical-space Statistical Analysis System ('''PSAS''') algorithm ([[Options#W4DPSAS|4D-PSAS]]) to ROMS configured for the U.S. west coast and the California Current System ([[Options#WC13|WC13]]). In [[Options#W4DPSAS|4D-PSAS]] the search for the best ocean circulation estimate proceeds in the space spanned only by the observations, as opposed to the full space spanned by the model (i.e., the primal form, [[Options#IS4DVAR|I4D-Var]]). Formally, the primal and dual formulations yield identical estimates of the ocean circulation, so one might wonder whether there is any advantage of one form over the other. The practical advantages and disadvantages of both approaches are discussed in Moore et al. ([[Bibliography#MooreAM_2011b|2011b]], [[Bibliography#MooreAM_2011c|c]]).
{{#lst:4DVar_Tutorial_Introduction|setup}}
==Running 4D-PSAS==
To run this tutorial, go first to the directory <span class="twilightBlue">WC13/PSAS</span>. Instructions for compiling and running the model are provided below and can also be found in the <span class="twilightBlue">Readme</span> file. The recommended configuration for this exercise is one outer loop and 26 inner loops, and <span class="twilightBlue">roms_wc13.in</span> is configured for this default case. The number of inner loops is controlled by the parameter [[Variables#Ninner|Ninner]] in <span class="twilightBlue">roms_wc13.in</span>.
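ROMS input scripts use a simple <code>keyword == value</code> format, so the loop configuration is easy to verify programmatically before a run. The sketch below is illustrative only (the <code>get_parameter</code> helper and the embedded snippet are not part of ROMS; the values mirror this tutorial's defaults):

```python
import re

def get_parameter(text, name):
    """Extract a ROMS input-script parameter of the form 'Name == value'."""
    match = re.search(r"^\s*%s\s*==?\s*(\S+)" % re.escape(name), text, re.MULTILINE)
    return match.group(1) if match else None

# Snippet mimicking the roms_wc13.in keyword syntax (tutorial defaults).
snippet = """
      Nouter == 1             ! number of outer loops
      Ninner == 26            ! number of inner loops
"""

print(get_parameter(snippet, "Ninner"))   # the default 26 inner loops
```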


==Important CPP Options==
The following C-preprocessing options are activated in the [[build_Script|build script]]:
<div class="box">  [[Options#W4DPSAS|W4DPSAS]]                4D-PSAS driver (observation space)<br />  [[Options#ANA_SPONGE|ANA_SPONGE]]              Analytical enhanced viscosity/diffusion sponge<br />  [[Options#BGQC|BGQC]]                    Background quality control of observations<br />  [[Options#MINRES|MINRES]]                  Minimal Residual Method for 4D-Var minimization<br />  [[Options#RPCG|RPCG]]                    Restricted B-preconditioned Lanczos minimization<br />  [[Options#POSTERIOR_EOFS|POSTERIOR_EOFS]]          Estimate posterior analysis error<br />  [[Options#POSTERIOR_ERROR_I|POSTERIOR_ERROR_I]]      Estimate initial posterior analysis error<br />  [[Options#TIME_CONV|TIME_CONV]]              Weak-constraint 4D-Var time convolution<br />  [[Options#WC13|WC13]]                    Application CPP option</div>


==Input NetCDF Files==
[[Options#WC13|WC13]] requires the following input NetCDF files:
<div class="box">                      <span class="twilightBlue">Grid File:</span>  ../Data/wc13_grd.nc<br />          <span class="twilightBlue">Nonlinear Initial File:</span>  wc13_ini.nc<br />                <span class="twilightBlue">Forcing File 01:</span>  ../Data/coamps_wc13_lwrad_down.nc<br />                <span class="twilightBlue">Forcing File 02:</span>  ../Data/coamps_wc13_Pair.nc<br />                <span class="twilightBlue">Forcing File 03:</span>  ../Data/coamps_wc13_Qair.nc<br />                <span class="twilightBlue">Forcing File 04:</span>  ../Data/coamps_wc13_rain.nc<br />                <span class="twilightBlue">Forcing File 05:</span>  ../Data/coamps_wc13_swrad.nc<br />                <span class="twilightBlue">Forcing File 06:</span>  ../Data/coamps_wc13_Tair.nc<br />                <span class="twilightBlue">Forcing File 07:</span>  ../Data/coamps_wc13_wind.nc<br />                  <span class="twilightBlue">Boundary File:</span>  ../Data/wc13_ecco_bry.nc<br /><br />    <span class="twilightBlue">Initial Conditions STD File:</span>  ../Data/wc13_std_i.nc<br />                  <span class="twilightBlue">Model STD File:</span>  ../Data/wc13_std_m.nc<br />    <span class="twilightBlue">Boundary Conditions STD File:</span>  ../Data/wc13_std_b.nc<br />        <span class="twilightBlue">Surface Forcing STD File:</span>  ../Data/wc13_std_f.nc<br />    <span class="twilightBlue">Initial Conditions Norm File:</span>  ../Data/wc13_nrm_i.nc<br />                <span class="twilightBlue">Model Norm File:</span>  ../Data/wc13_nrm_m.nc<br />  <span class="twilightBlue">Boundary Conditions Norm File:</span>  ../Data/wc13_nrm_b.nc<br />      <span class="twilightBlue">Surface Forcing Norm File:</span>  ../Data/wc13_nrm_f.nc<br/>              <span class="twilightBlue">Observations File:</span>  wc13_obs.nc</div>


==Various Scripts and Include Files==
The following files will be found in the <span class="twilightBlue">WC13/PSAS</span> directory after downloading from the ROMS test cases SVN repository:
<div class="box">  <span class="twilightBlue">Readme</span>                instructions<br />  [[build_Script|build_roms.bash]]       bash shell script to compile application<br />  [[build_Script|build_roms.sh]]         csh Unix script to compile application<br />  [[job_psas|job_psas.sh]]          job configuration script<br />  <span class="twilightBlue">roms_wc13.in</span>         ROMS standard input script for WC13<br />  [[s4dvar.in]]            4D-Var standard input script template<br />  <span class="twilightBlue">wc13.h</span>                WC13 header with CPP options</div>


==Instructions==
To run this application you need to take the following steps:


#We need to run the model application for a period that is long enough to compute meaningful circulation statistics, like the mean and standard deviation of all prognostic state variables ([[Variables#zeta|zeta]], [[Variables#u|u]], [[Variables#v|v]], [[Variables#T|T]], and [[Variables#S|S]]). The standard deviations are written to NetCDF files and are read by the 4D-Var algorithm to convert modeled error correlations to error covariances. The error covariance matrix, '''D'''=diag('''B<sub>x</sub>''', '''B<sub>b</sub>''', '''B<sub>f</sub>''', '''Q'''), is very large and not well known. '''B''' is modeled as the solution of a diffusion equation as in [[Bibliography#WeaverAT_2001a|Weaver and Courtier (2001)]]. Each covariance matrix is factorized as '''B = K &Sigma; C &Sigma;<sup>T</sup> K<sup>T</sup>''', where '''C''' is a univariate correlation matrix, '''&Sigma;''' is a diagonal matrix of error standard deviations, and '''K''' is a multivariate balance operator.<div class="para">&nbsp;</div>In this application, we need standard deviations for the initial conditions, surface forcing ([[Options#ADJUST_WSTRESS|ADJUST_WSTRESS]] and [[Options#ADJUST_STFLUX|ADJUST_STFLUX]]), and open boundary conditions ([[Options#ADJUST_BOUNDARY|ADJUST_BOUNDARY]]). If the balance operator is activated ([[Options#BALANCE_OPERATOR|BALANCE_OPERATOR]] and [[Options#ZETA_ELLIPTIC|ZETA_ELLIPTIC]]), the standard deviations for the initial and boundary conditions error covariance are in terms of the unbalanced error covariance ('''K B<sub>u</sub> K<sup>T</sup>'''). The balance operator imposes a multivariate constraint on the error covariance such that information about unobserved variables is extracted from observed data by establishing balance relationships (i.e., T-S empirical formulas, hydrostatic balance, and geostrophic balance) with other state variables ([[Bibliography#WeaverAT_2005a|Weaver ''et al.'', 2005]]). The balance operator is not used in this tutorial.<div class="para">&nbsp;</div>The standard deviations for [[Options#WC13|WC13]] have already been created for you:<div class="box"><span class="twilightBlue">../Data/wc13_std_i.nc</span>    initial conditions<br /><span class="twilightBlue">../Data/wc13_std_m.nc</span>    model error (if weak constraint)<br /><span class="twilightBlue">../Data/wc13_std_b.nc</span>    open boundary conditions<br /><span class="twilightBlue">../Data/wc13_std_f.nc</span>    surface forcing (wind stress and net heat flux)</div>
#Since we are modeling the error covariance matrix, '''D''', we need to compute the normalization coefficients to ensure that the diagonal elements of the associated correlation matrix '''C''' are equal to unity. There are two methods to compute the normalization coefficients: exact and randomization (an approximation).<div class="para">&nbsp;</div>The exact method is very expensive on large grids. The normalization coefficients are computed by perturbing each model grid cell with a delta function scaled by the area (2D state variables) or volume (3D state variables), and then convolving with the square-root adjoint and tangent linear diffusion operators.<div class="para">&nbsp;</div>The approximate method is cheaper: the normalization coefficients are computed using the randomization approach of [[Bibliography#FisherM_1995a|Fisher and Courtier (1995)]]. The coefficients are initialized with random numbers drawn from a normal distribution with zero mean and unit variance. Then, they are scaled by the inverse square-root of the cell area (2D state variables) or volume (3D state variables) and convolved with the square-root adjoint and tangent linear diffusion operators over a specified number of iterations, [[Variables#Nrandom|Nrandom]].<div class="para">&nbsp;</div>Check the following parameters in the 4D-Var input script [[s4dvar.in]] (see the input script for details):<div class="box">[[Variables#Nmethod|Nmethod]] == 0            ! normalization method: 0=exact (expensive) or 1=approximated (randomization)<br />[[Variables#Nrandom|Nrandom]] == 5000          ! randomization iterations<br /><br />[[Variables#LdefNRM|LdefNRM]] == F F F F       ! create new normalization files<br />[[Variables#LwrtNRM|LwrtNRM]] == F F F F       ! compute and write normalization<br /><br />[[Variables#CnormM|CnormM(isFsur)]] =  T      ! model error covariance, 2D variable at RHO-points<br />[[Variables#CnormM|CnormM(isUbar)]] =  T      ! model error covariance, 2D variable at U-points<br />[[Variables#CnormM|CnormM(isVbar)]] =  T      ! model error covariance, 2D variable at V-points<br />[[Variables#CnormM|CnormM(isUvel)]] =  T      ! model error covariance, 3D variable at U-points<br />[[Variables#CnormM|CnormM(isVvel)]] =  T      ! model error covariance, 3D variable at V-points<br />[[Variables#CnormM|CnormM(isTvar)]] =  T T    ! model error covariance, NT tracers<br /><br />[[Variables#CnormI|CnormI(isFsur)]] =  T      ! IC error covariance, 2D variable at RHO-points<br />[[Variables#CnormI|CnormI(isUbar)]] =  T      ! IC error covariance, 2D variable at U-points<br />[[Variables#CnormI|CnormI(isVbar)]] =  T      ! IC error covariance, 2D variable at V-points<br />[[Variables#CnormI|CnormI(isUvel)]] =  T      ! IC error covariance, 3D variable at U-points<br />[[Variables#CnormI|CnormI(isVvel)]] =  T      ! IC error covariance, 3D variable at V-points<br />[[Variables#CnormI|CnormI(isTvar)]] =  T T    ! IC error covariance, NT tracers<br /><br />[[Variables#CnormB|CnormB(isFsur)]] =  T      ! BC error covariance, 2D variable at RHO-points<br />[[Variables#CnormB|CnormB(isUbar)]] =  T      ! BC error covariance, 2D variable at U-points<br />[[Variables#CnormB|CnormB(isVbar)]] =  T      ! BC error covariance, 2D variable at V-points<br />[[Variables#CnormB|CnormB(isUvel)]] =  T      ! BC error covariance, 3D variable at U-points<br />[[Variables#CnormB|CnormB(isVvel)]] =  T      ! BC error covariance, 3D variable at V-points<br />[[Variables#CnormB|CnormB(isTvar)]] =  T T    ! BC error covariance, NT tracers<br /><br />[[Variables#CnormF|CnormF(isUstr)]] =  T      ! surface forcing error covariance, U-momentum stress<br />[[Variables#CnormF|CnormF(isVstr)]] =  T      ! surface forcing error covariance, V-momentum stress<br />[[Variables#CnormF|CnormF(isTsur)]] =  T T    ! surface forcing error covariance, NT tracer fluxes</div>These normalization coefficients have already been computed for you ('''../Normalization''') using the exact method, since this application has a small grid (54x53x30):<div class="box"><span class="twilightBlue">../Data/wc13_nrm_i.nc</span>    initial conditions<br /><span class="twilightBlue">../Data/wc13_nrm_m.nc</span>    model error (if weak constraint)<br /><span class="twilightBlue">../Data/wc13_nrm_b.nc</span>    open boundary conditions<br /><span class="twilightBlue">../Data/wc13_nrm_f.nc</span>    surface forcing (wind stress and<br />                                          net heat flux)</div>Notice that the switches [[Variables#LdefNRM|LdefNRM]] and [[Variables#LwrtNRM|LwrtNRM]] are all '''false''' (F) since we already computed these coefficients.<div class="para">&nbsp;</div>The normalization coefficients need to be computed only once for a particular application, provided that the grid, land/sea masking (if any), and decorrelation scales ([[Variables#HdecayI|HdecayI]], [[Variables#VdecayI|VdecayI]], [[Variables#HdecayB|HdecayB]], [[Variables#VdecayB|VdecayB]], and [[Variables#HdecayF|HdecayF]]) remain the same. Notice that large spatial changes in the normalization coefficient structure occur near the open boundaries and land/sea masked regions.
#Customize your preferred [[build_Script|build script]] and provide the appropriate values for:
#*Root directory, <span class="salmon">MY_ROOT_DIR</span>
#*ROMS source code, <span class="salmon">MY_ROMS_SRC</span>
#*Fortran compiler, <span class="salmon">FORT</span>
#*MPI flags, <span class="salmon">USE_MPI</span> and <span class="salmon">USE_MPIF90</span>
#*The paths of the MPI, NetCDF, and ARPACK libraries for each compiler are set in [[my_build_paths.sh]]. Notice that you need to provide the correct locations of these libraries for your computer. If you want to skip this section, set <span class="salmon">USE_MY_LIBS</span> to '''no'''.
#Notice that the most important CPP options for this application are specified in the [[build_Script|build script]] instead of <span class="twilightBlue">wc13.h</span>:<div class="box"><span class="twilightBlue">setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DW4DPSAS"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DANA_SPONGE"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DBGQC"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DMINRES"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DRPCG"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DPOSTERIOR_EOFS"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DPOSTERIOR_ERROR_I"<br />setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DTIME_CONV"</span></div>This allows flexibility with different CPP options.<div class="para">&nbsp;</div>For this to work, however, any '''#undef''' directives MUST be avoided in the header file <span class="twilightBlue">wc13.h</span> since it has precedence during C-preprocessing.
#You MUST use the [[build_Script|build script]] to compile.
#Customize the ROMS input script <span class="twilightBlue">roms_wc13.in</span> and specify the appropriate values for the distributed-memory partition. It is set by default to:<div class="box">[[Variables#NtileI|NtileI]] == 2                              ! I-direction partition<br />[[Variables#NtileJ|NtileJ]] == 4                               ! J-direction partition</div>Notice that the adjoint-based algorithms can only be run in parallel using MPI. This is because of the way that the adjoint model is constructed.
#Customize the configuration script [[job_psas|job_psas.sh]] and provide the appropriate location of the [[substitute]] Perl script:<div class="box"><span class="twilightBlue">set SUBSTITUTE=${ROMS_ROOT}/ROMS/Bin/substitute</span></div>This script is distributed with ROMS and is found in the <span class="twilightBlue">ROMS/Bin</span> sub-directory. Alternatively, you can define the ROMS_ROOT environment variable in your <span class="twilightBlue">.cshrc</span> login script. For example, I have:<div class="box"><span class="twilightBlue">setenv ROMS_ROOT /home/arango/ocean/toms/repository/trunk</span></div>
#Execute the configuration script [[job_psas|job_psas.sh]] '''before''' running the model. It copies the required files and creates the <span class="twilightBlue">psas.in</span> input script from the template '''[[s4dvar.in]]'''. This has to be done '''every time''' that you run this application, since we need a clean and fresh copy of the initial conditions and observation files, which are modified by ROMS during execution.
#Run ROMS with data assimilation:<div class="box"><span class="red">mpirun -np 8 romsM roms_wc13.in > & log &</span></div>
#We recommend creating a new subdirectory for each case in this exercise (<span class="twilightBlue">EX3_CONGRAD</span>, <span class="twilightBlue">EX3_MINRES</span>, <span class="twilightBlue">EX3_RPCG</span>, and <span class="twilightBlue">EX4_WEAK</span>), and saving the solution in it for analysis and plotting, to avoid overwriting solutions when experimenting with different parameters. For example:<div class="box">mkdir EX3_CONGRAD<br />mv Build_roms psas.in *.nc log EX3_CONGRAD<br />cp -p romsM roms_wc13.in EX3_CONGRAD</div>where <span class="twilightBlue">log</span> is the ROMS standard output file specified in the previous step.
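The randomization estimate of the normalization coefficients described in the instructions above can be illustrated with a toy one-dimensional example. This is only a sketch, not ROMS code: the 1-2-1 smoother below is a crude stand-in for the square-root diffusion operator, and the grid size and iteration count are illustrative; only the roles of the exact method, the randomization method, and Nrandom correspond to the tutorial.

```python
import math
import random

def sqrt_corr_operator(v, passes=8):
    """Toy square-root correlation operator L: repeated 1-2-1 smoothing
    of a 1D field with closed boundaries (stand-in for the diffusion
    operator of Weaver and Courtier, 2001). L is symmetric here."""
    n = len(v)
    for _ in range(passes):
        v = [0.25 * v[max(i - 1, 0)] + 0.5 * v[i] + 0.25 * v[min(i + 1, n - 1)]
             for i in range(n)]
    return v

n = 40

# Exact method: diagonal of L*L^T from a unit impulse at every grid point,
# one point at a time (this is why the exact method is expensive on large grids).
exact = []
for i in range(n):
    e = [0.0] * n
    e[i] = 1.0
    w = sqrt_corr_operator(e)                 # column i of L
    exact.append(sum(x * x for x in w))       # diag(L L^T)_i since L = L^T

# Randomization method (Fisher and Courtier, 1995): average the squares of
# L applied to Nrandom random vectors with zero mean and unit variance.
random.seed(7)
Nrandom = 2000
acc = [0.0] * n
for _ in range(Nrandom):
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    w = sqrt_corr_operator(v)
    acc = [a + x * x for a, x in zip(acc, w)]
approx = [a / Nrandom for a in acc]

# Normalization coefficient = 1/sqrt of the estimated diagonal variance,
# so that the normalized correlation matrix has unit diagonal.
norm_exact = [1.0 / math.sqrt(d) for d in exact]
norm_rand = [1.0 / math.sqrt(d) for d in approx]

err = max(abs(a - b) / b for a, b in zip(norm_rand, norm_exact))
print("max relative error of randomized coefficients: %.3f" % err)
```

The boundary values of the coefficients differ noticeably from the interior ones, consistent with the remark in step 2 that large spatial changes in the normalization coefficients occur near open boundaries and masked regions.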
 
==Plotting your Results==
 
Several Matlab scripts are provided in the directory <span class="twilightBlue">WC13/plotting</span> which will allow you to plot some of the [[Options#W4DPSAS|4D-PSAS]] output.
 
Recall that [[Options#W4DPSAS|4D-PSAS]] minimizes the cost function given by:
 
<math display="block">
  J=\frac{1}{2}\,\delta{\bf z}^{T}\,{\bf D}^{-1} \delta{\bf z}\,+\,\,
    \frac{1}{2}\,{\left({\bf G}\, \delta{\bf z} - {\bf d}\right)}^{T}\,{\bf R}^{-1}\,
    \left({\bf G}\,\delta{\bf z} - {\bf d}\right) = J_{b} + J_{o}
</math>
 
Plot first the [[Options#W4DPSAS|4D-PSAS]] cost function <math>J</math>, its components <math>J_b</math> and <math>J_o</math>, and the theoretical minimum value <math>J_{min}=N_{obs}/2</math> using the Matlab script <span class="twilightBlue">plot_psas_cost.m</span>.
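The quantities plotted by <span class="twilightBlue">plot_psas_cost.m</span> can be checked against the cost function definition above with a small worked example. The sketch below evaluates <math>J = J_b + J_o</math> for toy diagonal '''D''' and '''R''' matrices; all numbers are illustrative, not WC13 values:

```python
# Toy evaluation of the 4D-Var cost function above with diagonal D and R,
# so the matrix inverses reduce to elementwise division.
dz = [0.5, -0.2, 0.1]          # increment (illustrative values)
D = [1.0, 2.0, 0.5]            # diagonal of the prior covariance D
d = [0.3, -0.4]                # innovation vector
R = [0.25, 0.25]               # diagonal of the observation covariance R
G = [[1.0, 0.0, 1.0],          # toy observation operator G
     [0.0, 1.0, -1.0]]

# Background term: Jb = (1/2) dz^T D^{-1} dz
Jb = 0.5 * sum(z * z / s for z, s in zip(dz, D))

# Observation term: Jo = (1/2) (G dz - d)^T R^{-1} (G dz - d)
Gdz = [sum(g * z for g, z in zip(row, dz)) for row in G]
Jo = 0.5 * sum((m - o) ** 2 / r for m, o, r in zip(Gdz, d, R))

J = Jb + Jo

# Theoretical minimum of the expected cost: Jmin = Nobs / 2
Nobs = len(d)
Jmin_theory = Nobs / 2.0
print(J, Jmin_theory)          # J = 0.145 + 0.2 = 0.345, Jmin = 1.0
```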
 
Next, plot the surface initial conditions increments <math>\delta{\bf x}(t_0)</math> and the surface forcing increments <math>\delta{\bf f}(t)</math> at the initial time using the Matlab script <span class="twilightBlue">plot_psas_increments.m</span>, or the ROMS plotting package scripts <span class="twilightBlue">ccnt_psas_increments.in</span> for horizontal plots at 100 m and <span class="twilightBlue">csec_psas_increments.in</span> for cross-sections along 37&deg;N.
 
==Results==
 
The [[Options#W4DPSAS|4D-PSAS]] cost function value for each inner loop iteration is shown below for various minimization algorithms:
 
{|align="center"
|-
|[[Image:psas_cost_2019.png|400px|thumb|center|<center>'''a)''' 4D-PSAS Cost Function<br />''prior'' saved daily</center>]]
|[[Image:psas_rpcg_cost_2019.png|400px|thumb|center|<center>'''b)''' 4D-PSAS Cost Function, Background Quality Control<br />''prior'' saved daily</center>]]
|[[Image:psas_cost_weak_2019.png|400px|thumb|center|<center>'''c)''' 4D-PSAS Cost Function, Weak Constraint<br />''prior'' saved daily</center>]]
|}
 
The convergence of various minimization algorithms ([[Options#W4DPSAS|PSAS]] default, [[Options#MINRES|MINRES]], and [[Options#RPCG|RPCG]]) is compared with the primal formulation [[Options#I4DVAR|I4DVAR]].
 
{|align="center"
|-
|[[Image:psas_cost_2hour_2019.png|400px|thumb|center|<center>'''a)''' 4D-PSAS Cost Function<br />''prior'' saved every 2 hours</center>]]
|[[Image:psas_rpcg_cost_2hour_2019.png|400px|thumb|center|<center>'''b)''' 4D-PSAS Cost Function, Background Quality Control<br />''prior'' saved every 2 hours</center>]]
|[[Image:psas_cost_weak_2hour_2019.png|400px|thumb|center|<center>'''c)''' 4D-PSAS Cost Function, Weak Constraint<br />''prior'' saved every 2 hours</center>]]
|}
 
In the above plots, the prior nonlinear trajectory is saved every 2 hours instead of daily (NHIS = NTLM = NSFF = NOBC = 4). The convergence of the various minimization algorithms ([[Options#W4DPSAS|PSAS]] default, [[Options#MINRES|MINRES]], and [[Options#RPCG|RPCG]]) is again compared with the primal formulation [[Options#I4DVAR|I4DVAR]].
 
----
 
 
The [[Options#W4DPSAS|4D-PSAS]] initial conditions increments for free-surface (m), surface wind stress components (Pa), and surface net heat flux (W/m<sup>2</sup>) are shown below:
 
{|align="center"
|-
|[[Image:psas_increments_rpcg_fs_2019.png|thumb|200px|<center>'''a)''' Free-surface</center>]]
|[[Image:psas_increments_rpcg_uwind_2019.png|thumb|200px|<center>'''b)''' &tau;<sub>x</sub></center>]]
|[[Image:psas_increments_rpcg_vwind_2019.png|thumb|200px|<center>'''c)''' &tau;<sub>y</sub></center>]]
|[[Image:psas_increments_rpcg_heat_2019.png|thumb|200px|<center>'''d)''' Net Heat Flux</center>]]
|}
 
The [[Options#W4DPSAS|4D-PSAS]] initial conditions increments at the surface for temperature (&deg;C), salinity, and momentum components (m/s) are shown below:
 
{|align="center"
|-
|[[Image:psas_increments_rpcg_sur_T_2019.png|thumb|200px|<center>'''a)''' Temperature</center>]]
|[[Image:psas_increments_rpcg_sur_S_2019.png|thumb|200px|<center>'''b)''' Salinity</center>]]
|[[Image:psas_increments_rpcg_sur_u_2019.png|thumb|200px|<center>'''c)''' U-Momentum</center>]]
|[[Image:psas_increments_rpcg_sur_v_2019.png|thumb|200px|<center>'''d)''' V-Momentum</center>]]
|}
 
The [[Options#W4DPSAS|4D-PSAS]] initial conditions increments at 100m for temperature (&deg;C), salinity, and momentum components (m/s) are shown below:
 
{|align="center"
|-
|[[Image:psas_increments_rpcg_T_2019.png|thumb|200px|<center>'''a)''' Temperature</center>]]
|[[Image:psas_increments_rpcg_S_2019.png|thumb|200px|<center>'''b)''' Salinity</center>]]
|[[Image:psas_increments_rpcg_u_2019.png|thumb|200px|<center>'''c)''' U-Momentum</center>]]
|[[Image:psas_increments_rpcg_v_2019.png|thumb|200px|<center>'''d)''' V-Momentum</center>]]
|}


A cross-section along 37&deg;N for the [[Options#W4DPSAS|4D-PSAS]] initial conditions increments is shown below:

{|align="center"
|-
|[[Image:psas_increments_rpcg_sec_T_2019.png|thumb|200px|<center>'''a)''' Temperature</center>]]
|[[Image:psas_increments_rpcg_sec_S_2019.png|thumb|200px|<center>'''b)''' Salinity</center>]]
|[[Image:psas_increments_rpcg_sec_u_2019.png|thumb|200px|<center>'''c)''' U-Momentum</center>]]
|[[Image:psas_increments_rpcg_sec_v_2019.png|thumb|200px|<center>'''d)''' V-Momentum</center>]]
|}

==References==

The technical description of the algorithms and application used in this tutorial is given in Moore ''et al.'' ([[Bibliography#MooreAM_2010a|2010a]], [[Bibliography#MooreAM_2010b|b]], [[Bibliography#MooreAM_2010c|c]]).



==Model Set-up==

The WC13 model domain is shown in Fig. 1 and has open boundaries along the northern, western, and southern edges of the model domain.

Fig. 1: Model Bathymetry with 37°N Transect and Target Area

In the tutorial, you will perform a 4D-Var data assimilation cycle that spans the period 3-6 January 2004. The 4D-Var control vector δz comprises increments to the initial conditions, δx(t0), surface forcing, δf(t), and open boundary conditions, δb(t). The prior initial conditions, xb(t0), are taken from the sequence of 4D-Var experiments described by Moore et al. (2011b) in which data were assimilated every 7 days during the period July 2002 – December 2004. The prior surface forcing, fb(t), takes the form of surface wind stress, heat flux, and a freshwater flux computed using the ROMS bulk flux formulation and near-surface air data from COAMPS (Doyle et al., 2009). Clamped open boundary conditions are imposed on (u,v) and tracers, and the prior boundary conditions, bb(t), are taken from the global ECCO product (Wunsch and Heimbach, 2007). The free-surface height and vertically integrated velocity components are subject to the usual Chapman and Flather radiation conditions at the open boundaries. The prior surface forcing and open boundary conditions are provided daily and linearly interpolated in time. Similarly, the increments δf(t) and δb(t) are also computed daily and linearly interpolated in time.

The observations assimilated into the model are satellite SST, satellite SSH in the form of a gridded product from Aviso, and hydrographic observations of temperature and salinity collected from Argo floats and during the GLOBEC/LTOP and CalCOFI cruises off the coast of Oregon and southern California, respectively. The observation locations are illustrated in Fig. 2.

Figure 2: WC13 Observations — a) Aviso SSH, b) Blended SST, c) In Situ Temperature, d) In Situ Salinity
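In this notation, the function minimized by 4D-PSAS can be sketched in standard dual-form 4D-Var notation (a schematic summary only, not the exact expression from the ROMS documentation):

:<math>J(\delta{\bf z}) = \frac{1}{2}\,\delta{\bf z}^{T}{\bf D}^{-1}\delta{\bf z} + \frac{1}{2}\,({\bf G}\,\delta{\bf z} - {\bf d})^{T}{\bf R}^{-1}({\bf G}\,\delta{\bf z} - {\bf d})</math>

where '''d''' = '''y''' − H('''x'''<sub>b</sub>) is the innovation vector, '''G''' denotes the tangent linear model sampled at the observation points, '''R''' is the observation error covariance, and '''D''' = diag('''B'''<sub>x</sub>, '''B'''<sub>b</sub>, '''B'''<sub>f</sub>, '''Q''') is the prior error covariance. In the dual formulation the search is carried out over a vector '''w''' with the dimension of the observation vector, obtained by solving ('''G''' '''D''' '''G'''<sup>T</sup> + '''R''') '''w''' = '''d''', after which δ'''z''' = '''D''' '''G'''<sup>T</sup> '''w'''.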


==Important CPP Options==

The following C-preprocessing options are activated in the build script:

W4DPSAS 4D-PSAS driver (observation space)
ANA_SPONGE Analytical enhanced viscosity/diffusion sponge
BGQC Background quality control of observations
MINRES Minimal Residual Method for 4D-Var minimization
RPCG Restricted B-preconditioned Lanczos minimization
POSTERIOR_EOFS Estimate posterior analysis error
POSTERIOR_ERROR_I Estimate initial posterior analysis error
TIME_CONV Weak-constraint 4D-Var time convolution
WC13 Application CPP option

==Input NetCDF Files==

WC13 requires the following input NetCDF files:

Grid File: ../Data/wc13_grd.nc
Nonlinear Initial File: wc13_ini.nc
Forcing File 01: ../Data/coamps_wc13_lwrad_down.nc
Forcing File 02: ../Data/coamps_wc13_Pair.nc
Forcing File 03: ../Data/coamps_wc13_Qair.nc
Forcing File 04: ../Data/coamps_wc13_rain.nc
Forcing File 05: ../Data/coamps_wc13_swrad.nc
Forcing File 06: ../Data/coamps_wc13_Tair.nc
Forcing File 07: ../Data/coamps_wc13_wind.nc
Boundary File: ../Data/wc13_ecco_bry.nc

Initial Conditions STD File: ../Data/wc13_std_i.nc
Model STD File: ../Data/wc13_std_m.nc
Boundary Conditions STD File: ../Data/wc13_std_b.nc
Surface Forcing STD File: ../Data/wc13_std_f.nc
Initial Conditions Norm File: ../Data/wc13_nrm_i.nc
Model Norm File: ../Data/wc13_nrm_m.nc
Boundary Conditions Norm File: ../Data/wc13_nrm_b.nc
Surface Forcing Norm File: ../Data/wc13_nrm_f.nc
Observations File: wc13_obs.nc
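A missing input file is a common cause of a failed run, so it can be worth confirming that the files above are actually present before launching ROMS. The following is a minimal sketch (the helper name check_inputs is illustrative and not part of the ROMS distribution; adjust the file list and paths to your layout):

```shell
#!/bin/sh
# check_inputs: report any of the listed input files that are missing.
# Illustrative helper only; the file names follow the table above.
check_inputs() {
    missing=0
    for f in "$@"; do
        if [ ! -f "$f" ]; then
            echo "MISSING: $f"
            missing=$((missing + 1))
        fi
    done
    echo "$missing file(s) missing"
    return $missing
}

# Example (run from WC13/PSAS):
# check_inputs ../Data/wc13_grd.nc wc13_ini.nc wc13_obs.nc ../Data/wc13_ecco_bry.nc
```

The function returns a nonzero status when anything is missing, so it can guard a job script (`check_inputs ... || exit 1`).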

==Various Scripts and Include Files==

The following files are found in the WC13/PSAS directory after downloading from the ROMS test cases SVN repository:

Readme instructions
build_roms.bash bash shell script to compile application
build_roms.sh csh Unix script to compile application
job_psas.sh job configuration script
roms_wc13.in ROMS standard input script for WC13
s4dvar.in 4D-Var standard input script template
wc13.h WC13 header with CPP options

==Instructions==

To run this application you need to take the following steps:

  1. We need to run the model application for a period that is long enough to compute meaningful circulation statistics, such as the mean and standard deviation of all prognostic state variables (zeta, u, v, T, and S). The standard deviations are written to NetCDF files and are read by the 4D-Var algorithm to convert modeled error correlations to error covariances. The error covariance matrix, D = diag(B<sub>x</sub>, B<sub>b</sub>, B<sub>f</sub>, Q), is very large and not well known. Each B is modeled as the solution of a diffusion equation, as in Weaver and Courtier (2001), and is factorized as B = K Σ C Σ<sup>T</sup> K<sup>T</sup>, where C is a univariate correlation matrix, Σ is a diagonal matrix of error standard deviations, and K is a multivariate balance operator.
     
    In this application, we need standard deviations for initial conditions, surface forcing (ADJUST_WSTRESS and ADJUST_STFLUX), and open boundary conditions (ADJUST_BOUNDARY). If the balance operator is activated (BALANCE_OPERATOR and ZETA_ELLIPTIC), the standard deviations for the initial and boundary conditions error covariance are in terms of the unbalanced error covariance (K B<sub>u</sub> K<sup>T</sup>). The balance operator imposes a multivariate constraint on the error covariance such that the unobserved variable information is extracted from observed data by establishing balance relationships (i.e., T-S empirical formulas, hydrostatic balance, and geostrophic balance) with other state variables (Weaver et al., 2005). The balance operator is not used in the tutorial.
     
    The standard deviations for WC13 have already been created for you:
    ../Data/wc13_std_i.nc initial conditions
    ../Data/wc13_std_m.nc model error (if weak constraint)
    ../Data/wc13_std_b.nc open boundary conditions
    ../Data/wc13_std_f.nc surface forcing (wind stress and net heat flux)
  2. Since we are modeling the error covariance matrix, D, we need to compute the normalization coefficients to ensure that the diagonal elements of the associated correlation matrix C are equal to unity. There are two methods to compute normalization coefficients: exact and randomization (an approximation).
     
    The exact method is very expensive on large grids. The normalization coefficients are computed by perturbing each model grid cell with a delta function scaled by the area (2D state variables) or volume (3D state variables), and then by convolving with the square-root adjoint and tangent linear diffusion operators.
     
    The approximate method is cheaper: the normalization coefficients are computed using the randomization approach of Fisher and Courtier (1995). The coefficients are initialized with random numbers drawn from a normal distribution with zero mean and unit variance. Then, they are scaled by the inverse square-root of the cell area (2D state variables) or volume (3D state variables) and convolved with the square-root adjoint and tangent diffusion operators over a specified number of iterations, Nrandom.
     
    Check the following parameters in the 4D-Var input script s4dvar.in (see input script for details):
    Nmethod == 0  ! normalization method: 0=Exact (expensive) or 1=Approximated (randomization)
    Nrandom == 5000  ! randomization iterations

    LdefNRM == T T T T  ! Create new normalization files
    LwrtNRM == T T T T  ! Compute and write normalization

    CnormM(isFsur) = T  ! model error covariance, 2D variable at RHO-points
    CnormM(isUbar) = T  ! model error covariance, 2D variable at U-points
    CnormM(isVbar) = T  ! model error covariance, 2D variable at V-points
    CnormM(isUvel) = T  ! model error covariance, 3D variable at U-points
    CnormM(isVvel) = T  ! model error covariance, 3D variable at V-points
    CnormM(isTvar) = T T  ! model error covariance, NT tracers

    CnormI(isFsur) = T  ! IC error covariance, 2D variable at RHO-points
    CnormI(isUbar) = T  ! IC error covariance, 2D variable at U-points
    CnormI(isVbar) = T  ! IC error covariance, 2D variable at V-points
    CnormI(isUvel) = T  ! IC error covariance, 3D variable at U-points
    CnormI(isVvel) = T  ! IC error covariance, 3D variable at V-points
    CnormI(isTvar) = T T  ! IC error covariance, NT tracers

    CnormB(isFsur) = T  ! BC error covariance, 2D variable at RHO-points
    CnormB(isUbar) = T  ! BC error covariance, 2D variable at U-points
    CnormB(isVbar) = T  ! BC error covariance, 2D variable at V-points
    CnormB(isUvel) = T  ! BC error covariance, 3D variable at U-points
    CnormB(isVvel) = T  ! BC error covariance, 3D variable at V-points
    CnormB(isTvar) = T T  ! BC error covariance, NT tracers

    CnormF(isUstr) = T  ! surface forcing error covariance, U-momentum stress
    CnormF(isVstr) = T  ! surface forcing error covariance, V-momentum stress
    CnormF(isTsur) = T T  ! surface forcing error covariance, NT tracers fluxes
    These normalization coefficients have already been computed for you (../Normalization) using the exact method since this application has a small grid (54x53x30):
    ../Data/wc13_nrm_i.nc initial conditions
    ../Data/wc13_nrm_m.nc model error (if weak constraint)
    ../Data/wc13_nrm_b.nc open boundary conditions
    ../Data/wc13_nrm_f.nc surface forcing (wind stress and
    net heat flux)
    Notice that when running this tutorial the switches LdefNRM and LwrtNRM should all be set to false (F) since these coefficients have already been computed for you.
     
    The normalization coefficients need to be computed only once for a particular application provided that the grid, land/sea masking (if any), and decorrelation scales (HdecayI, VdecayI, HdecayB, VdecayB, and HdecayF) remain the same. Notice that large spatial changes in the normalization coefficient structure are observed near the open boundaries and land/sea masking regions.
  3. Customize your preferred build script and provide the appropriate values for:
    • Root directory, MY_ROOT_DIR
    • ROMS source code, MY_ROMS_SRC
    • Fortran compiler, FORT
    • MPI flags, USE_MPI and USE_MPIF90
    • The paths of the MPI, NetCDF, and ARPACK libraries for each compiler are set in my_build_paths.sh. Notice that you need to provide the correct locations of these libraries for your computer. If you want to ignore this section, set USE_MY_LIBS to no.
  4. Notice that the most important CPP options for this application are specified in the build script instead of wc13.h:
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DW4DPSAS"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DANA_SPONGE"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DBGQC"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DMINRES"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DRPCG"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DPOSTERIOR_EOFS"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DPOSTERIOR_ERROR_I"
    setenv MY_CPP_FLAGS "${MY_CPP_FLAGS} -DTIME_CONV"
    This is to allow flexibility with different CPP options.
     
    For this to work, however, any #undef directives MUST be avoided in the header file wc13.h since it has precedence during C-preprocessing.
  5. You MUST use the build script to compile.
  6. Customize the ROMS input script roms_wc13.in and specify the appropriate values for the distributed-memory partition. It is set by default to:
    NtileI == 2  ! I-direction partition
    NtileJ == 4  ! J-direction partition
    Notice that the adjoint-based algorithms can only be run in parallel using MPI. This is because of the way that the adjoint model is constructed.
  7. Customize the configuration script job_psas.sh and provide the appropriate place for the substitute Perl script:
    set SUBSTITUTE=${ROMS_ROOT}/ROMS/Bin/substitute
    This script is distributed with ROMS and is found in the ROMS/Bin sub-directory. Alternatively, you can define the ROMS_ROOT environment variable in your .cshrc login script. For example, I have:
    setenv ROMS_ROOT /home/arango/ocean/toms/repository/trunk
  8. Execute the configuration script job_psas.sh before running the model. It copies the required files and creates the psas.in input script from the template s4dvar.in. This has to be done every time that you run this application since ROMS modifies the initial conditions and observation files during execution and a clean, fresh copy of each is needed.
  9. Run ROMS with data assimilation:
    mpirun -np 8 romsM roms_wc13.in > & log &
  10. We recommend creating a new subdirectory for each case in this exercise (EX3_CONGRAD, EX3_MINRES, EX3_RPCG, and EX4_WEAK), and saving the solution in it for analysis and plotting, to avoid overwriting solutions when experimenting with different parameters. For example:
    mkdir EX3_CONGRAD
    mv Build_roms psas.in *.nc log EX3_CONGRAD
    cp -p romsM roms_wc13.in EX3_CONGRAD
    where log is the ROMS standard output specified in the previous step.
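The archiving in step 10 can be wrapped in a small helper so that each experiment lands in its own case directory. This is a sketch only (the function name archive_case is illustrative; the file list is the one from step 10):

```shell
#!/bin/sh
# archive_case: move the products of a finished run into a case directory
# (step 10 above) so the next experiment does not overwrite them.
archive_case() {
    case_dir=$1
    mkdir -p "$case_dir"
    # Build_roms, psas.in, the NetCDF output, and the log are moved;
    # the executable and standard input are copied so they can be reused.
    mv Build_roms psas.in *.nc log "$case_dir" 2>/dev/null || true
    cp -p romsM roms_wc13.in "$case_dir" 2>/dev/null || true
    echo "archived into $case_dir"
}

# Example:
# archive_case EX3_CONGRAD
```

Remember that after archiving you must rerun job_psas.sh (step 8) before the next experiment, since the initial conditions and observation files were moved away.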
