Hi Everybody,

I have coded a new sigma formulation into ROMS in which theta_b is no longer a constant (theta_s, Tcline, and hc are all constant as before) but a function of depth (h): theta_b = A*tanh(c_cut*(h - h_cut)) + B, where A, B, h_cut, and c_cut are all constants. This means that wherever the bottom bathymetry (h) varies over the grid, so do the sigma stretching curves Cs_r (and Cs_w). I like this formulation because in deep water it refines the grid at both the surface and the bottom, while in shallow regions it refines mainly at the bottom, thereby resolving the bottom boundary layer; furthermore, it also prevents excessive heating (due to surface heat fluxes) of the shallow regions of the model domain.
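For concreteness, the depth-dependent theta_b can be sketched numerically as below (a minimal illustration in plain Python; the constants A, B, h_cut, and c_cut are hypothetical placeholder values, not the values used in the actual setup):

```python
import math

# Hypothetical constants -- placeholders for illustration only.
A, B = 0.4, 0.5            # amplitude and offset of the tanh profile
h_cut, c_cut = 50.0, 0.05  # transition depth (m) and sharpness (1/m)

def theta_b_of_h(h):
    """Depth-dependent bottom stretching parameter:
    theta_b = A*tanh(c_cut*(h - h_cut)) + B."""
    return A * math.tanh(c_cut * (h - h_cut)) + B

# theta_b varies smoothly between B - A (shallow limit) and
# B + A (deep limit), crossing B exactly at h = h_cut.
print(theta_b_of_h(5.0), theta_b_of_h(h_cut), theta_b_of_h(500.0))
```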

A colleague of mine who put a similar sigma formulation into his ocean model informed me that when Cs_r (and Cs_w) are no longer constant in x-y (or i-j) space, the flow equations coded in the model also need to be modified accordingly. Is this true, in particular for ROMS? Do theta_s, theta_b, and Tcline in ROMS need to be global constants?

Thank you.

## Will new sigma formulation give problems?

### RE: Will new sigma formulation give problems?

No, changes of this kind will not cause any conflict.

All ROMS codes are written in such a way that any physical code interacts only with arrays z_r and z_w and makes no assumption about where these arrays are coming from. In fact, the only places in the code which are aware of the S-coordinate are set_scoord and set_depth, which compute z_r and z_w. In principle, these two subroutines can be modified to be virtually anything, and the rest of the code is still OK with it.

THE ONLY DELICATE issue here is the disturbance of the vertical coordinate system by the free-surface elevation. In ROMS (unlike SCRUM) this is done in two stages: first set the "unperturbed" coordinate, which corresponds to zero free surface; then the whole column is perturbed in a very specific way, see set_depth:


```
z_w0= ....... <--- whatever coordinate transformation
z_w(i,j,k)=z_w0+Zt_avg1(i,j)*(1.0_r8+z_w0/h(i,j))
```

The second line must be exactly as above. Any inconsistency here results in a wrong computation of the vertical velocity and a loss of constancy preservation for tracers.
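Why this particular form preserves constancy can be checked numerically. The sketch below (plain Python, with hypothetical values for h, zeta, and N) applies the set_depth perturbation and confirms that the bottom stays at z = -h, the top interface lands at z = zeta, and every layer thickness is scaled by the same factor (h + zeta)/h:

```python
# Minimal numerical check of the two-stage vertical coordinate
# (hypothetical values; not taken from any particular grid).
h = 100.0      # bottom depth (positive, m)
zeta = 2.0     # free-surface elevation (m)
N = 10         # number of vertical levels

# Stage 1: any "unperturbed" interface depths z_w0 with z_w0[0] = -h
# and z_w0[N] = 0; uniform spacing is used here purely for illustration.
z_w0 = [-h + k * h / N for k in range(N + 1)]

# Stage 2: perturb the whole column exactly as in set_depth:
#   z_w = z_w0 + zeta*(1 + z_w0/h)
z_w = [z0 + zeta * (1.0 + z0 / h) for z0 in z_w0]

print(z_w[0], z_w[-1])  # -> -100.0 2.0

# Every layer thickness is stretched by the same factor (h+zeta)/h,
# which is what keeps a spatially constant tracer exactly constant.
scale = (h + zeta) / h
thick0 = [z_w0[k + 1] - z_w0[k] for k in range(N)]
thick = [z_w[k + 1] - z_w[k] for k in range(N)]
```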

### Re: Will new sigma formulation give problems?

Hi,

I want to change my theta_b with depth just as "lanerolle" did in the past. Could you by chance send me your code, or explain how you did it?

Thanks,

Mehmet



### identification standard for vertical coordinates

It reminds me that we, as a community, should come up with an identification standard for different types of vertical coordinates, so that pre- and post-processing software can recognize them correctly and automatically.

I personally have not used the original S-coordinate for quite a while now, more than a year, and consider it obsolete because of the inherent limitation "hc < hmin", where "hc" is intended to be the thermocline depth and "hmin" is the minimum topographic depth. Instead I use


```
z^{(0)} = h * [s*hc + Cs(s)*h] / [h + hc]
```

which is subsequently perturbed by the free surface in the standard way

```
z = z^{(0)} + zeta * [1 + z^{(0)}/h].
```

Above: h = h(x,y) is the bottom topography (positive); z^{(0)} is the unperturbed depth (negative) as a function of (x,y,s); "s" is the sigma coordinate, -1 < s < 0, where, as usual, s = 0 means the surface and s = -1 the bottom; "Cs(s)" is a nonlinear stretching function, typically Cs(s) = sinh(theta_s*s)/sinh(theta_s), but consider others as well; "hc" is the expected thermocline depth, typically chosen as 100 to 250 meters. Note that the minimum-depth limitation no longer applies, and it is OK to have hmin = 25 meters while hc = 250 meters in a Pacific grid.

The above results in nearly uniform grid spacing above "hc", with only mild topographic dependency of the top-most grid boxes in deep areas.
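A quick numerical sketch of this transformation (plain Python; theta_s and the depths below are hypothetical example values) confirms that the bottom interface sits exactly at z = -h, and that the transform remains well defined when hc exceeds hmin:

```python
import math

theta_s = 5.0  # hypothetical stretching parameter
hc = 250.0     # "thermocline" depth (m) -- larger than hmin = 25 m below

def Cs(s):
    """Nonlinear stretching function Cs(s) = sinh(theta_s*s)/sinh(theta_s)."""
    return math.sinh(theta_s * s) / math.sinh(theta_s)

def z0(s, h):
    """Unperturbed depth: z^{(0)} = h*(s*hc + Cs(s)*h)/(h + hc)."""
    return h * (s * hc + Cs(s) * h) / (h + hc)

# hmin = 25 m is fine even though hc = 250 m; the denominator h + hc
# never vanishes, and s = -1 always maps to the bottom z = -h:
for h in (25.0, 4000.0):
    print(h, z0(-1.0, h))  # bottom interface is exactly -h
```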

Since we want to keep the vertical transformation open, I introduced the following set of rules for netCDF files:

1. Introduce a global attribute called "VertCoordType" to identify the transformation (implying that a 3-letter code is assigned to each transform), so that post-processing software can read it and interpret the data correctly.

2. Keep only the minimal information sufficient to identify the transformation; for example, ncdump on a history file reads


```
global attributes:
...
:VertCoordType = "NEW" ;
:hc = 250.f ;
:Cs_w = -1.f, -0.8290268f, ...., 0.f ;
:Cs_r = -0.9105092f, ...., -0.00046545f ;
....
```

where:

(i) it is my preference to store things like "Cs_w" and "Cs_r" as global attributes rather than variables, because it requires fewer netCDF calls (AGRIF/RomsTools by Pierrick follow the same convention, which dates back to 2001, but Rutgers ROMS stores "Cs_r,w" as variables);

(ii) the non-stretched coordinates "s_r" and "s_w" are no longer stored, because that is silly;

(iii) "hc" is stored because it is necessary for the transformation above;

(iv) "theta_s" and "theta_b", which are used to construct "Cs_w" and "Cs_r", may or may not be stored, for human information purposes only: post-processing software MUST NOT RELY on them. Instead, it must read "Cs_w" and "Cs_r" as arrays of 0:N and N values and use them [NOT "theta_s,b"(!)] to reconstruct the transformation. The reason is that the particular functional form of "Cs_r,w" may change without notice at any moment, but if post-processing software reads the arrays, rather than trying to reproduce them via "theta_s,b" (hence relying on specific formulas), the transformation can still be interpreted correctly.
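Under these rules, a reader only ever needs "hc" and the stored "Cs_w" array. Below is a minimal sketch of such a reader in plain Python, using a stand-in dictionary in place of real netCDF attribute reads and made-up Cs_w values; it assumes the non-stretched coordinate is uniform (which is why storing "s_w" would be redundant) and never touches theta_s or theta_b:

```python
# Stand-in for the global attributes of a history file (values are
# made up for illustration; a real reader would fetch them with its
# netCDF library of choice).
attrs = {
    "VertCoordType": "NEW",
    "hc": 250.0,
    "Cs_w": [-1.0, -0.6, -0.3, -0.1, 0.0],  # 0:N values, bottom to surface
}

def z_w(attrs, h, zeta=0.0):
    """Reconstruct interface depths from stored Cs_w and hc only:
    z0 = h*(s*hc + Cs*h)/(h + hc), then z = z0 + zeta*(1 + z0/h)."""
    hc, cs_w = attrs["hc"], attrs["Cs_w"]
    N = len(cs_w) - 1
    out = []
    for k, cs in enumerate(cs_w):
        s = -1.0 + k / N  # uniform non-stretched coordinate, -1..0
        z0 = h * (s * hc + cs * h) / (h + hc)
        out.append(z0 + zeta * (1.0 + z0 / h))
    return out

depths = z_w(attrs, h=1000.0, zeta=0.5)
```

Because the stored Cs_w values carry the whole stretching, this reader keeps working even if the functional form behind Cs_w changes without notice.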