Nesting + MPI parallel

Bug reports, work arounds and fixes

Moderators: arango, robertson

Tsubono
Posts: 4
Joined: Fri Aug 11, 2006 10:58 pm
Location: Criepi

Nesting + MPI parallel

#1 Post by Tsubono »

Hi,

I ran a test case using a nested (refinement) model with MPI parallelization,
but the run blows up at the first time step.

I found what may be a bug in Nesting.F of ROMS revision 674 when the application runs in parallel.
It is in the calculation of cff1 for the mass flux at the physical boundaries in SUBROUTINE put_refine2d.

=====cff1: Lines 3829, 3853, 3877 and 3901 in nesting.F:=====

western_edge: cff1=GRID(ng)%on_u(Istr,j)/GRID(dg)%on_u(Ic,Jc)
eastern_edge: cff1=GRID(ng)%on_u(Iend+1,j)/GRID(dg)%on_u(Ic,Jc)
southern_edge: cff1=GRID(ng)%om_v(i,Jstr)/GRID(dg)%om_v(Ic,Jc)
northern_edge: cff1=GRID(ng)%om_v(i,Jend+1)/GRID(dg)%om_v(Ic,Jc)
========================================================

These cff1 calculations are fine when running on a single CPU.

But with MPI parallelization, I think the tile that holds GRID(ng)%on_u(Istr,j), etc., is not the same as the tile that holds GRID(dg)%on_u(Ic,Jc), etc.
The DO loop that computes cff1 runs over a tile of the refined grid (ng), so it has the refined-grid variables such as GRID(ng)%on_u(Istr,j), but it usually does not have the donor-grid variables GRID(dg)%on_u(Ic,Jc), which live on a tile of the donor grid (dg).
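
To illustrate the problem, here is a small standalone sketch (not ROMS code; the tile bounds, refinement ratio, corner index and fine-to-coarse index mapping are made-up values, only for illustration). It shows how a donor-grid index Ic computed from a refined-grid boundary point can fall outside the donor-grid tile owned by the local MPI rank, so GRID(dg)%on_u(Ic,Jc) is not in local memory:

=====standalone sketch: donor index outside the local donor tile=====

      program tile_mismatch_sketch
      implicit none
!  Hypothetical tile bounds for ONE MPI rank (made-up numbers).
      integer, parameter :: IstrR = 1,  IendR = 60    ! refined-grid tile, i-range
      integer, parameter :: IstrD = 1,  IendD = 30    ! donor-grid tile,   i-range
      integer, parameter :: RefineScale = 3           ! refinement ratio
      integer, parameter :: Ileft = 25                ! donor i of the refined-patch corner
      integer :: i, Ic

      do i = IstrR, IendR
        Ic = Ileft + (i - IstrR)/RefineScale          ! hypothetical fine-to-coarse map
        if ((Ic.lt.IstrD).or.(Ic.gt.IendD)) then
          print '(a,i4,a,i4,a)', 'refined i =', i, '  needs donor Ic =', Ic,  &
                '  -> not on the local donor tile'
        end if
      end do
      end program tile_mismatch_sketch

=====================================================================

In this sketch every refined-grid point that maps to Ic > 30 needs donor metrics that belong to another rank, which is exactly the situation I suspect in put_refine2d.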

I tried to find a simple way to fix this, but it is too difficult and complicated for me.
Could you please suggest how to solve it?
Thank you for taking a look.

P.S.
As a quick test I replaced the actual code with cff1=1/RefineScale(dg), because my domain is almost Cartesian with uniform grid spacing.
A short run then goes well, but I know this change is not correct in general.
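
For reference, here is why that substitution happens to work on this kind of grid, again as a standalone sketch (not ROMS code; the cell width and refinement ratio are made-up values): on a uniform Cartesian grid the metric ratio on_u(fine)/on_u(coarse) reduces to 1/RefineScale, but on a stretched or curvilinear grid it does not.

=====standalone sketch: uniform-grid approximation of cff1=====

      program cff1_workaround_sketch
      implicit none
      integer,  parameter :: r8 = selected_real_kind(12)
      integer,  parameter :: RefineScale = 3            ! hypothetical refinement ratio
      real(r8), parameter :: on_u_coarse = 3000.0_r8    ! donor-grid cell width (m, made up)
      real(r8) :: on_u_fine, cff1_exact, cff1_approx

      on_u_fine   = on_u_coarse/REAL(RefineScale,r8)    ! refined-grid cell width
      cff1_exact  = on_u_fine/on_u_coarse               ! metric ratio, as in the lines quoted above
      cff1_approx = 1.0_r8/REAL(RefineScale,r8)         ! the temporary substitution
      print *, 'exact  cff1 =', cff1_exact
      print *, 'approx cff1 =', cff1_approx
      end program cff1_workaround_sketch

===============================================================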
