Opened 6 years ago

Closed 6 years ago

Last modified 6 years ago

#778 closed upgrade (Done)

Important: Improved computational efficiency

Reported by: arango
Owned by:
Priority: major
Milestone: Release ROMS/TOMS 3.7
Component: Nonlinear
Version: 3.7
Keywords:
Cc:

Description (last modified by arango)

Several computational aspects affecting efficiency were addressed:

  • Modified routine netcdf_create to set NOFILL mode, which improves performance in serial I/O applications running on a large number of CPUs:
    !
    !  Set NOFILL mode to enhance performance.
    !
          IF (exit_flag.eq.NoError) THEN
            status=nf90_set_fill(ncid, nf90_nofill, OldFillMode)
            IF (FoundError(status, nf90_noerr, __LINE__,                    &
         &                 __FILE__ //                                      &
         &                 ", netcdf_create")) THEN
              IF (Master) WRITE (stdout,10) TRIM(ncname), TRIM(SourceFile)
              exit_flag=3
              ioerror=status
            END IF
          END IF
    
    This changes the library's default behavior of prefilling variables with fill values, which are then overwritten when actual data are written into the file.

Notice that this feature may not be available (or even needed) in future releases of the NetCDF library.

Many thanks to Jiangtao Xu at NOAA for bringing this to my attention.
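
For context, here is a minimal stand-alone sketch of the same idea outside ROMS; the program, file, and variable names are hypothetical, and error checking is omitted for brevity:

          PROGRAM nofill_example
    !
    !  Minimal sketch: create a NetCDF file, switch it to NOFILL mode so
    !  the library does not prefill variables, then define and write data.
    !
          USE netcdf
    !
          implicit none
    !
          integer :: ncid, dimid, varid, OldFillMode, status
          real(8) :: values(10)
    !
          values=1.0d0
          status=nf90_create('example.nc', nf90_clobber, ncid)
          status=nf90_set_fill(ncid, nf90_nofill, OldFillMode)
          status=nf90_def_dim(ncid, 'record', 10, dimid)
          status=nf90_def_var(ncid, 'MyVar', nf90_double, (/dimid/), varid)
          status=nf90_enddef(ncid)
          status=nf90_put_var(ncid, varid, values)
          status=nf90_close(ncid)
          END PROGRAM nofill_example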

  • Restricted the allocation of 4D-Var variable TLmodVal_S(Mobs,Ninner,Nouter) in mod_fourdvar.F to the master node:
         IF (Master) THEN                                ! Needed only
           allocate ( TLmodVal_S(Mobs,Ninner,Nouter) )   ! by master. It
           TLmodVal_S = IniVal                           ! is a memory hog
         END IF                                          ! in other nodes
    

It is used by the 4D-Var minimization solver (congrad.F or rpcg_lanczos.F), which is executed only by the master process. If the number of observations is large, this array becomes a memory hog and a bottleneck for fine-resolution grids running in multi-node MPI applications.

Many thanks to Brian Powell for reporting this problem.
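
With the allocation now restricted to the master node, any code that touches TLmodVal_S must be fenced with the Master switch. A minimal sketch of the pattern (the loop body is illustrative only, not the actual congrad.F code) is:

    !
    !  Only the master process owns TLmodVal_S, so every access to it is
    !  guarded.  The assignment below is for illustration only.
    !
          IF (Master) THEN
            DO iobs=1,Mobs
              TLmodVal_S(iobs,inner,outer)=TLmodVal(iobs)
            END DO
          END IF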

  • Removed the MPI broadcasting of the TLmodVal_S array in congrad.F and rpcg_lanczos.F since the array is unallocated on the other MPI nodes.
  • The reading of the array TLmodVal_S in the observation impact and observation sensitivity drivers (obs_sen_w4dpsas.h, obs_sen_w4dvar.h) and in the array modes driver (array_modes_w4dvar.h) was modified to avoid MPI broadcasting of its values to the other nodes in the communicator group, since the variable is unallocated there:
            CALL netcdf_get_fvar (ng, iTLM, LCZ(ng)%name, 'TLmodVal_S',     &
         &                        TLmodVal_S,                               &
         &                        broadcast = .FALSE.)   ! Master use only
            IF (FoundError(exit_flag, NoError, __LINE__,                    &
         &                 __FILE__)) RETURN
    
    
  • Modified module procedure netcdf_get_fvar in mod_netcdf.F to include the optional broadcast argument, which controls whether the read values are broadcast to all members of the communicator in distributed-memory applications. For example, we now have:
          SUBROUTINE netcdf_get_fvar_1d (ng, model, ncname, myVarName, A,   &
         &                               ncid, start, total, broadcast,     &
         &                               min_val, max_val)
    !
    !=======================================================================
    !                                                                      !
    !  This routine reads requested floating-point 1D-array variable from  !
    !  specified NetCDF file.                                              !
    !                                                                      !
    !  On Input:                                                           !
    !                                                                      !
    !     ng           Nested grid number (integer)                        !
    !     model        Calling model identifier (integer)                  !
    !     ncname       NetCDF file name (string)                           !
    !     myVarName    Variable name (string)                              !
    !     ncid         NetCDF file ID (integer, OPTIONAL)                  !
    !     start        Starting index where the first of the data values   !
    !                    will be read along each dimension (integer,       !
    !                    OPTIONAL)                                         !
    !     total        Number of data values to be read along each         !
    !                    dimension (integer, OPTIONAL)                     !
    !     broadcast    Switch to broadcast read values from root to all    !
    !                    members of the communicator in distributed-       !
    !                    memory applications (logical, OPTIONAL,           !
    !                    default=TRUE)                                     !
    !                                                                      !
    !  On Output:                                                          !
    !                                                                      !
    !     A            Read 1D-array variable (real)                       !
    !     min_val      Read data minimum value (real, OPTIONAL)            !
    !     max_val      Read data maximum value (real, OPTIONAL)            !
    !                                                                      !
    !  Examples:                                                           !
    !                                                                      !
    !    CALL netcdf_get_fvar (ng, iNLM, 'file.nc', 'VarName', fvar)       !
    !    CALL netcdf_get_fvar (ng, iNLM, 'file.nc', 'VarName', fvar(0:))   !
    !    CALL netcdf_get_fvar (ng, iNLM, 'file.nc', 'VarName', fvar(:,1))  !
    !                                                                      !
    !=======================================================================
    !
    
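
A sketch of how such an optional switch can be honored inside the routine follows. The local logical Lbroadcast (declared elsewhere in the routine) is a hypothetical name, and the broadcast step assumes the usual ROMS distributed-memory utility mp_bcastf:

    !
    !  Broadcast the values read by the master to all members of the
    !  communicator unless the caller explicitly turned it off.
    !
          Lbroadcast=.TRUE.
          IF (PRESENT(broadcast)) Lbroadcast=broadcast
    # ifdef DISTRIBUTE
          IF (Lbroadcast) THEN
            CALL mp_bcastf (ng, model, A)
          END IF
    # endif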
  • Modified the fine-tuning of input parameter nRST in read_phypar.F to allow writing several initialization records within lagged data assimilation windows:
    #if defined IS4DVAR || defined W4DPSAS || defined W4DVAR
    !
    !  Ensure that the restart file is written at least at the end of
    !  the run.  In sequential data assimilation the restart file can
    !  be used as the first guess for the next assimilation cycle.
    !  Notice that the DAINAME file can also be used for that purpose.
    !  However, in lagged data assimilation windows, "nRST" can be set
    !  to a value less than "ntimes" (say, daily) and "LcycleRST" set
    !  to false, so there are several possible initialization records
    !  for the next assimilation cycle.
    !
            IF (nRST(ng).gt.ntimes(ng)) THEN
              nRST(ng)=ntimes(ng)
            END IF
     #endif
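
As a purely illustrative example (the numbers below are hypothetical and not part of this change), a lagged assimilation window might be configured as follows, in which case the check above does nothing; it only triggers when nRST exceeds ntimes:

    !  Hypothetical lagged-window configuration, for illustration only:
    !
    !    ntimes(ng) = 96      time steps per assimilation cycle
    !    nRST(ng)   = 24      restart record every 24 steps (4 per cycle)
    !    LcycleRST  = F       keep all records instead of overwriting
    !
    !  With nRST(ng) = 120 instead, the check resets it to 96 so that at
    !  least one initialization record is written at the end of the cycle.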
    

Many thanks to Jiangtao Xu at NOAA for bringing this to my attention.

  • If ROMS_STDOUT is activated, make sure that the standard output file is opened only by the master process in inp_par.F:
    #ifdef ROMS_STDOUT
    !
    !  Change the default Fortran standard output unit so that ROMS run
    !  information is directed to a file.  This is advantageous in coupled
    !  applications to keep ROMS information separate from that of the
    !  other models.
    !
           stdout=20                     ! overwrite Fortran default unit 6
    !
           IF (Master) THEN
             OPEN (stdout, FILE='log.roms', FORM='formatted',               &
         &         STATUS='replace')
           END IF
    #endif
    
    This also requires modifying the clock_on and clock_off critical regions in timers.F:
    # ifdef ROMS_STDOUT
              IF (.not.allocated(Pids)) THEN
                allocate ( Pids(numthreads) )
                Pids=0
              END IF
              Pids(MyRank+1)=proc(0,MyModel,ng)
              CALL mp_collect (ng, model, numthreads, Pspv, Pids)
              IF (Master) THEN
                DO node=1,numthreads
                  WRITE (stdout,10) ' Node #', node-1,                      &
         &                      ' (pid=',Pids(node),') is active.'
                END DO
              END IF
              IF (allocated(Pids)) deallocate (Pids)
    # else
              WRITE (stdout,10) ' Node #', MyRank,                          &
         &                      ' (pid=',proc(0,MyModel,ng),') is active.'
    # endif
    
    and
    # ifdef ROMS_STDOUT
              IF (.not.allocated(Tend)) THEN
                allocate ( Tend(numthreads) )
                Tend=0.0_r8
              END IF
              Tend(MyRank+1)=Cend(region,MyModel,ng)
              CALL mp_collect (ng, model, numthreads, Tspv, Tend)
              IF (Master) THEN
                DO node=1,numthreads
                   WRITE (stdout,10) ' Node   #', node-1,                   &
         &                           ' CPU:', Tend(node)
                END DO
              END IF
              IF (allocated(Tend)) deallocate (Tend)
    # else
              WRITE (stdout,10) ' Node   #', MyRank, ' CPU:',               &
         &                      Cend(region,MyModel,ng)
    # endif
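
ROMS_STDOUT is activated like any other C-preprocessing option, typically in the application header file; the standard output then goes to the log.roms file shown above. For example (the comment is illustrative):

    #define ROMS_STDOUT        /* redirect ROMS standard output to log.roms */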
    
    

Change History (3)

comment:1 by arango, 6 years ago

Resolution: Done
Status: new → closed

comment:2 by arango, 6 years ago

Description: modified (diff)

comment:3 by arango, 6 years ago

Description: modified (diff)