diff --git a/doc/source/user_guide/ug_implementation.rst b/doc/source/user_guide/ug_implementation.rst
index b8414f8fd..949d6c34f 100644
--- a/doc/source/user_guide/ug_implementation.rst
+++ b/doc/source/user_guide/ug_implementation.rst
@@ -132,11 +132,11 @@
 This is shown in Figure :ref:`fig-Cgrid`. The user has several ways to
 initialize the grid: *popgrid* reads grid lengths and other parameters
 for a nonuniform grid (including tripole and regional grids), and
 *rectgrid* creates a regular rectangular grid.
-The input files **global\_gx3.grid** and **global\_gx3.kmt** contain the
+The input files **global_gx3.grid** and **global_gx3.kmt** contain the
 :math:`\left<3^\circ\right>` POP grid and land mask;
-**global\_gx1.grid** and **global\_gx1.kmt** contain the
-:math:`\left<1^\circ\right>` grid and land mask, and **global\_tx1.grid**
-and **global\_tx1.kmt** contain the :math:`\left<1^\circ\right>` POP
+**global_gx1.grid** and **global_gx1.kmt** contain the
+:math:`\left<1^\circ\right>` grid and land mask, and **global_tx1.grid**
+and **global_tx1.kmt** contain the :math:`\left<1^\circ\right>` POP
 tripole grid and land mask. These are binary unformatted, direct access,
 Big Endian files.
@@ -183,7 +183,7 @@
 block distribution are ``nx_block`` :math:`\times`\ ``ny_block``. The
 physical portion of a subdomain is indexed as [``ilo:ihi``, ``jlo:jhi``],
 with nghost “ghost” or “halo” cells outside the domain used for boundary
 conditions. These parameters are illustrated in :ref:`fig-grid` in one
-dimension. The routines *global\_scatter* and *global\_gather*
+dimension. The routines *global_scatter* and *global_gather*
 distribute information from the global domain to the local domains and
 back, respectively. If MPI is not being used for grid decomposition in
 the ice model, these routines simply adjust the indexing on the global
@@ -215,7 +215,7 @@
 four subdomains. The user sets the ``NTASKS`` and ``NTHRDS`` settings in
 **cice.settings** and chooses a block size ``block_size_x``
 :math:`\times`\ ``block_size_y``, ``max_blocks``, and decomposition
 information ``distribution_type``, ``processor_shape``,
-and ``distribution_type`` in **ice\_in**. That information is used to
+and ``distribution_type`` in **ice_in**. That information is used to
 determine how the blocks are distributed across the processors, and how
 the processors are distributed across the grid domain. The model is
 parallelized over blocks
@@ -223,18 +223,18 @@
 for both MPI and OpenMP. Some suggested combinations for these
 parameters for best performance are given in Section :ref:`performance`.
 The script **cice.setup** computes some default decompositions and
 layouts but the user can overwrite the defaults by manually changing
 the values in
-`ice\_in`. At runtime, the model will print decomposition
+`ice_in`. At runtime, the model will print decomposition
 information to the log file, and if the block size or max blocks is
 inconsistent with the task and thread size, the model will abort. The
 code will also print a warning if the maximum number of blocks is too
 large. Although this is not fatal, it does use extra memory. If
 ``max_blocks`` is set to -1, the code will compute a tentative
 ``max_blocks`` on the fly.
-A loop at the end of routine *create\_blocks* in module
-**ice\_blocks.F90** will print the locations for all of the blocks on
+A loop at the end of routine *create_blocks* in module
+**ice_blocks.F90** will print the locations for all of the blocks on
 the global grid if the namelist variable ``debug_blocks`` is set to be
 true.
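 For illustration only, the decomposition controls named above might be
 grouped in **ice_in** roughly as follows. This is a minimal sketch, not
 output from **cice.setup**; the ``domain_nml`` group name and the
 placement of ``debug_blocks`` in it are assumptions here, and the
 values are arbitrary::

    &domain_nml
      nprocs            = 16            ! assumed to match NTASKS in cice.settings
      block_size_x      = 16            ! grid cells per block in x
      block_size_y      = 16            ! grid cells per block in y
      max_blocks        = -1            ! let the model compute max_blocks on the fly
      processor_shape   = 'slenderX2'
      distribution_type = 'cartesian'
      debug_blocks      = .true.        ! print block locations at startup
    /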
 Likewise, a similar loop at
-the end of routine *create\_local\_block\_ids* in module
-**ice\_distribution.F90** will print the processor and local block
+the end of routine *create_local_block_ids* in module
+**ice_distribution.F90** will print the processor and local block
 number for each block. With this information, the grid decomposition
 into processors and blocks can be ascertained. This ``debug_blocks``
 variable should be used carefully as there may be hundreds or thousands
 of blocks to print
@@ -242,7 +242,7 @@
 and this information should be needed only rarely. ``debug_blocks``
 can be set to true using the ``debugblocks`` option with **cice.setup**.
 This information is much easier to look at using a debugger such as
 TotalView. There is also
-an output field that can be activated in `icefields\_nml`, ``f_blkmask``,
+an output field that can be activated in `icefields_nml`, ``f_blkmask``,
 that prints out the variable ``blkmask`` to the history file and which
 labels the blocks in the grid decomposition according to
 ``blkmask = my_task + iblk/100``.
@@ -277,14 +277,14 @@
 the namelist variable ``ns_boundary_type``, ‘tripole’ for the U-fold
 and ‘tripoleT’ for the T-fold grid.
 In the U-fold tripole grid, the poles have U-index
-:math:`nx\_global/2` and :math:`nx\_global` on the top U-row of the
-physical grid, and points with U-index :math:`i` and :math:`nx\_global-i`
+:math:`nx_global/2` and :math:`nx_global` on the top U-row of the
+physical grid, and points with U-index :math:`i` and :math:`nx_global-i`
 are coincident. Let the fold have U-row index :math:`n` on the global
 grid; this will also be the T-row index of the T-row to the south of
 the fold. There are ghost (halo) T- and U-rows to the north, beyond
 the fold, on the logical grid. The point with index i along the ghost
 T-row of index :math:`n+1` physically coincides with point
-:math:`nx\_global-i+1` on the T-row of index :math:`n`. The
+:math:`nx_global-i+1` on the T-row of index :math:`n`. The
 ghost U-row of index :math:`n+1` physically coincides with the U-row
 of index :math:`n-1`. In the schematics below, symbols A-H represent
 grid points from 1:nx_global at a given j index and the setup of the
@@ -310,15 +310,15 @@
 tripole seam is depicted within a few rows of the seam.
 In the T-fold tripole grid, the poles have T-index :math:`1` and
-:math:`nx\_global/2+1` on the top T-row of the physical grid, and
-points with T-index :math:`i` and :math:`nx\_global-i+2` are
+:math:`nx_global/2+1` on the top T-row of the physical grid, and
+points with T-index :math:`i` and :math:`nx_global-i+2` are
 coincident. Let the fold have T-row index :math:`n` on the global
 grid. It is usual for the northernmost row of the physical domain to
 be a U-row, but in the case of the T-fold, the U-row of index :math:`n`
 is “beyond” the fold; although it is not a ghost row, it is not
 physically independent, because it coincides with U-row :math:`n-1`,
 and it therefore has to be treated like a ghost row. Point i on U-row
-:math:`n` coincides with :math:`nx\_global-i+1` on U-row
+:math:`n` coincides with :math:`nx_global-i+1` on U-row
 :math:`n-1`. There are still ghost T- and U-rows :math:`n+1` to the
 north of U-row :math:`n`. Ghost T-row :math:`n+1` coincides with T-row
 :math:`n-1`, and ghost U-row :math:`n+1` coincides with U-row
@@ -427,7 +427,7 @@
 restoring timescale ``trestore`` may be used (it is also used for
 restoring ocean sea surface temperature in stand-alone ice runs).
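 For illustration only, such restoring might be switched on in **ice_in**
 with something like the sketch below; the ``forcing_nml`` group and the
 ``restore_ice`` switch are assumptions here, and the timescale value is
 arbitrary::

    &forcing_nml
      restore_ice = .true.   ! assumed name of the on/off switch for ice restoring
      trestore    = 30       ! restoring timescale (assumed to be in days)
    /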
 This implementation is only intended to provide the “hooks” for a more
 sophisticated treatment; the rectangular grid option can be used to test
-this configuration. The ‘displaced\_pole’ grid option should not be used
+this configuration. The ‘displaced_pole’ grid option should not be used
 unless the regional grid contains land all along the north and south
 boundaries. The current form of the boundary condition routines does not
 allow Neumann boundary conditions, which must be set explicitly. This
@@ -470,7 +470,7 @@
 The logical masks ``tmask``, ``umask``, ``nmask``, and ``emask``
 respectively) are useful in conditional statements.
 In addition to the land masks, two other masks are implemented in
-*dyn\_prep* in order to reduce the dynamics component’s work on a global
+*dyn_prep* in order to reduce the dynamics component’s work on a global
 grid. At each time step the logical masks ``iceTmask`` and ``iceUmask``
 are determined from the current ice extent, such that they have the
 value “true” wherever ice exists. They also include a border of cells
 around
@@ -842,7 +842,7 @@
 is the step count at the start of a long multi-restart run, and is
 continuous across model restarts.
 In general, the time manager should be advanced by calling
-*advance\_timestep*. This subroutine in **ice\_calendar.F90**
+*advance_timestep*. This subroutine in **ice_calendar.F90**
 automatically advances the model time by ``dt``. It also advances the
 istep numbers and calls subroutine *calendar* to update additional
 calendar data.
@@ -912,7 +912,7 @@
 may vary with each run depending on several factors including the model
 timestep, initial date, and value of ``istep0``.
 The model year is limited by some integer math. In particular, the calculation
-of elapsed hours in **ice\_calendar.F90**, and the model year is
+of elapsed hours in **ice_calendar.F90** is one such limit, and the model year is
 limited to the value of ``myear_max`` set in that file. Currently,
 that's 200,000 years.
@@ -927,10 +927,10 @@
 set the namelist variables ``year_init``, ``month_init``, ``day_init``,
 ``sec_init``, and ``dt`` in conjunction with ``days_per_year`` and
 ``use_leap_years`` to initialize the model date, timestep, and calendar.
 To overwrite the default/namelist settings in the coupling layer,
-set the **ice\_calendar.F90** variables ``myear``, ``mmonth``, ``mday``,
+set the **ice_calendar.F90** variables ``myear``, ``mmonth``, ``mday``,
 ``msec`` and ``dt`` after the namelists have been read. Subroutine
 *calendar* should then be called to update all the calendar data.
-Finally, subroutine *advance\_timestep* should be used to advance
+Finally, subroutine *advance_timestep* should be used to advance
 the model time manager. It advances the step numbers, advances time
 by ``dt``, and updates the calendar data. The older method of manually
 advancing the steps and adding ``dt`` to ``time`` should
@@ -945,11 +945,11 @@ Initialization and Restarts
 The ice model’s parameters and variables are initialized in several
 steps. Many constants and physical parameters are set in
-**ice\_constants.F90**. Namelist variables (:ref:`tabnamelist`),
-whose values can be altered at run time, are handled in *input\_data*
+**ice_constants.F90**. Namelist variables (:ref:`tabnamelist`),
+whose values can be altered at run time, are handled in *input_data*
 and other initialization routines. These variables are given default
 values in the code, which may then be changed when the input file
-**ice\_in** is read. Other physical constants, numerical parameters, and
+**ice_in** is read. Other physical constants, numerical parameters, and
 variables are first set in initialization routines for each ice model
 component or module. Then, if the ice model is being restarted from a
 previous run, core variables are read and reinitialized in
@@ -1038,12 +1038,12 @@
 An additional namelist option, ``restart_coszen``, specifies whether the
 cosine of the zenith angle is included in the restart files. This is
 mainly used in coupled models.
-MPI is initialized in *init\_communicate* for both coupled and
+MPI is initialized in *init_communicate* for both coupled and
 stand-alone MPI runs. The ice component communicates with a flux coupler
 or other climate components via external routines that handle the
 variables listed in the `Icepack documentation `_. For stand-alone runs,
-routines in **ice\_forcing.F90** read and interpolate data from files,
+routines in **ice_forcing.F90** read and interpolate data from files,
 and are intended merely to provide guidance for the user to write his or
 her own routines. Whether the code is to be run in stand-alone or coupled
 mode is determined at compile time, as described below.
@@ -1104,7 +1104,7 @@
 next time step. The subcycling time step (:math:`\Delta t_e`) is thus

 .. math::
-   dte = dt\_dyn/ndte.
+   dte = dt_dyn/ndte.

 A second parameter, :math:`E_\circ` (``elasticDamp``), defines the
 elastic wave damping timescale :math:`T`, described in Section
 :ref:`dynam`, as
@@ -1239,10 +1239,10 @@
 and ``history_suffix`` namelist setting.
 The settings for history files are set in the **setup_nml** section of
 **ice_in** (see :ref:`tabnamelist`). The history filenames will have a
 form like
-**history_file//history_suffix[_freq].[timeID].[nc,da]**
+**[history_file][history_suffix][_freq].[timeID].[nc,da]**
 depending on the namelist options chosen. With binary files, a separate
 header file is written with equivalent information. Standard fields are
 output
-according to settings in the **icefields\_nml** section of **ice\_in**
+according to settings in the **icefields_nml** section of **ice_in**
 (see :ref:`tabnamelist`). The user may add (or subtract) variables not
 already available in the namelist by following the instructions in
 section :ref:`addhist`.
@@ -1250,14 +1250,14 @@ namelist by following the instructions in section :ref:`addhist`.
 The history implementation has been divided into several modules based
 on the desired formatting and on the variables themselves. Parameters,
 variables and routines needed by multiple
-modules is in **ice\_history\_shared.F90**, while the primary routines
+modules are in **ice_history_shared.F90**, while the primary routines
 for initializing and accumulating all of the history variables are in
-**ice\_history.F90**. These routines call format-specific code in the
-**io\_binary**, **io\_netcdf** and **io\_pio2** directories. History
+**ice_history.F90**. These routines call format-specific code in the
+**io_binary**, **io_netcdf** and **io_pio2** directories. History
 variables specific to certain components or parameterizations are
-collected in their own history modules (**ice\_history\_bgc.F90**,
-**ice\_history\_drag.F90**,
-**ice\_history\_mechred.F90**,
-**ice\_history\_pond.F90**).
+collected in their own history modules (**ice_history_bgc.F90**,
+**ice_history_drag.F90**, **ice_history_mechred.F90**,
+**ice_history_pond.F90**).

 The history modules allow output at different frequencies.
 Five output options (``1``, ``h``, ``d``, ``m``, ``y``) are available
 simultaneously for ``histfreq``
@@ -1269,16 +1269,15 @@
 Each stream can contain instantaneous or time-averaged data over the
 frequency interval. The ``hist_avg`` namelist turns on time averaging
 for each stream individually. The same model variable can be written to
 multiple history streams (i.e., daily ``d`` and
-monthly ``m``) via its namelist flag, `f\_` :math:`\left<{var}\right>`, while ``x``
+monthly ``m``) via its namelist flag, `f_` :math:`\left<{var}\right>`, while ``x``
 turns that history variable off. For example, ``f_aice = 'md'`` will
 write aice to the monthly and daily streams. Grid variable history
 output flags are logicals and written to all stream files if turned on.
 If there are no namelist flags with a given ``histfreq`` value, or if an
 element of ``histfreq_n`` is 0, then no file will be written at that
 frequency. The history filenames are set in
-the subroutine **construct_filename** in **ice_history_shared.F90**. The stream
-filename is a function of the output frequency, and the ``hist_avg`` and ``hist_suffix``
-values. In cases where two streams produce the same identical filename, the model will
+the subroutine **construct_filename** in **ice_history_shared.F90**.
+In cases where two streams produce the same filename, the model will
 abort. Use the namelist ``hist_suffix`` to make stream filenames unique.
 More information about how the frequency is computed is found in
 :ref:`timemanager`. Also, some
@@ -1328,14 +1327,14 @@
 above, ``meltb`` is called ``meltb`` in the monthly file (for backward
 compatibility with the default configuration) and ``meltb_h`` in the
 6-hourly file.
-If ``write_ic`` is set to true in **ice\_in**, a snapshot of the same set
+If ``write_ic`` is set to true in **ice_in**, a snapshot of the same set
 of history fields at the start of the run will be written to the history
-directory in **iceh\_ic.[timeID].nc(da)**. Several history variables are
+directory in **iceh_ic.[timeID].nc(da)**. Several history variables are
 hard-coded for instantaneous output regardless of the ``hist_avg``
 averaging flag, at the frequency given by their namelist flag. The
 normalized principal components of internal ice stress (``sig1``,
 ``sig2``) are computed
-in *principal\_stress* and written to the history file. This calculation
+in *principal_stress* and written to the history file. This calculation
 is not necessary for the simulation; principal stresses are merely
 computed for diagnostic purposes and included here for the user’s
 convenience.
@@ -1343,7 +1342,7 @@ convenience.
 Several history variables are available in two forms, a value
 representing an average over the sea ice fraction of the grid cell, and
 another that is multiplied by :math:`a_i`, representing an average over
-the grid cell area. Our naming convention attaches the suffix “\_ai" to
+the grid cell area. Our naming convention attaches the suffix “_ai” to
 the grid-cell-mean variable names.
 Beginning with CICE v6, history variables requested by the Sea Ice
 Model Intercomparison
@@ -1353,9 +1352,9 @@
 Project (SIMIP) :cite:`Notz16` have been added as possible history
 output variab `daily `_ requested SIMIP variables provide the names of
 possible history fields in CICE. However, each of the additional
 variables can be output at any temporal frequency
-specified in the **icefields\_nml** section of **ice\_in** as detailed above.
+specified in the **icefields_nml** section of **ice_in** as detailed above.
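 For illustration, monthly averages plus daily instantaneous output of
 ``aice`` could be requested with namelist settings along these lines,
 using the ``setup_nml`` and ``icefields_nml`` groups named above; this
 is a minimal sketch and the stream ordering and values are arbitrary::

    &setup_nml
      histfreq   = 'm','d','x','x','x'                  ! monthly and daily streams active
      histfreq_n =  1 , 1 , 1 , 1 , 1
      hist_avg   = .true.,.false.,.true.,.true.,.true.  ! daily stream instantaneous, others averaged
    /
    &icefields_nml
      f_aice = 'md'   ! write aice to both the monthly and daily streams
      f_hi   = 'm'    ! write hi to the monthly stream only
    /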
 Additionally, a new history output variable, ``f_CMIP``, has been added.
 When ``f_CMIP``
-is added to the **icefields\_nml** section of **ice\_in** then all SIMIP variables
+is added to the **icefields_nml** section of **ice_in** then all SIMIP variables
 will be turned on for output at the frequency specified by ``f_CMIP``.

 It may also be helpful for debugging to increase the precision of the
 history file
@@ -1368,7 +1367,7 @@ Diagnostic files
 Like ``histfreq``, the parameter ``diagfreq`` can be used to regulate
 how often output is written to a log file. The log file unit to which
 diagnostic
-output is written is set in **ice\_fileunits.F90**. If ``diag_type`` =
+output is written is set in **ice_fileunits.F90**. If ``diag_type`` =
 ‘stdout’, then it is written to standard out (or to **ice.log.[ID]** if
 you redirect standard out as in **cice.run**); otherwise it is written
 to the file given by ``diag_file``.
@@ -1382,7 +1381,7 @@
 useful for checking global conservation of mass and energy.
 ``print_points`` writes data for two specific grid points defined by
 the input namelist ``lonpnt`` and ``latpnt``. By default, one point
 is near the North Pole and the other is in the Weddell Sea; these
-may be changed in **ice\_in**.
+may be changed in **ice_in**.
 The namelist ``debug_model`` prints detailed debug diagnostics for
 a single point as the model advances. The point is defined
@@ -1395,16 +1394,16 @@ namelist, the point associated with ``lonpnt(1)`` and ``latpnt(1)`` is used.
 in detail at a particular (usually failing) grid point.
 Memory use diagnostics are controlled by the logical namelist ``memory_stats``.
-This feature uses an intrinsic query in C defined in **ice\_memusage\_gptl.c**.
+This feature uses an intrinsic query in C defined in **ice_memusage_gptl.c**.
 Memory diagnostics will be written at the frequency defined by ``diagfreq``.

-Timers are declared and initialized in **ice\_timers.F90**, and the code
-to be timed is wrapped with calls to *ice\_timer\_start* and
-*ice\_timer\_stop*. Finally, *ice\_timer\_print* writes the results to
+Timers are declared and initialized in **ice_timers.F90**, and the code
+to be timed is wrapped with calls to *ice_timer_start* and
+*ice_timer_stop*. Finally, *ice_timer_print* writes the results to
 the log file. The optional “stats” argument (true/false) prints
 additional statistics. The “stats” argument can be set by the ``timer_stats``
-namelist. Calling *ice\_timer\_print\_all* prints all of
+namelist. Calling *ice_timer_print_all* prints all of
 the timings at once, rather than having to call each individually.
 Currently, the timers are set up as in :ref:`timers`.
 Section :ref:`addtimer` contains instructions for adding timers.
@@ -1416,8 +1415,8 @@
 the code, including the dynamics and advection routines. The Dynamics,
 Advection, and Column timers do not overlap and represent most of the
 overall model work.
-The timers use *MPI\_WTIME* for parallel runs and the F90 intrinsic
-*system\_clock* for single-processor runs.
+The timers use *MPI_WTIME* for parallel runs and the F90 intrinsic
+*system_clock* for single-processor runs.

 .. _timers: