202305 Release of GFDL_atmos_cubed_sphere #273

Merged: laurenchilutti merged 23 commits into NOAA-GFDL:main from the mayrelease branch on Jun 8, 2023

Conversation

@laurenchilutti laurenchilutti commented Jun 1, 2023

Description

This PR publishes the GFDL_atmos_cubed_sphere 202305 release, which coincides with the SHiELD_physics 202305 release.

The changes included in this PR are from the GFDL FV3 Team. A full description of the changes can be found in RELEASE.md.

Related PRs that should be merged together with this one:
NOAA-GFDL/SHiELD_physics#22
NOAA-GFDL/SHiELD_build#22
NOAA-GFDL/atmos_drivers#22

How Has This Been Tested?

Tested with the regression tests in SHiELD_build

Checklist:

Please check all that apply:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules

lharris4 and others added 23 commits June 1, 2023 08:57
Introducing idealized TC test case with SHiELD physics

See merge request fv3team/atmos_cubed_sphere!91
remove the abandoned file fv_diagnostics.F90.epv

See merge request fv3team/atmos_cubed_sphere!97
bug fix to prevent model crash when using domain_deg = 0. in debug mode

See merge request fv3team/atmos_cubed_sphere!99
Fix to avoid reproducibility issue.

See merge request fv3team/atmos_cubed_sphere!100
A number of updates

See merge request fv3team/atmos_cubed_sphere!101
New test cases

See merge request fv3team/atmos_cubed_sphere!102
Improved Rayleigh Damping on w

See merge request fv3team/atmos_cubed_sphere!103
Update namelist reading code to avoid a model crash because of the absence of a namelist.

See merge request fv3team/atmos_cubed_sphere!104
Experimental 2D Smagorinsky damping and tau_w

See merge request fv3team/atmos_cubed_sphere!107
Add the option to disable intermediate physics.

See merge request fv3team/atmos_cubed_sphere!108
Add the options to sub-cycling condensation evaporation, control the time scale of evaporation, and delay condensation and evaporation.

See merge request fv3team/atmos_cubed_sphere!109
Remove grid size in energy and mass calculation

See merge request fv3team/atmos_cubed_sphere!110
2023/03 Jan-Huey Chen

See merge request fv3team/atmos_cubed_sphere!112
FV3 Solver updates 202305

See merge request fv3team/atmos_cubed_sphere!114
Pass the namelist variables from the dycore to the physics during the initialization

See merge request fv3team/atmos_cubed_sphere!117
Rolling back smag damping

See merge request fv3team/atmos_cubed_sphere!118
Revised vertical remapping operators

See merge request fv3team/atmos_cubed_sphere!116
Removed dudz and dvdz arrays that are not currently used.

See merge request fv3team/atmos_cubed_sphere!119
Add nest to DP cartesian config

See merge request fv3team/atmos_cubed_sphere!121
fix nesting in solo mode and add a new idealized test case for multiple nests

See merge request fv3team/atmos_cubed_sphere!120
fix square_domain logic for one tile grids

See merge request fv3team/atmos_cubed_sphere!122
@laurenchilutti
Contributor Author

The CI is expected to fail until I merge in NOAA-GFDL/SHiELD_build#22, because the idealized test cases now require a new namelist flag.

@lharris4 lharris4 left a comment

Hi, Lauren. This is really great. Thank you for working so hard to get this done.

id_dynam = mpp_clock_id ('FV dy-core', flags = clock_flag_default, grain=CLOCK_SUBCOMPONENT )
id_subgridz = mpp_clock_id ('FV subgrid_z',flags = clock_flag_default, grain=CLOCK_SUBCOMPONENT )
id_fv_diag = mpp_clock_id ('FV Diag', flags = clock_flag_default, grain=CLOCK_SUBCOMPONENT )
id_dycore = mpp_clock_id ('---FV Dycore',flags = clock_flag_default, grain=CLOCK_SUBCOMPONENT )

Glad to see that the timers are nicely cleaned up. We have been needing to do this for a long time.
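For readers unfamiliar with these timers: the IDs registered above are used with the FMS clock calls elsewhere in the driver. A minimal sketch of that pattern follows, assuming mpp_clock_begin/mpp_clock_end and CLOCK_SUBCOMPONENT from mpp_mod and clock_flag_default from fms_mod; it is illustrative only, not code from this PR.

```fortran
! Sketch of the FMS timing pattern the IDs above feed into (illustrative only).
subroutine time_dycore_sketch()
  use mpp_mod, only: mpp_clock_id, mpp_clock_begin, mpp_clock_end, CLOCK_SUBCOMPONENT
  use fms_mod, only: clock_flag_default
  implicit none
  integer :: id_dynam

  ! Register a named timer once ...
  id_dynam = mpp_clock_id('FV dy-core', flags=clock_flag_default, grain=CLOCK_SUBCOMPONENT)

  ! ... then bracket the timed region on every call.
  call mpp_clock_begin(id_dynam)
  ! dynamical-core work would go here
  call mpp_clock_end(id_dynam)
end subroutine time_dycore_sketch
```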

@@ -684,6 +702,10 @@ subroutine remap_restart(Atm)

fname = 'INPUT/fv_core.res'//trim(stile_name)//'.nc'
if (open_file(Fv_tile_restart_r, fname, "read", fv_domain, is_restart=.true.)) then
if (Atm(1)%flagstruct%is_ideal_case) then

Good! This should only be necessary for the ideal case.

@bensonr bensonr left a comment

Nothing I've noted should stop this PR from being merged, but we should address these issues sooner rather than later.

@@ -57,6 +58,8 @@ module dyn_core_mod
#ifdef SW_DYNAMICS
use test_cases_mod, only: test_case, case9_forcing1, case9_forcing2
#endif
use test_cases_mod, only: w_forcing

I'm not a big fan of using the parameter from test_cases_mod. If it is something needed by the simulations, it might be better to include it in flagstruct and define it in fv_control/fv_arrays.
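To make the suggestion concrete, here is a small self-contained sketch of the flagstruct pattern. The names below (example_flags_type, example_nml, read_example_nml) and the w_forcing default are invented for illustration; the real change would add the field to fv_flags_type in fv_arrays and read it through the existing namelist machinery in fv_control.

```fortran
! Illustrative sketch only, not the actual fv_arrays/fv_control code.
module example_flags_mod
  implicit none
  private
  public :: example_flags_type, read_example_nml

  type example_flags_type
     real :: w_forcing = 0.0   !< hypothetical forcing amplitude; 0 disables it
  end type example_flags_type

contains

  subroutine read_example_nml(flags, unit)
    type(example_flags_type), intent(inout) :: flags
    integer, intent(in) :: unit
    real :: w_forcing
    integer :: ios
    namelist /example_nml/ w_forcing

    w_forcing = flags%w_forcing             ! start from the default
    read(unit, nml=example_nml, iostat=ios) ! read the user's value, if present
    if (ios == 0) flags%w_forcing = w_forcing
  end subroutine read_example_nml

end module example_flags_mod
```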

@@ -895,6 +905,8 @@ module fv_arrays_mod
real(kind=R_GRID) :: deglat=15. !< Latitude (in degrees) used to compute the uniform f-plane
!< Coriolis parameter for doubly-periodic simulations
!< (grid_type = 4). The default value is 15.
real(kind=R_GRID) :: domain_deg = 0.

We should get a description for this variable.

@@ -194,7 +194,7 @@ subroutine grid_utils_init(Atm, npx, npy, npz, non_ortho, grid_type, c2l_order)
if (.not. Atm%flagstruct%external_eta) then
call set_eta(npz, Atm%ks, Atm%ptop, Atm%ak, Atm%bk, Atm%flagstruct%npz_type, Atm%flagstruct%fv_eta_file)
if ( is_master() ) then
write(*,*) 'Grid_init', npz, Atm%ks, Atm%ptop
!write(*,*) 'Grid_init', npz, Atm%ks, Atm%ptop

Should slate this for removal.

@@ -586,7 +585,7 @@ subroutine init_grid(Atm, grid_name, grid_file, npx, npy, npz, ndims, nregions,
if (Atm%flagstruct%grid_type>3) then
if (Atm%flagstruct%grid_type == 4) then
call setup_cartesian(npx, npy, Atm%flagstruct%dx_const, Atm%flagstruct%dy_const, &
Atm%flagstruct%deglat, Atm%bd)
Atm%flagstruct%deglat, Atm%flagstruct%domain_deg, Atm%bd, Atm)

Because this passes both individual elements of the Atm structure and the whole Atm structure, the interface should be simplified to remove the individual Atm elements; unexpected things can happen when INOUT variables overlap in scope.
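As an illustration of that simplification, here is a self-contained sketch (with invented type names, not the real fv_arrays types) in which only the derived type is passed and the flag values are read from it inside the routine, so no component is aliased by a separate dummy argument.

```fortran
! Illustrative sketch only; types and defaults are invented for this example.
module cartesian_sketch_mod
  implicit none

  type :: flags_sketch_type
     real :: dx_const = 1000., dy_const = 1000.
     real :: deglat = 15., domain_deg = 0.
  end type flags_sketch_type

  type :: atm_sketch_type
     type(flags_sketch_type) :: flagstruct
  end type atm_sketch_type

contains

  subroutine setup_cartesian_sketch(npx, npy, Atm)
    integer, intent(in) :: npx, npy
    type(atm_sketch_type), intent(inout) :: Atm

    ! Pull the needed values out of the single INOUT argument; nothing is
    ! passed both as a component and as part of the whole structure.
    print *, 'npx, npy =', npx, npy
    print *, 'dx, dy   =', Atm%flagstruct%dx_const, Atm%flagstruct%dy_const
    print *, 'deglat, domain_deg =', Atm%flagstruct%deglat, Atm%flagstruct%domain_deg
  end subroutine setup_cartesian_sketch

end module cartesian_sketch_mod
```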

Comment on lines +908 to +916
if (gid == sending_proc) then !crazy logic but what we have for now
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast with the pelist
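A hedged sketch of what that could look like, assuming the FMS generic interface mpp_broadcast(data, length, from_pe, pelist) and reusing the variable names from the diff; if sending_proc is not itself a member of the pelist, the list passed to the broadcast would need to include it.

```fortran
! Illustrative only, not the PR's code: replace the mpp_send/mpp_recv loop
! with a single collective broadcast over the pelist.
subroutine broadcast_g_dat_sketch(g_dat, sending_proc, pelist)
  use mpp_mod, only: mpp_broadcast, mpp_pe
  implicit none
  real,    intent(inout) :: g_dat(:,:,:)   ! valid on sending_proc before the call
  integer, intent(in)    :: sending_proc
  integer, intent(in)    :: pelist(:)

  ! Sender and receivers all make the same call; afterwards g_dat is
  ! valid on every PE in pelist.
  if (mpp_pe() == sending_proc .or. any(pelist == mpp_pe())) then
     call mpp_broadcast(g_dat, size(g_dat), sending_proc, pelist)
  end if
end subroutine broadcast_g_dat_sketch
```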

Comment on lines +1004 to +1012
if (gid == sending_proc) then
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast here with the given pelist

Comment on lines +1099 to +1107
if (gid == sending_proc) then
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast here with the given pelist

Comment on lines +1125 to +1133
if (gid == sending_proc) then
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast here with the given pelist

Comment on lines +1160 to +1168
if (gid == sending_proc) then
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast here with the given pelist

Comment on lines +1191 to +1199
if (gid == sending_proc) then
do p=1,size(Atm(1)%pelist)
call mpp_send(g_dat,size(g_dat),Atm(1)%pelist(p))
enddo
endif
endif
if (ANY(Atm(1)%pelist == gid)) then
call mpp_recv(g_dat, size(g_dat), sending_proc)
endif

Should be able to use mpp_broadcast here with the given pelist

lharris4 commented Jun 3, 2023 via email

@laurenchilutti laurenchilutti merged commit be0a6a6 into NOAA-GFDL:main Jun 8, 2023
laurenchilutti added a commit to laurenchilutti/GFDL_atmos_cubed_sphere that referenced this pull request Jun 26, 2023
This is the 202305 public release.  This release is the work of the GFDL FV3 development team.
* Merge branch 'TC20_test_case' into 'main'

Introducing idealized TC test case with SHiELD physics

See merge request fv3team/atmos_cubed_sphere!91

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

remove the abandoned file fv_diagnostics.F90.epv

See merge request fv3team/atmos_cubed_sphere!97

* Merge branch 'domain_deg_fix' into 'main'

bug fix to prevent model crash when using domain_deg = 0. in debug mode

See merge request fv3team/atmos_cubed_sphere!99

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

Fix to avoid reproducibility issue.

See merge request fv3team/atmos_cubed_sphere!100

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

A number of updates

See merge request fv3team/atmos_cubed_sphere!101

* Merge branch 'main' into 'main'

New test cases

See merge request fv3team/atmos_cubed_sphere!102

* Merge branch 'tau_w_202210' into 'main'

Improved Rayleigh Damping on w

See merge request fv3team/atmos_cubed_sphere!103

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

Update namelist reading code to avoid a model crash because of the absence of a namelist.

See merge request fv3team/atmos_cubed_sphere!104

* Merge branch 'smag2d_202211' into 'main'

Experimental 2D Smagorinsky damping and tau_w

See merge request fv3team/atmos_cubed_sphere!107

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

Add the option to disable intermediate physics.

See merge request fv3team/atmos_cubed_sphere!108

* Merge branch 'user/lnz/shield2022_gfdlmp' into 'main'

Add the options to sub-cycling condensation evaporation, control the time scale of evaporation, and delay condensation and evaporation.

See merge request fv3team/atmos_cubed_sphere!109

* Merge branch 'user/lnz/shield2023' into 'main'

Remove grid size in energy and mass calculation

See merge request fv3team/atmos_cubed_sphere!110

* Merge branch 'user/lnz/shield2023' into 'main'

2023/03 Jan-Huey Chen

See merge request fv3team/atmos_cubed_sphere!112

* Merge branch 'lmh_public_release_202205' into 'main'

FV3 Solver updates 202305

See merge request fv3team/atmos_cubed_sphere!114

* gnu updates

* Merge branch 'user/lnz/shield2023' into 'main'

Pass the namelist variables from the dycore to the physics during the initialization

See merge request fv3team/atmos_cubed_sphere!117

* Merge branch 'main_mayrelease_smag_rollback' into 'main'

Rolling back smag damping

See merge request fv3team/atmos_cubed_sphere!118

* Merge branch 'lmh_revised_mapz' into 'main'

Revised vertical remapping operators

See merge request fv3team/atmos_cubed_sphere!116

* Merge branch 'main_upstream_mayrelease_dudz' into 'main'

Removed dudz and dvdz arrays that are not currently used.

See merge request fv3team/atmos_cubed_sphere!119

* Merge branch 'dp+nest' into 'main'

Add nest to DP cartesian config

See merge request fv3team/atmos_cubed_sphere!121

* Merge branch 'solonest' into 'main'

fix nesting in solo mode and add a new idealized test case for multiple nests

See merge request fv3team/atmos_cubed_sphere!120

* Merge branch 'fixsquare' into 'main'

fix square_domain logic for one tile grids

See merge request fv3team/atmos_cubed_sphere!122

* updating release notes.

---------

Co-authored-by: Lucas Harris <[email protected]>
Co-authored-by: Linjiong Zhou <[email protected]>
Co-authored-by: Joseph Mouallem <[email protected]>
@laurenchilutti laurenchilutti deleted the mayrelease branch December 23, 2024 14:32