From d7056b3e45f09d8003f3c13212c3247c5ed5837e Mon Sep 17 00:00:00 2001
From: Travis Sluka
Date: Mon, 11 Mar 2024 21:23:01 -0600
Subject: [PATCH] update docs to address feedback

---
 README.md      | 4 ++++
 init/README.md | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index d2de588..8fd2468 100644
--- a/README.md
+++ b/README.md
@@ -127,6 +127,10 @@ make -j 5

 Assuming SOCA compiled correctly, you should be able to run the ctests, which are simple tests using a 5 degree ocean grid. See the notes [here](https://jointcenterforsatellitedataassimilation-jedi-docs.readthedocs-hosted.com/en/7.0.0/using/running_skylab/HPC_users_guide.html) about obtaining a compute node before running the tests. Assuming you are within the `build/soca` directory, running `ctest` will only run the tests for SOCA (there are hundreds of other tests for the other JEDI components that you probably don't care about).

+If a test fails, you can rerun that single test and view its output with `ctest -R <test_name> -V`.
+
+If ALL of the tests fail, it's possible that the data files were not downloaded correctly with git lfs; double-check that git lfs was set up correctly. (Look at the netCDF files in `./soca/test/Data/`: they should be actual netCDF files, not text files describing which data file git lfs should download.)
+
 ## Tutorial Experiments

 The files needed for a single cycle of several DA methods are provided (observations, background, static files, and yaml configurations). To get the binary data, download the input data from our [Google drive here](https://drive.google.com/uc?export=download&id=15dpIwXWXU72hYQy-wGLuYnrVB-J0eIb4). Unpack the file with the following command and you should now have a `soca-tutorial/input_data` directory.
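For the git-lfs check described in the hunk above, a minimal sketch of what that verification might look like (the `soca/test/Data/` path follows the README; adjust the path and glob to your checkout, and note that the exact `file` output varies by system and netCDF flavor):

```bash
# Inspect a few of the test data files; genuine netCDF files are binary, and
# `file` reports a NetCDF (or HDF5, for netCDF-4) data format for them.
find soca/test/Data -name "*.nc" | head -n 3 | xargs file

# git-lfs pointer stubs are instead small text files. If that is what you
# see, re-fetch the real data (both commands are standard git-lfs usage):
git lfs install
git lfs pull
```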
diff --git a/init/README.md b/init/README.md
index cbe5906..569ca43 100644
--- a/init/README.md
+++ b/init/README.md
@@ -68,7 +68,7 @@ The first step in calibrating the correlation operator is to generate the desire
 > ./calc_scales.py diffusion_setscales.yaml
 > ```

-You can look at the resulting `scales.nc` file. You can notice that the vertical scales are deeper in the southern hemisphere, and shallow in the Northern hemisphere, which is appropriate for the date of the initial conditions (Aug 1).
+You can look at the resulting `scales.nc` file. You should notice that the vertical scales are deeper in the Southern Hemisphere and shallower in the Northern Hemisphere, which is appropriate for the date of the initial conditions (Aug 1). (Note: your vertical scales will look different. The plot shown was made without any clipping of the vertical scales; however, the resulting values of >50 levels are too large for explicit diffusion to be efficient, so they are clipped to 10 levels in the given configuration file.)

 | hz scales (0-300 km) | vt scales (0-50 lvls) |
 | :--: | :--: |
@@ -94,11 +94,13 @@ The operator is split in this way so that the calculation of the horizontal diff
 Open the configuration file, `diffusion_parameters.yaml`, to see the structure of the yaml file. You'll notice that the vertical and horizontal parameters are specified and calculated separately as two distinct `group` items, and they use the scales that were generated in the previous step.

 > [!IMPORTANT]
-> Run the diffusion operator calibration, replace `-n 10` with the actual number of cores you have available: 
+> Run the diffusion operator calibration, replacing `-n 10` with the actual number of cores you have available:
 > 
 > ```bash
 > mpirun -n 10 ./soca_error_covariance_toolbox.x diffusion_parameters.yaml
 > ```
+>
+> (Note: if you run with too few cores, you'll have to increase `domains_stack_size` in the `mom_input.nml` configuration file.)

 For each group, the log file will display some important information. One important thing to note is how many iterations of the diffusion operator will be required. This is a function of the length scale and grid size, and the number of iterations will be kept large enough to keep the system stable.
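For the `domains_stack_size` note added in the last hunk, a minimal sketch of the adjustment (the namelist group and the example value below are assumptions based on typical FMS/MOM6 configurations, not taken from this repository; check your `mom_input.nml` for the actual layout):

```bash
# Locate the current stack-size setting in the MOM6/FMS namelist file.
grep -n "domains_stack_size" mom_input.nml

# If the calibration aborts with a stack-size error on a small core count,
# raise the value in the namelist (e.g. domains_stack_size = 2000000 under
# &fms_nml is a commonly seen setting, but the right number depends on the
# grid and core count), then rerun the calibration:
mpirun -n 4 ./soca_error_covariance_toolbox.x diffusion_parameters.yaml
```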