Differences in All-sky #489

Open
rtodling opened this issue Dec 19, 2024 · 2 comments

@rtodling (Contributor) commented Dec 19, 2024

I am still trying to reconcile differences in the all-sky analysis between GSI and JEDI.

I am looking at the YAML parameter setting Cloud_Fraction=1.0 ... I ask that those of you with knowledge of this chime in and try to help me out here.
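For context, the setting in question lives in the UFO CRTM observation-operator YAML. A minimal sketch of where it sits, assuming a GMI-like configuration (the Sensor_ID, absorber list, and cloud list below are illustrative placeholders, not copied from my actual setup):

```yaml
obs operator:
  name: CRTM
  Absorbers: [H2O, O3]
  Clouds: [Water, Ice, Rain, Snow, Graupel]
  Cloud_Fraction: 1.0        # set to 0.0 for the blue-curve experiment
  obs options:
    Sensor_ID: gmi_gpm       # illustrative sensor ID
    EndianType: little_endian
    CoefficientPath: Data/   # illustrative path to CRTM coefficients
```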

I did 4 experiments: (i) GSI (black; q2 opt); (ii) GSI (red; q1 opt); (iii) default JEDI (green); and (iv) JEDI with the Cloud_Fraction parameter set to zero (blue).

[Figure: gmi-cldfra]

The figure shows the globally averaged RMS of the increment.

Obviously I am not claiming that the parameter should be set to zero. What I am saying is that this parameter plays a rather significant role in the differences I am seeing. I am wondering if someone can explain the role this parameter plays in JEDI and how it maps onto an equivalent role that something of this type might play in GSI.

@mjkagnes123 commented Dec 19, 2024

The comparison between green (smaller RMS) and blue (larger RMS) makes sense:
The default value for the cloud_fraction should be set to 1. If the background atmospheric profile contains clouds or precipitation at certain levels, the CRTM will read the clouds.bin file, calculate the optical properties, and then compute the cloudy radiance values.

However, if the cloud_fraction is set to zero, it doesn't matter whether the background atmospheric profile contains clouds or precipitation. The CRTM will treat the atmosphere as clear-sky, ignoring the clouds.bin file and calculating clear-sky radiance values. As a result, the Observation - Forecast (O-F) values would be much larger (causing larger increments, i.e., the blue line), because many of the observation points (O) would actually be cloudy, but the model would treat them as clear.
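A toy numerical illustration of this effect (this is not CRTM code; the brightness temperatures are made up for a cold, scattering GMI scene). The key idea is that CRTM blends the clear-sky and cloudy-sky results using the scene cloud fraction, so forcing the fraction to zero collapses the simulation to clear sky and inflates O-F wherever the observation is actually cloudy:

```python
# Toy sketch of cloud-fraction weighting, roughly
#   Tb_total = cf * Tb_cloudy + (1 - cf) * Tb_clear
# (illustrative only; not the CRTM source)
def simulated_tb(tb_clear, tb_cloudy, cloud_fraction):
    """Cloud-fraction-weighted brightness temperature [K]."""
    return cloud_fraction * tb_cloudy + (1.0 - cloud_fraction) * tb_clear

# Hypothetical cloudy scene: the observation is cold (scattering by
# precipitation), the clear-sky simulation is warm.
tb_obs, tb_clear, tb_cloudy = 230.0, 275.0, 235.0

omf_allsky = tb_obs - simulated_tb(tb_clear, tb_cloudy, 1.0)  # small O-F
omf_clrsky = tb_obs - simulated_tb(tb_clear, tb_cloudy, 0.0)  # large O-F
print(omf_allsky, omf_clrsky)
```

With Cloud_Fraction = 1 the model sees the cloud and the residual stays small (-5 K here); with Cloud_Fraction = 0 the same scene yields a -45 K residual, which is the kind of inflation driving the blue curve.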

@rtodling (Contributor, Author)

> The comparison between green (smaller RMS) and blue (larger RMS) makes sense: The default value for the cloud_fraction should be set to 1. If the background atmospheric profile contains clouds or precipitation at certain levels, the CRTM will read the clouds.bin file, calculate the optical properties, and then compute the cloudy radiance values.
>
> However, if the cloud_fraction is set to zero, it doesn't matter whether the background atmospheric profile contains clouds or precipitation. The CRTM will treat the atmosphere as clear-sky, ignoring the clouds.bin file and calculating clear-sky radiance values. As a result, the Observation - Forecast (O-F) values would be much larger (causing larger increments, i.e., the blue line), because many of the observation points (O) would actually be cloudy, but the model would treat them as clear.

@mjkagnes123 I appreciate the feedback, and I certainly understand what you say. My point, again, is not to get an explanation for what happens when clouds are ignored in the all-sky analysis; my point is to find out how the implementation in JEDI corresponds to what is in GSI. My point in showing the green curve is that its difference with the red curve seems suggestive of the differences seen between red and black, and thus leads me to think that the differences between JEDI and GSI - for the GMI case here - might be due to differences in the implementation of all-sky features in these two systems.

What I have difficulty conveying to the observation group at GMAO is that however nice the UFO testing results might look, they don't test: (i) how JEDI handles a case with its own geovals (as opposed to GSI-provided geovals); and (ii) what the terms going into the linear operator, and its outputs, look like. These two items are paramount for the analysis to work, but they haven't been tested. My tests are basically an indirect test for those, and I have already found a number of problems with the linear model, the setting of configurations, etc. I suspect there are more things of this nature that are not known and that are reflected in the analysis JEDI ends up producing.
