
Add ZeroMeanNormal distribution and change GaussianEmission to use it. #44

Merged
7 commits merged into master on Jun 12, 2024

Conversation

ThomasColthurst
No description provided.

@ThomasColthurst ThomasColthurst requested a review from srvasude June 10, 2024 22:02
cxx/distributions/zero_mean_normal_test.cc
std::mt19937 prng;
ZeroMeanNormal nd(&prng);

BOOST_TEST(nd.logp(6.0) == -5.4563792395895785, tt::tolerance(1e-6));
Collaborator

I'd replace these tests with marginalization tests (see normal_test.cc, which tests logp and logp_score against a Boost-computed value via approximate integration).

I find exact-value tests brittle, since they can hide bugs, and I strongly prefer marginalization tests.

Author


I agree that marginalization tests are great. They are also a lot more work. My preference would be to have these minimal tests now, and then add the marginalization tests in a later pull request.

Collaborator


The recommendation here is to copy the tests from normal_test.cc. I've added marginalization tests for logp and logp_score. Those should work exactly the same here.

Collaborator


BOOST_AUTO_TEST_CASE(test_log_prob) {
and the following test_posterior_pred should work here (with some renaming of distributions).

Author


So I tried your recommendation and it didn't work. The latest commit of this pull request has the test_log_prob and it fails.

This shouldn't be surprising because, as previously noted, this distribution has different marginal likelihood and posterior predictive functions than the Normal. For example, when we do add a marginalization test for this distribution, it will only need to do a single integral over variances, rather than the double integral done in normal_test.cc.

cxx/distributions/zero_mean_normal.hh
public:
// We use an Inverse gamma conjugate prior, so our hyperparameters are
double alpha = 1.0;
double beta = 1.0;
Collaborator


Could these be renamed the same way as in Normal? Admittedly I don't love the single-letter naming scheme, or alpha and beta for that matter, since I find these opaque, but I would prefer the naming for both to be the same.

Author


They could, but I think the most important thing is that they match the math. Since https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf and https://j-zin.github.io/files/Conjugate_Bayesian_analysis_of_common_distributions.pdf both use alpha and beta, I would strongly prefer that we keep those as the variable names.

@ThomasColthurst ThomasColthurst merged commit d9f36cb into master Jun 12, 2024
1 of 2 checks passed
@ThomasColthurst ThomasColthurst deleted the 061024-thomaswc-ssize branch June 12, 2024 21:17