**This issue was encountered in Google Colab, with the T4 GPU and High-RAM runtime settings.**

When we run `ExactGP.fit()`, it produces NaN values in the standard deviation calculation. The error can be reproduced under all of the modifications listed below.

Things I have already tried:

- Normalizing the data before fitting the model
- All of the kernels: RBF, Matern, and Periodic
- Different prior functions: LogNormal, Normal, and HalfNormal, which are the ones most commonly used in GP/BO anyway
- Different noise priors
- Different sets of training data

As a current workaround, reducing the total number of MCMC samples to `num_warmup=500, num_samples=500` (the default is `num_warmup=1000, num_samples=3000`) produces reasonable outputs; a minimal sketch of this setup follows.
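For reference, here is a minimal sketch of the workaround, using a hypothetical toy dataset in place of the actual training data from this report. The gpax call signatures (`gpax.utils.get_keys`, `ExactGP`, `fit`, `predict`) are assumed to follow the library's documented usage and may need adjusting for a given version:

```python
import numpy as np
import gpax

# Hypothetical 1D toy data standing in for the training set from the report
X_train = np.linspace(0.0, 10.0, 50)
y_train = np.sin(X_train) + 0.1 * np.random.randn(50)

rng_key_fit, rng_key_predict = gpax.utils.get_keys()

# ExactGP with an RBF kernel, as in the report (Matern/Periodic were also tried)
gp_model = gpax.ExactGP(input_dim=1, kernel="RBF")

# Workaround: reduce the number of MCMC warmup and posterior samples
gp_model.fit(rng_key_fit, X_train, y_train, num_warmup=500, num_samples=500)

# Posterior mean and sampled predictions on a dense grid; the NaNs reported
# above show up when computing the standard deviation over the samples
X_test = np.linspace(0.0, 10.0, 200)
y_mean, y_sampled = gp_model.predict(rng_key_predict, X_test)
y_std = y_sampled.squeeze().std(0)
```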
This is due to the peculiar behavior of `jax.vmap` when approaching a memory limit. There are a couple of ways to deal with this (a sketch of both follows the list):

- Draw multiple smaller random batches of samples (see `gpax.acquisition.qEI`, `.qUCB`, etc.; you can specify the batch size via the `subsample_size` argument) and average them.
- Assume that the acquisition function is continuous and use `gpax.acquisition.optimize_acq` to optimize it with `num_initial_guesses` << total number of points.
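A sketch of both suggestions is below, again on hypothetical toy data. The `subsample_size` keyword and the `optimize_acq` arguments (`num_initial_guesses`, `lower_bound`, `upper_bound`) are taken from the comment above; their exact names and order are assumptions and may differ between gpax versions:

```python
import numpy as np
import gpax

# Toy data and a quick fit so the example is self-contained; in practice this
# would be the already-fitted model from the issue above.
X_train = np.linspace(0.0, 10.0, 50)
y_train = np.sin(X_train) + 0.1 * np.random.randn(50)
rng_key_fit, rng_key_acq = gpax.utils.get_keys()
gp_model = gpax.ExactGP(input_dim=1, kernel="RBF")
gp_model.fit(rng_key_fit, X_train, y_train, num_warmup=500, num_samples=500)

X_grid = np.linspace(0.0, 10.0, 5000)

# Option 1: batch acquisition evaluated on random subsamples of the candidate
# points instead of vmapping over the full grid at once.
acq = gpax.acquisition.qEI(rng_key_acq, gp_model, X_grid, subsample_size=500)

# Option 2: treat the acquisition function as continuous and optimize it from
# a small number of starting points rather than scoring every candidate.
x_next = gpax.acquisition.optimize_acq(
    rng_key_acq, gp_model, gpax.acquisition.EI,
    num_initial_guesses=10, lower_bound=0.0, upper_bound=10.0,
)
```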