
start article on variable precision #230

Draft · wants to merge 2 commits into base: develop
Conversation

@venpopov (Owner) commented Aug 14, 2024

I've begun the variable precision article. I'm opening a draft pull request so that changes are tracked over time. My plan is:

  1. Brief intro to variable precision: how it was done before (with special-purpose distributions), and a note that it is much easier with bmm and can be applied to any model

  2. Introduce different levels of variable precision

    • same variability for all participants (complete pooling for variability parameter)
    • each participant gets their own variability, no pooling at all
    • hierarchical pooling in which the variability itself is drawn from a distribution over participants
  3. Dig down into hierarchical pooling and how it can be implemented within the formula

  4. Illustrate an example fit with hierarchical pooling

  5. Miscellaneous

    • explain that in the original model variability followed a gamma distribution, whereas with this approach it follows a log-normal distribution. Explain that this depends on the link function
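The pooling levels in step 2 and the hierarchical scheme in step 3 could be sketched with a small simulation. Below is a hypothetical NumPy illustration (not bmm syntax; all parameter values are made up): each subject gets a mean log precision, the subject's trial-to-trial variability is itself drawn from a population distribution (the hierarchical-pooling case), and exponentiating the trial-level draws gives precisions on the native scale.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_trials = 10, 200

# Population-level hyperparameters (hypothetical values for illustration).
mu_log_kappa = 1.5          # population mean of log precision
sd_log_kappa_between = 0.3  # between-subject SD of mean log precision
mu_log_sigma = -1.0         # population mean of log(trial-level SD)
sd_log_sigma = 0.2          # between-subject SD of log(trial-level SD)

# Subject level: each subject has a mean log precision, and their own
# trial-to-trial variability is itself drawn from a population
# distribution -- the "hierarchical pooling" case from the plan.
subj_mean = rng.normal(mu_log_kappa, sd_log_kappa_between, n_subjects)
subj_sd = np.exp(rng.normal(mu_log_sigma, sd_log_sigma, n_subjects))

# Trial level: log precision varies around each subject's mean;
# exponentiating gives a log-normal precision on the native scale.
log_kappa = rng.normal(subj_mean[:, None], subj_sd[:, None],
                       (n_subjects, n_trials))
kappa = np.exp(log_kappa)

print(kappa.shape)        # (10, 200)
print(bool((kappa > 0).all()))  # True
```

The other two pooling levels in step 2 are special cases of this sketch: complete pooling fixes `subj_sd` to one shared value, and no pooling estimates each subject's `subj_sd` independently with no population distribution over it.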

@GidonFrischkorn (Collaborator)

Generally, I like the structure.

I wondered if we should frame this more generally, as a general way to implement trial-to-trial variation in model parameters. This could also be useful in SDT models that assume variable memory strength over trials, and in evidence accumulation models. Granted, these are not yet implemented in bmm, so we could also think about generalizing the article once these models have been added to the package.

@GidonFrischkorn (Collaborator)

One thing we would then need to explain is that the distribution of trial-to-trial variability on the native scale depends on the link function, as it will always be estimated as a normal distribution on the parameter space. We briefly discussed this for the variable precision model and how assuming a Gaussian on the parameter space with a log link function results in a log-normal that is reasonably similar to the gamma originally assumed by van den Berg.
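This point can be checked numerically. The sketch below (SciPy, with made-up gamma parameters rather than van den Berg's fitted values) moment-matches a log-normal to a gamma and compares the two densities on a grid:

```python
import numpy as np
from scipy import stats

# Hypothetical gamma distribution for trial-level precision
# (shape and scale chosen for illustration only).
shape, scale = 4.0, 0.5
gamma = stats.gamma(a=shape, scale=scale)

# Moment-match a log-normal: choose mu and sigma so its mean and
# variance equal the gamma's. A Gaussian on the log scale (log link)
# is exactly a log-normal on the native scale.
m, v = gamma.mean(), gamma.var()
sigma2 = np.log(1 + v / m**2)
mu = np.log(m) - sigma2 / 2
lognorm = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

# Compare the two densities on a grid.
x = np.linspace(0.01, 8, 500)
max_abs_diff = np.max(np.abs(gamma.pdf(x) - lognorm.pdf(x)))
print(lognorm.mean(), gamma.mean())  # means match by construction
print(max_abs_diff)                  # the densities track each other closely
```

The densities are not identical (the gamma has a lighter right tail), but for moderate shape parameters the moment-matched log-normal is a close approximation, which is the sense in which the log-link parameterization is "reasonably similar" to the original gamma assumption.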

@venpopov (Owner, Author)

> Generally, I like the structure.
>
> I wondered if we should frame this more generally, as a general way to implement trial-to-trial variation in model parameters. This could also be useful in SDT models that assume variable memory strength over trials, and in evidence accumulation models. Granted, these are not yet implemented in bmm, so we could also think about generalizing the article once these models have been added to the package.

I was wondering the same thing. One idea I had is to split this into two shorter articles. One would cover the general trick of including trial-by-trial variability where the variability itself has a random effect over subjects. The variable precision article could then just link to it for the final step and show how to do it without re-explaining the logic and details.

@venpopov (Owner, Author)

> One thing we would then need to explain is that the distribution of trial-to-trial variability on the native scale depends on the link function, as it will always be estimated as a normal distribution on the parameter space. We briefly discussed this for the variable precision model and how assuming a Gaussian on the parameter space with a log link function results in a log-normal that is reasonably similar to the gamma originally assumed by van den Berg.

Great point, I forgot about that. Will add it to the structure.
