Merge branch 'develop'
venpopov committed Feb 28, 2024
2 parents 6d844bd + 3258219 commit b4ae4f8
Showing 10 changed files with 70 additions and 61 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,6 +1,6 @@
Package: bmm
Title: Easy and Accessible Bayesian Measurement Models using 'brms'
Version: 0.4.0
Version: 0.4.0.9000
Authors@R: c(
person("Vencislav", "Popov", , "[email protected]", role = c("aut", "cre", "cph")),
person("Gidon", "Frischkorn", , "[email protected]", role = c("aut", "cph")),
2 changes: 2 additions & 0 deletions NEWS.md
@@ -1,3 +1,5 @@
# bmm 0.4.0+

# bmm 0.4.0

### New features
2 changes: 1 addition & 1 deletion R/zzz.R
@@ -20,7 +20,7 @@

startUpMsg <- c(
paste0("A short introduction to the package is available by calling help(\"bmm\"). \n",
"More detailed articles on how to fit different models are available via vignettes(\"bmm\").\n",
"More detailed articles on how to fit different models are available via vignette(package=\"bmm\").\n",
"You can view the list of currently available models by calling supported_models().\n")
)

8 changes: 3 additions & 5 deletions README.Rmd
@@ -11,6 +11,7 @@ knitr::opts_chunk$set(
)
```
# bmm <!-- badges: start -->
[![bmm status badge](https://popov-lab.r-universe.dev/badges/bmm)](https://popov-lab.r-universe.dev/bmm)
[![R-CMD-check](https://github.com/venpopov/bmm/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/venpopov/bmm/actions/workflows/R-CMD-check.yaml)
[![test-coverage](https://github.com/venpopov/bmm/actions/workflows/test-coverage.yaml/badge.svg)](https://github.com/venpopov/bmm/actions/workflows/test-coverage.yaml)
<!-- badges: end -->
@@ -97,16 +98,13 @@ compiler. If you have not used `brms` before, you will need to first install the
If you are already using `brms`, you are good to go and can
install the package as described in one of the options below:

<details>
<details open>
<summary><b>Install the latest beta release of bmm</b></summary>

</br>

``` r
if (!requireNamespace("remotes")) {
install.packages("remotes")
}
remotes::install_github("venpopov/bmm@*release")
install.packages('bmm', repos = c('https://popov-lab.r-universe.dev'))
```

This does not install the vignettes, which take a long time to build,
17 changes: 8 additions & 9 deletions README.md
Expand Up @@ -3,6 +3,8 @@

# bmm <!-- badges: start -->

[![bmm status
badge](https://popov-lab.r-universe.dev/badges/bmm)](https://popov-lab.r-universe.dev/bmm)
[![R-CMD-check](https://github.com/venpopov/bmm/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/venpopov/bmm/actions/workflows/R-CMD-check.yaml)
[![test-coverage](https://github.com/venpopov/bmm/actions/workflows/test-coverage.yaml/badge.svg)](https://github.com/venpopov/bmm/actions/workflows/test-coverage.yaml)
<!-- badges: end -->
@@ -65,11 +67,11 @@ view the latest list of supported models by running:
bmm::supported_models()
#> The following models are supported:
#>
#> - IMMabc(resp_err, nt_features, setsize)
#> - IMMbsc(resp_err, nt_features, nt_distances, setsize)
#> - IMMfull(resp_err, nt_features, nt_distances, setsize)
#> - IMMabc(resp_err, nt_features, setsize, regex)
#> - IMMbsc(resp_err, nt_features, nt_distances, setsize, regex)
#> - IMMfull(resp_err, nt_features, nt_distances, setsize, regex)
#> - mixture2p(resp_err)
#> - mixture3p(resp_err, nt_features, setsize)
#> - mixture3p(resp_err, nt_features, setsize, regex)
#> - sdmSimple(resp_err)
#>
#> Type ?modelname to get information about a specific model, e.g. ?IMMfull
@@ -108,18 +110,15 @@ this step.
If you are already using `brms`, you are good to go and can install the
package as described in one of the options below:

<details>
<details open>
<summary>
<b>Install the latest beta release of bmm</b>
</summary>

</br>

``` r
if (!requireNamespace("remotes")) {
install.packages("remotes")
}
remotes::install_github("venpopov/bmm@*release")
install.packages('bmm', repos = c('https://popov-lab.r-universe.dev'))
```

This does not install the vignettes, which take a long time to build,
2 changes: 1 addition & 1 deletion man/bmm-package.Rd

Binary file modified man/figures/README-unnamed-chunk-4-1.png
6 changes: 6 additions & 0 deletions vignettes/IMM.Rmd
@@ -1,6 +1,9 @@
---
title: "The Interference Measurement Model (IMM)"
output: bookdown::html_document2
author:
- Gidon Frischkorn
- Ven Popov
bibliography: REFERENCES.bib
header-includes:
- \usepackage{amsmath}
@@ -21,6 +24,9 @@ p {
margin-top: 1.5em ;
margin-bottom: 1.5em ;
}
.author{
display: none;
}
</style>

```{r, include = FALSE}
39 changes: 24 additions & 15 deletions vignettes/mixture_models.Rmd
@@ -1,6 +1,9 @@
---
title: "Mixture models for visual working memory"
output: bookdown::html_document2
author:
- Ven Popov
- Gidon Frischkorn
bibliography: REFERENCES.bib
vignette: >
%\VignetteIndexEntry{Mixture models for visual working memory}
@@ -20,6 +23,9 @@ p {
margin-top: 1.5em ;
margin-bottom: 1.5em ;
}
.author{
display: none;
}
</style>
```

@@ -50,31 +56,34 @@ knitr::include_graphics("mixture_models_illustration.jpg")

Responses based on a noisy memory representation of the correct feature come from a circular normal distribution (i.e., von Mises) centered on the correct feature value, while guessing responses come from a uniform distribution along the entire circle:

\begin{align}
p(\theta) &= p_{mem} \cdot \text{vM}(\theta; \mu, \kappa) + (1-p_{mem}) \cdot \text{Uniform}(\theta; -\pi, \pi) \\
\\
p_{guess} &= 1-p_{mem} \\
\\
vM(\theta; \mu, \kappa) &= \frac{e^{\kappa \cos(\theta - \mu)}}{2\pi I_0(\kappa)}
\end{align}
$$
p(\theta) = p_{mem} \cdot \text{vM}(\theta; \mu, \kappa) + (1-p_{mem}) \cdot \text{Uniform}(\theta; -\pi, \pi) \\
$$
$$
p_{guess} = 1-p_{mem} \\
$$
$$
\text{vM}(\theta; \mu, \kappa) = \frac{e^{\kappa \cos(\theta - \mu)}}{2\pi I_0(\kappa)}
$$

where $\theta$ is the response angle, $p_{mem}$ is the probability that responses come from memory of the target feature, $\mu$ is the mean of the von Mises distribution representing the target feature, and $\kappa$ is the concentration parameter of the von Mises distribution, representing the precision of the target memory representation.
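This mixture density is easy to evaluate directly in base R. The following is a hypothetical sketch for illustration only, not a function exported by `bmm`; `besselI` is base R's modified Bessel function:

```r
# Hypothetical sketch of the two-parameter mixture density (not part of bmm)
dmixture2p <- function(theta, p_mem, mu, kappa) {
  # von Mises density, using base R's besselI for the normalizing constant
  dvm <- exp(kappa * cos(theta - mu)) / (2 * pi * besselI(kappa, nu = 0))
  p_mem * dvm + (1 - p_mem) / (2 * pi)
}

# sanity check: the density should integrate to 1 over the circle
integrate(function(x) dmixture2p(x, p_mem = 0.8, mu = 0, kappa = 5), -pi, pi)
```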

The *three-parameter mixture model* (`?mixture3p`) adds a third state: confusing the cued object with another object shown during encoding and thus reporting the feature of the other object (long dashed green distribution in Figure \@ref(fig:mixture-models)). Responses from this state are sometimes called non-target responses or swap errors. The non-target responses also come from a von Mises distribution centered on the feature of the non-target object. The probability of non-target responses is represented by the parameter $p_{nt}$, and the complete model is:

\begin{align}
p(\theta) &= p_{mem} \cdot \text{vM}(\theta; \mu_t, \kappa) + p_{nt} \cdot \frac{\sum_{i=1}^{n} \text{vM}(\theta; \mu_{i}, \kappa)}{n} + (1-p_{mem}-p_{nt}) \cdot \text{Uniform}(\theta; -\pi, \pi) \\
\\
p_{guess} &= 1-p_{mem}-p_{nt}
\end{align}
$$
p(\theta) = p_{mem} \cdot \text{vM}(\theta; \mu_t, \kappa) + p_{nt} \cdot \frac{\sum_{i=1}^{n} \text{vM}(\theta; \mu_{i}, \kappa)}{n} + (1-p_{mem}-p_{nt}) \cdot \text{Uniform}(\theta; -\pi, \pi) \\
$$
$$
p_{guess} = 1-p_{mem}-p_{nt}
$$

where $\mu_{t}$ is the location of the target feature, $\mu_{i}$ is the location of the $i$-th non-target feature, and $n$ is the number of non-target features.

In most applications of the model, the responses are coded as the angular error relative to the target feature. The non-target locations are likewise coded relative to the target feature, and the precision of the non-target memory representations is assumed to be the same as the precision of the target memory representation. This is the version of the model implemented in the `bmm` package:

\begin{align}
p(\theta) &= p_{mem} \cdot \text{vM}(\theta; 0, \kappa) + p_{nt} \cdot \frac{\sum_{i=1}^{n} \text{vM}(\theta; \mu_{i}-\mu_t, \kappa)}{n} + (1-p_{mem}-p_{nt}) \cdot \text{Uniform}(\theta; -\pi, \pi)
\end{align}
$$
p(\theta) = p_{mem} \cdot \text{vM}(\theta; 0, \kappa) + p_{nt} \cdot \frac{\sum_{i=1}^{n} \text{vM}(\theta; \mu_{i}-\mu_t, \kappa)}{n} + (1-p_{mem}-p_{nt}) \cdot \text{Uniform}(\theta; -\pi, \pi)
$$
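For illustration, this version of the three-parameter density can be sketched in base R (a hypothetical helper, not part of the package):

```r
# Hypothetical sketch of the three-parameter mixture density (not part of bmm).
# theta: vector of response errors relative to the target;
# mu_nt: non-target locations, also coded relative to the target
dmixture3p <- function(theta, p_mem, p_nt, mu_nt, kappa) {
  dvm <- function(x, mu) exp(kappa * cos(x - mu)) / (2 * pi * besselI(kappa, nu = 0))
  # average the von Mises densities centered on each non-target location
  nt <- rowMeans(sapply(mu_nt, function(m) dvm(theta, m)))
  p_mem * dvm(theta, 0) + p_nt * nt + (1 - p_mem - p_nt) / (2 * pi)
}

# sanity check: the density should integrate to 1 over the circle
integrate(function(x) dmixture3p(x, 0.6, 0.2, mu_nt = c(-2, 1.5), kappa = 8), -pi, pi)
```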

# The data

53 changes: 24 additions & 29 deletions vignettes/sdm-simple.Rmd
@@ -1,6 +1,9 @@
---
title: "The Signal Discrimination Model (SDM)"
output: bookdown::html_document2
author:
- Ven Popov
- Gidon Frischkorn
bibliography: REFERENCES.bib
header-includes:
- \usepackage{amsmath}
@@ -20,6 +23,9 @@ p {
margin-top: 1.5em ;
margin-bottom: 1.5em ;
}
.author{
display: none;
}
</style>

```{r, include = FALSE}
@@ -37,74 +43,63 @@ The model assumes that when a test probe appears, all possible responses on the
feature stored in memory ($\mu$) and the response options. Formally, this is given by the following activation function:


\begin{align}
S(\theta) &= c \cdot \frac{\exp(\kappa \cdot \cos(y-\mu))}{2\pi I_0(\kappa)}
(\#eq:activation-function)
\end{align}
$$
S(\theta) = c \cdot \frac{\exp(\kappa \cdot \cos(\theta-\mu))}{2\pi I_0(\kappa)}
$$


where $c$ is the memory strength parameter, $\kappa$ is the precision parameter, and $I_0$ is the modified Bessel function of the first kind of order 0. Thus, the activation function follows a von Mises distribution, weighted by a memory strength parameter.
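For illustration, the activation function can be evaluated over a grid of response options in base R (a hypothetical sketch, not code from `bmm`):

```r
# Hypothetical sketch: SDM activation over 360 response options (not part of bmm)
activation <- function(theta, mu, c, kappa) {
  # von Mises density scaled by the memory strength parameter c
  c * exp(kappa * cos(theta - mu)) / (2 * pi * besselI(kappa, nu = 0))
}

grid <- seq(-pi, pi, length.out = 360)  # the candidate response options
S <- activation(grid, mu = 0, c = 4, kappa = 8)
```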

The activation of response options is corrupted by noise, which is assumed to follow a Gumbel distribution. Then the response is the option with the highest activation value:

\begin{align}
$$
Pr(\theta) = argmax(S(\theta) + \epsilon) \\ \epsilon \sim Gumbel(0,1)
(\#eq:argmax)
\end{align}
$$
This is equivalent to the following softmax function (also known as the exponentiated Luce's choice rule):

\begin{align}
$$
Pr(\theta) = \frac{\exp(S(\theta))}{\sum_{i=1}^{n} \exp(S(\theta_i))}
(\#eq:ptheta-exps)
\end{align}
$$
where $n$ is the number of response options, most often 360 in typical visual working memory experiments.
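A minimal base-R sketch of this choice rule (hypothetical, for illustration); subtracting the maximum activation before exponentiating is a standard trick to avoid numerical overflow:

```r
# softmax over a vector of activations; max(S) is subtracted for stability
softmax <- function(S) {
  e <- exp(S - max(S))
  e / sum(e)
}

p <- softmax(c(2, 1, 0.5))
sum(p)  # the choice probabilities sum to 1
```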

In summary, the model assumes that response errors come from the following distribution, where $\mu = 0$:

\begin{align}
$$
\Large{f(\theta\ |\ \mu,c,\kappa) = \frac{e^{c \, \frac{e^{\kappa \cos(\theta-\mu)}}{2\pi I_0(\kappa)}}}{Z}}
(\#eq:dsdm-oberauer)
\end{align}
$$
and $Z$ is the normalizing constant that ensures the probability mass sums to 1.

# Parametrization in the `bmm` package

In the `bmm` package we use a different parametrization of Equation \@ref(eq:dsdm-oberauer). The parametrization is chosen for numerical stability and efficiency. Three features of \@ref(eq:dsdm-oberauer) make it difficult to work with in practice. First, the modified bessel function $I_0$ increases rapidly, often leading to numerical overflow. Second, the bessel function is expensive to compute, and estimating this model with MCMC methods can be slow. Third, the normalizing constant in the denominator requires summing 360 terms, which is also slow.
In the `bmm` package we use a different parametrization, chosen for numerical stability and efficiency. Three features of the density above make it difficult to work with in practice. First, the modified Bessel function $I_0$ increases rapidly, often leading to numerical overflow. Second, the Bessel function is expensive to compute, and estimating this model with MCMC methods can be slow. Third, the normalizing constant in the denominator requires summing 360 terms, which is also slow.

To address these issues, we use the following parametrization of the SDM distribution:

\begin{align}

$$
\Large{f(\theta\ |\ \mu,c,\kappa) = \frac{
e^{c \, \sqrt{\frac{\kappa}{2\pi}} \, e^{\kappa \, (\cos(\theta-\mu)-1)}}
}{Z}}
(\#eq:dsdm-bmm)
\end{align}
$$

This parametrization is derived from the known approximation of the modified Bessel function for large $\kappa$ [@abramowitz1988handbook]:

\begin{align}
$$
I_0(\kappa) \sim \frac{e^{\kappa}}{\sqrt{2\pi \kappa}}, \ \ \ \ \kappa \rightarrow \infty
(\#eq:bessel-approx)
\end{align}
$$
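The quality of this approximation is easy to check numerically in base R:

```r
# compare the exact modified Bessel function with the large-kappa approximation
kappa <- c(2, 5, 20, 50)
exact <- besselI(kappa, nu = 0)
approx <- exp(kappa) / sqrt(2 * pi * kappa)
round(approx / exact, 4)  # the ratio approaches 1 as kappa grows
```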

If needed, the $c$ parameter of the original formulation by Oberauer (2023) can be computed by:

\begin{align}

$$
c_{oberauer} = c_{bmm} \ e^{-\kappa} I_0(\kappa)\sqrt{2 \pi \kappa}

(\#eq:parametrization)
\end{align}
$$

This parametrization does not change the predicted shape of the distribution, but it produces slightly different values of $c$ for small values of $\kappa$. The parametrization is the default in the `bmm` package.
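A hypothetical helper for the conversion above (not part of `bmm`); base R's `besselI(..., expon.scaled = TRUE)` evaluates $e^{-\kappa} I_0(\kappa)$ directly, which avoids overflow for large $\kappa$:

```r
# convert the bmm c parameter to the Oberauer (2023) parametrization;
# besselI(kappa, 0, expon.scaled = TRUE) returns exp(-kappa) * I0(kappa)
c_oberauer <- function(c_bmm, kappa) {
  c_bmm * besselI(kappa, nu = 0, expon.scaled = TRUE) * sqrt(2 * pi * kappa)
}
```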

A second optimization concerns the calculation of the normalizing constant $Z$. The original model assumed that responses can only be one of 360 discrete values, resulting in a probability mass function. In `bmm` we treat the response variable as continuous, which makes $f(\theta)$ a probability density function. This means that we can calculate the normalizing constant $Z$ by integrating $f(\theta)$ over the entire circle:

\begin{align}
$$
Z = \int_{-\pi}^{\pi} f(\theta) d\theta
(\#eq:z-integral)
\end{align}
$$

This integral cannot be expressed in closed form, but it can be approximated using numerical integration. The discrete and continuous formulations give nearly identical results for a large number of response options (as in typical applications), but not when the number of response options is small, for example in 4-AFC tasks.
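For illustration, $Z$ can be approximated with base R's `integrate` (a hypothetical sketch of the computation, not the package's internal implementation):

```r
# unnormalized SDM density in the bmm parametrization
sdm_kernel <- function(theta, mu, c, kappa) {
  exp(c * sqrt(kappa / (2 * pi)) * exp(kappa * (cos(theta - mu) - 1)))
}

# normalizing constant by numerical integration over the circle
Z <- integrate(sdm_kernel, -pi, pi, mu = 0, c = 3, kappa = 8)$value
```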

