Fix observable predictions shape in sinter's MWPF and FusionBlossom decoder #864

Merged
2 commits merged on Dec 4, 2024

Conversation

inmzhang (Contributor) commented on Dec 3, 2024

The packed obs predictions should have shape `(num_shots, (self.num_obs + 7) // 8)` instead of `(num_shots, self.num_obs)`; otherwise simulating codes with multiple logical observables fails with `ValueError: predictions.shape[1] > actual_obs.shape[1] + 1`.
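
For context, a minimal sketch of the shape fix (illustrative only, not the exact sinter code; `num_shots` and `num_obs` are placeholder values):
```python
import numpy as np

# Illustrative sketch: bit-packed predictions need one byte per 8 observables,
# rounded up, rather than one byte per observable.
num_shots = 1000  # hypothetical
num_obs = 3       # hypothetical number of logical observables

# Wrong: one column per observable, even though the values are bit-packed.
# predictions = np.zeros((num_shots, num_obs), dtype=np.uint8)

# Correct: one column per byte of packed bits.
predictions = np.zeros((num_shots, (num_obs + 7) // 8), dtype=np.uint8)
```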

Strilanc (Collaborator) commented on Dec 4, 2024

LGTM; would probably be good to add a unit test to the standard battery of tests that exercises this

@Strilanc Strilanc enabled auto-merge (squash) December 4, 2024 08:14
@Strilanc Strilanc merged commit d70b206 into quantumlib:main Dec 4, 2024
57 checks passed
Strilanc pushed a commit that referenced this pull request Jan 31, 2025
…al observable (#873)

This PR fixes two bugs in the MWPF decoder.

## 1. Supporting decomposed detector error model

While MWPF expects a decoding hypergraph, the input detector error model
from sinter is decomposed by default. A decomposed DEM may list the same
detector or logical observable multiple times within a single error
instruction, which the previous implementation did not account for.

The previous implementation assumed that each detector and logical
observable appears only once, so I used
```python
frames: List[int] = []
...
frames.append(t.val)
```

However, this no longer works when the same frame appears in multiple
decomposed parts. In that case, the DEM actually means "the hyperedge
contributes to the logical observable iff `count(frame) % 2 == 1`".
This is fixed by
```python
frames: set[int] = set()
...
frames ^= { t.val }
```
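
As a rough illustration of why the XOR-set is needed, here is a sketch using stim's DEM API on a hypothetical decomposed error (not the actual decoder wiring):
```python
import stim

# Hypothetical decomposed error: L0 appears in both parts, L1 in only one.
dem = stim.DetectorErrorModel("""
    error(0.1) D0 D1 L0 ^ D1 D2 L0 L1
""")

for instruction in dem:
    if instruction.type != "error":
        continue
    frames: set[int] = set()
    for t in instruction.targets_copy():
        if t.is_logical_observable_id():
            # XOR keeps an observable only if it appears an odd number of times.
            frames ^= {t.val}
    print(sorted(frames))  # L0 cancels out, leaving only L1: prints [1]
```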

## 2. Supporting multiple logical observables

Although a previous [PR #864](#864) fixed the panic when multiple logical
observables are encountered, the returned value was still wrong and caused
a significantly higher logical error rate.

The previous implementation converted an `int`-typed bitmask to a
bit-packed value using `np.packbits(prediction, bitorder="little")`.
However, this does not work for more than one logical observable.
For example, if an observable is defined using `OBSERVABLE_INCLUDE(2)
...`, the bit-packed value should be `[4]` because $1 \ll 2 = 4$.
However, `np.packbits(4, bitorder="little")` returns `[1]`, which is
incorrect: `packbits` treats each nonzero input element as a single set bit.

The correct procedure is to first generate the binary representation with
`self.num_obs` bits using `np.binary_repr(prediction, width=self.num_obs)`
(here `'100'`), then reverse the order of the bits to `['0', '0', '1']`,
and finally run `np.packbits`, which gives the correct value `[4]`.

The full code is below:
```python
predictions[shot] = np.packbits(
    # Little-endian bit array of width self.num_obs, e.g. 4 -> [0, 0, 1].
    np.array(list(np.binary_repr(prediction, width=self.num_obs))[::-1], dtype=np.uint8),
    bitorder="little",
)
```
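
A quick sanity check of the two approaches (illustrative values; `num_obs = 3` and the single `OBSERVABLE_INCLUDE(2)` observable are taken from the example above):
```python
import numpy as np

num_obs = 3
prediction = 1 << 2  # OBSERVABLE_INCLUDE(2) -> bitmask 4

# Old approach: packbits sees one nonzero element and packs a single set bit.
print(np.packbits(np.array([prediction], dtype=np.uint8), bitorder="little"))  # [1]

# Fixed approach: expand to a little-endian bit array first, then pack.
bits = np.array(list(np.binary_repr(prediction, width=num_obs))[::-1], dtype=np.uint8)
print(np.packbits(bits, bitorder="little"))  # [4]
```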