Feature/base qcvv framework #992
Moved qcvv folder to supermarq following discussion at dev meeting
(adding a few comments/questions)
looks great, just a couple more questions
not necessarily in this PR, but we'll probably also want to add a third path for experiments on local hardware: in this case we would still use superstaq to compile (but not submit) all the circuits, and then let the user run the compiled circuits locally and pass their counts back in to the experiment for analysis
Definitely a good shout. A hacky way to do this at the moment would be for the user to use the …

^ sounds good! we'll definitely need something like this for testbed users (e.g. aqt and qscout) who use superstaq to compile but don't have cloud-accessible machines. having a slick interface for this would be really nice

I've implemented this by separating out the …
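To make the shape of that third path concrete, here is a hedged sketch of the "compile remotely, run locally, analyze the returned counts" workflow. All names here are hypothetical (this is not the PR's actual API), and plain Python stands in for any superstaq calls:

```python
from dataclasses import dataclass, field


@dataclass
class LocalRunSketch:
    """Hypothetical sketch of the 'local hardware' path: circuits are compiled
    via superstaq (not shown), the user runs them on their own hardware, and
    the resulting counts are passed back in for analysis."""

    num_qubits: int
    counts_per_circuit: list[dict[str, int]] = field(default_factory=list)

    def collect_counts(self, counts: dict[str, int]) -> None:
        # The user runs a compiled circuit locally and passes the resulting
        # bitstring counts back into the experiment.
        self.counts_per_circuit.append(counts)

    def probabilities(self) -> list[dict[str, float]]:
        # Normalize each counts dictionary into probabilities, keeping every
        # possible bitstring present so no entries are missing downstream.
        results = []
        for counts in self.counts_per_circuit:
            total = sum(counts.values())
            probs = {f"{i:0{self.num_qubits}b}": 0.0 for i in range(2**self.num_qubits)}
            for key, count in counts.items():
                probs[key] = count / total
            results.append(probs)
        return results
```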
```python
probabilities = {
    format(idx, f"0{self.num_qubits}b"): 0.0 for idx in range(2**self.num_qubits)
}

for key, count in counts.items():
    probabilities[key] = count / total
```
can we just use

```python
probabilities = {key: count / total for key, count in counts.items()}
```

?
we shouldn't need to include all the keys with no counts in the dictionary (if we do, we should just use an array)
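The two approaches under discussion can be compared with a small, self-contained example (plain Python, no superstaq dependencies; the counts are made up for illustration):

```python
num_qubits = 2
counts = {"00": 70, "11": 25, "01": 5}  # example measurement counts
total = sum(counts.values())

# Sparse version (the suggestion): only observed bitstrings appear as keys.
sparse = {key: count / total for key, count in counts.items()}

# Dense version (the PR's current approach): every possible bitstring is a
# key, with unobserved bitstrings explicitly mapped to 0.0.
dense = {format(idx, f"0{num_qubits}b"): 0.0 for idx in range(2**num_qubits)}
for key, count in counts.items():
    dense[key] = count / total
```

The dense form trades a little memory for a fixed key set, which matters when the dictionaries later become dataframe records.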
Because of the way these probability dictionaries are treated as records in a dataframe in the various implementations of actual experiments (see for example the qcvv_css.ipynb), to avoid NaNs I prefer enforcing that all the bitstrings are present in the dictionary.
ok sounds good. in that case i think we also need to fill in the missing values in the simulator output?
```python
if sorted(probability.keys()) != sorted(
    f"{i:0{self.num_qubits}b}" for i in range(2**self.num_qubits)
):
    raise RuntimeError("Returned probabilities are missing bitstrings.")
```
similarly here: can we remove this restriction? possibly with some validation, e.g.
```python
probability = circuit_eval_func(sample.circuit, **kwargs)
if not all(len(key) == self.num_qubits for key in probability.keys()):
    raise RuntimeError("Returned probabilities include an incorrect number of bits.")
```
I like this way of checking that the bit strings are correct, and rather than enforcing that all probabilities are included, I've manually set any missing bitstrings to have zero probability. However, as above, for purposes of making a dataframe nicely, I prefer to make sure that the probabilities are stored in a dictionary with all the possible bitstrings as keys.
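The "set missing bitstrings to zero" step described above can be sketched as a small helper. The function name is hypothetical (not necessarily what the PR uses); it only illustrates the approach:

```python
def fill_missing_bitstrings(
    probability: dict[str, float], num_qubits: int
) -> dict[str, float]:
    """Return a copy of `probability` in which every possible bitstring of
    length `num_qubits` is present, with missing entries set to 0.0.

    Hypothetical helper illustrating the approach described above.
    """
    return {
        f"{i:0{num_qubits}b}": probability.get(f"{i:0{num_qubits}b}", 0.0)
        for i in range(2**num_qubits)
    }
```

Combined with the bit-length validation suggested earlier, this keeps every dictionary on the same fixed key set before it becomes a dataframe record.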
```python
job: css.Job | None = None
"""The superstaq job corresponding to the sample. Defaults to None if no job is
associated with the sample."""

raw_circuit: cirq.Circuit = field(init=False)
```
why don't we make this

```python
raw_circuit: cirq.Circuit = field(init=False)
compiled_circuit: cirq.Circuit = field(init=False)
```

so we're not redefining circuit in some cases (and to make it more consistent with css.Job)

we can also get the compiled circuits for jobs submitted to hardware via job.compiled_circuits()
I've gone for a mix of both here. The Sample object now explicitly has a raw_circuit (used in the init) and a compiled_circuit attribute. By default we want to use the compiled circuit if it exists, so the circuit property returns exactly this.

In the run_on_device() method, I've also now added a line that updates the compiled circuit for each sample by retrieving it from the css.Job.
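The "prefer the compiled circuit when it exists" pattern described above can be sketched as follows. Plain strings stand in for cirq.Circuit so the example runs without cirq installed, and this is a simplified illustration rather than the PR's actual Sample definition:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Sample:
    """Simplified sketch: raw_circuit is set at init time; compiled_circuit is
    filled in later (e.g. retrieved from a css.Job after compilation)."""

    raw_circuit: str
    compiled_circuit: Optional[str] = None

    @property
    def circuit(self) -> str:
        # Prefer the compiled circuit when available, else fall back to raw.
        if self.compiled_circuit is not None:
            return self.compiled_circuit
        return self.raw_circuit
```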
cool, makes sense to me
awesome, this looks great to me % that simulator output fix (if necessary) :)
Thanks for all your help @richrines1 - I think we've got something really nice here!
Implement the XEB routine within the QCVV experiment framework. Blocked by #992
Implement the interleaved randomized benchmarking routine within the qcvv framework. Blocked by #992
In preparation for the XEB/IRB benchmarking routines, this PR implements some common functionality that can form the basis of a wide range of QCVV experiments within superstaq.