[Refactor] Expose a higher-level API #30
Conversation
We expect that our users will be more interested in machine-learning than in quantum computers, at least at first, so we expose a new, higher-level API that hides most of the quantum details. With this API, switching between the QutipEmulator, emu-mps, or the QPU is just a few lines of code (well, one line of code + the connection details: username, password, project id). This results in a tutorial that spends less time on the quantum aspects and more on the machine-learning. Also, more tests.
Cc @MatthieuMoreau0 for the use of the QPU.
I've not looked at all the code in detail, since I don't consider myself a contributor to the package. Regarding the API, I think at some point we should add the ability to configure the backends. @Yoric suggests doing that in a separate PR, which is fine by me, but it really should be done at some point, because at larger qubit numbers, emu-mps becomes increasingly dependent on good config values.
As a side-note, my assumption is that, at some point, as we publish more open-source packages, this class hierarchy will move to another library and will progressively grow into something quite generic. So we will definitely want more configuration. On the other hand, we may want to wait until we have several applications before we make it overly generic.
Yep, but that can be in the scope of some longer term roadmap. I am absolutely interested in moving to a qadence2 back-end once that is available, yes!
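To illustrate the kind of backend configuration discussed above, here is a hypothetical sketch; the config field names and the idea of passing a config object to EmuMPSExtractor are assumptions, not the current API.

```python
from dataclasses import dataclass


@dataclass
class EmuMPSConfig:
    # Hypothetical knobs: at larger qubit numbers, emu-mps accuracy and cost
    # depend on values like these.
    max_bond_dim: int = 128
    precision: float = 1e-6


# Hypothetical usage, keeping backend details out of the tutorial:
#   extractor = EmuMPSExtractor(config=EmuMPSConfig(max_bond_dim=256))
```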
Thanks @Yoric, a few minor comments from my side, otherwise LGTM.
username: str,
password: str | None = None,
device_name: str = "FRESNEL",
batch_id: list[str] | None = None,
nit: this is a list, so I suggest naming this batch_ids.
# At least one job is pending, let's wait.
await sleep(2)
logger.debug("Job %s is still incomplete")
waiting = True |
Unless I'm missing something, this is going to loop forever, as we are not refreshing the batch data at each iteration.
This may be a sign that we are missing a test for the QPU extractor, one where we simulate the sequence execution taking a few iterations to run to completion.
Oh, yes, we're absolutely missing a test.
I'll try and find time to write one.
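For what it's worth, here is a minimal sketch of the kind of loop (and test target) being discussed, re-fetching the batch on every iteration; the get_batch call, the ordered_jobs attribute, and the status values are assumptions about the SDK, not the code from this PR.

```python
import logging
from asyncio import sleep

logger = logging.getLogger(__name__)

PENDING_STATUSES = {"PENDING", "RUNNING"}  # assumed job status values


async def wait_for_batch(sdk, batch_id: str, poll_interval: float = 2.0) -> None:
    """Poll until every job in the batch has completed."""
    while True:
        # Re-fetch the batch so each iteration sees fresh job statuses.
        batch = sdk.get_batch(batch_id)  # assumed refresh call
        pending = [j for j in batch.ordered_jobs if j.status in PENDING_STATUSES]
        if not pending:
            return
        logger.debug("Batch %s still has %d incomplete job(s)", batch_id, len(pending))
        await sleep(poll_interval)
```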
logger.debug("Executing compiled graph #%s", id) | ||
batch = self._sdk.create_batch( | ||
compiled.sequence.to_abstract_repr(), | ||
jobs=[{"runs": 1000}], |
Currently the max runs for Fresnel is 500, so the batch creation fails. Either create two jobs with 500 runs each, or lower this value to 500 if that's sufficient.
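For illustration, the first option would only change the jobs argument of the create_batch call quoted above (a sketch, assuming 1000 shots are actually needed):

```python
# Two jobs of 500 runs each, to stay within Fresnel's 500-runs-per-job limit.
batch = self._sdk.create_batch(
    compiled.sequence.to_abstract_repr(),
    jobs=[{"runs": 500}, {"runs": 500}],
)
```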
More details
This change comes from a conversation with @ferrulli1pasqal, who suggested that we do not want to overwhelm our users with details on sequences, devices, etc. during the tutorial. So we now have an API that handles all those details, filters out graphs that cannot be compiled to sequences or executed on the device, and also handles saving the processed data while we're at it.
Before this change, to run the QutipEmulator, we executed
After this change, to run it, we execute
To use emu-mps, just replace QutipExtractor with EmuMPSExtractor. To use a QPU, just replace it with QPUExtractor (and specify username, password, project id and, optionally, the batch_ids if you're resuming from a previous computation).
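Since the before/after snippets did not survive extraction, here is a purely hypothetical sketch of the switch described above; only the extractor class names come from this PR, while the import path, constructor parameters, and run() method are illustrative assumptions.

```python
# Hypothetical import path and API; only the class names are from the PR.
from qek import QutipExtractor, EmuMPSExtractor, QPUExtractor  # assumed module

graphs = []  # placeholder for the user's dataset of graphs

extractor = QutipExtractor(graphs)  # local emulation via QutipEmulator
# Switching backend is the one-line change described above:
# extractor = EmuMPSExtractor(graphs)
# extractor = QPUExtractor(
#     graphs,
#     username="alice",
#     project_id="my-project",
#     batch_ids=None,  # pass previous batch ids to resume a computation
# )

processed = extractor.run()  # compiles, filters, executes, and saves the data
```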