Run / test the notebooks from the command line automatically #334
Will this still be possible if we move the notebooks to a different repository?
Anything is possible! I would like to have both historical and modern notebooks working. I guess that means putting version requirements at the top and bumping ImageD11 versions more often. That would mean testing against several venvs, at least the recent releases on PyPI; some kind of matrix to run. The "reproducible" part of FAIR is worth aiming for. As far as I am aware, the code is still backward compatible except for cases where it had a bug.
Going over to parameterised notebooks, we could try:
Currently working on this with papermill.
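For context, papermill's convention is a code cell tagged `parameters` holding defaults, after which papermill inserts an `injected-parameters` cell with the overrides. A minimal stdlib-only sketch of that injection step (the notebook content and parameter names below are made up for illustration, not taken from this repository):

```python
import json

# Hypothetical minimal notebook: one code cell tagged "parameters"
# holding default values, following the papermill convention.
nb = {
    "cells": [
        {
            "cell_type": "code",
            "metadata": {"tags": ["parameters"]},
            "source": ["dataset = 'default_sample'\n", "nlayers = 1\n"],
            "outputs": [],
            "execution_count": None,
        }
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}

def inject_parameters(notebook, **params):
    """Mimic what papermill does: insert an 'injected-parameters'
    cell right after the cell tagged 'parameters', so the injected
    values override the defaults when the notebook runs."""
    cells = notebook["cells"]
    idx = next(i for i, c in enumerate(cells)
               if "parameters" in c.get("metadata", {}).get("tags", []))
    injected = {
        "cell_type": "code",
        "metadata": {"tags": ["injected-parameters"]},
        "source": [f"{k} = {v!r}\n" for k, v in params.items()],
        "outputs": [],
        "execution_count": None,
    }
    cells.insert(idx + 1, injected)
    return notebook

nb = inject_parameters(nb, dataset="test_scan", nlayers=3)
print(json.dumps([c["metadata"]["tags"] for c in nb["cells"]]))
```

In real use one would just call `papermill input.ipynb output.ipynb -p dataset test_scan`; the sketch only shows what that does to the notebook structure.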
I worked in the past with testbook: a very good project, but not that much activity lately. For fetching files/notebooks there are several possibilities; for example, we have something in . Don't hesitate to contact DAU if you need help with this.
Thanks @loichuder ! We already have a little function to fetch some test data from Zenodo, which is working well.

One of our goals anyway is to parameterise the notebooks so that they can be run as ewoks tasks. Ewoks uses the same cell-tagging system for notebook parameterisation as papermill, so it makes sense for us to use that going forward. Parameterising the notebooks isn't actually as bad as I thought it would be, so I'm already halfway done with that.

Regarding running the tests: for now they need SLURM access (for the Astra reconstruction), so they can't run on the GitHub CI yet, but they should run locally in a git checkout on an ESRF machine with SLURM deployment access. Not sure how we could integrate the ESRF GitLab runner into our existing CI pipeline here?
Good to know. That could be useful to us for testing our own tasks!
Indeed. Great to read 🙂
Hm, a potential strategy would be for the GitHub CI to trigger a pipeline on a GitLab runner that has the needed access to launch the tests and report the results. We could do it either via GitHub webhooks or via requests to the GitLab API. I don't have the details fully fleshed out, but I am sure it is possible. I'll ask my colleagues and come back to you.
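As an illustration of the GitLab-API route mentioned above: GitLab exposes a pipeline trigger endpoint (`POST /api/v4/projects/:id/trigger/pipeline`) that a GitHub Actions step could call. A sketch that builds (but does not send) such a request; the host, project id, and token are placeholders, not real values from this project:

```python
from urllib.parse import urlencode
from urllib.request import Request

GITLAB_HOST = "https://gitlab.example.com"  # placeholder host
PROJECT_ID = 12345                           # hypothetical project id
TRIGGER_TOKEN = "glptt-placeholder"          # would live in a GitHub secret

def build_trigger_request(ref="master", **variables):
    """Build the POST request for GitLab's pipeline trigger API.
    GitLab expects form fields: token, ref, and variables[NAME]=value,
    which become CI variables visible to the triggered pipeline."""
    data = {"token": TRIGGER_TOKEN, "ref": ref}
    for name, value in variables.items():
        data[f"variables[{name}]"] = value
    url = f"{GITLAB_HOST}/api/v4/projects/{PROJECT_ID}/trigger/pipeline"
    return Request(url, data=urlencode(data).encode(), method="POST")

# e.g. pass the GitHub commit SHA through so the GitLab job can check it out
req = build_trigger_request(ref="master", GITHUB_SHA="abc123")
print(req.full_url)
```

Sending it would just be `urllib.request.urlopen(req)` from the GitHub runner; the webhook alternative would avoid storing a trigger token on the GitHub side.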
@loichuder many thanks! In the meantime, the tests can be run manually by myself and @jonwright on our ESRF machines. I've excluded them from the GitHub CI and from flake8 linting.
@jadball : I do not like testing numerical results in test cases, unless the numbers came from a simulation or theory and you test against a "truth". Getting the same answer as last time is not a priority: the new answer in the future is almost always more accurate, or "better" in some other way.

To finish up: can you add rendered notebooks somewhere (docs?), so we can get these out to users (e.g. https://imaged11.readthedocs.io/en/latest/) and only need to review them when doing a release?

@haixing0a is motivated to find and use some test-case datasets. Maybe we choose a folder on /data/id11/inhouseN for now. For the peak segmentation, I still need to pick out a test dataset to debug that monkeypatch thing.
@jonwright makes sense to me, thanks!
On GitHub it looks good, as it renders the ipynb. For readthedocs (etc.) I wonder whether we are better off with HTML, so that the source (ipynb) is distinct from the output and it does not try to render again?
will do! |
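One lightweight way to keep the committed ipynb source distinct from the rendered output, as suggested above, is to strip outputs before committing and let the docs build carry the executed results (the rendering itself would go through something like `jupyter nbconvert --to html`, assumed available). A stdlib-only sketch of the stripping step:

```python
import copy

def clear_outputs(notebook):
    """Return a copy of a notebook dict with all code-cell outputs
    and execution counts removed, so the committed .ipynb stays
    source-only while a rendered copy carries the results."""
    nb = copy.deepcopy(notebook)
    for cell in nb["cells"]:
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

# Demo on a hypothetical one-cell notebook with a stale output attached.
nb_src = {"cells": [{
    "cell_type": "code",
    "source": ["1 + 1"],
    "outputs": [{"output_type": "execute_result"}],
    "execution_count": 1,
}]}
clean = clear_outputs(nb_src)
print(clean["cells"][0]["outputs"])
```

In practice `nbstripout` or `jupyter nbconvert --clear-output` do the same job as a pre-commit hook or CLI step; the sketch just shows how little state is involved.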
Progress made: an end-to-end tomo test for multiple layers with multiple phases is now on my fork (master...jadball:ImageD11:master). To do:
Little update on this: we did manage to get a GitLab runner running our CI on data present in . We could meet briefly next week so I can gather your exact CI requirements, and then work on setting this up.
We have accumulated a lot of code in notebooks, so I guess I will sometimes break them. Here are some notes on how to test notebooks. The to-dos seem to be:
Some links:
https://github.com/nteract/testbook
https://nbconvert.readthedocs.io/en/latest/execute_api.html
https://github.com/jupyter/nbconvert/blob/main/docs/api_examples/template_path/make_html.py
https://stackoverflow.com/questions/70671733/testing-a-jupyter-notebook