Remove headless-torch14 and headless-tf2 #314
Conversation
Move ocrd_detectron2 to headless-tf1 to avoid conflicts in the main virtual environment. Signed-off-by: Stefan Weil <[email protected]>
Move ocrd_cis to headless-tf1 to avoid a conflict with ocrd_calamari. Signed-off-by: Stefan Weil <[email protected]>
The docker image size is reduced from 32.8 GB to 29.9 GB. I also expect a reduced build time.

Besides the main virtual environment, now only a single second virtual environment (headless-tf1) remains.

A test run with …

CircleCI now finished successfully in 38:15 min. Previous CI runs for pull requests used more than 42 min, so it looks like the build time was reduced by about 10 %.
Also, I think the clean rule needs an update, especially since the paths have been renamed a few times recently. (So the update must include both the old defaults and the current variables.)
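As a sketch only (all paths below are assumptions; the real values have to come from the Makefile's old defaults and current variables), the commands an updated clean recipe would need to run look roughly like this:

# hypothetical sub-venv locations: the removed ones (headless-torch14, headless-tf2),
# the remaining one (headless-tf1), and any older default paths from before the renames
rm -fr venv/headless-torch14 venv/headless-tf2 venv/headless-tf1
rm -fr sub-venv/headless-torch14 sub-venv/headless-tf2 sub-venv/headless-tf1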
@@ -432,9 +428,16 @@ OCRD_CIS += $(BIN)/ocrd-cis-ocropy-resegment
 OCRD_CIS += $(BIN)/ocrd-cis-ocropy-segment
 #OCRD_CIS += $(BIN)/ocrd-cis-ocropy-train
 OCRD_CIS += $(BIN)/ocrd-cis-postcorrect
-$(call multirule,$(OCRD_CIS)): ocrd_cis $(BIN)/ocrd
+$(call multirule,$(OCRD_CIS)): ocrd_cis $(BIN)/ocrd $(SUB_VENV_TF1)/bin/activate
If you pull the current cisocrgroup/ocrd_cis#87, this change is already obsolete.
Moving Ocropy to a sub-venv is something I would avoid at all costs.
That PR is from March 12. I don't want to wait for it with the PR here. We can easily revert the change for ocrd_cis as soon as it is updated in ocrd_all.
Nevertheless, it's the currently recommended version of ocrd_cis, and the same PR is already the basis of ocrd_all master.
I insist: changing venvs unnecessarily must be avoided. Users will likely then have two conflicting versions of ocrd_cis.
The current version of ocrd_cis in the latest ocrd_all still requires calamari_ocr==0.3.5. Therefore it cannot be in the same virtual environment as ocrd_calamari, which is now in the main venv.
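Just to illustrate the clash in a single venv (the version bound of ocrd_calamari is an assumption on my part, not quoted from its requirements):

pip install "calamari_ocr==0.3.5"    # pinned by the ocrd_cis version currently referenced in ocrd_all
pip install ocrd_calamari            # assumed to require a newer calamari_ocr, so pip would have to break one of the two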
You still don't understand: the latest ocrd_all contains the same (PR) branch of ocrd_cis, but not that very last commit. That's why I said you needed to pull from that PR again.
No need to wait for merges. The whole purpose of the submodule logic is to be able to reference individual commits when needed. Development of ocrd_cis is slow and needs collaboration. Hence the practice of using my feature branches. (This has also been used for core in the past BTW.)
I was speaking of a pull request / merge for ocrd_all. That is required to get your changes for ocrd_cis.
Ok, but the whole point was about updating ocrd_cis here in this PR.
Updating submodules should be done in separate pull requests which usually get merged only by @kba (at least that's how I have understood the workflow). I would like to merge the PR here and can prepare another PR which updates ocrd_cis and puts it back into the main venv later.
That does not make sense at all: placing it in a venv in this PR, only to require a new PR where it can be removed again.
Once more: I am against placing ocrd_cis into a sub-venv, and there is no need for it.
-$(call multirule,$(OCRD_DETECTRON2)): ocrd_detectron2 $(SUB_VENV_TORCH14)/bin/activate
+$(call multirule,$(OCRD_DETECTRON2)): ocrd_detectron2 $(SUB_VENV_TF1)/bin/activate
I fail to see the need to use the TF1 venv for Detectron2, though. If we already have a Pytorch in the main venv, then let's reuse it here. Or did you experience any conflicts with ocrd_typegroups_classifier?
The conflict comes from torch, as I have explained above (I updated that comment). The Python module detectron2 is not distributed via PyPI and requires a special torch module which depends on NVIDIA libraries and which we would have to use for all other OCR-D processors, too.
That's all covered by ocrd_detectron2's makefile though. I can run both modules fine in the same venv. I still don't know what Nvidia libraries you are referring to – please open an issue with ocrd_detectron2, as stated earlier.
I don't think that's an ocrd_detectron2 issue. Extract from https://github.com/stweil/ocrd_all/runs/6178981637:
ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
No broken requirements found.
make[1]: Leaving directory '/build'
Makefile:672: recipe for target '/usr/local/bin/ocrd-detectron2-segment-check' failed
make: *** [/usr/local/bin/ocrd-detectron2-segment-check] Error 1
libtorch_cuda_cu.so is only included in the torch module from Facebook Research / pytorch.org.
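For what it's worth, the failing import can be reproduced outside of the make check with a one-liner (a sketch only; run it inside the venv in question):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"    # should fail the same way if torch cannot load its CUDA shared libraries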
How is that not an ocrd_detectron2 issue?
Perhaps the CUDA version detection failed. It found 11.3.1 earlier, but Pytorch seems to be linked against 10.
> libtorch_cuda_cu.so is only included in the torch module from Facebook Research.
Again: our Pytorch is from Pytorch.org or PyPI, not Nvidia or Facebook – you are confusing this.
I also wonder why this does not surface in our CircleCI Docker builds (which use the same core-cuda base image)...
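For reference, and only as an assumption about how such a detection could be done (not quoted from ocrd_detectron2's makefile), the system's CUDA version can be queried for example with:

nvcc --version    # reports the installed CUDA toolkit release, e.g. 11.3
nvidia-smi        # reports the CUDA version supported by the driver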
It simply depends on the order of torch installations.
@bertsky, I now created an issue which shows the same details as the GitHub action log, see bertsky/ocrd_detectron2#10. And you are right, the Python torch module was fetched from pytorch.org, not from Facebook Research. But the rest of my conclusion still applies: that version is not compatible with the one from PyPI, and we either have to use it for all OCR-D modules or use a separate venv for ocrd_detectron2, which requires it.
Note: with Pytorch 1.11 out, which is incompatible with Detectron2 v0.6 (built against 1.10.1), you need to update the ocrd_detectron2 module once again to avoid a runtime failure (std::bad_alloc during import).
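A possible stop-gap until then, sketched with the torchvision pin being my assumption of the release paired with torch 1.10.1:

pip install "torch==1.10.1" "torchvision==0.11.2"    # stay on the Pytorch that Detectron2 v0.6 was built against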
I have tested a CPU-only installation with ocrd_detectron2 inside headless-tf1, and it does indeed work. (Also, the other modules there still seem to work.) 🤞
First of all, Detectron2 is from Facebook Research, not Nvidia. Also, ocrd_detectron2 does not install a non-standard Pytorch distribution, but tries to fetch the wheels matching the system's CUDA version from Pytorch.org. (That's just a soft dependency – if no matching wheels are found there, then …)

Note also that this is not by accident: just as with TF, you must get a Pytorch matching your system's CUDA version, otherwise it won't run (not even in CPU mode). Because we want to support a wide range of systems (i.e. of CUDA versions, as well as CPU-only), we invest some effort here to determine the current CUDA version.

Lastly, wheels for Detectron2 depend on both the Pytorch and CUDA version. There simply is no PyPI release at all (by design). Building from source seems a waste of resources to me – but I might add that as a fallback option to ocrd_detectron2's Makefile in the future.

Regarding your build failure, please open an issue with ocrd_detectron2, providing details like logs.
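For illustration only (the cu113 tag is an example and has to match the detected CUDA version; this is not copied from ocrd_detectron2's makefile), fetching Pytorch wheels for a specific CUDA version from Pytorch.org looks roughly like this:

pip install "torch==1.10.1+cu113" --extra-index-url https://download.pytorch.org/whl/cu113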
... and will fail because of a missing shared NVIDIA library as soon as you run the executable. The installation of …

Then it is impossible to use the NVIDIA libraries which are provided with Debian / Ubuntu with …
This is unrelated to the PR here. We already discussed whether the …
It is perfectly related. You change venvs here. Thus, …
I don't recall you were opposed to that notion. On what grounds? Obviously we previously decided to do it this way, nevertheless.
You can hardly call a few more shell globs "blowing up the Makefile".
The point is simply that users do not know what to delete, or forget about details. True even for advanced users (you cannot think of everything all the time).
I removed two virtual environments.