Allow running cibuildwheel inside of a previously configured docker image. #676
Do you really need to refactor it in this way? It seems like we could just write a new context manager that simply wraps subprocess.run and otherwise does pretty much nothing; using that instead would perform the work in-process rather than in a docker container. Then you could have some escape hatch that uses it if cibuildwheel was running inside the manylinux image, say maybe even with pipx! :) (See pypa/manylinux#1055 (comment).) I actually would really like to have a …
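For illustration, here is a minimal sketch of what such a wrapping context manager could look like. The class name `LocalRunner` and its method set are assumptions of mine that only loosely mirror cibuildwheel's `DockerContainer` interface, not actual project code:

```python
import shutil
import subprocess
from pathlib import Path, PurePath


class LocalRunner:
    """Hypothetical stand-in for DockerContainer that runs everything on the host.

    Keeps the same context-manager shape, but commands run in-process/on-host
    instead of being forwarded into a container.
    """

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        return None

    def call(self, args, env=None, cwd=None) -> None:
        # Run the command directly on the host instead of via `docker exec`.
        subprocess.run(list(map(str, args)), env=env, cwd=cwd, check=True)

    def copy_into(self, from_path: Path, to_path: PurePath) -> None:
        # A plain filesystem copy replaces `docker cp` into the container.
        shutil.copytree(from_path, Path(to_path), dirs_exist_ok=True)

    def copy_out(self, from_path: PurePath, to_path: Path) -> None:
        shutil.copytree(Path(from_path), to_path, dirs_exist_ok=True)
```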
I started hacking on this at https://github.com/Erotemic/cibuildwheel/tree/dev/flow. I'm not sure what the best way to refactor my stuff to use cibuildwheel is yet; I'm not tied to any one solution. In my branch I'm also checking to see if replacing "docker" with "podman" works (which seems to be as easy as making a docker_exe variable that is set to either "docker" or "podman" and passing it to subprocess.run). If that works, that might be the simpler route.
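Concretely, the docker_exe parameterization described above might look something like this. This is only a sketch under the assumption that the create/start commands are otherwise identical for both tools; `start_container` is an illustrative name, not cibuildwheel code:

```python
import subprocess


def start_container(docker_exe: str, image: str, name: str) -> None:
    # docker_exe is either "docker" or "podman"; in the simple case the rest
    # of the command line is the same for both tools.
    subprocess.run(
        [docker_exe, "create", "--name", name, "-i", image],
        check=True,
    )
    subprocess.run([docker_exe, "start", name], check=True)
```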
podman or a rootless docker daemon should probably work. If you've been able to try any of those, it might be interesting to know what works and what doesn't. Refactoring the run command might be challenging for linux but, I agree, this would probably be the best way to handle this.
@mayeut I've actually been able to use podman with some success, but it did require modifications, which are currently in my branch. I had to add some "common docker flags" (the full set is shown in a later comment below).

The storage driver needed to change to vfs for whatever reason, which also meant I needed to change the root, and I also think I needed to add a couple of other flags and a workaround for one weird issue. The diff is definitely bigger than it needs to be because I played around a lot with it, but ultimately podman did work (and probably should be added as an option to the main cibuildwheel). Note, I'm also not sure if all of the changes I made were 100% necessary.

But again, with these modifications podman does work.
Thanks for the detailed feedback. I'll probably have a look at your fork, given I'm currently trying to debug some test issues on Travis CI using podman and, as your patches suggest, it's not as easy as "replace docker with podman".
As an update, I've been using my patched version here https://github.com/Erotemic/cibuildwheel/tree/dev/flow to build all of my wheels on machines where podman is available but docker isn't. If there is interest I can clean it up and submit it as a PR.
An alias is not enough?
@fedelibre an alias is not enough on the gitlab CI machines I use. There are specific args I have to add in order to make podman work. In my fork the differences are in cibuildwheel/docker_container.py. I add:

```python
if self.oci_exe == 'podman':
    self.common_oci_args += [
        # https://stackoverflow.com/questions/30984569/error-error-creating-aufs-mount-to-when-building-dockerfile
        "--cgroup-manager=cgroupfs",
        "--storage-driver=vfs",
    ]
    if self.oci_root == "":
        # https://github.com/containers/podman/issues/2347
        self.common_oci_args += [
            f"--root={os.environ['HOME']}/.local/share/containers/vfs-storage/",
        ]
    else:
        self.common_oci_args += [
            f"--root={self.oci_root}",
        ]
```

and, where the container is created and started:

```python
if self.oci_exe == 'podman':
    oci_create_args.extend([
        # https://github.com/containers/podman/issues/4325
        "--events-backend=file",
        "--privileged",
    ])
    oci_start_args.extend([
        "--events-backend=file",
    ])
```

I also have to add some hacky sleeps:

```python
if self.oci_exe == 'podman':
    time.sleep(0.01)
```

The `copy_out` logic becomes:

```python
if self.oci_exe == 'podman':
    # For podman: create the tar archive inside the container, copy it out
    # with `podman cp`, then extract it on the host, instead of streaming
    # the `exec` output to the host as in the docker branch below.
    command = f"{self.oci_exe} exec {self.common_oci_args_join} -i {self.name} tar -cC {shell_quote(from_path)} -f /tmp/output-{self.name}.tar ."
    subprocess.run(
        command,
        shell=True,
        check=True,
        cwd=to_path,
    )
    command = f"{self.oci_exe} cp {self.common_oci_args_join} {self.name}:/tmp/output-{self.name}.tar output-{self.name}.tar"
    subprocess.run(
        command,
        shell=True,
        check=True,
        cwd=to_path,
    )
    command = f"tar -xvf output-{self.name}.tar"
    subprocess.run(
        command,
        shell=True,
        check=True,
        cwd=to_path,
    )
    os.unlink(to_path / f"output-{self.name}.tar")
elif self.oci_exe == 'docker':
    command = f"{self.oci_exe} exec {self.common_oci_args_join} -i {self.name} tar -cC {shell_quote(from_path)} -f - . | cat > output-{self.name}.tar"
    subprocess.run(
        command,
        shell=True,
        check=True,
        cwd=to_path,
    )
else:
    raise KeyError(self.oci_exe)
```
Personally, I'd be fine adding podman support; it doesn't look too hard. I don't know much about podman, though, and we'd want some way to test it. Is it available on public CI, like GitLab CI? It could be a new setting. We could also support "native", which would run on the host system directly and would ignore images; you would be expected to run cibuildwheel from the manylinux image. That might be harder to support, though. (That's the original idea of this issue.)
Ok, I see.
FYI: I've updated my fork of cibuildwheel with a breaking change. I needed finer-grained control over the extra options I pass to podman. So instead of detecting whether podman is the OCI driver and then adding extra flags to the commands, I'm currently forcing the user (me) to explicitly set the extra flags podman needs. To get the previous behavior, there are now environment variables that need to be set.
Also note that, if running podman inside of docker, it is important that the parent container is set up appropriately. Lastly, it seemed important for me to update from podman 3.0.1 to 3.2.1 in order for my CI scripts to work on newer linux kernels (5.4 worked, but 5.8 and 5.11 failed).
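As a sketch of what the environment-variable approach described above could look like in practice, something along these lines; the variable names here are hypothetical, since the fork's actual names are not quoted in this thread:

```python
import os
import shlex

# Hypothetical names; the fork's real environment variables are not shown here.
oci_exe = os.environ.get("CIBW_OCI_EXE", "docker")
common_oci_args = shlex.split(os.environ.get("CIBW_OCI_COMMON_ARGS", ""))
oci_create_args = shlex.split(os.environ.get("CIBW_OCI_CREATE_ARGS", ""))
oci_start_args = shlex.split(os.environ.get("CIBW_OCI_START_ARGS", ""))
```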
Podman support was merged a year ago, so I think the motivation for the initial request of invoking cibuildwheel inside the container is gone. Podman is the solution for environments where root isn't available.
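For anyone landing here now: if I remember correctly, current cibuildwheel selects the engine via the container-engine option (CIBW_CONTAINER_ENGINE environment variable); double-check the current documentation. For example, driving it from Python:

```python
import os
import subprocess

# Select podman instead of docker via the container-engine option
# (option name as I recall it; verify against the cibuildwheel docs).
env = dict(os.environ, CIBW_CONTAINER_ENGINE="podman")
subprocess.run(["cibuildwheel", "--platform", "linux"], env=env, check=True)
```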
On my internal gitlab-ci runners docker-inside-docker is disabled. This seems to be causing cibuildwheel to fail.
As a workaround, I would like to be able to use cibuildwheel inside of a base manylinux image. That is, I want to give it the information that it is already inside of a manylinux image like quay.io/pypa/manylinux2010_x86_64, and then I want it to do its thing. I do something similar in this script: if you are inside the docker image (you have to give it this information), it executes the script, but if you tell it you want to run in docker, then it executes itself inside of the docker image.
https://gitlab.kitware.com/computer-vision/kwimage/-/blob/master/run_manylinux_build.sh
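The pattern that script follows is roughly the one below. This is a Python paraphrase of the shell logic under my own assumptions: the `IN_DOCKER` flag, the volume layout, and the interpreter path inside the image are illustrative, not taken from the script:

```python
import os
import subprocess
import sys

IMAGE = "quay.io/pypa/manylinux2010_x86_64"


def main() -> None:
    if os.environ.get("IN_DOCKER"):
        # Already inside the manylinux image: do the actual build steps here.
        build_wheels()
    else:
        # Otherwise, re-run this same script inside the image.
        subprocess.run(
            [
                "docker", "run", "--rm",
                "-e", "IN_DOCKER=1",
                "-v", f"{os.getcwd()}:/io",
                "-w", "/io",
                IMAGE,
                "python", sys.argv[0],  # interpreter path inside the image may need adjusting
            ],
            check=True,
        )


def build_wheels() -> None:
    ...  # e.g. invoke pip wheel / auditwheel repair


if __name__ == "__main__":
    main()
```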
And my gitlab yaml explicitly has to call out when I use the base quay.io/pypa/manylinux2010_x86_64 image:
https://gitlab.kitware.com/computer-vision/kwimage/-/blob/master/.gitlab-ci.yml
I was poking around in cibuildwheel.linux, and I see that it loops over several configurations and then executes a block of code in a DockerContainer context manager. If I were to write a PR that refactored that inner part into a function the CLI could invoke directly (where the user would likely have to provide some of the information that the looped settings currently provide), would that be of interest to the maintainers?
(A lot of the inner loop actually looks like a more sophisticated version of what I'm doing in that shell script; I think it would be nice to have that as a callable function.)
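In other words, something like the following. This is a very rough sketch of the proposed refactor; `build_in_container`, `BuildStep`, and its fields are my own placeholders, not cibuildwheel's actual API:

```python
from dataclasses import dataclass


@dataclass
class BuildStep:
    # The information the per-configuration loop currently provides.
    platform_tag: str     # e.g. "manylinux2010_x86_64"
    docker_image: str     # e.g. "quay.io/pypa/manylinux2010_x86_64"
    build_selector: str   # e.g. "cp39-*"


def build_in_container(container, step: BuildStep) -> None:
    """Body of the current inner loop, factored out so a CLI entry point
    could call it directly, with a 'container' that is really the host."""
    ...


# The existing loop would then reduce to something like:
# for step in steps:
#     with DockerContainer(step.docker_image) as container:
#         build_in_container(container, step)
```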