Update dockerfiles to do staged builds #19952
base: master
Conversation
Force-pushed from 5287e97 to 0115b4d
```dockerfile
{% else %}
FROM {{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm
ARG BASE={{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm
```
@saiarcot895
You can use debian slim images to reduce the final image size, as was suggested in #19008.
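For illustration, in the template above that would just be a tag change (assuming the slim variant is published for all the architectures used here):

```dockerfile
FROM {{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm-slim
```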
Agreed. I want to keep the focus of this PR on unblocking Docker upgrades, but I had to bring in some space optimization work (see the `COPY` at the end of this file) to get things to work.
On newer versions of Docker, only the buildkit builder is supported, and it cannot be disabled by setting DOCKER_BUILDKIT to 0. A side effect of this is that the behavior of `--squash` is different (see moby/moby#38903), which would make the container sizes significantly larger.

To work around this, make all of our builds two-stage builds, with the `--squash` flag removed entirely. In the first stage, whatever new files/packages need to be added are added (and files/packages that need to be removed are removed). Then, in the second stage, all of the files from the final state of the first stage are copied to the second stage.

As part of this, also consolidate the container cleanup code into `post_run_cleanup`, and remove it from the individual containers (for consistency). Also experiment a bit with not explicitly installing library dependencies, letting apt install them as necessary. This will help during upgrades in the case of ABI changes for packages.

Signed-off-by: Saikrishna Arcot <[email protected]>
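A rough sketch of the pattern described above (the image and package names here are illustrative, not the exact Jinja templates used in this repo):

```dockerfile
# Stage 1: start from the existing base and apply all changes
# (package installs/removals, cleanup) as before.
FROM debian:bookworm AS builder
RUN apt-get update && \
    apt-get install -y --no-install-recommends some-package && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: copy the final state of the first stage in one step, so the
# intermediate layers (and any deleted files) don't end up in the image.
FROM debian:bookworm
COPY --from=builder / /
```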
This shouldn't be committed.

Signed-off-by: Saikrishna Arcot <[email protected]>
The docker root cleanup removes the contents of the docker root directory we create from within a container. However, this container isn't using the container registry variable, which means it may fail depending on the network environment. Fix this by prefixing the container registry variable.

The docker root directory creation is missing the `shell` at the beginning, which means the directory doesn't actually get created. While the docker command later will still create the directory automatically, fix this and make sure it gets created here.

Signed-off-by: Saikrishna Arcot <[email protected]>
It seems that on the Bullseye slave container (not sure about Buster), the nofile ulimit is set to 1048576:1048576 (that is, 1048576 for both the soft and hard limit). However, the Docker startup script in version 25 and newer sets the hard limit to 524288 (because of moby/moby#c8930105b), which fails because the soft limit would then be higher than the hard limit, which doesn't make sense. On a Bookworm slave container, however, the nofile ulimit is set to 1024:1048576, and the startup script's ulimit command goes through.

A simple workaround would be to explicitly set the nofile ulimit to 1024:1048576 for all slave containers. However, sonic-swss's tests need more than 1024 open file descriptors, because the test code doesn't clean up file descriptors at the end of each test case/test suite, resulting in FD leaks. Therefore, set the ulimit to 524288:1048576, so that Docker's startup script can lower the hard limit to 524288 and swss can still open enough file descriptors. The resulting limits are sketched below.

Signed-off-by: Saikrishna Arcot <[email protected]>
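For illustration only (the build scripts may set this differently), Docker's `--ulimit` flag expresses the chosen soft:hard pair like this:

```sh
# Soft limit 524288 gives the swss tests plenty of descriptors; hard
# limit 1048576 lets Docker's startup script lower the hard limit to
# 524288 without it dropping below the soft limit.
docker run --rm --ulimit nofile=524288:1048576 debian:bookworm \
    sh -c 'echo "soft: $(ulimit -Sn), hard: $(ulimit -Hn)"'
```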
With the new approach of building the images (where the entire final rootfs is copied into the second stage), if the system building the containers is using the overlay2 storage driver (the default) and is able to use native diffs (which can be the case if CONFIG_OVERLAY_FS_REDIRECT_DIR isn't enabled in the kernel), then the final image will differ from what naive diffs (where Docker compares the metadata of each file and, if needed, the contents to find out whether something has changed) would produce. Specifically, with native diffs, each container would be much larger, since technically speaking the whole rootfs is being written to, even if the content ends up the same. This appears to be a known issue (in some form), and workarounds are being discussed in moby/moby#35280.

As a workaround, install rsync into the base container, copy the entirety of that into an empty base image, and use rsync to copy only the changed files into the layer in one shot. This does mean that rsync will remain installed in the final built containers, but hopefully that is fine.

Signed-off-by: Saikrishna Arcot <[email protected]>
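A rough sketch of the rsync-based variant (the `sonic-base` name and the exclude list are assumptions for illustration; the actual templates parameterize the base image):

```dockerfile
# syntax=docker/dockerfile:1
# Stage 1: apply the changes on top of the shared base (which now has
# rsync preinstalled).
FROM sonic-base AS builder
RUN apt-get update && \
    apt-get install -y --no-install-recommends some-package && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: start from the same base and let rsync write only the files
# that actually changed. Unchanged files are never rewritten, so the
# resulting layer stays small even when native overlayfs diffs are used.
FROM sonic-base
RUN --mount=type=bind,from=builder,target=/rootfs \
    rsync -a --delete \
        --exclude=/rootfs --exclude=/proc --exclude=/sys --exclude=/dev \
        --exclude=/etc/hosts --exclude=/etc/resolv.conf \
        /rootfs/ /
```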
Force-pushed from 7968803 to 0b85785
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
Why I did it
On newer versions of Docker, only the buildkit builder is supported, and it cannot be disabled by setting DOCKER_BUILDKIT to 0. The side effect of this is that the behavior of `--squash` is different (see moby/moby#38903). This will result in the container sizes being significantly higher.
Work item tracking
How I did it
To work around this, make all of our builds two-stage builds, with the `--squash` flag entirely removed. The way this works is that in the first stage, whatever new files/packages need to be added are added (along with files/packages that need to be removed). Then, in the second stage, use rsync to copy over the changed files as a single command/layer. In the case of the base layer for each Debian version, the final result of the first stage is copied into an empty base layer, as sketched below.
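For the base-layer case, a minimal sketch (assuming a plain Debian base; the real Dockerfiles are Jinja-generated):

```dockerfile
# Stage 1: build up the base rootfs, including rsync so that derived
# images can use it for their own second stage.
FROM debian:bookworm AS builder
RUN apt-get update && \
    apt-get install -y --no-install-recommends rsync && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: copy the final rootfs into an empty image, producing a
# single flattened layer.
FROM scratch
COPY --from=builder / /
```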
As part of this, also consolidate the container cleanup code into `post_run_cleanup`, and remove it from the individual containers (for consistency). Also experiment a bit with not explicitly installing library dependencies, letting apt install them as necessary. This will help during upgrades in the case of ABI changes for packages.

Also, remove the `SONIC_USE_DOCKER_BUILDKIT` option, and don't set the `DOCKER_BUILDKIT` option; it will eventually have no impact. This also means that builds will now use buildkit, as that is the default now.
How to verify it
Which release branch to backport (provide reason below if selected)
Tested branch (Please provide the tested image version)
Description for the changelog
Link to config_db schema for YANG module changes
A picture of a cute animal (not mandatory but encouraged)