Make the azure pipelines really test new containers. #224
Conversation
docker build -t ${DOCKER}:${VERSION}-${VARIANT} --build-arg MAKEFLAGS="-j$(($(grep -c ^processor /proc/cpuinfo) + 1))" docker-images/${VERSION}/${VARIANT}
docker run --rm ${DOCKER}:${VERSION}-${VARIANT} -buildconf
# Create a uniquely tagged version of the container to avoid pulling a version that was already on docker hub
not sold on that, if you have a local image with the exact tag to run, docker won't pull from the registry
But when the build actually fails you don't have a local image with that tag to run, which is why it pulls from the registry when the build fails.
If you don't use an image name that exists in the registry (ie. something that is unique and local) then you can validate that it runs with a normal docker run without risking it pulling the image from the internet because the same tag already exists.
There's a proposal to make it possible to use a --pull=never flag here: https://github.com/docker/cli/issues/2253 but it's probably not going to make it into a distro version of docker anytime soon.
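As a minimal sketch of the unique-local-tag pattern being discussed here — the ${DOCKER}, ${VERSION}, and ${VARIANT} variables just mirror the ones used elsewhere in this pipeline, and the ${uuid} suffix is the new part:

```bash
# Sketch: tag the freshly built image with a suffix that cannot already exist
# on Docker Hub, so the sanity check can only run against the local build.
uuid="$(tr -dc '[:alnum:]' </dev/urandom | head -c 16)"
docker build -t "${DOCKER}:${VERSION}-${VARIANT}-${uuid}" "docker-images/${VERSION}/${VARIANT}"
# If the build above failed, this tag exists nowhere, so the check fails hard
# instead of silently pulling the last published image from the registry.
docker run --rm "${DOCKER}:${VERSION}-${VARIANT}-${uuid}" -buildconf
```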
So this is the sort of error case that doesn't get picked up if you don't use a unique name for the container to test:
https://dev.azure.com/video-tools/ffmpeg/_build/results?buildId=249&view=logs&j=6c6ac11e-9aaf-54db-7582-26c7d72f5185&t=c27d90b2-ebe3-5e65-dd76-20de7046977f&l=13782
If you look at the lines below that, it simply ends up pulling the image from dockerhub and obviously succeeds, because the version that was published obviously did build successfully.
azure-pipelines.yml
Outdated
@@ -252,8 +312,14 @@ jobs:
ISPARENT: True
steps:
- bash: |
uuid="$(tr -dc '[:alnum:]' </dev/urandom| head -c 16)"
there might be something simpler: ${BUILD_BUILDNUMBER}
https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
I've been burned by using build numbers before... probably not so risky here given that the build number is unlikely to ever get reused, given the size of the project and the number of builds actually occurring.
But maybe something like Build.BuildId, which is unique for every attempt at a job? (ie. I think if you reran the exact build again, the Build.BuildNumber would be the same, but the Build.BuildId would be different.)
Obviously the real goal here is to just make sure that whatever the token is, it needs to not exist on docker hub... but I'm just personally always wary of using Build numbers 😕
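For illustration only, a sketch of what the suggested alternative might look like; BUILD_BUILDID is the environment-variable form Azure Pipelines exposes for Build.BuildId, the other variable names just mirror the pipeline's existing ones, and this is not what the PR ends up shipping:

```bash
# Hypothetical alternative to the random uuid suffix (assumes BUILD_BUILDID,
# the env-var form of Build.BuildId, is never reused across runs):
tag="${DOCKER}:${VERSION}-${VARIANT}-${BUILD_BUILDID}"
docker build -t "${tag}" "docker-images/${VERSION}/${VARIANT}"
docker run --rm "${tag}" -buildconf
```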
azure-pipelines.yml
Outdated
# Run -buildconf from the unique version we just tagged to make sure the build was ok
docker run --rm ${DOCKER}:${VERSION}-${VARIANT}-${uuid} -buildconf
# remove the unnecessary uuid tagged image
docker rmi ${DOCKER}:${VERSION}-${VARIANT}-${uuid} |
the whole /var/lib/docker is cleaned afterwards, so that shouldn't be necessary
Will ditch this line then 🙂
"parent": "ubuntu" | ||
}, | ||
{ | ||
"name": "alpine311", |
even more variants ;-p
LTS ubuntu I guess, hard to argue, but two versions of alpine 3?
I'm just about to push up a change to not build the oldver variants most of the time, ie. replacing the logic which currently rewrites the file in python by just replacing strings.
It seems more stable to have a declaration of, say, how an alpine 3.8 image should be built, and if alpine 3.11 doesn't work then just use that, as opposed to trying to basically sed and grep through the file to update it, so I am trimming down the number of variants :)
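As a rough illustration of the kind of per-variant declaration being described — the field names follow the JSON fragment shown in the diff above, but the specific entries and parent values here are hypothetical:

```json
{
    "name": "alpine38",
    "parent": "alpine"
},
{
    "name": "alpine311",
    "parent": "alpine"
}
```

The idea is that each variant is spelled out explicitly, rather than being derived by rewriting another variant's definition with string replacement.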
So with the latest changes, basically we have these images:
So the only extra images are the CentOS and Ubuntu LTS versions. Otherwise the templates have changed name to be explicit about what each template is, because replacing seems dangerous as some of the package names and the like may have changed.
And of course, it kind of goes without saying that there will still be tags for
Ugh, god damn typos...
Currently, in the case of a failure to build an image, the "test" to check buildconf just causes the pipeline to download the latest successful version from dockerhub.
If we create a dummy named container just for the purposes of running that sanity check, then there won't be a container on dockerhub which might not work.
I expect this PR will expose quite a few (likely ffmpeg v3.2) containers where the build has been broken for a while now but the pipeline _appears_ to succeed. Let's just preempt some of the 3.2 variants that are definitely going to fail as a result of recent ubuntu 16.04 -> 18.04 and alpine 3.8 -> 3.11 version bumps.
watching azure going through the build... will merge once it has passed, thank you!
It did occur to me about halfway through trying to get this to work that it was a terrible idea, because so many of the 3.2 containers have been failing for so long that it was almost impractical to go back to a time when they built. But on the upside, all the containers and variants now build successfully, and if any change breaks them then the pipelines will actually hard fail.