
Make the azure pipelines really test new containers. #224

Merged: 2 commits into jrottenberg:master on Mar 9, 2020
Conversation

hvindin (Contributor) commented Mar 8, 2020

Currently, in the case of a failure to build an image, the "test" to check buildconf just causes the pipeline to download the latest successful version from Docker Hub.

If we create a dummy-named container just for the purposes of running that sanity check, then there won't be a matching container on Docker Hub to fall back to, so a broken build can no longer be masked by a previously published image.

I expect this PR will expose quite a few (likely ffmpeg v3.2) containers where the build has been broken for a while now but the pipeline appears to succeed.
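As a rough sketch of the idea, using the same ${DOCKER}, ${VERSION} and ${VARIANT} variables the pipeline already has (the suffix below is illustrative; the actual change generates a random one per run):

# Tag the freshly built image with a suffix that is never pushed to Docker Hub,
# so the sanity check can only ever run what was built on this agent.
suffix="local-sanity"
docker build -t ${DOCKER}:${VERSION}-${VARIANT}-${suffix} docker-images/${VERSION}/${VARIANT}
# If the build failed, this tag exists neither locally nor on Docker Hub, so the
# step fails instead of quietly testing the last image that was published.
docker run --rm ${DOCKER}:${VERSION}-${VARIANT}-${suffix} -buildconf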

docker build -t ${DOCKER}:${VERSION}-${VARIANT} --build-arg MAKEFLAGS="-j$(($(grep -c ^processor /proc/cpuinfo) + 1))" docker-images/${VERSION}/${VARIANT}
docker run --rm ${DOCKER}:${VERSION}-${VARIANT} -buildconf
# Create a uniquely tagged version of the container to avoid pulling a version that was already on docker hub
jrottenberg (Owner):

not sold on that, if you have a local image with the exact tag to run, docker won't pull from the registry

hvindin (Contributor Author):

But when the build actually fails, you don't have a local image with that tag to run, which is why it pulls from the registry.

If you use an image name that doesn't exist in the registry (i.e. something unique and local), then you can validate that it runs with a normal docker run without risking a pull from the internet just because the same tag already exists.

There's a proposal to make it possible to use a --pull=never flag here: https://github.com/docker/cli/issues/2253 but it's probably not going to make it into a distro version of docker anytime soon.
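To make the fallback concrete, an illustrative pair of runs (not part of the change itself; the -localtest suffix is hypothetical):

# With the normal tag, a failed build goes unnoticed: docker run quietly pulls
# ${DOCKER}:${VERSION}-${VARIANT} from Docker Hub and -buildconf passes against
# the previously published image.
docker run --rm ${DOCKER}:${VERSION}-${VARIANT} -buildconf

# With a suffix that has never been published, the same situation surfaces as
# "Unable to find image ... locally" plus a failed pull, so the step hard fails.
docker run --rm ${DOCKER}:${VERSION}-${VARIANT}-localtest -buildconf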

hvindin (Contributor Author):

So this is the sort of error case that doesn't get picked up if you don't use a unique name for the container to test:
https://dev.azure.com/video-tools/ffmpeg/_build/results?buildId=249&view=logs&j=6c6ac11e-9aaf-54db-7582-26c7d72f5185&t=c27d90b2-ebe3-5e65-dd76-20de7046977f&l=13782

If you look at the lines below that, it simply ends up pulling the image from Docker Hub, and the run is of course successful, because the version that was published obviously did build successfully.

@@ -252,8 +312,14 @@ jobs:
ISPARENT: True
steps:
- bash: |
uuid="$(tr -dc '[:alnum:]' </dev/urandom| head -c 16)"
hvindin (Contributor Author):

I've been burned by using build numbers before... probably not so risky here, given that the build number is unlikely to ever get reused considering the size of the project and the number of builds actually occurring.

But maybe something like Build.BuildId, which is unique for every attempt at a job? (i.e. I think if you reran the exact build again, the Build.BuildNumber would be the same, but the Build.BuildId would be different.)

Obviously the real goal here is just to make sure that whatever the token is, it doesn't exist on Docker Hub... but I'm personally always wary of using build numbers 😕
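A minimal sketch of that alternative, assuming the step stays a plain bash task (BUILD_BUILDID is the environment-variable form of the predefined Build.BuildId; whether it actually beats the random suffix is exactly the trade-off above):

# Hypothetical variant of the line above: use the run's id as the local-only
# tag suffix instead of reading random bytes from /dev/urandom.
uuid="${BUILD_BUILDID}"
# The rest of the step (tag, run -buildconf, rmi) stays unchanged.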

# Run -buildconf from the unique version we just tagged to make sure the build was ok
docker run --rm ${DOCKER}:${VERSION}-${VARIANT}-${uuid} -buildconf
# remove the unnecessary uuid tagged image
docker rmi ${DOCKER}:${VERSION}-${VARIANT}-${uuid}
jrottenberg (Owner):

the whole /var/lib/docker is cleaned afterwards, so that shouldn't be necessary

hvindin (Contributor Author):

Will ditch this line then 🙂

"parent": "ubuntu"
},
{
"name": "alpine311",
jrottenberg (Owner):

even more variants ;-p

LTS ubuntu I guess, hard to argue, but two versions of alpine 3?

hvindin (Contributor Author):

I'm just about to push up a change to not build the oldver variants most of the time, i.e. replacing the logic which currently rewrites the file in Python by just replacing strings.

It seems more stable to have a declaration of, say, how an alpine 3.8 image should be built, and if alpine 3.11 doesn't work then just use that, as opposed to basically trying to sed and grep through the file to update it. So I am trimming down the number of variants :)

hvindin (Contributor Author) commented Mar 8, 2020

So with the latest changes, basically we have these images:

| version | previously | new image | unversioned |
|---|---|---|---|
| 3.2 | centos7 | centos7 | centos |
| 3.2 | ubuntu | ubuntu1604 | ubuntu |
| 3.2 | alpine (broken) | alpine38 | alpine |
| 3.2 | nvidia | nvidia1604 | nvidia |
| 3.2 | vaapi | vaapi1604 | vaapi |
| 3.2 | scratch | scratch38 | scratch |
| 3.3 | centos8 | centos8 | centos |
| 3.3 | N/A | centos7 | N/A |
| 3.3 | ubuntu | ubuntu1804 | ubuntu |
| 3.3 | N/A | ubuntu1604 | N/A |
| 3.3 | alpine | alpine311 | alpine |
| 3.3 | nvidia | nvidia1804 | nvidia |
| 3.3 | vaapi | vaapi1804 | vaapi |
| 3.3 | scratch | scratch311 | scratch |
| 4.0 | centos8 | centos8 | centos |
| 4.0 | N/A | centos7 | N/A |
| 4.0 | ubuntu | ubuntu1804 | ubuntu |
| 4.0 | N/A | ubuntu1604 | N/A |
| 4.0 | alpine | alpine311 | alpine |
| 4.0 | nvidia | nvidia1804 | nvidia |
| 4.0 | vaapi | vaapi1804 | vaapi |
| 4.0 | scratch | scratch311 | scratch |
| 4.1 | centos8 | centos8 | centos |
| 4.1 | N/A | centos7 | N/A |
| 4.1 | ubuntu | ubuntu1804 | ubuntu |
| 4.1 | N/A | ubuntu1604 | N/A |
| 4.1 | alpine | alpine311 | alpine |
| 4.1 | nvidia | nvidia1804 | nvidia |
| 4.1 | vaapi | vaapi1804 | vaapi |
| 4.1 | scratch | scratch311 | scratch |
| 4.2 | centos8 | centos8 | centos |
| 4.2 | N/A | centos7 | N/A |
| 4.2 | ubuntu | ubuntu1804 | ubuntu |
| 4.2 | N/A | ubuntu1604 | N/A |
| 4.2 | alpine | alpine311 | alpine |
| 4.2 | nvidia | nvidia1804 | nvidia |
| 4.2 | vaapi | vaapi1804 | vaapi |
| 4.2 | scratch | scratch311 | scratch |
| snapshot | centos8 | centos8 | centos |
| snapshot | N/A | centos7 | N/A |
| snapshot | ubuntu | ubuntu1804 | ubuntu |
| snapshot | N/A | ubuntu1604 | N/A |
| snapshot | alpine | alpine311 | alpine |
| snapshot | nvidia | nvidia1804 | nvidia |
| snapshot | vaapi | vaapi1804 | vaapi |
| snapshot | scratch | scratch311 | scratch |

So the only extra images are the older CentOS 7 and Ubuntu 16.04 LTS variants.

Otherwise the templates have changed name to be explicit about what each template targets, because string replacement seems dangerous as some of the package names and the like may have changed.

hvindin (Contributor Author) commented Mar 8, 2020

And of course, it kind of goes without saying that there will still be tags for ubuntu, centos, alpine, nvidia, vaapi and so on, but there will also be more specific tags with the actual version in them.

hvindin (Contributor Author) commented Mar 8, 2020

Ugh, god damn typos...

hvindin added 2 commits March 8, 2020 10:07

One commit is the pipeline change described above; the other preempts some of the 3.2 variants that are definitely going to fail as a result of the recent ubuntu 16.04 -> 18.04 and alpine 3.8 -> 3.11 version bumps.
jrottenberg (Owner):

Watching Azure going through the build... will merge once it has passed, thank you!

jrottenberg merged commit aa3612f into jrottenberg:master on Mar 9, 2020
hvindin (Contributor Author) commented Mar 9, 2020

It did occur to me, about halfway through trying to get this to work, that it was a terrible idea, because so many of the 3.2 containers have been failing for so long that it was almost impractical to go back to a point where they built.

But on the upside, all the containers and variants now build successfully, and if any change breaks them the pipelines will actually hard fail.
