ZIP file is not recreated if running in a separate stage #37
I'm not sure if we should try to work around what is clearly a bug in the upstream provider. Given that, we should go with option 3. Plan can still be run in its own context, but you should run a full …
Actually, even when doing a full `apply`, to trigger the recreation of the archive I have to use one of the following:
**Workaround 1**

…

**Workaround 2**

The following code computes the hash of all files under the folder passed, and adds it to the zip filepath to force archive recreation:

```hcl
data "external" "hash" {
  program = ["bash", "${path.module}/scripts/shasum.sh", "${path.module}/configs", "${timestamp()}"]
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${data.external.hash.result.shasum}.zip")
  source_dir  = pathexpand("${path.module}/configs")
}
```

The `shasum.sh` script:

```bash
#!/bin/bash
# Hash every file in the folder passed as $1, then hash the combined
# output so a change to any single file changes the final checksum.
FOLDER_PATH=${1%/}
SHASUM=$(shasum "$FOLDER_PATH"/* | shasum | awk '{print $1}')
echo -n "{\"shasum\":\"${SHASUM}\"}"
```

**Workaround 3**

```hcl
resource "random_uuid" "main" {
  # Regenerate the uuid (and therefore the zip path below) whenever
  # any file under the configs directory changes. fileset() returns
  # paths relative to its first argument, so the prefix is rebuilt
  # for filemd5().
  keepers = {
    for filename in fileset("${path.module}/configs", "**/*") :
    filename => filemd5("${path.module}/configs/${filename}")
  }
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${random_uuid.main.result}.zip")
  source_dir  = pathexpand("${path.module}/configs")
}
```
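Note that `shasum "$FOLDER_PATH"/*` only hashes files at the top level of the folder. If the configs directory can contain subdirectories, a recursive variant along these lines may be safer (a sketch, assuming `find`/`sort` support null-terminated output):

```bash
#!/bin/bash
# Recursively hash every file under the folder passed as $1, sort the
# per-file hashes for a stable order, then reduce them to one checksum.
FOLDER_PATH=${1%/}
SHASUM=$(find "$FOLDER_PATH" -type f -print0 | sort -z | xargs -0 shasum | shasum | awk '{print $1}')
echo -n "{\"shasum\":\"${SHASUM}\"}"
```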
I think workaround 3 seems reasonable. We need to keep in mind it should be based on `source_directory`.
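For illustration, a rough sketch of workaround 3 keyed off the module's `source_directory` variable instead of a hardcoded path (the variable name is taken from this issue; the rest is illustrative, not the module's actual code):

```hcl
resource "random_uuid" "main" {
  # Any change to a file under var.source_directory produces a new
  # uuid, which changes the zip path and forces archive recreation.
  keepers = {
    for filename in fileset(var.source_directory, "**/*") :
    filename => filemd5("${var.source_directory}/${filename}")
  }
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${random_uuid.main.result}.zip")
  source_dir  = pathexpand(var.source_directory)
}
```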
@ocervell I tried to reproduce this, but I'm having a hard time reproducing the same scenario. Using our simple examples, the archive is re-created for me on either a plan or an apply if any files in the `source_directory` change. It happens as one of the first steps during an apply/plan, prior to any resources being evaluated for changes - as expected, since that's when data sources are typically read.

Are you by chance having files in the … I see you mention using this module from within … If that is the case, it makes sense to me. The order during any apply/plan would be …
Since the … one option I could see us doing is adding an optional variable …

I also tried …

It feels like ultimately …
Looking a bit further into hashicorp/terraform-provider-archive#11 (comment) - it sounds like a valid solution that can solve the various problems, especially if we have a …

I'll give it a try tomorrow, as well as in the slo module, and see how it goes.
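That thread revolves around keying downstream resources off the archive's content hash. One possible shape of the idea - a sketch only, not necessarily the exact approach from the linked comment, and `var.bucket_name` is a hypothetical input - is to embed the `archive_file` data source's `output_md5` in the GCS object name, so the Cloud Function sees a new source object whenever the zip content changes:

```hcl
data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("${path.module}/archive.zip")
  source_dir  = pathexpand("${path.module}/configs")
}

resource "google_storage_bucket_object" "archive" {
  # A content-addressed object name forces a new bucket object - and
  # therefore a Cloud Function redeploy - whenever the zip changes.
  name   = "archive-${data.archive_file.main.output_md5}.zip"
  bucket = var.bucket_name
  source = data.archive_file.main.output_path
}
```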
When I modify a file in the `source_directory`, I find that the zip file is not recreated. It seems we fixed a part of this when fixing #32 with PR #35 by forcing an update of the object in the GCS bucket - but the `archive_file` (zip) itself is not being updated, i.e. in the end not forcing the update of the Cloud Function.

The upstream issue for failing to update the `archive_file` is here: hashicorp/terraform-provider-archive#39. It seems it's linked to running `plan` and `apply` in different contexts (think a Cloud Build pipeline composed of two configs that effectively 'forgets' about the `.terraform/` folder between `plan` and `apply`).

I see 3 ways we can work around this in this module (while waiting for an upstream fix):

1. Compute the hash of all files in the source directory and force the recomputation of the `archive_file` resource whenever any hash in the `source_directory` has changed. I'm not certain how to do this for all files in the `source_directory` (for one file it would be straightforward - for multiple, we'd need a custom shell script to list out all the files in the directory).
2. Append a random id to the zip filename that can be forced to regenerate when we need it, e.g. by passing a `force_update` variable to the module (see the sketch below).
3. Advertise that `plan` and `apply` should be run in the same context, which would make some users struggle.

Any help is appreciated here :)
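For option 2, a minimal sketch of what the module could expose, assuming a hypothetical `force_update` variable (all names here are illustrative, not the module's actual API):

```hcl
variable "force_update" {
  description = "Change this value to force the archive to be rebuilt."
  type        = string
  default     = ""
}

resource "random_id" "archive_suffix" {
  byte_length = 4

  # A new value of var.force_update invalidates the keeper, which
  # generates a new id and therefore a new zip path below.
  keepers = {
    force_update = var.force_update
  }
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${random_id.archive_suffix.hex}.zip")
  source_dir  = pathexpand(var.source_directory)
}
```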