kubernetes namespace already created fails #1406
Good day @katlimruiz, I do not think this is a bug. You asked Terraform to create a resource, and it failed because the resource already exists. The common practice is to import the already-created resource into the Terraform state so that Terraform can manage it. Thanks!
However, Terraform's whole idea is to be declarative: if the object exists and there is no diff, then it doesn't make sense to recreate it (and hence no error). Every other object in TF works like this, so why would a k8s namespace differ? Even the k8s cluster works that way. It is only this specific object that does not.
Are you positive that all Terraform resources are ignored if they already exist? I ask because there have been dozens of times where we broke the Terraform state and I was forced to import buckets, databases, Kubernetes services, etc. before an apply worked correctly again...
If they are in the state and they already exist, then yes, all Terraform configurations work like that; otherwise this would just never work. This is my experience. When you already have a whole platform in place and you want to adopt Terraform, then yes, it is a pain in the b*tt, because it tells you to import a lot of resources, and importing them is very slow and problematic. When you create a platform from scratch in Terraform, things go much more easily. There are some objects, like a Kubernetes cluster, that do not communicate their full creation to the cloud provider, therefore (as even the documentation says) you have to separate the cluster creation from the apply of further k8s resources, otherwise it would not work. The only times I've seen the state broken are when 1) you use git to store the state, or 2) you made changes to the cloud manually and discrepancies occurred.
I definitely agree, getting everything that you already created into Terraform state is a big pain in the b*tt! I would really like to see an --auto-import or --ignore-exists flag added to Terraform. One tool that I haven't tried yet but looks interesting is terraformer; it might help with those missing resources. I've had two cases of major Terraform state corruption.
@katlimruiz This is not a bug with the kubernetes provider. If you think Terraform should magically do terraform import when an object doesn't exist in the state, that would be a feature request on Terraform itself.
I'm not saying that. The namespace was created with TF and it was in the state. On a second run it tried to recreate it, which is wrong.
Did you confirm that the object was in the state after the first apply? Did the first apply complete successfully?
I'm seeing the same issue.
The same here, any progress?
There should be an option to bypass the namespace creation if it already exists.
Same thing happened to me just now.
Same here. Basically we have to comment out the namespace.tf file after the first apply, otherwise it fails.
Does your first apply fail? If the first apply is successful, then the namespace is added to the state, and TF shouldn't attempt to create the resource the next time. On the other hand, if something prevents the resource from being added to the state in the first apply, then it will indeed try to create it again on the next run.
It appears that if the resource (e.g. a namespace) was created by the Terraform provider, then it remains declarative: since it is registered in the TF state, applying again doesn't fail. However, if the resource had been created outside of Terraform, then the kubernetes provider will fail on it.
That is normal, and is just how Terraform works. If you create a resource outside of Terraform, then you need to import it into the state before you try to manage it with Terraform.
@jbg that's not true; helm_release, for example, doesn't work the same way. If I installed some chart bypassing Terraform with helm install and then added it to Terraform with no changes, terraform apply will do the job with no error. This is how things must be, and this is not magic, it is programming; there is a definition for the "declarative" manner, whether you like it or not.
That's not "how things must be". Read up on the design of Terraform and why it uses state. The helm_release behaviour you describe is not how most Terraform resources work. If you don't want it to be that way, then what you really want is a different tool.
@jbg you are literally the only one who thinks this open ticket shouldn't lead to a fix. And since the ticket is not closed, apparently you are not in charge of this provider anyway, so it would be nice if you stopped posting here about "magic" and other nonsense without even providing any links to back up your words.
Just trying to help you understand the TF model and why this won't be "fixed". Feel free to keep waiting though! All the best!
@jbg I don't need to wait for anything, because there is a properly working "kubectl" provider for Kubernetes.
That's an unfortunate side-effect of TF's view of the world: the state. If the object doesn't exist in the state, the provider is asked to create it, and that operation is supposed to fail if the object already exists in the "real world". This is how almost every provider works, and it's the reason why the terraform import command exists.
I came across this issue as well and applied the following workaround. IMO this is not a bug, and I also don't think a feature is justified for something that can be worked around.
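The original snippet was stripped from this thread; based on the later mention of the kubernetes_all_namespaces data source in the follow-up comments, a sketch of the workaround might look like this (the variable name and namespace values are illustrative):

```hcl
# Illustrative sketch: only create namespaces not already present in the cluster.
data "kubernetes_all_namespaces" "existing" {}

variable "namespaces" {
  type    = list(string)
  default = ["myns", "other-ns"]
}

resource "kubernetes_namespace" "kubnss" {
  # Filter out names the cluster already knows about.
  for_each = toset([
    for ns in var.namespaces : ns
    if !contains(data.kubernetes_all_namespaces.existing.namespaces, ns)
  ])
  metadata {
    name = each.key
  }
}
```

Note the caveat raised later in the thread: once Terraform creates a namespace, the data source will include it on the next refresh, so without care the resource could be planned for removal on a subsequent apply.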
@oferchen I don't agree with you. Terraform is a declarative language, and it is supposed to ignore a resource that already exists in the desired state. Erroring because a resource exists goes against Terraform's model.
It all comes down to the behaviour one would like. For instance, kubectl create fails when the object already exists, while kubectl apply is idempotent. So it comes down to whether you want the create or the apply behaviour, but it appears that this provider only supports create. I was hoping to use the cleaner language features of Terraform to deploy our applications; however, these limitations do not make it a good alternative to tools like helm or kustomize. There doesn't seem to be a clean way in Terraform to deploy your app to, say, a staging namespace.
The big difference between ad-hoc applying of yaml to your cluster and Terraform is Terraform's state. Learning about it will explain why it works the way it does, and why all correctly-written providers work the same way as this one. Try creating an S3 bucket that already exists with the AWS provider, for example. Start here to learn about state in Terraform: https://developer.hashicorp.com/terraform/language/state If a resource is added to your TF config and it does not exist in the state, it will be created. If you desire to work with an existing object, you simply need to import it: https://developer.hashicorp.com/terraform/cli/import
Of course there is, this is a common use case. If you want to duplicate your whole config, look into workspaces. If you want to duplicate only part of it, define that part in a module and then use the module twice in your config, with a variable for namespace and whatever else needs to vary.
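The module approach described above can be sketched as follows (the module path and variable name are assumptions, not part of the original thread):

```hcl
# Illustrative: one app module, instantiated once per environment,
# with the namespace passed in as a variable.
module "app_develop" {
  source    = "./modules/app" # hypothetical module path
  namespace = "develop"
}

module "app_staging" {
  source    = "./modules/app"
  namespace = "staging"
}
```

Inside the hypothetical module, a `variable "namespace"` would then feed the `metadata.name` of a kubernetes_namespace resource and the `metadata.namespace` of the app's other resources.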
We provision the GKE cluster in the root module and deploy our app using a child module. We'd like to use the exact same module for all pre-production environments. Changing any variable in the child module (e.g. changing the namespace from develop to staging) triggers a destroy-and-recreate of the environment. Hardcoding a separate state per environment works, but we'd like to do this for any feature branch, so the app environment is not deterministic. However, variables are not allowed when configuring the state backend, so that seems to block us from reusing the same module to provision different Kubernetes environments within namespaces. I haven't explored workspaces, so I will check that out now.
@jbg thank you for the RTFM tip. Just tried out workspaces and that was exactly what I was missing. We have our Terraform broken down into modules, so for anyone trying this out, be sure to add a
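As a sketch of the workspace approach (the resource name and the develop/staging mapping are illustrative assumptions), the active workspace can be interpolated into the namespace so each environment gets its own:

```hcl
# Illustrative: derive the namespace from the active workspace, e.g. after
#   terraform workspace new feature-x
#   terraform apply
# Each workspace keeps its own separate state in the same backend.
resource "kubernetes_namespace" "app" {
  metadata {
    name = terraform.workspace == "default" ? "develop" : terraform.workspace
  }
}
```

Since `terraform.workspace` is evaluated per workspace, switching workspaces plans against a fresh state rather than forcing a destroy-and-recreate in the shared one.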
If you want to keep the existing namespace managed, you wouldn't change the variable on the existing resource. Instead, you would add a second resource (or a second instance of your module) for the new namespace. Or, as you've found, you can use workspaces to have multiple separate states in the same backend. This is more appropriate when you want your whole configuration to be duplicated N times.
What about leaving this resource and behavior as is, and later renaming it? I don't think there is currently a workaround to check if a namespace exists before creating one with just the kubernetes provider.
Terraform resources represent things, not operations.
As of Terraform 1.5.0 you can use import blocks to do this without the separate import step. You do still have to know whether the namespace exists in order to know whether you need the import block. I'm curious to understand better the use case where you don't know.
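The import-block approach mentioned above can be sketched like this, reusing the resource from the issue report (the namespace name is illustrative; for kubernetes_namespace the import ID is the namespace name):

```hcl
# Illustrative: adopt a pre-existing namespace into state at plan time
# (Terraform 1.5.0+), instead of running `terraform import` separately.
import {
  to = kubernetes_namespace.kubnss["myns"]
  id = "myns"
}

resource "kubernetes_namespace" "kubnss" {
  for_each = toset(["myns"])
  metadata {
    name = each.key
  }
}
```

After the first successful apply the import block has done its job and can be removed from the configuration.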
@oferchen, regarding the workaround you proposed: can you please confirm that when you run apply a second time, it doesn't try to remove the namespace (thus creating/recreating it on every other execution), given that kubernetes_all_namespaces will start containing the new namespace once it is created by Terraform?
Regarding the broader discussion, I feel that tools should focus on allowing people to do what they need to do. In some of the points above, it looks like the need to perform this simple operation is seen as less important than the purity of the Terraform perspective on things, without a simple workaround being provided for this simple but common operation. In our case, we need to "create a namespace if one doesn't yet exist", as we are deploying Terraform resources and GitOps/Flux resources, and sometimes some are deployed before others. So, using a resource that can only create-or-fail doesn't fit that flow.
The snippet was specifically designed to omit namespaces that already exist. While I do agree that this is an issue that should be mitigated by Terraform, this issue is already almost two years old and there has been no progress, so a workaround makes sense here.
Hi @oferchen. Would it make more sense to reopen the ticket as something more focused on adding the ability to have the provider support both create and apply semantics? In our case, we ended up having to shift to "alekc/terraform-provider-kubectl", where it behaves like apply (and I can select "apply_only"), as the risk of removing a lot of content from k8s if we remove the Terraform resource was too high. Having a similar option on this provider would be welcome.
@joaocc I think this provider's behavior is inconsistent with how Terraform is supposed to behave. Thanks.
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
kubernetes_namespace
Terraform Configuration Files
resource "kubernetes_namespace" "kubnss" {
for_each = toset(var.namespaces)
metadata {
name = each.key
}
}
Expected Behavior
If the namespace is already created, it should just skip that resource and move on to the next one.
Actual Behavior
Error: namespaces "xxxx-web-xxxxx" already exists
│
│ with module.production.module.kubernetes_web.kubernetes_namespace.kubnss["myns"],
│ on m/kubernetes/main.tf line 69, in resource "kubernetes_namespace" "kubnss":
│ 69: resource "kubernetes_namespace" "kubnss" {