"platform_reverse_proxy" always generates a diff #174
@jervi Thanks for the report! I've added this to our plan to investigate and fix.
@jervi Have you tried this configuration?

resource "platform_reverse_proxy" "containers" {
  docker_reverse_proxy_method = "SUBDOMAIN"
  server_provider             = "DIRECT"
  http_port                   = 80
  internal_hostname           = "artifacts.my-artifactory-domain.io"
  public_server_name          = "artifacts.my-artifactory-domain.io"
}

(or use …) This should eliminate the state drift for the time being.
Thanks for the response and suggestion! Unfortunately it doesn't fix the issue.

Terraform code:

resource "platform_reverse_proxy" "containers" {
  docker_reverse_proxy_method = "SUBDOMAIN"
  server_provider             = "DIRECT"
  http_port                   = 80
  internal_hostname           = var.artifactory_url
  public_server_name          = var.artifactory_url
}

Plan:

  # module.artifactory.platform_reverse_proxy.containers will be updated in-place
  ~ resource "platform_reverse_proxy" "containers" {
      ~ http_port = -1 -> 80
      ~ use_https = true -> false
        # (5 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Apply: …

New plan immediately after:

  # module.artifactory.platform_reverse_proxy.containers will be updated in-place
  ~ resource "platform_reverse_proxy" "containers" {
      ~ http_port = -1 -> 80
      ~ use_https = true -> false
        # (5 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
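As a stopgap (a generic Terraform technique, not something suggested in this thread), the drifting attributes could be excluded from future plans with the lifecycle ignore_changes meta-argument while the root cause is investigated. A minimal sketch, assuming the drifting attributes are exactly http_port and use_https as shown in the plan output above:

resource "platform_reverse_proxy" "containers" {
  docker_reverse_proxy_method = "SUBDOMAIN"
  server_provider             = "DIRECT"
  http_port                   = 80
  internal_hostname           = var.artifactory_url
  public_server_name          = var.artifactory_url

  # Workaround only: stop Terraform from reacting to the values Artifactory
  # keeps reporting back for these two attributes.
  lifecycle {
    ignore_changes = [http_port, use_https]
  }
}

This hides the drift rather than fixing it, so it is only worth keeping while the underlying cause is being tracked down.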
I also see that I posted this issue in the wrong project, sorry! Should've been in https://github.com/jfrog/terraform-provider-platform
@jervi That's odd behavior from TF and the provider. It suggests the settings are somehow being reverted in Artifactory by another process, so when TF fetches the current status it detects a difference between that state and your configuration. What is the response if you call the reverse proxy API to get the current configuration? In my local dev environment, with no changes out of the box, this is what I get:

{
  "key": "direct",
  "webServerType": "DIRECT",
  "artifactoryAppContext": "artifactory",
  "publicAppContext": "artifactory",
  "serverName": "localhost",
  "artifactoryServerName": "localhost",
  "artifactoryPort": 8081,
  "routerPort": 8082,
  "dockerReverseProxyMethod": "SUBDOMAIN",
  "useHttps": false,
  "useHttp": true,
  "httpsPort": 443,
  "httpPort": 8082
}
{
  "key": "direct",
  "webServerType": "DIRECT",
  "artifactoryAppContext": "artifactory",
  "publicAppContext": "artifactory",
  "serverName": "artifacts.my-artifactory-domain.io",
  "artifactoryServerName": "artifacts.my-artifactory-domain.io",
  "artifactoryPort": 8081,
  "routerPort": 8082,
  "dockerReverseProxyMethod": "SUBDOMAIN",
  "useHttps": true,
  "useHttp": false,
  "httpsPort": 443,
  "httpPort": -1
}
@jervi That's very odd. Do you have any infrastructure process (outside of Terraform) that maintains the Artifactory system configuration? The API response suggests the changes the provider makes (via the Reverse Proxy Configuration API) are either not being saved or are being reverted by another process.
No, not at all. It's a completely new instance; we haven't even started using it yet. It's installed using the official artifactory-ha Helm chart. The reverse proxy is also working correctly after applying the Terraform. It seems to me that the provider should ignore the irrelevant settings when using the SUBDOMAIN/DIRECT combo. Question: have you perhaps saved your settings using another method earlier (nginx/apache)? Maybe those changes are persisted in your instance?
@jervi This is what I get from a brand-new, clean instance of Artifactory, running locally as a Docker container:

{
  "key": "direct",
  "webServerType": "DIRECT",
  "artifactoryAppContext": "artifactory",
  "publicAppContext": "artifactory",
  "serverName": "localhost",
  "serverNameExpression": "",
  "artifactoryServerName": "localhost",
  "artifactoryPort": 8081,
  "routerPort": 8082,
  "sslCertificate": "",
  "sslKey": "",
  "dockerReverseProxyMethod": "REPOPATHPREFIX",
  "useHttps": false,
  "useHttp": true,
  "httpsPort": 443,
  "httpPort": 8082,
  "upStreamName": "artifactory"
}
@jervi The response you got may be a result of the … Do you customize the Helm chart installation?
Sorry about the late answer and happy new year!

ingress : {
  hosts : concat([var.acm_main_domain], var.acm_alternative_domains)
}
postgresql : {
  enabled : false # Disable the built-in postgresql database - we will use RDS instead
}
waitForDatabase : false # Disable waiting for the built-in postgresql database - we will use RDS instead
database : {
  type : "postgresql"
  driver : "org.postgresql.Driver"
  url : "jdbc:postgresql://db_write.artifactory.internal:5432/artifactory?sslmode=require"
  user : data.aws_secretsmanager_secret_version.db_username.secret_string
}
serviceAccount : {
  create : true
  name : "artifactory"
  annotations : {
    "eks.amazonaws.com/role-arn" : aws_iam_role.this.arn
  }
}
artifactory : {
  service : {
    annotations : {
      "service.kubernetes.io/topology-aware-hints" : "auto"
    }
  }
  loggers : [
    "access-audit.log",
    "access-request.log",
    "access-security-audit.log",
    "access-service.log",
    "artifactory-access.log",
    "artifactory-event.log",
    "artifactory-import-export.log",
    "artifactory-request.log",
    "artifactory-service.log",
    "frontend-request.log",
    "frontend-service.log",
    "metadata-request.log",
    "metadata-service.log",
    "router-request.log",
    "router-service.log",
    "router-traefik.log",
    "derby.log"
  ]
  catalinaLoggers : [
    "tomcat-catalina.log",
    "tomcat-localhost.log"
  ]
  openMetrics : {
    enabled : true
  }
  primary : {
    replicaCount : var.artifactory_replica_count
    minAvailable : 1
    resources : {
      requests : {
        cpu : "6500m"
        memory : "26Gi"
      }
      limits : {
        cpu : "7500m"
        memory : "26Gi"
      }
    }
  }
  persistence : {
    type : "s3-storage-v3-direct"
    size : var.artifactory_persistence_size
    maxCacheSize : var.artifactory_persistence_max_cache_size_in_GB * 1000 * 1000 * 1000
    storageClassName : kubernetes_storage_class.this.metadata.0.name
    awsS3V3 : {
      testConnection : true
      bucketName : aws_s3_bucket.this.id
      region : "eu-north-1"
      endpoint : "s3.eu-north-1.amazonaws.com"
      path : "filestore"
    }
  }
}
nginx : {
  replicaCount : 5
  minAvailable : 3
  http : {
    enabled : false
    internalPort : 8443
  }
  service : {
    type : "NodePort"
    ssloffload : true
  }
  loggers : ["access.log", "error.log"]
}
@jervi I was referring to the JFrog Helm Chart customizations: https://jfrog.com/help/r/jfrog-installation-setup-documentation/jfrog-platform-helm-chart-installation-steps and https://jfrog.com/help/r/jfrog-installation-setup-documentation/helm-charts-for-advanced-users. A freshly created Artifactory instance with its default settings/configuration doesn't exhibit the issue you reported, so my suspicion is that it has something to do with your installation.
These are the Helm chart customizations we are using. They are written in HCL because we install the Helm chart using Terraform; the map above is what we pass as values to the chart. I would still consider this a bug even though we don't use the default settings.
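For context, a minimal sketch of how such an HCL map is typically passed to the chart through the Terraform Helm provider (the resource and local value names here are illustrative, not taken from this thread):

resource "helm_release" "artifactory" {
  name       = "artifactory"
  repository = "https://charts.jfrog.io" # JFrog's public Helm chart repository
  chart      = "artifactory-ha"          # the chart mentioned earlier in the thread

  # The HCL map shown above, encoded to YAML and handed to the chart as values.
  values = [yamlencode(local.artifactory_values)]
}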
@jervi The resource is showing the correct state diff, because the Artifactory API is returning httpPort: -1 and useHttps: true (as in the response you posted above), which don't match your configuration.
I can assure you that Helm isn't reverting this config, and neither is any other outside process. It's just not sticking when applied. I can't say that it is a problem in the provider, but if it isn't, then it's a problem in Artifactory itself.
@jervi I encourage you to contact JFrog support to troubleshoot this issue.
Will do, thanks!
Describe the bug
Using a minimal configuration for HTTP Docker reverse proxy settings, Terraform will always create a diff on every plan, even when there are no changes.
Artifactory version: 7.98.9 rev 79809900 (self hosted)
Terraform version: 1.9.7
Terraform provider version: 1.18.2
Given this terraform config:
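(A minimal sketch of such a configuration, reconstructed from the attributes discussed in this issue; the exact original block is an assumption:)

resource "platform_reverse_proxy" "containers" {
  # Embedded Tomcat as the reverse proxy, with subdomain-based Docker access.
  docker_reverse_proxy_method = "SUBDOMAIN"
  server_provider             = "DIRECT"
}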
On every plan after the first apply, Terraform will generate the following diff:
The four settings are not required when using the embedded Tomcat reverse proxy, and as can be seen in the UI, they are not possible to change at all.
If I try to set them to what Terraform expects, just to get rid of the diff, it works for three of the config options, but the use_https option has some validation built in that makes it impossible to set without also adding SSL settings (which are completely unnecessary).

Example:

Results in:

And if I set use_https to false, I'm back to getting a diff on every plan/apply.

Requirements for an issue

… curl it at $host/artifactory/api/system/version
Expected behavior
There should be no diff when running the plan/apply a second time.