
HTTP health check not working #40

Open
jpds opened this issue Aug 20, 2023 · 3 comments
Comments

@jpds
Contributor

jpds commented Aug 20, 2023

Running the example job configuration here with the latest package versions from ports:

  • Nomad 1.5.3
  • nomad-pot-driver 0.9.0
  • pot 0.15.5

...results in a TCP health check that simply reports: invalid port "http": port label not found.

I have also tried adapting the job to be:

job "example" {
  datacenters = ["dc1"]
  type        = "service"

  group "group1" {
    network {
      port "http" { to = 80 }
    }

    count = 1

    task "www1" {
      driver = "pot"

      service {
        provider = "nomad"
        tags = ["nginx", "www"]
        name = "nginx-example-service"
        port = "http"

         check {
            name     = "http_probe"
            type     = "http"
            path     = "/"
            interval = "10s"
            timeout  = "3s"
          }
      }

      config {
        image = "https://potluck.honeyguide.net/nginx-nomad"
        pot = "nginx-nomad-amd64-13_1"
        tag = "1.1.13"
        command = "nginx"
        args = ["-g","'daemon off;'"]

        #copy = [
        #  "/mnt/s3/web/nginx.conf:/usr/local/etc/nginx/nginx.conf",
        #]
        #mount = [
        #  "/mnt/s3/web/www:/mnt"
        #]
      }

      resources {
        cpu = 200
        memory = 64

        network {
          mbits = 10
        }
      }
    }
  }
}

But with that configuration, the health check tries to reach the port on the host's IPv6 address:

nomad: Get "http://[2a01:XXXX:AAAA::fe6b:566e]:27713/": dial tcp [2a01:XXXX:AAAA::fe6b:566e]:27713:

instead of directly probing the pot's IPv4 address:

pot name : www1_2ab8deb6_6778e4b0-9bcc-e17c-102d-554d74edd29f
        network : public-bridge
        ip : 10.192.0.8
        active : true
# curl 10.192.0.8
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
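One thing that might be worth trying (untested with the pot driver) is Nomad's standard address_mode attribute on the check block, which controls which address the check targets; address_mode = "driver" asks Nomad to probe the driver-assigned address rather than the host address:

```hcl
# Hypothetical variation of the check block above. `address_mode` is a
# standard Nomad check attribute, but its interaction with nomad-pot-driver
# has not been verified here.
check {
  name         = "http_probe"
  type         = "http"
  path         = "/"
  interval     = "10s"
  timeout      = "3s"
  address_mode = "driver" # probe the driver-assigned (pot) IP, e.g. 10.192.0.8
}
```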
@grembo
Contributor

grembo commented Sep 15, 2023

I usually use consul-based checks, but could you try changing your config block to:

      config {
        image = "https://potluck.honeyguide.net/nginx-nomad"
        pot = "nginx-nomad-amd64-13_1"
        tag = "1.1.13"
        command = "nginx"
        args = ["-g","'daemon off;'"]
        port_map = {
          http = "80"
        }
      }

@pizzamig
Contributor

consul cannot (and shouldn't) know the local pot address, as it's only reachable from the machine itself.
Nomad is registering the service using the IPv6 address and, IIRC, the setup is done over IPv4.

To work around this problem, you can try to force consul to advertise only the IPv4 address by adding this line to your /etc/rc.conf:

consul_args="-advertise 192.168.xxx.xxx"

and use the IPv4 address of your host.
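On FreeBSD this can also be set with sysrc instead of editing /etc/rc.conf by hand (a sketch; the address below is a documentation placeholder, not a real value):

```shell
# Hypothetical example: pin Consul's advertised address to the host's IPv4.
# 192.0.2.10 is a placeholder; substitute your host's actual address.
sysrc consul_args="-advertise 192.0.2.10"
service consul restart   # apply the change
```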

@grembo
Contributor

grembo commented Sep 28, 2023

consul cannot (and it shouldn't) know the local pot address, as it's only reachable from the machine itself. Nomad is registering the service using the IPv6 address and, IIRC, the set up is done in IPv4.

@pizzamig when using provider = "nomad", consul isn't involved (Nomad-based health checks are a relatively recent feature for setups without consul). I would still recommend using consul, though, and yes, health checks should be done against the "public" (inbound NAT) port.
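For reference, a consul-based variant of the service block from the job above might look like this (a sketch; it assumes a Consul agent is running next to Nomad and reuses the same names from the original job):

```hcl
service {
  # Omitting `provider = "nomad"` uses the default provider, "consul",
  # so the service is registered with the local Consul agent instead.
  name = "nginx-example-service"
  tags = ["nginx", "www"]
  port = "http"

  check {
    name     = "http_probe"
    type     = "http"
    path     = "/"
    interval = "10s"
    timeout  = "3s"
  }
}
```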
