In the past we have observed cases where an application is running but does not accept any connections. When we looked into it, the app healthcheck was passing and the envoy proxy was running as well, yet no requests were reaching the app. This leads to the following loop:
1. The gorouter is unable to open a connection to the Diego cell.
2. The gorouter prunes the endpoint.
3. Since the app healthcheck passes, the endpoint gets re-registered.
This is why we started looking into ways to do some sort of healthchecking on the proxy itself. The best option we currently see is extending the app healthcheck so that it also checks the proxy. Currently it uses only the app port. We can add a parallel check that does the same through the proxy port; the proxy will then forward the request to the app and we will receive a response. This of course doubles the number of healthcheck requests to the app, but that should not have any significant impact.
Of course, this extra check could be gated behind a flag in the executor, so it is only used when needed.
Please let me know what you think on the topic. I believe this has been discussed in the past, and maybe someone can give some context on why it was never implemented.
Adding an envoy proxy liveness check. With this new functionality, when envoy stops accepting TCP connections the health check fails and the app is restarted.
Implemented with these two PRs: cloudfoundry/executor#110 and #985.
The changes were tested on a test environment, where it is visible that there are 3 envoy TCP liveness healthchecks:
The setup we tested was our environment with the newly implemented envoy liveness check, plus an iptables rule on the container side that drops everything with destination port 61001 (envoy), which causes a timeout on the gorouter side:
iptables -A INPUT -p tcp --dport 61001 -j DROP
After applying the iptables rule on the container, which drops traffic to destination port 61001, we received the expected error message and the app was restarted, which proves that the newly implemented logic works.
Envoy proxy healthchecks
Summary
Diego repo
https://github.com/cloudfoundry/executor
https://github.com/cloudfoundry/healthcheck