WIP: Add conntrack to gcp route script #2421
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: michaelgugino. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 671243a to af5ba9b
```shell
sleep 5
for vip in "${!vips[@]}"; do
  echo "Removing stale conntrack connections for ${vip}"
  conntrack -D -r "${vip}" || echo "unable to run conntrack"
done
```
It looks like /usr/sbin/conntrack is actually part of the same package, conntrack-tools, which also includes a whole daemon (conntrackd.service) that... I don't think we would use in OpenShift.
One option is to change the Dockerfile here to do yum -y install conntrack-tools and teach this script to exec back into the MCD container to run this.
This all relates to the larger goal of having more of these scripts migrate to Go code that's part of the MCD instead of injected onto the host. We have systemd-run as a way for the pod to schedule code on the host; we could invent e.g. /run/bin/machine-config-daemon exec conntrack -D -r "${vip}", which would use a local socket to exec code in the MCD pod.
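To make that idea a bit more concrete, here's a rough sketch of the kind of command the pod might hand to the host. Everything here is illustrative: the wrapper function and the unit name are made up, and the sketch echoes the command rather than executing it, since actually running it would require root, systemd access from the pod, and conntrack-tools on the host. The --unit, --wait and --collect flags are standard systemd-run options.

```shell
#!/bin/sh
# Hypothetical sketch: build the transient-unit invocation the pod would
# schedule on the host via systemd-run. The unit name is invented for
# illustration; only the flags are real systemd-run options.
build_flush_cmd() {
    vip="$1"
    echo "systemd-run --unit=mco-conntrack-flush --wait --collect conntrack -D -r ${vip}"
}

# Echo the command instead of running it, so the sketch runs anywhere.
build_flush_cmd "192.0.2.10"
```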
Another alternative is to yum -y install conntrack-tools in here, and then change the pod startup to copy the binary out to the host.
Another alternative is to make it a hidden extension we only install where needed.
Or finally, we could just ship it in CoreOS but then it quickly becomes an "API" that we have to maintain ~forever and makes it hard to keep track of who's using it and what version they need etc.
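Of the alternatives above, the copy-out step could be sketched roughly as follows. The paths are illustrative: a temp file stands in for /usr/sbin/conntrack and a scratch directory stands in for the host root mount (often mounted at something like /rootfs in the MCD pod), so the sketch runs anywhere without root.

```shell
#!/bin/sh
# Hypothetical pod-startup step: copy the conntrack binary from the image
# onto the host filesystem so host-side scripts can invoke it.
src="$(mktemp)"            # stands in for /usr/sbin/conntrack in the image
host_mount="$(mktemp -d)"  # stands in for the host root mount (e.g. /rootfs)
mkdir -p "${host_mount}/run/bin"
install -m 0755 "${src}" "${host_mount}/run/bin/conntrack"
ls -l "${host_mount}/run/bin/conntrack"
```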
(Or yet another option is to have the MCD use the netlink API to do that directly, but I wouldn't willingly inflict programming the netlink API on anyone I like 😉 )
Or finally, we could say this whole problem domain should move to the apiserver operator/team and they have a daemonset that runs on the control plane 😄
For now, a PR to add to the host: openshift/os#502
I'm a fan of 'do whatever is easiest' in this case. It appears that today, installing conntrack-tools is easiest. If this is seen as a negative, we should make a better option the 'easiest' option.
coreos/fedora-coreos-tracker#404
https://bugzilla.redhat.com/show_bug.cgi?id=1925698
openshift/machine-config-operator#2421
This will help us work around a believed kernel bug for OpenShift right now. We may remove this later.
@michaelgugino: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
No description provided.