
Outbound routing from containers consistently fails under high traffic load #6086

Closed · 3 tasks done
@npoczynek opened this issue Dec 9, 2021 · 6 comments

  • I have tried with the latest version of Docker Desktop
  • I have tried disabling enabled experimental features
  • I have uploaded Diagnostics
  • Diagnostics ID: F64F02C2-7344-4D73-8F74-D84E482B2D3B/20211209223739

Expected behavior

Docker should be able to reliably route traffic to and from container networks.

Actual behavior

When generating large amounts of outbound traffic (using nmap SYN scans), I consistently observe that outbound traffic eventually fails to be routed from the scanning container to the external scan target. Normal routing behavior resumes upon Docker and/or host machine restart.

Information

  • macOS Version: 11.6.1
  • Intel chip or Apple chip: Intel
  • Docker Desktop Version: Up to and including 4.3.0

Steps to reproduce the behavior

  1. Configure a scan target with at least one open port on an interface external to Docker; it can be local to the host machine, e.g. python -m http.server
  2. In a loop, run any version of nmap from any container and scan a range of ports that includes the known open port. For example: while true; do docker run instrumentisto/nmap:7.92 -sS -Pn -p 1024-2048,8000 192.168.1.11; sleep 5; done
  3. Observe that the known open port is eventually reported as filtered, and packet captures show no more SYN packets leaving the Docker network. nmap debug output shows that the container begins to receive ICMP host-unreachable messages for all ports (a consolidated repro sketch follows the scan output below):

RCVD (175.0555s) ICMP [192.168.65.3 > x.x.x.x Host x.x.x.x unreachable (type=3/code=1) ] IP [ver=4 ihl=5 tos=0xc0 iplen=72 id=17344 foff=0 ttl=64 proto=1 csum=0x4487]

Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times will be slower.
Starting Nmap 7.91 ( https://nmap.org ) at 2021-12-09 22:25 UTC
Nmap scan report for 192.168.1.11
Host is up (0.0018s latency).
Not shown: 1025 filtered ports
PORT     STATE SERVICE
8000/tcp open  http-alt

Nmap done: 1 IP address (1 host up) scanned in 71.60 seconds
Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times will be slower.
Starting Nmap 7.91 ( https://nmap.org ) at 2021-12-09 22:27 UTC
Nmap scan report for 192.168.1.11
Host is up (3.1s latency).
All 1026 scanned ports on 192.168.1.11 are filtered

Nmap done: 1 IP address (1 host up) scanned in 46.96 seconds
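For convenience, a consolidated sketch of steps 1–2; 192.168.1.11 and port 8000 are the example target and open port from this report, so substitute your own host and port:

# Step 1: on a host outside the Docker network, serve one known-open port
# (python's http.server listens on port 8000 by default):
python -m http.server

# Step 2: on the Docker host, scan that target from a container in a loop:
while true; do
  docker run instrumentisto/nmap:7.92 -sS -Pn -p 1024-2048,8000 192.168.1.11
  sleep 5
done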
@npoczynek (Author)

A couple of updates. First, this looks like it might be a duplicate of #3448 and/or docker/for-win#8861. After seeing that it was reported on Windows as well, I tried to reproduce the behavior in an Ubuntu VM. It's actually even worse there: when the issue presents on Ubuntu, I lose routing out of the VM itself. So this looks pretty serious...
[Screenshot attached: Screen Shot 2021-12-10 at 1.40.59 PM]

@vanpelt commented Dec 13, 2021

I'm seeing the same thing on 4.3.0 on an M1 Mac. Simply exec-ing into the container and running curl against http://host.docker.internal:PORT succeeds the first ~10 times and then hangs 🙃
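For reference, a minimal sketch of that loop, run from a shell inside any container that has curl (e.g. after docker exec -it <container> sh); port 8000 and the iteration count are only placeholders for whatever service is listening on the host:

# Repeatedly hit a host-side service via host.docker.internal; the first
# handful of requests succeed, then later ones hang or time out.
i=0
while [ $i -lt 50 ]; do
  curl -sS -m 5 -o /dev/null -w "%{http_code}\n" http://host.docker.internal:8000 || echo "request failed"
  i=$((i+1))
done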

@majaco commented Dec 27, 2021

Just to let you know: same for me on 4.3.2 on Mac.

@npoczynek (Author)

I've just tried testing with --userland-proxy=false and got similar results, which makes this look like a resource leak or a bug in the setup and teardown of iptables rules. That would also help explain why, when I test on Ubuntu, my VM loses routing entirely as soon as the Docker issues begin.
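For anyone repeating that test with Docker Desktop: the flag corresponds to the userland-proxy key in the Docker Engine configuration (the daemon.json editable from the Docker Engine settings pane); a minimal sketch showing only the relevant key:

{
  "userland-proxy": false
}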

@docker-robott (Collaborator)

Issues go stale after 90 days of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30 days of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@docker-robott (Collaborator)

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

docker locked and limited conversation to collaborators on Jun 3, 2022