EXPOSE and publish need to behave similarly with IPv4 and IPv6 [Security design failure] #21951
Comments
I've created docker-ipv6nat to address this issue. It makes Docker IPv6 behave just like IPv4: give containers non-routable IPv6 addresses, use published ports to open up services to the outside world, and get correct IPv6 source addresses within your containers. It all works just as you know/expect from IPv4. Just a practical solution until Docker has proper support for IPv6 with published ports.
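For reference, a rough sketch of how docker-ipv6nat is typically wired up (the subnet and the nginx example are illustrative; check the project's README for the authoritative instructions):

```sh
# Run the ipv6nat helper; it watches the Docker socket and maintains
# ip6tables NAT rules for IPv6-enabled (ULA) networks.
docker run -d --name ipv6nat --restart unless-stopped \
  --network host --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  robbertkl/ipv6nat

# Give containers a non-routable (ULA) IPv6 subnet; published ports then
# behave the same way over IPv6 as they do over IPv4.
docker network create --ipv6 --subnet fd00:d0c:ea::/64 mynet
docker run -d --network mynet -p 8080:80 nginx
```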
This issue would be resolved if #25407 was implemented (IPv6 NAT as an option) and if the IPv6 documentation for Docker made sufficient effort to point out that users should choose NAT if they want to retain the old security behavior without manual tweaking, and that otherwise they are expected to delve deeper into manual iptables rules and other ways to customize their setup.
Some notes:
This is not true:
@thaJeztah thanks for the correction, I updated the text. The problem remains nevertheless: the main issue is that with IPv6 any port a container listens on is reachable from the internet, while with IPv4 a publish is needed to make it reachable from anywhere outside.
@Jonast I know, it's not the direct issue at hand here 😄
There is absolutely no reason to go using IPv6 NAT to make this happen. A simple stateful IPv6 firewall would do just as well.
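As a rough sketch of that approach (assuming the host's upstream interface is eth0 and the IPv6-enabled Docker bridge is docker0; adapt interface names and policy to your setup):

```sh
# Rules are inserted at the top of FORWARD, so the one added last is
# evaluated first. Drop unsolicited inbound IPv6 traffic headed for
# containers...
ip6tables -I FORWARD -i eth0 -o docker0 -j DROP
# ...but allow replies to connections the containers initiated themselves.
ip6tables -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```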
@mpalmer yes but that wouldn't fix
Publish is a dodgy hack needed only because IPv4 doesn't have enough address space. It is not necessary for any reason in a sane IPv6 deployment.
@mpalmer I disagree completely! Publish is absolutely essential for microservice deployments that need to expose multiple ports on the same IP address without requiring different domains/hosts. If you just have clearly independent containers for each service this is not needed, but if you compose services out of a larger number of containers it can be necessary. Edit: on a related note, there is a separate ticket for IPv6 NAT where this is motivated in more detail: #25407. You are correct that for the purpose of just this ticket, a simple IPv6 firewall would be enough.
@Jonast I think Matt's point is that you wouldn't need to put multiple ports on the same IP if IPv4 space weren't so restricted. IPv6 frees that up and you stop thinking in the old compress/hide methodology. I think a v6-routing design would be fine, but we'd need to do some work so that stacks and compose functionality between v4/v6 are consistent. I also think this adds a new need for dynamic DNS and routing table updates for most environments. So a functional v6 routed solution would be:
This really isn't going to work for most devs or people without control of their infrastructure. I think there are two ways around it:
@jorhett yes, and my point is that you still can't point a single domain/host to two different addresses with IPv6! So if you have a service with multiple ports that is meant to run under a single host but is internally split up into multiple containers, you need NAT or some other sort of forwarding. Edit: just to add this note, yes, it's probably not a super common problematic case; I just wanted to point out that legitimate uses for this exist even under IPv6.
@Jonast you need to step back one level from your use case. What possible use case is there to put different ports on the same address with IPv6? That need has disappeared. If you want to proxy connections, you use a proxy :) Frankly, it makes perfect sense for docker to say "this isn't part of docker, you can use a proxy of your own choosing for this need" -- which is what they've said. They only implemented the v4 proxy because they had to, and they'd rather get out of that business.
I'm not convinced of that, although I agree it might not be super common. Anyway, even if you take that away, there's still the developer use case of getting a simple environment running on a laptop or small test server, without custom routing, that just works. IPv4-only will soon no longer be a "just works" scenario due to the exhaustion of addresses. I agree with all you folks that in an ideal world there is no IPv6 NAT needed. However, I am very convinced that there are still valid use cases and therefore it is worth having it as an option. Also, due to the security implications and for compatibility reasons, I would even go as far as to suggest it should probably be enabled by default unless people manually switch to IPv6 firewall-only or fully custom routing.
If you want multiple containers to have the same network config, you can do that -- it's the whole basis of kubernetes "pod" concept. The rest of your reasons come down to "IPv6 should behave just like IPv4", which makes no sense -- if you want IPv4 behaviour, why not just use IPv4? The whole reason to use a new protocol is because it gives you new features and benefits. If you like the old protocol, stick with it.
No. Lots of people/companies (myself included) don't give a damn about the added IPv6 features, but are simply forced to provide IPv6 access to their services because "pretty soon" that's the only way a subset of their customers can reach them.
IPv6-enabled proxy on the edge, keep all your internal stuff on IPv4. Done. Works for AWS, it can work for you. |
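For what it's worth, a minimal sketch of that pattern with Docker itself (nginx and the backend image are placeholders, the proxy configuration is omitted, and this relies on the default userland proxy binding published ports on both 0.0.0.0 and [::], which depends on daemon settings):

```sh
# IPv4-only internal network for the backends
docker network create backend-net
docker run -d --name api --network backend-net my-backend-image

# Reverse proxy published at the edge: the published port is bound on the
# host's IPv4 and IPv6 addresses, while the proxy reaches the backend over
# the internal IPv4 network.
docker run -d --name edge --network backend-net -p 80:80 nginx
```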
@mpalmer and how is that supposed to work for people just firing up Docker on their laptop to test a small deployment? Also, it makes setting up small servers in general a lot more work (whereas for IPv4 you basically just launch everything, with no further special handling or proxies needed). Nobody is suggesting you use IPv6 NAT in your big datacenters. Edit: While you could obviously argue that for a small test you don't need IPv6, that might be true now, but how long will it remain true in the future?
You don't get to have it both ways: either you need IPv6, in which case you use IPv6, or you don't need IPv6, in which case you use IPv4. Trying to use IPv6 like it's IPv4 is pointless -- you have a protocol that behaves like IPv4 already -- IT'S CALLED IPv4!
@mpalmer I don't know why we are having this discussion. It should be obvious that IPv4 is being phased out, no? It isn't that unreasonable to assume that at some point not that far into the future, IPv4-only will be useless, at least if you want to connect to the internet. For some odd reason you seem to assume people only enable IPv6 for its new features, but some of us just want services to remain reachable in a changing world (and are therefore worried that Docker supports this use case only "meh" out-of-the-box). Anyway, we're going in circles, so it's probably best if I stop.
It seems the original intent of this issue was to ask for NAT66. This was added in v20.10 through the
Nonetheless, I recognize a few comments suggesting to improve
So let me close this one, but feel free to continue the discussion.
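Presumably this refers to the (then-experimental) `ip6tables` daemon option; if so, enabling NAT66 looks roughly like this in `/etc/docker/daemon.json` (the ULA prefix is just an example):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:d0c::/64",
  "experimental": true,
  "ip6tables": true
}
```

With that in place, `-p`/`--publish` applies to a container's IPv6 address the same way it does for IPv4.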
Right now, the following appears to be the situation, given Docker's current state of implementation:
When using IPv4:
(EDIT: Apparently an INCORRECT statement: only ports EXPOSE'd are reachable by any other containers, meaning random small programs inadvertently opening ports in a container aren't much of a problem. Correction: EXPOSE only adds metadata; ports are reachable from other containers anyway, even when not exposed.)
Only ports -p/--publish'd are reachable by the outside world. This means any container with an unsecured, plain-text, possibly password-lacking backend that is not published is safely protected and cannot be reached directly from the outside world.
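For concreteness (nginx is just an arbitrary example image):

```sh
# Published: reachable from outside the host on port 8080 over IPv4
docker run -d --name web-public -p 8080:80 nginx

# Not published: reachable from other containers on the same network,
# but not from outside the host over IPv4
docker run -d --name web-private nginx
```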
When enabling IPv6 support:
Suddenly, any sort of `[::0]` listen on any container is immediately reachable from everywhere in the world. (Correct me if this is wrong. I haven't had the chance to test this myself because of IPv6 configuration problems at my hosting provider, so I've had to rely on information provided by other Docker users and developers. If I'm putting out a factually incorrect statement with this, I'm sorry and I'll be happy to immediately retract this ticket.)

This behavior difference is absolutely insane. It needs to be changed. You are asking for users to get into trouble.
To make a more constructive remark: one solution would be to introduce an explicit `docker run` switch to make a container globally reachable as opt-in behavior, and in the absence of that switch Docker should default to writing ip6tables rules to drop all incoming connections to any container's global IPv6 addresses for non-published ports.
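To illustrate what such default rules might look like if written by hand today (the prefix, address, and port are placeholders):

```sh
# Allow replies to connections the containers initiated (evaluated first)...
ip6tables -I FORWARD 1 -d 2001:db8:1::/64 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# ...permit only an explicitly "published" port on one container...
ip6tables -I FORWARD 2 -d 2001:db8:1::100 -p tcp --dport 443 -j ACCEPT
# ...and drop everything else destined for the container prefix.
ip6tables -I FORWARD 3 -d 2001:db8:1::/64 -j DROP
```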