Hi François-Xavier,
replies inline.
On 04/08/2024 16:16, François-Xavier Thomas wrote:
Hello all,
I hope you don't mind me sharing my results here, at least this might
help other people in the same situation as me - but if anybody has a
comment on how things are supposed to be working I would be glad!
Re: container-to-container DNS issues
-------------------------------------
> TL;DR: Internal networks should work for my purposes as Keith said
(while still being able to resolve container names via DNS), but
apparently that doesn't work on my host due to either a bug or
unsupported configuration.
I'm happy to say I solved this!
In the end the DNS issues were all my fault - my local DNS resolver
was configured to run on 0.0.0.0 (all interfaces), and it took me a
while to figure out that it would silently prevent
container-to-container name resolution from working because it *also*
automatically listened on the Podman virtual interfaces before
aardvark-dns had a chance to start.
I changed it to only run on the external and loopback interfaces, and
now container-to-container DNS works just fine with the default port.
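For anyone hitting the same thing, a quick way to spot the conflict is to check what already holds port 53 before the containers start; the dnsmasq options below are only one example, since the original setup doesn't say which resolver is in use and any local resolver has an equivalent setting:

    # Show everything already listening on port 53 (run as root so the
    # owning process is visible).
    ss -lunp 'sport = :53'     # UDP listeners
    ss -ltnp 'sport = :53'     # TCP listeners

    # If the host resolver shows a wildcard 0.0.0.0:53 socket, restrict it
    # to specific addresses so the Podman bridge IPs stay free for
    # aardvark-dns. For dnsmasq (example addresses only) that would be:
    #   listen-address=127.0.0.1,192.0.2.10   # loopback + external address
    #   bind-interfaces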
It looks like there were two separate things in Podman that made
understanding all this more difficult:
- there is no warning message when aardvark-dns can't start because
the port is already taken by the host (that would have made the issue
very obvious)
I fixed this very recently in netavark/aardvark-dns v1.12.0, so this is
already fixed.
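If you want to double-check that your installed stack already has that change, recent Podman versions report the network backend versions in podman info; the grep is just a quick way to pull out that block:

    # netavark/aardvark-dns 1.12.0 or newer should warn when the DNS port
    # is already taken. Output layout varies between Podman versions.
    podman info | grep -iA7 'networkBackendInfo'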
- internal networks don't generate DNAT rules when dns_port is set to
anything other than 53; containers can access the DNS resolver on the
non-standard port just fine and have /etc/resolv.conf configured to
the correct IP, but the resolv.conf mechanism cannot (to my knowledge)
use a different port and thus DNS fails in practice
Correct, this is a problem; please file a bug on the netavark repo about
it. This is similar to
https://github.com/containers/podman/issues/22807. Right now internal
networks do nothing with the host firewall, so I think we must reevaluate
that design decision.
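To make the gap concrete: on a non-internal network with a custom DNS port, netavark redirects queries sent to the gateway on port 53 to the real port with DNAT rules along the lines of the sketch below; on an internal network nothing like this is created, so resolv.conf points at port 53 with nothing answering there. The chain, subnet and port numbers here are only illustrative, not the exact rules netavark writes:

    # Illustrative only -- not the exact chains/rules netavark generates.
    # Redirect DNS queries aimed at the network gateway (port 53) to the
    # alternate port aardvark-dns actually listens on.
    iptables -t nat -A PREROUTING -s 10.89.0.0/24 -d 10.89.0.1 \
        -p udp --dport 53 -j DNAT --to-destination 10.89.0.1:1153
    iptables -t nat -A PREROUTING -s 10.89.0.0/24 -d 10.89.0.1 \
        -p tcp --dport 53 -j DNAT --to-destination 10.89.0.1:1153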
Re: restricting container access
--------------------------------
Now back to my original goal.
> For each container that needs to talk to another container, define a
network in the `networks:` top-level element with `internal: true` set
for it. In each container's entry in the `services:` top-level
element, include that network in `networks:`.
> If you only want the containers' exposed ports to be accessible on
the machine running the containers, specify 127.0.0.1 in addition to
the port(s) themselves in each container's `ports:` section (syntax
here:
https://github.com/compose-spec/compose-spec/blob/master/05-services.md#p...).
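For reference, roughly the same setup expressed as plain podman commands (container, image and network names are made up, and as discussed further down, the published port will not actually work rootful while the network is internal):

    # Sketch of the advice above using the CLI; names/images are examples.
    podman network create --internal backend-net
    podman run -d --name db --network backend-net docker.io/library/redis:7

    # Publish the web port on loopback only, so it is reachable from the
    # host at 127.0.0.1:8080 but not from other machines. Note: with an
    # internal network and rootful netavark this mapping currently does
    # not work (see the discussion below).
    podman run -d --name web --network backend-net \
        -p 127.0.0.1:8080:80 docker.io/library/nginx:latest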
I retried Keith's advice, here is the current behavior I'm seeing
(with ufw set up to default deny incoming and forwarded connections,
and no extra rules related to containers - just my regular rules for
services on the host):
- With internal: false, I can communicate (ping, mapped ports and
exposed ports) with all the containers from the host, and they can
also access external services on the internet or on the local network.
- With internal: true, I can ping the containers from the host, but
the rest of the communications are blocked, including mapped ports.
The last part sounds weird to me, is that the expected behavior or is
it maybe another misconfiguration on my part?
This is expected with our current design; see my point above about how we
do nothing with the firewall in the internal case, so no port DNAT rules
are added either.
However, note that this will actually work when running rootless Podman
today, as it uses a user-space forwarder.
I was expecting port mapping to act as an allowlist, something like
"internal networks have no communications, except those that are
explicitly allowed one by one".
I unfortunately haven't found any online resources that tell me how
internal networks are supposed to work in detail, other than saying
(paraphrasing) "internal networks are internal"... The podman compose
commands also let me map a port just fine but doesn't tell me it's not
going to work because the network is internal.
It sounds like so far my best bet would be:
* Don't use internal networks (because I want port mapping to work)
* Add extra iptables/ufw rules that default-deny all traffic outside
of the podman network except the one I want (still working on that
one... making it appear on boot without conflicting with the netavark
rules is tricky)
Yes, this is the difficult part. I think we should likely expose some
netavark-user chains that users could hook into to further restrict
traffic at their will. Contributions welcome:
https://github.com/containers/netavark/issues/705.
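Until something like that exists, a rough sketch of the ufw side of what you describe could look like the following; the interface name and destination are placeholders, and as you noted, the ordering against the rules netavark inserts into the FORWARD chain is the tricky part:

    # Rough sketch only -- interface/address values are placeholders, and
    # netavark's own ACCEPT rules in the FORWARD chain may still match first.
    ufw default deny routed                    # default-deny forwarded traffic
    ufw route allow in on podman1 out on eth0 \
        to 192.0.2.10 port 443 proto tcp       # allow one specific flow
    ufw reload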
I'll keep you posted when I find a workable solution.
Take care,
FX
_______________________________________________
Podman mailing list -- podman(a)lists.podman.io
To unsubscribe send an email to podman-leave(a)lists.podman.io
--
Paul Holzinger