Hello all,
I hope you don't mind me sharing my results here; at least this might
help other people in the same situation as me - but if anybody has a
comment on how things are supposed to be working, I would be glad!
Re: container-to-container DNS issues
-------------------------------------
TL;DR: Internal networks should work for my purposes as Keith said
(while still being able to resolve container names via DNS), but
apparently that doesn't work on my host due to either a bug or an
unsupported configuration.
I'm happy to say I solved this!
In the end the DNS issues were all my fault - my local DNS resolver was
configured to run on 0.0.0.0 (all interfaces), and it took me a while to
figure out that it would silently prevent container-to-container name
resolution from working because it *also* automatically listened on the
Podman virtual interfaces before aardvark-dns had a chance to start.
I changed it to only run on the external and loopback interfaces, and
now container-to-container DNS works just fine with the default port.
It looks like there were two separate things in Podman that made
understanding all this more difficult:
- there is no warning message when aardvark-dns can't start because the
port is already taken by the host (that would have made the issue very
obvious)
- internal networks don't generate DNAT rules when dns_port is set to
anything other than 53; containers can reach the DNS resolver on the
non-standard port just fine and have /etc/resolv.conf pointing at the
correct IP, but the resolv.conf mechanism cannot (to my knowledge) use a
different port, so DNS fails in practice
Re: restricting container access
--------------------------------
Now back to my original goal.
For each container that needs to talk to another container, define a
network in the `networks:` top-level element with `internal: true` set
for it. In each container's entry in the `services:` top-level element,
include that network in `networks:`.
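To make that concrete, here is a minimal compose sketch of what I mean
(the service names, images and network name are just placeholders):

    services:
      app:
        image: docker.io/library/nginx:latest
        networks:
          - backend   # app and db can resolve and reach each other here
      db:
        image: docker.io/library/postgres:16
        networks:
          - backend

    networks:
      backend:
        internal: true   # no connectivity outside this network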
If you only want the containers' exposed ports to be accessible on the
machine running the containers, specify 127.0.0.1 in addition to the
port(s) themselves in each container's `ports:` section (syntax here:
https://github.com/compose-spec/compose-spec/blob/master/05-services.md#p...).
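In the short `ports:` syntax that looks something like this (8080 and 80
are just example port numbers):

    services:
      app:
        ports:
          - "127.0.0.1:8080:80"   # published only on the host's loopback

Without the 127.0.0.1 prefix the port would be published on all host
addresses.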
I retried Keith's advice; here is the current behavior I'm seeing (with
ufw set up to deny incoming and forwarded connections by default, and no
extra rules related to containers - just my regular rules for services
on the host):
- With internal: false, I can communicate (ping, mapped ports and
exposed ports) with all the containers from the host, and they can also
access external services on the internet or on the local network.
- With internal: true, I can ping the containers from the host, but the
rest of the communication is blocked, including mapped ports.
The last part sounds weird to me: is that the expected behavior, or is
it maybe another misconfiguration on my part?
I was expecting port mapping to act as an allowlist, something like
"internal networks have no communication, except what is explicitly
allowed one by one".
I unfortunately haven't found any online resources that tell me how
internal networks are supposed to work in detail, other than saying
(paraphrasing) "internal networks are internal"... The podman compose
command also lets me map a port just fine, but it doesn't tell me the
mapping isn't going to work because the network is internal.
It sounds like so far my best bet would be:
* Don't use internal networks (because I want port mapping to work)
* Add extra iptables/ufw rules that default-deny all traffic outside of
the podman network except the traffic I want (still working on that
one... making those rules appear on boot without conflicting with the
netavark rules is tricky)
I'll keep you posted when I find a workable solution.
Take care,
FX