--root on a separately mounted filesystem - partly not working - ?
by lejeczek
Hi guys.
I have some bits such as images and containers on a separate
disk which gets mounted at boot time; 'podman' is then called
with '--root' pointing at that mount point - pretty regular stuff.
All works until the system reboots. After a reboot I see:
-> $ podman-apps container ls
ERRO[0000] Joining network namespace for container
d00badc4665ca3eae92b720925e6088e534ec81e01153d9d85f81eccf87d6a62:
error retrieving network namespace at
/run/user/2503/netns/netns-77b9d3b5-6522-741a-0d12-34921bf12828:
failed to Statfs
"/run/user/2503/netns/netns-77b9d3b5-6522-741a-0d12-34921bf12828":
no such file or directory
Error: error joining network namespace of container
d00badc4665ca3eae92b720925e6088e534ec81e01153d9d85f81eccf87d6a62:
error retrieving network namespace at
/run/user/2503/netns/netns-77b9d3b5-6522-741a-0d12-34921bf12828:
failed to Statfs
"/run/user/2503/netns/netns-77b9d3b5-6522-741a-0d12-34921bf12828":
no such file or directory
at which point I can:
-> $ podman-apps system reset
I can then re-create my containers and all seems to work until
the next reboot, when I get the errors above again.
'podman-apps' is an alias to 'podman --root /my-mountpoint'
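To spell it out - the second command below is only for illustration,
I do not actually pass --runroot anywhere, this is just my
understanding of where the per-boot state (including, I think, those
netns references) ends up: on the tmpfs under /run/user/2503 rather
than on the separate disk that --root points at:
-> $ podman --root /my-mountpoint container ls
-> $ podman --root /my-mountpoint --runroot /run/user/2503/containers container ls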
What is failing here? Is there some dependency I cannot see
which causes this failure?
many thanks, L.
2 years, 4 months
How could I use IB in container
by ugiwgh@qq.com
When I execute show_gids in the container, it shows the IB info, like the following.
[root@1fdf9c840916 ~]# /opt/user/sbin/show_gids
DEV     PORT  INDEX  GID                                      IPv4          VER  DEV
---     ----  -----  ---                                      ------------  ---  ---
mlx5_0  1     0      fe80:0000:0000:0000:e007:1bff:ff7f:2250                v1
n_gids_found=1
But when I execute ibv_rc_pingpong in the container, it shows an error.
[root@1fdf9c840916 ~]# /opt/user/bin/ibv_rc_pingpong
No IB devices found
When I execute ibv_rc_pingpong on the host, it runs fine, like the following.
[rhost@N0505 ~]$ ibv_rc_pingpong
local address: LID 0x00b4, QPN 0x015384, PSN 0xb8046a, GID ::
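I wonder whether I need to pass the RDMA device nodes into the
container explicitly when starting it, e.g. something along these
lines (the device names are a guess based on what usually sits under
/dev/infiniband, I have not verified them on this node):
$ podman run --device=/dev/infiniband/uverbs0 --device=/dev/infiniband/rdma_cm -it <image> /bin/bash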
2 years, 4 months
Fun with uidmap/keep-id/issue 12669
by Robin Lee Powell
OK, so, I have a thing I wrote (https://github.com/lojban/lbcs) that
does its own simple isolated rootless container management. It
starts a pod and then starts a configurable list of containers
within the pod.
https://github.com/containers/podman/issues/12669#issuecomment-998845927
completely broke one of my setups. Which is fine, but I want to
know what I should be doing instead.
I'm sure it's possible that where I'm going wrong is not where I'm
expecting, so I'm going to try to lay out the whole situation.
I have a pod in which I run exim,
spamassassin, and clamav. I'm running it rootless, as a user made
for this purpose. Let's say the user's UID is 1000, cuz, you know,
tradition.
I have several things mounted into the containers as a method of
persistence, such as exim's spool directory, clamav's definitions
dir, etc.
Because I'm running rootless, all those files are owned by UID 1000,
as you'd expect. I also run with --userns=keep-id, because, well,
that seems cleanest and most secure? Running things as root in the
container seems bad? I'm not sure I actually have a strong
principled reason to be doing that, so let me know if it's a bad
plan.
However, daemons tend to want to run as their own user, so my
standard pattern is:
RUN for user in mail clamupdate clamscan ; \
    do \
        find / -xdev -user $user -print0 | xargs -r -0 chown <%= userid %> ; \
        usermod -o -u <%= userid %> $user ; \
    done
, where "<%= userid %>" is replaced with "1000" by the templating
thingy. So: change the UID of the system user that the daemon runs
at to 1000, and change all files owned by that user to 1000.
This all works fine, I do it in many places, it's fine.
Here's the problem:
exim will *only* run as UID 93.
It is, I shit you not, baked in at compile time ;_;. (See
https://src.fedoraproject.org/rpms/exim/blob/rawhide/f/exim-4.96-config.p...
and
https://github.com/Exim/exim/blob/cf5f5988102b229ef87bc85ba3f0a9ec265f28a...
). I'm running from the Fedora RPMs. I do not want to roll my own.
I want to pass the network connection between clamav and exim across
localhost, because why have the network connection transit out of
containers?
So what I *used* to have was:
$ podman pod create --share=net \
      --network slirp4netns:mtu=30000,port_handler=slirp4netns \
      --userns=keep-id -n drata \
      -p 20280:20280 -p 20225:20225 -p 20265:20265 \
      --network slirp4netns:outbound_addr=192.168.123.132
$ podman run --pod=drata --log-driver=none --name exim -t \
      --uidmap 0:1:92 --uidmap 93:0:1 --uidmap 94:95:8192 \
      -v /home/spdrata/misc-containers/shared_data/var_spool/:/var/spool \
      -v /home/spdrata/misc-containers/shared_data/srv_lojban:/srv/lojban \
      -i spdrata/drata-exim:1
, and that worked fine. The uidmap maps the user running the
rootless container (UID 1000) on the host to UID
93 in the container.
(Side comment: the documentation for uidmap is *terrible*; coming up
with that uidmap set to do what I want took me *hours* of
experimentation.)
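For the record, here is my reading of those triples (my own
annotation, so take it with a grain of salt): rootless --uidmap is
container_uid:intermediate_uid:count, where UID 0 in the intermediate
namespace is the user running podman (1000 here) and UIDs 1 and up
walk through that user's /etc/subuid range.
--uidmap 0:1:92       container 0-91    <- intermediate 1-92    (first 92 subuids)
--uidmap 93:0:1       container 93      <- intermediate 0       (the host user, 1000)
--uidmap 94:95:8192   container 94-8285 <- intermediate 95-8286 (further subuids)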
This now simply refuses to work.
So what I'm doing instead is: I moved the uidmap onto the pod, and
instead of remapping all the system/daemon users to UID 1000, I
remap them to UID 93.
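Roughly, the pod-level version looks like this (reconstructed from
memory for the sake of the list, so treat it as a sketch rather than
my exact command lines):
$ podman pod create --share=net \
      --network slirp4netns:mtu=30000,port_handler=slirp4netns \
      --uidmap 0:1:92 --uidmap 93:0:1 --uidmap 94:95:8192 \
      -n drata -p 20280:20280 -p 20225:20225 -p 20265:20265
$ podman run --pod=drata --log-driver=none --name exim -t \
      -v /home/spdrata/misc-containers/shared_data/var_spool/:/var/spool \
      -i spdrata/drata-exim:1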
This seems ... icky?, but maybe it's the right way to do it?
Honestly not sure. Looking for advice.
Thanks if you read this far! :D
2 years, 4 months