mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a podman rootless container, and I've
stumbled on the following problem: my program needs /proc/sys/fs/mqueue/msg_max
to be at least 256, but in the running container this value is just 10. When I
try to specify this parameter while running the image
(--sysctl 'fs.mqueue.msg_max=256') I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
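(Editor's note, a hedged sketch: a rootless container normally gets a fresh IPC namespace, and the mqueue sysctls in a fresh namespace reset to the kernel defaults (msg_max = 10) regardless of the host value, which is why the host's 256 doesn't show through. One common workaround, at the cost of IPC isolation, is sharing the host's IPC namespace; `<image>` is a placeholder:)

```shell
# Share the host IPC namespace so /proc/sys/fs/mqueue inside the
# container reflects the host's msg_max=256 (note: no IPC isolation):
podman run --ipc=host <image>

# Verify from inside the container:
podman run --rm --ipc=host <image> cat /proc/sys/fs/mqueue/msg_max
```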
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
podman container storage backup
by Michael Ivanov
Greetings,
I make periodic backups of my laptop where I use some podman containers.
To perform a backup I just invoke rsync to copy my /home/xxxx/.local/share/containers
directory to nfs mounted filesystem.
Containers are running, but quiescent, no real activity occurs.
Is this a correct way to back up, or is there anything special about the
container directory to be taken into account? As far as I understand,
some hash-named subdirectories are shared between different containers
and images using a special kind of mount; can this lead to duplicate
copies or inconsistencies?
The underlying fs is btrfs.
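(Editor's note, a hedged sketch of the approach described above; paths are placeholders. The main caveats: stop the containers so nothing changes mid-copy, and preserve hard links and xattrs, which overlay storage depends on and which the NFS target must support:)

```shell
# Quiesce, copy, resume. -H keeps hard links, -S keeps sparse files,
# -A/-X keep ACLs and extended attributes used by container storage:
podman stop --all
rsync -aHSAX --delete ~/.local/share/containers/ /mnt/nfs/containers-backup/
podman start --all
```

Since the underlying filesystem is btrfs, an alternative is to take a read-only snapshot of the storage directory and rsync from the snapshot, which gives a consistent source without stopping anything.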
Thanks,
Fun with uidmap/keep-id/issue 12669
by Robin Lee Powell
OK, so, I have a thing I wrote (https://github.com/lojban/lbcs) that
does its own simple isolated rootless container management. It
starts a pod and then starts a configurable list of containers
within the pod.
https://github.com/containers/podman/issues/12669#issuecomment-998845927
completely broke one of my setups. Which is fine, but I want to
know what I should be doing instead.
I'm sure it's possible that where I'm going wrong is not where I'm
expecting, so I'm going to try to lay out the whole situation.
Here's the situation. I have a pod in which I run exim,
spamassassin, and clamav. I'm running it rootless, as a user made
for this purpose. Let's say the user's UID is 1000, cuz, you know,
tradition.
I have several things mounted into the containers as a method of
persistence, such as exim's spool directory, clamav's definitions
dir, etc.
Because I'm running rootless, all those files are owned by UID 1000,
as you'd expect. I also run with --userns=keep-id, because, well,
that seems cleanest and most secure? Running things as root in the
container seems bad? I'm not sure I actually have a strong
principled reason to be doing that, so let me know if it's a bad
plan.
However, daemons tend to want to run as their own user, so my
standard pattern is:
RUN for user in mail clamupdate clamscan ; \
do \
find / -xdev -user $user -print0 | xargs -r -0 chown <%= userid %> ; \
usermod -o -u <%= userid %> $user ; \
done
, where "<%= userid %>" is replaced with "1000" by the templating
thingy. So: change the UID of the system user that the daemon runs
at to 1000, and change all files owned by that user to 1000.
This all works fine, I do it in many places, it's fine.
Here's the problem:
exim will *only* run as UID 93.
It is, I shit you not, baked in at compile time ;_;. (See
https://src.fedoraproject.org/rpms/exim/blob/rawhide/f/exim-4.96-config.p...
and
https://github.com/Exim/exim/blob/cf5f5988102b229ef87bc85ba3f0a9ec265f28a...
). I'm running from the Fedora RPMs. I do not want to roll my own.
I want to pass the network connection between clamav and exim across
localhost, because why have the network connection transit out of
containers?
So what I *used* to have was:
$ podman pod create --share=net --network slirp4netns:mtu=30000,port_handler=slirp4netns --userns=keep-id -n drata -p 20280:20280 -p 20225:20225 -p 20265:20265 --network slirp4netns:outbound_addr=192.168.123.132
$ podman run --pod=drata --log-driver=none --name exim -t --uidmap 0:1:92 --uidmap 93:0:1 --uidmap 94:95:8192 -v /home/spdrata/misc-containers/shared_data/var_spool/:/var/spool -v /home/spdrata/misc-containers/shared_data/srv_lojban:/srv/lojban -i spdrata/drata-exim:1
, and that worked fine. The uidmap maps the user running the
rootless container (UID 1000) on the host to UID
93 in the container.
(Side comment: the documentation for uidmap is *terrible*; coming up
with that uidmap set to do what I want took me *hours* of
experimentation.)
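(Editor's note: the mapping semantics can be stated compactly. Each --uidmap triple is container_start:intermediate_start:length, where the intermediate IDs live in the rootless user's namespace: intermediate 0 is the invoking user (host UID 1000 here), and intermediate 1..N come from the /etc/subuid range. A toy sketch of the lookup, using the triples from the `podman run` line above:)

```python
# Minimal sketch of how a chain of --uidmap triples resolves a UID.
# Each triple is (container_start, intermediate_start, length): a container
# UID u in [container_start, container_start+length) maps to
# intermediate_start + (u - container_start).

def resolve(uid, mappings):
    """Return the intermediate ID a container UID maps to, or None if unmapped."""
    for c_start, i_start, length in mappings:
        if c_start <= uid < c_start + length:
            return i_start + (uid - c_start)
    return None

# The triples from "--uidmap 0:1:92 --uidmap 93:0:1 --uidmap 94:95:8192":
maps = [(0, 1, 92), (93, 0, 1), (94, 95, 8192)]

print(resolve(93, maps))  # 0 -> intermediate 0 = the user running podman (host UID 1000)
print(resolve(0, maps))   # 1 -> first subordinate UID from /etc/subuid
```

So container UID 93 (exim's compiled-in UID) lands on the rootless user itself, which is exactly why the bind-mounted spool files owned by UID 1000 were writable.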
This now simply refuses to work.
So what I'm doing instead: I moved the uidmap onto the pod, and
instead of remapping all the system/daemon users to UID 1000, I
remap them to UID 93.
This seems ... icky?, but maybe it's the right way to do it?
Honestly not sure. Looking for advice.
Thanks if you read this far! :D
WARN[0000] cannot toggle freezer: cgroups not configured for container
by ugiwgh@qq.com
When I run "podman rm -a", I get the following warnings.
$ podman rm -a
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
$ podman version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
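(Editor's note: this warning typically appears for rootless containers when the runtime has no cgroup it can manage, which is common on cgroups-v1 hosts without delegation. A quick diagnostic sketch to check which cgroup version the host runs:)

```shell
# Prints "cgroup2fs" on a unified cgroups-v2 host, "tmpfs" on v1;
# rootless resource management (including the freezer) generally needs v2:
stat -fc %T /sys/fs/cgroup
```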
Error: could not find "rootlessport"
by ugiwgh@qq.com
When I run "podman run -p 6379:6379 -d redis:7.0", I get the following error.
Error: could not find "rootlessport" in one of [/usr/local/libexec/podman /usr/local/lib/podman /usr/libexec/podman /usr/lib/podman]. To resolve this error, set the helper_binaries_dir key in the `[engine]` section of containers.conf to the directory containing your helper binaries.
Any help will be appreciated.
-----------------------------------
$ podman version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
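(Editor's note: the error text itself points at the fix. The rootlessport helper ships with the podman package; if it was installed somewhere outside the default search paths, containers.conf can point podman at it. The path below is an assumption, so adjust it to wherever your package actually put the binary:)

```toml
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[engine]
helper_binaries_dir = ["/usr/libexec/podman", "/usr/local/libexec/podman"]
```

If the binary is missing entirely, reinstalling the podman package is likely the simpler fix.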
Newbie network test?
by Loren Amelang
I've installed Podman on a Raspberry Pi Zero W, and want to test the
network connection. I found this:
pi@raspberrypi:~/tripod $ podman pull docker.io/library/httpd
pi@raspberrypi:~/tripod $ podman run -dt -p 8088:80/tcp docker.io/library/httpd
54a005199e38260bae58a6a5437dd0fbde62f2f596b25d928fb346328cfc9e73
pi@raspberrypi:~/tripod $
I chose "8088:80" because incoming port 8080 is already in use and
working on that Pi. Seems that is valid?
It seems to run, but closes itself in under a minute:
pi@raspberrypi:~/tripod $ podman ps -a
CONTAINER ID  IMAGE                    COMMAND           CREATED         STATUS                       PORTS                 NAMES
54a005199e38  docker.io/library/httpd  httpd-foreground  26 minutes ago  Exited (139) 26 minutes ago  0.0.0.0:8088->80/tcp  hopeful_buck
787779b20e96  docker.io/library/httpd  httpd-foreground  18 seconds ago   Exited (139) 16 seconds ago  0.0.0.0:8088->80/tcp  pedantic_yonath
pi@raspberrypi:~/tripod $
pi@raspberrypi:~/tripod $ podman logs -l
pi@raspberrypi:~/tripod $
pi@raspberrypi:~/tripod $ podman top -l
Error: top can only be used on running containers
pi@raspberrypi:~/tripod $
slirp4netns is already the newest version (1.0.1-2).
Could someone please suggest what to check next?
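(Editor's note: the `Exited (139)` status above is itself a clue. Shells and container runtimes report a death-by-signal as 128 + the signal number, so 139 means httpd was killed by signal 11, a segmentation fault, which also explains the empty logs. A small sketch to decode such statuses:)

```python
import signal

def describe_exit(status):
    """Decode a shell/container exit status: 128 + N means death by signal N."""
    if status > 128:
        sig = signal.Signals(status - 128)
        return f"killed by signal {sig.value} ({sig.name})"
    return f"exited with code {status}"

print(describe_exit(139))  # killed by signal 11 (SIGSEGV)
```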
Loren
Chicken-and-egg problem with image signatures on CoreOS
by Mark Raynsford
Hello!
I've been bounced around a couple of forums and was told that this was
probably the best place to ask the question...
https://discussion.fedoraproject.org/t/chicken-and-egg-problem-with-image...
Essentially:
* I want to set up multiple CoreOS VMs.
* CoreOS depends on being able to run all services from containers.
* I want to use podman, because all of my services can run without
privileges, and podman seems "better" in general.
* I only want to run code from signed images from sources that I trust.
Running random Docker images doesn't really cut it.
* Setting up a registry appears to require running unsigned code,
because podman can't check the docker.io signatures, and podman
and docker "should not" be run alongside each other on the same
system.
* Securing communications to the registry with TLS realistically
involves running an ACME client.
* Paradoxically, running an ACME client probably involves grabbing an
ACME client image from the registry that I'm trying to set up. :)
I can see a few ways out of this situation, but all of the various
approaches seem to involve running rather a lot of infrastructure just
to get roughly the same level of security that I'd get with ordinary
signed packages "for free" on FreeBSD or a Debian-based distro.
Is there a better way to do this?
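(Editor's note: for the "only run signed images" requirement specifically, podman's signature enforcement is driven by /etc/containers/policy.json. A hedged sketch that rejects everything except images from one registry signed by a specific GPG key; the registry name and key path are placeholders:)

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/registry-key.gpg"
        }
      ]
    }
  }
}
```

A matching entry under /etc/containers/registries.d/ is also needed so podman knows where that registry publishes its signatures (the "sigstore" lookaside URL).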
--
Mark Raynsford | https://www.io7m.com
Error processing tar file(signal:killed)
by ugiwgh@qq.com
When I pull an image (the image is 3.90 GB), I get the following error.
Trying to pull xxx/xxx/xxx:xxx...
Getting image source signatures
Copying blob sha256:4184ac44fb0438b126437611d02f8ef889ad8a75ca91cd75b5b328d8e08cf66d
Copying blob sha256:18fcb7509e4252365041f00982734e39ca0a44e638b8ebc18e2588405a5c7de2
Copying blob sha256:ea7643e57386e6b1e7b3524dd01d7e360b387ad640d32a7513ca263017175f87
Copying blob sha256:622a049262798701c22e70cb4e8ebff7e5dfb9c76d3dd948c49d5438ffe1e681
Copying blob sha256:d5fd17ec1767521cf97f61568096bfc9a7cd9c2d149576a7b43930c5a97062b0
Copying blob sha256:21e5db7c1fa24e99354f495e624cdc8920642b58ab65935db81bfceaf98a8a88
Copying blob sha256:ea7643e57386e6b1e7b3524dd01d7e360b387ad640d32a7513ca263017175f87
Copying blob sha256:18fcb7509e4252365041f00982734e39ca0a44e638b8ebc18e2588405a5c7de2
Copying blob sha256:d5fd17ec1767521cf97f61568096bfc9a7cd9c2d149576a7b43930c5a97062b0
Copying blob sha256:7240d5fd250b0bb7dbe6cad8d96e44e4548e895f9ca4f396275b7e0fab2dee35
Copying blob sha256:622a049262798701c22e70cb4e8ebff7e5dfb9c76d3dd948c49d5438ffe1e681
Copying blob sha256:4184ac44fb0438b126437611d02f8ef889ad8a75ca91cd75b5b328d8e08cf66d
Copying blob sha256:21e5db7c1fa24e99354f495e624cdc8920642b58ab65935db81bfceaf98a8a88
Copying blob sha256:b0b8e6c2c40a998c2efec64c1024b0ace3e6ae44f309550cf2cd0949885687c4
Copying blob sha256:7240d5fd250b0bb7dbe6cad8d96e44e4548e895f9ca4f396275b7e0fab2dee35
Copying blob sha256:b0b8e6c2c40a998c2efec64c1024b0ace3e6ae44f309550cf2cd0949885687c4
Copying blob sha256:3343333b1de32753c9c373a8396895404bc7e8653594881d8242361ecc8ed091
Copying blob sha256:3343333b1de32753c9c373a8396895404bc7e8653594881d8242361ecc8ed091
Error: writing blob: adding layer with blob "sha256:b0b8e6c2c40a998c2efec64c1024b0ace3e6ae44f309550cf2cd0949885687c4": Error processing tar file(signal: killed):
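(Editor's note, an educated guess rather than something the log proves: "signal: killed" during layer extraction usually means something outside podman killed the decompression process, most often the kernel OOM killer on a memory-constrained host. The kernel log is the place to confirm:)

```shell
# Look for OOM-killer activity around the time of the failed pull:
dmesg | grep -iE 'out of memory|oom'
# Check available memory and swap before retrying:
free -h
```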