mqueue msg_max in rootless container
by Michael Ivanov
Hallo!
I'm trying to run my application in podman rootless container and I stumble
on following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and container is not created.
My host where container is being run has this parameter set to 256. How can I
expose current host setting for msg_max to my container?
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
1 year
podman container storage backup
by Michael Ivanov
Greetings,
I make periodic backups of my laptop where I use some podman containers.
To perform a backup I just invoke rsync to copy my /home/xxxx/.local/share/containers
directory to nfs mounted filesystem.
Containers are running, but quiescent, no real activity occurs.
Is this a correct way to back up or is there anything special about
container directory to be taken into account? As far as I understand
some hash-named subdirectories are shared between different containers
and images using special kind of mounts, can this lead to duplicate
copies r inconsistencies?
Underlying fs is btrfs.
Thanks,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
2 years, 1 month
cgroups not configured for container
by ugiwgh@qq.com
There is somthing warning, when I add "--pid=host".
How can I get this warning gone?
OS: 8.3.2011
Podman: 2.2.1
[rsync@rsyncdk2 ~]$ podman run --rm --pid=host fb7ad16314ee sleep 3
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
2 years, 5 months
Run container in cluster
by ugiwgh@qq.com
I have 2 node in cluster, node1 and node2. They share the same HOME directory with lustre.
If I run a container on node1, it will take some record in ~/.local/share/containers/storage/overlay-containers. And run in /run/user/1000/containers.
But on node2 there is not /run/user/1000/containers. Because of share HOME directory, there is ~/.local/share/containers/storage/overlay-containers on node2. So there is some error on node2.
How could I redirect ~/.local/share/containers/storage/overlay-containers to the local directory on node1 and node2?
2 years, 7 months
ownership of a mounted home directry
by R C
Hello,
I built a container that mounts the /home directy (it has one
unprivileged user).
(I used buildah and podman on that unprivileged account, using rhel8)
However when I connect to the container, I see that the unprivileged
user's home directory is owned by root.
any idea why that would be, I am probably missing something
thanks,
Ron
2 years, 7 months
podman on oracle linux 8 fails to start container
by Barry Scott
The OS is oracle linux 8 with
podman-3.4.2-9.0.1.module+el8.5.0+20494+0311868c.x86_64
kernel-4.18.0-348.20.1.el8_5.x86_64
I start a container in a systemd service using:
+ /usr/bin/podman start cloud-dice
And get this error from podman:
Error: unable to start container
"76e4a2480bc7f81d3baa802f3d48fffc2e3d252a52f33039d83e339d3f158532":
failed to mount shm tmpfs
"/var/lib/containers/storage/overlay-containers/76e4a2480bc7f81d3baa802f3d48fffc2e3d252a52f33039d83e339d3f158532/userdata/shm":
invalid argument
And these messages in dmesg:
[82521.621247] tmpfs: Unknown parameter 'context'
[82521.643785] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[82521.648888] IPv6: ADDRCONF(NETDEV_UP): vethc149f6ee: link is not ready
[82521.650561] IPv6: ADDRCONF(NETDEV_CHANGE): vethc149f6ee: link becomes
ready
[82521.651950] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[82521.654408] cni-podman0: port 1(vethc149f6ee) entered blocking state
[82521.655364] cni-podman0: port 1(vethc149f6ee) entered disabled state
[82521.656594] device vethc149f6ee entered promiscuous mode
[82521.657537] cni-podman0: port 1(vethc149f6ee) entered blocking state
[82521.658455] cni-podman0: port 1(vethc149f6ee) entered forwarding state
[82521.880289] cni-podman0: port 1(vethc149f6ee) entered disabled state
[82521.883696] device vethc149f6ee left promiscuous mode
[82521.884435] cni-podman0: port 1(vethc149f6ee) entered disabled state
If I rebuild the container then I do not see the tmpfs error and
everything works.
What do I need to do to fix the "tmpfs: Unknown parameter 'context'"
error that I'm assuming is root cause?
Barry
2 years, 7 months
Our project on analysing and contributing to Podman
by Calin Georgescu
Dear Podman maintainers,
We are a group of four computer science Master's students from the Delft University of Technology in the Netherlands following the course IN4315 Software Architecture ( https://se.ewi.tudelft.nl/delftswa/). We chose Podman as the open-source project that we would study, analyse and contribute to.
Over the last 8 weeks, we have published our research in 4 essays, which you can find at https://desosa2022.netlify.app/projects/podman/. On this page, you can also find a list of PRs we made.
It has been a pleasure to work on the contribution to Podman. We enjoyed and learned a lot from this process and we want to thank you for the guidance and assistance you provided us.
We are eager to know what your thoughts are on our research. If you have any thoughts or feedback on our work that you would like to share, we would be excited to hear them. We would also like to inquire if there are any job/internship opportunities available for us in the future.
Thank you for your time and consideration. Feel free to reach out to us through email or GitHub for anything you would like to discuss.
Kind regards,
Krzysztof Baran @kbaran</kbaran>
Rover van der Noort @rvandernoort</rvandernoort>
Xueyuan Chen @keonchennl
Calin Georgescu @gcalin
2 years, 8 months