mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a rootless podman container and I have stumbled
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
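One workaround I am considering, assuming the mqueue limits simply follow the IPC
namespace, is to share the host IPC namespace instead of setting the sysctl, roughly:
$ podman run --rm --ipc=host myimage cat /proc/sys/fs/mqueue/msg_max
(here 'myimage' is just a placeholder, and --ipc=host of course gives up IPC isolation),
but I would prefer to keep the namespace separation if there is a better way.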
Best regards,
--
Михаил Иванов
E-mail: ivans(a)isle.spb.ru
1 year
podman container storage backup
by Michael Ivanov
Greetings,
I make periodic backups of my laptop where I use some podman containers.
To perform a backup I just invoke rsync to copy my /home/xxxx/.local/share/containers
directory to an NFS-mounted filesystem.
The containers are running but quiescent; no real activity occurs.
Is this a correct way to back up, or is there anything special about the
container directory that has to be taken into account? As far as I understand,
some hash-named subdirectories are shared between different containers
and images using a special kind of mount; can this lead to duplicate
copies or inconsistencies?
The underlying fs is btrfs.
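For reference, the rsync invocation is roughly the following (the destination path is
just an example); I try to preserve hard links, ACLs, xattrs and sparse files, since as
far as I understand the storage directory relies on them:
$ rsync -aHAXS --delete /home/xxxx/.local/share/containers/ /mnt/nfs/backup/containers/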
Thanks,
--
Михаил Иванов
E-mail: ivans(a)isle.spb.ru
2 years, 1 month
cgroups not configured for container
by ugiwgh@qq.com
I get a warning when I add "--pid=host".
How can I make this warning go away?
OS: 8.3.2011
Podman: 2.2.1
[rsync@rsyncdk2 ~]$ podman run --rm --pid=host fb7ad16314ee sleep 3
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
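For what it is worth, this is what podman reports about the cgroup setup on this host
(I just grep podman info rather than rely on specific format fields):
$ podman info | grep -i cgroup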
2 years, 5 months
How to track down an IP address that is causing errors in an container
by Christopher.Miller@gd-ms.com
This is with Podman v3.4.2 on RHEL 8.1.
I have an IP address (10.88.0.49) in a Grafana container log that I don't recognize. This IP address isn't tied to any of the other containers that I'm running as rootful.
I'm getting errors tied to this IP address: I can only view the Grafana UI from my RHEL 8 workstation, and when others try to access the UI, they get a banner screen stating that Grafana isn't able to load its application files.
This is the error message: ERRO[03-28|12:22:46] Error writing to response logger=context err="write tcp 10.88.0.49:3000 -> 10.88.0.1:43250: write: broken pipe"
I'm not using a container orchestrator at this time; we're just piloting Prometheus/Grafana to see how it works out in our environment.
These are the containers and their IP addresses below (all being run as rootful):
Prometheus - 10.88.0.26
Prometheus Node Exporter - 10.88.0.25
Nexus - 10.88.0.34
Grafana - 10.88.0.53
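For reference, I collected the addresses above roughly like this (it only covers
containers podman still knows about, which may be why I cannot match 10.88.0.49 to
anything):
$ for c in $(podman ps -aq); do podman inspect --format '{{.Name}} {{.NetworkSettings.IPAddress}}' "$c"; done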
Thanks
Chris Miller
Altron INC.
703-814-7647
Christopher.miller(a)altroninc.com
Christopher.Miller(a)gd-ms.com
2 years, 8 months
podman-remote client - experimental, or ready for production use?
by Adam Cmiel
Hello!
On my Fedora 35, I noticed that the podman-remote package (version 3.4.4) warns not to use it in production yet.
$ dnf info podman-remote
...
Description : Remote client for managing podman containers.
:
: This experimental remote client is under heavy development. Please do not
: run podman-remote in production.
:
: podman-remote uses the version 2 API to connect to a podman client to
: manage pods, containers and container images. podman-remote supports ssh
: connections as well.
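For context, I use the remote client in both of these ways against the user socket
(socket activation via systemd; as far as I can tell these are the defaults on my
machine):
$ systemctl --user enable --now podman.socket
$ podman --remote version
$ podman-remote version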
Is there any difference between podman-remote and podman --remote in this regard? Or are both equally experimental?
Has this changed in later releases? Could the 4.x versions be considered ready for production use?
2 years, 8 months
podman and DBUS_SESSION_BUS_ADDRESS
by Michael Traxler
Hello,
When I try to build an image, I get the following error message:
% podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael
STEP 1/2: FROM opensuse/tumbleweed
STEP 2/2: RUN zypper ref
error running container: error from /usr/bin/runc creating container for [/bin/sh -c zypper ref]: time="2022-03-24T12:56:21+01:00" level=warning msg="unable to get oom kill count" error="openat2 /sys/fs/cgroup/system.slice/runc-buildah-buildah282650677.scope/memory.events: no such file or directory"
time="2022-03-24T12:56:21+01:00" level=error msg="runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"runc-buildah-buildah282650677.scope\" (properties [{Name:Description Value:\"libcontainer container buildah-buildah282650677\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [17389]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Interactive authentication required."
: exit status 1
Error: error building at STEP "RUN zypper ref": error while running runtime: exit status 1
My build file is:
FROM opensuse/tumbleweed
RUN zypper ref
If I then unset DBUS_SESSION_BUS_ADDRESS, everything works as expected.
% echo $DBUS_SESSION_BUS_ADDRESS
unix:abstract=/tmp/dbus-GR9LL799YH,guid=37ca9dd6f1faeea14747aad2623af1ba
% unset DBUS_SESSION_BUS_ADDRESS
% podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael
STEP 1/2: FROM opensuse/tumbleweed
STEP 2/2: RUN zypper ref
...
COMMIT opensuse/tumbleweed_michael
--> 5ea2b965db6
Successfully tagged localhost/opensuse/tumbleweed_michael:latest
5ea2b965db6412368929e52d8c34e4574cc84feef2f2e7563c1f9225a60bb8b8
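(For a single build I can also drop the variable just for that command, assuming GNU
env with the -u option:)
% env -u DBUS_SESSION_BUS_ADDRESS podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael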
Is it expected that DBUS_SESSION_BUS_ADDRESS has to be unset?
Greetings,
Michael
2 years, 8 months
Get cgroup in rootless container
by Carl Hörberg
When running a rootless container, how can one identify the cgroup in use from inside the container? /proc/self/cgroup is "0::/", but the "real" cgroup is something like "user-1000.slice/user@1000.service/user.slice/libpod-7126f828cd4389ca0a9e29a94e78af39c91f51f3f892a799cb7f3eeff067d1bd.scope/container".
One hacky way to find it is to look at /run/.containerenv to get the container id, then look for a directory in /sys/fs/cgroup named libpod-$containerid.scope, but is there a more straightforward way?
In the end I would like to be able to read "memory.max" and "memory.current" in the cgroup dir.
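A rough sketch of that hacky approach, assuming /run/.containerenv contains an id="..."
line and that the libpod scope is visible somewhere under /sys/fs/cgroup (cgroup v2):
cid=$(sed -n 's/^id="\(.*\)"$/\1/p' /run/.containerenv)
cgdir=$(find /sys/fs/cgroup -type d -name "libpod-${cid}.scope" 2>/dev/null | head -n 1)
cat "$cgdir/memory.max" "$cgdir/memory.current"
(the process itself may sit in a "container" child cgroup under that scope, as in the
path above).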
2 years, 8 months
Error: writing blob: adding layer with blob ***** lsetxattr /: operation not supported
by ugiwgh@qq.com
I use a Lustre filesystem for the graphRoot on CentOS 7.9, but it reports an "operation not supported" error.
$ podman4 pull quay.io/centos/centos:centos7.9.2009
WARN[0000] Network file system detected as backing store. Enforcing overlay option `force_mask="700"`. Add it to storage.conf to silence this warning
Trying to pull quay.io/centos/centos:centos7.9.2009...
Getting image source signatures
Copying blob 2d473b07cdd5 done
Error: writing blob: adding layer with blob "sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b43f2553732b55df3bc": Error processing tar file(exit status 1): lsetxattr /: operation not supported
$ podman4 version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
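Following the warning above, I assume the option it asks for would look something like
this in storage.conf (a sketch only; the graphroot/runroot paths are placeholders for my
Lustre and runtime locations):
[storage]
driver = "overlay"
graphroot = "/path/on/lustre/containers/storage"
runroot = "/run/user/1000/containers"
[storage.options.overlay]
force_mask = "700"
though I do not know whether this alone addresses the lsetxattr failure.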
2 years, 8 months
Error setting up pivot dir
by ugiwgh@qq.com
I run podman on a Lustre filesystem.
When I run "podman load -i centos7.9.2009.tar.gz" to import an image,
the following error is output.
------------------------------------
Getting image source signatures
Copying blob 174f56854903 done
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: Error setting up pivot dir: mkdir /public1/home/wugh/.local/share/containers/storage/vfs/dir/174f5685490326fc0a1c0f5570b8663732189b327007e47ff13d2ca59673db02/.pivot_root527142738: permission denied
Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
------------------------------------
I think there is something that needs to be supported on Lustre.
$ podman version
Version: 3.3.1
API Version: 3.3.1
Go Version: go1.16.13
Git Commit: 08e1bd24196d92e1b377d4d38480581cfa9bf7ac-dirty
Built: Tue Mar 15 15:02:28 2022
OS/Arch: linux/amd64
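For reference, this is how I check which storage driver and graph root this podman is
actually using (just grepping podman info rather than relying on exact format fields):
$ podman info | grep -iE 'graphDriverName|graphRoot'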
2 years, 8 months
Anyone seen Podman Exit Code 1 Error when pulling multiple images?
by Christopher.Miller@gd-ms.com
We're running Podman v3.4.2, and when trying to pull multiple images from an on-prem registry with Podman we see the following error message: Podman Exit Code 1
Along with the following error output:
ERRO[0000] Error refreshing volume 0d947f52c097215a516f417e1df5a1fdbf1014743a656ca2d8a8f039d226ad1c: error acquiring lock 3 for volume 0d947f52c097215a516f417e1df5a1fdbf1014743a656ca2d8a8f039d226ad1c: file exists
Right now if we log out of quay via CLI and then log back in, the errors stop.
A quick search finds this; I'm not sure if I'm on the correct path or not:
https://docs.podman.io/en/latest/markdown/podman-container-exists.1.html
If so, these containers do exist in our on-prem registry.
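One thing I have seen suggested for stale lock errors is to rebuild the lock
allocations with podman system renumber while nothing is running; I have not tried it
yet, so I'm not sure it applies here:
$ sudo podman system renumber    # run as the rootful owner with no containers active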
Thanks
Chris Miller
Altron INC.
703-814-7647
Christopher.miller(a)altroninc.com
Christopher.Miller(a)gd-ms.com
2 years, 8 months