mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a rootless podman container and I've
stumbled on the following problem: my program needs
/proc/sys/fs/mqueue/msg_max to be at least 256, but in the running
container this value is just 10. When I try to specify this parameter
while running the image (--sysctl 'fs.mqueue.msg_max=256') I get the
following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host the container runs on has this parameter set to 256. How can I
expose the host's current msg_max setting to my container?
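(A hedged aside for anyone else who hits this: fs.mqueue.* is namespaced
per IPC namespace, and a rootless container's user namespace is often not
allowed to raise it. One workaround worth testing - sharing the host's IPC
namespace, which makes the container see the host's msg_max, at the cost
of IPC isolation; alpine below is just a stand-in image:
$ cat /proc/sys/fs/mqueue/msg_max                           # host: 256
$ podman run --rm alpine cat /proc/sys/fs/mqueue/msg_max    # private IPC ns: 10
$ podman run --rm --ipc=host alpine cat /proc/sys/fs/mqueue/msg_max
The last command should print the host's 256.)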
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
Can't maintain connection to container's listening port
by Ranbir
Hello,
I have a rootless container running postgrey on a Rocky Linux 8 server.
Besides the fact that I had problems getting the container running rootless,
which I overcame, the new issue is that connections to the exposed port
are established and then immediately dropped. I can't figure out why
this is happening.
Here's postgrey listening inside the container:
[containers@bigsecret ~]$ podman exec -ti postgrey ss -tln
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:10023 0.0.0.0:*
I can connect to the port inside of the container and the connection
stays up until I cancel it:
[containers@bigsecret ~]$ podman exec -ti postgrey telnet localhost 10023
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
If I try to telnet to the port from the container host using the host's
routable IP, or from a different server, I get a "Connection closed by
foreign host." message immediately after the connection is established.
I have systemd enabled in the container. I can control the postgrey
daemon with systemd and systemd doesn't report any errors when I check
the daemon's status.
I don't see any SELinux denials. I tried turning off enforcement anyway
and saw no change. I did see language errors being logged by postgrey,
so I installed the missing RPMs in the running container (I'm just
testing things out with this container), which got rid of those errors.
But that didn't change the connection weirdness.
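One detail that may matter: the ss output above shows postgrey bound only
to 127.0.0.1 inside the container. Depending on the rootless port handler,
forwarded connections may be delivered to the container's eth0 address
rather than loopback, and the host-side forwarder completes the TCP
handshake before it knows the backend will refuse it - which would look
exactly like "connected, then immediately closed". A hedged way to test
(the eth0 address below is a placeholder; read the real one from ip addr
inside the container):
[containers@bigsecret ~]$ podman exec -ti postgrey ip -4 addr show eth0
[containers@bigsecret ~]$ podman exec -ti postgrey telnet <eth0-address> 10023
If that second telnet is refused, reconfiguring postgrey to listen on
0.0.0.0 instead of 127.0.0.1 should let the forwarded connections through.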
Any ideas what the problem could be? The pod and container definitions
are below.
pod
{
"Id": "a9292128fc778c6287e80ff71d5e2ee1320b3395dc48a7e31af1db77cc7f695a",
"Name": "smtp",
"Created": "2021-11-25T12:58:55.447833371-05:00",
"CreateCommand": [
"podman",
"pod",
"create",
"--name",
"smtp",
"--publish",
"1.2.3.4:10023:10023",
"--publish",
"1.2.3.4:1587:587",
"--publish",
"1.2.3.4:1783:783",
"--publish",
"1.2.3.4:1025:25"
],
"State": "Running",
"Hostname": "smtp",
"CreateCgroup": true,
"CgroupParent": "user.slice",
"CgroupPath": "user.slice/user-libpod_pod_a9292128fc778c6287e80ff71d5e2ee1320b3395dc48a7e31af1db77cc7f695a.slice",
"CreateInfra": true,
"InfraContainerID": "a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4",
"InfraConfig": {
"PortBindings": {
"10023/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "10023"
}
],
"25/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1025"
}
],
"587/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1587"
}
],
"783/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1783"
}
]
},
"HostNetwork": false,
"StaticIP": "",
"StaticMAC": "",
"NoManageResolvConf": false,
"DNSServer": null,
"DNSSearch": null,
"DNSOption": null,
"NoManageHosts": false,
"HostAdd": null,
"Networks": null,
"NetworkOptions": null
},
"SharedNamespaces": [
"ipc",
"net",
"uts"
],
"NumContainers": 2,
"Containers": [
{
"Id": "a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4",
"Name": "a9292128fc77-infra",
"State": "running"
},
{
"Id": "f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57",
"Name": "postgrey",
"State": "running"
}
]
}
container
[
{
"Id": "f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57",
"Created": "2021-12-05T00:18:28.942285862-05:00",
"Path": "/usr/sbin/init",
"Args": [
"/usr/sbin/init"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 6047,
"ConmonPid": 6031,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-12-22T14:32:26.339653403-05:00",
"FinishedAt": "2021-12-22T14:27:28.403171029-05:00",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "9aefd5346e1f34b16a096b52575cc249b14a9a56664c6e1f2113ad3ef449c025",
"ImageName": "localhost/postgrey-v0.0.3:latest",
"Rootfs": "",
"Pod": "a9292128fc778c6287e80ff71d5e2ee1320b3395dc48a7e31af1db77cc7f695a",
"ResolvConfPath": "/tmp/podman-run-1000/containers/overlay-containers/a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4/userdata/resolv.conf",
"HostnamePath": "/tmp/podman-run-1000/containers/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata/hostname",
"HostsPath": "/tmp/podman-run-1000/containers/overlay-containers/a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4/userdata/hosts",
"StaticDir": "/srv/containers/storage/1000/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata",
"OCIConfigPath": "/srv/containers/storage/1000/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata/config.json",
"OCIRuntime": "runc",
"ConmonPidFile": "/tmp/podman-run-1000/containers/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata/conmon.pid",
"PidFile": "/tmp/podman-run-1000/containers/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata/pidfile",
"Name": "postgrey",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "system_u:object_r:container_file_t:s0:c654,c974",
"ProcessLabel": "system_u:system_r:container_init_t:s0:c654,c974",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/srv/containers/storage/1000/overlay/2a356f237c2fe380f476133e0939553512ac8167ff7cbb2338d9405090528f7e/diff:/srv/containers/storage/1000/overlay/04f97fe38f3ca40a0d4a7ee7f6da4276ab30746e05c360975bd2e3569afde128/diff:/srv/containers/storage/1000/overlay/4d50441def2b07f8fcd48aad187815089621ddccf2384180db0c28c5272889f8/diff:/srv/containers/storage/1000/overlay/7933807b1a3f6ecbc852d38f269984065dfb57d49ddf40fdea70dfe66a6c6b14/diff:/srv/containers/storage/1000/overlay/1855256707116c0c229fec2d3a60bce4a11fdfc8b0bffa9663c84e69ec326160/diff",
"MergedDir": "/srv/containers/storage/1000/overlay/113bb9169c33b29659143e14363c6a8fc07a7cd6a8ffc72697337a83200db18e/merged",
"UpperDir": "/srv/containers/storage/1000/overlay/113bb9169c33b29659143e14363c6a8fc07a7cd6a8ffc72697337a83200db18e/diff",
"WorkDir": "/srv/containers/storage/1000/overlay/113bb9169c33b29659143e14363c6a8fc07a7cd6a8ffc72697337a83200db18e/work"
}
},
"Mounts": [
{
"Type": "volume",
"Name": "postgrey",
"Source": "/srv/containers/storage/1000/volumes/postgrey/_data",
"Destination": "/var/spool/postfix/postgrey",
"Driver": "local",
"Mode": "Z",
"Options": [
"nosuid",
"nodev",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "5e82bc179344af8710114ca61f84dbfe7a8866c8aac5fab6bcef70e6cba6df76",
"Source": "/srv/containers/storage/1000/volumes/5e82bc179344af8710114ca61f84dbfe7a8866c8aac5fab6bcef70e6cba6df76/_data",
"Destination": "/sys/fs/cgroup",
"Driver": "local",
"Mode": "",
"Options": [
"nodev",
"exec",
"nosuid",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
}
],
"Dependencies": [
"a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4"
],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"10023/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "10023"
}
],
"25/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1025"
}
],
"587/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1587"
}
],
"783/tcp": [
{
"HostIp": "1.2.3.4",
"HostPort": "1783"
}
]
},
"SandboxKey": "/run/user/1000/netns/cni-a2c22e7a-f19f-8320-fe77-9d44a822154d"
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/srv/containers/storage/1000",
"--runroot",
"/tmp/podman-run-1000/containers",
"--log-level",
"warning",
"--cgroup-manager",
"systemd",
"--tmpdir",
"/tmp/run-1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"overlay",
"--storage-opt",
"overlay.mount_program=/usr/bin/fuse-overlayfs",
"--events-backend",
"file",
"container",
"cleanup",
"f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "smtp",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"container=docker",
"HOME=/root",
"HOSTNAME=smtp"
],
"Cmd": [
"/usr/sbin/init"
],
"Image": "localhost/postgrey-v0.0.3:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": "",
"OnBuild": null,
"Labels": {
"io.buildah.version": "1.21.3",
"org.label-schema.build-date": "20210620",
"org.label-schema.license": "BSD-3-Clause",
"org.label-schema.name": "Rocky Linux Base Image",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "Rocky Enterprise Software Foundation",
"org.opencontainers.image.created": "2021-06-20 00:00:00+01:00",
"org.opencontainers.image.licenses": "BSD-3-Clause",
"org.opencontainers.image.title": "Rocky Linux Base Image",
"org.opencontainers.image.vendor": "Rocky Enterprise Software Foundation"
},
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.ContainerType": "container",
"io.kubernetes.cri-o.Created": "2021-12-05T00:18:28.942285862-05:00",
"io.kubernetes.cri-o.SandboxID": "smtp",
"io.kubernetes.cri-o.TTY": "true",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"org.opencontainers.image.stopSignal": "37"
},
"StopSignal": 37,
"CreateCommand": [
"podman",
"run",
"-d",
"-t",
"--name",
"postgrey",
"--pod",
"smtp",
"--volume",
"postgrey:/var/spool/postfix/postgrey:Z",
"postgrey-v0.0.3"
],
"SystemdMode": true,
"Umask": "0022",
"Timeout": 0,
"StopTimeout": 10
},
"HostConfig": {
"Binds": [
"postgrey:/var/spool/postfix/postgrey:Z,rw,rprivate,nosuid,nodev,rbind",
"5e82bc179344af8710114ca61f84dbfe7a8866c8aac5fab6bcef70e6cba6df76:/sys/fs/cgroup:rprivate,rw,nodev,exec,nosuid,rbind"
],
"CgroupManager": "systemd",
"CgroupMode": "private",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null,
"Path": "/srv/containers/storage/1000/overlay-containers/f32c676da8eb38f3e45bb8670e0d8330707fa3dfc216238e4f73bbe638d85a57/userdata/ctr.log",
"Tag": "",
"Size": "0B"
},
"NetworkMode": "container:a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [
"CAP_AUDIT_WRITE",
"CAP_MKNOD"
],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "container:a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "container:a75ed34c8117daaff8be1e9060c07478b6894d4d06a93c963142d8b3de95b0a4",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "user.slice/user-libpod_pod_a9292128fc778c6287e80ff71d5e2ee1320b3395dc48a7e31af1db77cc7f695a.slice",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"CgroupConf": null
}
}
]
--
Ranbir
Interacting with systemd user slice deployments
by Andrew G. Dunn
I'd posted in a thread with @mheon [0] asking if there was a convention
that the podman (or systemd) community would recommend for accessing
user accounts that are "daemonized" (e.g. `loginctl enable-linger
<user>`).
The state of the user for deployment is something like:
```
$ mkdir /containers
$ semanage fcontext -a -e /home /containers
$ restorecon -vR /containers
$ groupadd -g 2000 hedgedoc
$ useradd -g 2000 -u 2000 -d /containers/hedgedoc -s /sbin/nologin hedgedoc
$ usermod --add-subuids 200000000-200065535 --add-subgids 200000000-200065535 hedgedoc
$ loginctl enable-linger hedgedoc
```
There is a longer, more opinionated write-up here [1]. Something I've
deliberately not been doing is setting a shell and providing access to the
user via ssh. This may be weird, but the thinking was that by not having a
shell, and running rootless-as-non-root, the application is pretty well
isolated.
From what we can gather, there are a couple of options:
## runuser
This seems like the most reasonable option. As root (or via sudo) you run:
$ runuser -ls /bin/bash hedgedoc
This logs you in, sets the shell to bash, and sets your working directory
to the user's home directory. You can then use the shell to interact with
the user slice, invoke podman, invoke podman generate, and daemonize
pods/containers.
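As a concrete sketch of the daemonizing step once inside that shell
(container and unit names here are illustrative, and note that runuser
does not set XDG_RUNTIME_DIR for you):
```
$ export XDG_RUNTIME_DIR=/run/user/$(id -u)
$ podman generate systemd --new --files --name mycontainer
$ mkdir -p ~/.config/systemd/user
$ mv container-mycontainer.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now container-mycontainer.service
```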
## su
$ su -s /bin/bash hedgedoc
This logs you in and sets the shell to bash, but it doesn't change to the
user's home directory. It works much like the above, though runuser seems
to make for a nicer experience.
## machinectl
This is a bit weirder, but is potentially what systemd _wants_ people to
do:
$ systemctl --user --machine=hedgedoc@.host <things>
This lets you interact with user units: you could drop them in place with
ansible/pyinfra and then use this `--machine` invocation to examine the
state of the unit.
- Is this something the podman folks are thinking about? mheon seems to
reference it, but it was very hard to figure out how to actually invoke.
- Is there a way to obtain a shell with this method? (A sketch follows
below.)
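On the shell question, a hedged sketch (I haven't exercised this heavily):
machinectl can open a login shell as the lingering user, and because
systemd-machined sets the session up properly, XDG_RUNTIME_DIR and the
user bus come out right - which plain su/runuser won't do for you:
```
$ machinectl shell hedgedoc@.host
$ echo $XDG_RUNTIME_DIR    # expect /run/user/2000 for uid 2000
$ podman ps
$ systemctl --user status  # one concrete stand-in for <things>
```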
I was generally curious to see if anyone would offer opinions on how they
are using user-slice deployments. I've been watching quadlet [2] with
interest as well.
[0]:
https://github.com/containers/podman/issues/5858#issuecomment-994201667
[1]: https://homelab.dunn.dev/docs/server/containers/
[2]: https://github.com/containers/quadlet
podman network - incomplete?
by lejeczek
hi guys.
this:
-> $ podman network create 11-1-1 --macvlan ens11 --subnet 11.1.1.0/24 --ip-range 11.1.1.24/31
should do more than just:
-> $ podman network inspect 11-1-1
[
    {
        "cniVersion": "0.4.0",
        "name": "11-1-1",
        "plugins": [
            {
                "ipam": {
                    "type": "dhcp"
                },
                "master": "ens11",
                "type": "macvlan"
            }
        ]
    }
]
if I remember correctly - right?
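A hedged note: on the Podman 3.x CLI the --macvlan flag is deprecated and,
as far as I can tell, ignores --subnet/--ip-range, always emitting the
dhcp ipam config shown above. The driver form is supposed to honor static
addressing, assuming a Podman new enough to support it:
-> $ podman network create -d macvlan -o parent=ens11 --subnet 11.1.1.0/24 --ip-range 11.1.1.24/31 11-1-1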
many thanks, L.
Blog solution for podman.io website
by Máirín Duffy
Hey, a couple of thoughts on the podman.io blog issue brought up during the
last Podman community cabal call (
https://podman.io/community/meeting/notes/2021-11-18/) -
Summary: Podman needs a low-overhead way of posting blog posts; the
current system involves other websites and platforms and is process-heavy.
(Hopefully this is an accurate summary, lmk if not.)
- One option would be to use WordPress, which has a post-by-email
feature... you have to keep the email address you send the posts to
secret / only share it w people authorized to post / otherwise there is
no overhead or process to getting posts live. It appears it may then be
possible to import the WordPress RSS feed into the existing Jekyll site
with something like this:
https://github.com/MattKevan/Jekyll-feed-importer
- Other option (perhaps better) - use Antora instead of Jekyll. GitHub
Pages supports Antora, and it lets you generate a site from multiple
repos; I believe it would enable stuff like taking snippets from the
podman repo's docs and pulling them into the website in a different repo.
We could create another repo just for informal blog content, give
everyone you'd ever want to post a blog full commit access just to that
repo, and Antora can read from that and use it to generate blog posts on
the website (and authors wouldn't need commit access to the website).
Relevant links:
- WP post by email https://jetpack.com/support/post-by-email/
- Antora github pages support
https://docs.antora.org/antora/2.3/publish-to-github-pages/
- Antora multi-repo functionality
https://docs.antora.org/antora/2.3/features/#bring-together-content-from-...
My experience is with Jekyll and not Antora but I am currently playing
around with Antora to see if I can get a multi-repo proof-of-concept
together. If someone more technically ept would like to help, let me know
:) I am @duffy:fedora.im on Matrix and in the Podman channel!
Le meas,
~m
Restart Status for Containers running in Podman
by Christopher.Miller@gd-ms.com
Dumb question. I looked through the mail archive and couldn't find what I was looking for.
With Docker, if you inspect a container, you can see a RestartPolicy. It lets you know whether the container will restart if the server reboots.
We have a container running under Podman (version 1.4.2-stable2; yes, it's older, but it's what I have to work with for now). Is there a way to tell whether a restart policy has been set for a container?
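A hedged sketch of where to look, assuming your build's inspect output
includes the HostConfig.RestartPolicy field (recent Podman does; a release
as old as 1.4.2 may not populate it):
$ podman inspect --format '{{ .HostConfig.RestartPolicy.Name }}' <container>
An empty string means no restart policy is set. On versions of that
vintage, restart-on-boot was typically handled with a systemd unit rather
than a restart policy, since Podman has no daemon to restart containers
at boot.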
Thanks
Container stopped running
by Ranbir
Hello,
I'm running podman on an up-to-date Rocky Linux 8 system. I'm trying to
run a rootless container. Before my update to Rocky Linux 8.5, the
rootless container was running just fine. After my update and reboot, I
keep getting this error:
Error: container_linux.go:380: starting container process caused:
process_linux.go:545: container init caused: process_linux.go:508:
setting cgroup config for procHooks process caused: open
/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/user-libpod_pod_a9292128fc778c6287e80ff71d5e2ee1320b3395dc48a7e31af1db77cc7f695a.slice/libpod-a7de373d2fef8f5ea232c09e1553923ff1968ff4caef156b997d26910a2d9558.scope/pids.max:
no such file or directory: OCI runtime attempted to invoke a command
that was not found
I did enable lingering for the "containers" user and created the file
/etc/systemd/system/user@.service.d/delegate.conf. I've run
"systemctl daemon-reload" after rebooting. I'm also exporting
XDG_RUNTIME_DIR in the user's .bashrc. None of this seems to be working
now, though it did stop similar errors before the update.
Does anyone know why the pids.max cgroup isn't being created now?
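A hedged sketch of things worth checking, since the symptoms fit a cgroup
v1/v2 or runtime mismatch surfacing after the 8.5 update (these paths are
the standard ones, but whether they apply depends on your setup):
$ stat -fc %T /sys/fs/cgroup         # "cgroup2fs" means cgroups v2
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/cgroup.controllers
$ podman info --format '{{ .Host.OCIRuntime.Name }}'
If the host is on cgroups v2 but "pids" is missing from the delegated
controllers, or the runtime is an older runc, that would explain the
missing pids.max file; crun generally copes better with rootless cgroups
v2.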
--
Ranbir