RunRoot & mistaken IDs
by lejeczek
Hi guys.
I experience this:
-> $ podman images
WARN[0000] RunRoot is pointing to a path
(/run/user/1007/containers) which is not writable. Most
likely podman will fail.
Error: creating events dirs: mkdir /run/user/1007:
permission denied
-> $ id
uid=2001(podmania) gid=2001(podmania) groups=2001(podmania)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
I think it might have something to do with the fact that I
changed UID for the user, but why would this be?
How troubleshoot & fix it, ideally without system reboot?
many thanks, L.
9 months, 3 weeks
mqueue msg_max in rootless container
by Michael Ivanov
Hallo!
I'm trying to run my application in podman rootless container and I stumble
on following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and container is not created.
My host where container is being run has this parameter set to 256. How can I
expose current host setting for msg_max to my container?
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
12 months
Ansible `template` tasks and rootless podman volume content management
by Chris Evich
Hey podman community,
While exploring Ansible management of rootless podman on a remote host,
I ran into a stinky volume-contents idempotency issue. I have an
idea[0] on how to solve this, but thought I'd reach out and see if/how
others have dealt with this situation.
---
Here's the setup:
1. I'm running an Ansible playbook against a host for which I ONLY have
access to a non-root (user) account.
2. The playbook configures `quadlet` for `systemd` management of a
configuration (podman) volume and a pod with several containers in it
running services.
3. The contents of the podman volume are 10-30 configuration files,
owned by several different UIDs/GIDs within the allocated
user-namespace. For example, some files are owned by $UID:$GID, others
may be 100123:100123, and others could be 100321:100321 (depending on
the exact user-namespace allocation details).
4. Ansible uses the 'template' module to manage 10-30 configuration
files and directories destined for the rootless podman volume. Ref:
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/templ...
5. When configuration files "change", Ansible uses a handler to restart
the pod. Ref:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers...
---
The problem:
The 'template' module knows nothing about user-namespaces. Because it's
running as a regular user, it can't `chown` the files into the
user-namespace range (permission denied). So the template module is
CONSTANTLY (and needlessly) triggering the handler to restart the pod
(due to file ownership differences). Also as you'd expect, when
`template` sets the file's UID/GID wrong, the containerized services
fail on restart.
---
Idea[0]: (untested) For the `template` task, set
`ansible_python_interpreter` to a wrapper script that execs `podman
unshare /usr/bin/python3 "$@"`.
--
Chris Evich (he/him), RHCA III
Senior Quality Assurance Engineer
If it ain't broke, your hammer isn't wide 'nough.
1 year, 3 months
Storage directory perm/mismatch error with LDAP user home on NFS
by kurien.mathew@mediakind.com
Hello,
podman fails with directory permission errors or directory mismatch errors when I do a pull on my Ubuntu 20.x with an NFS mounted LDAP user home directory. Details are provided below. Would you be able to advise on the best way to resolve the issue?
Thanks
[user@user-vm2 opr:0]$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL=https://www.ubuntu.com/
SUPPORT_URL=https://help.ubuntu.com/
BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privac...
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
[user@user-vm2 opr:0]$
[user@user-vm2 opr:127]$ podman --version
podman version 4.5.1
[user@user-vm2 opr:125]$ podman pull --log-level debug alpine
INFO[0000] podman filtering at log level debug
DEBU[0000] Called pull.PersistentPreRunE(podman pull --log-level debug alpine)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /run/user/7148269/containers
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/7148269/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Using OCI runtime "/usr/sbin/runc"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Pulling image alpine (policy: always)
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Trying "localhost/alpine:latest" ...
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] Trying "alpine" ...
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Attempting to pull candidate docker.io/library/alpine:latest for alpine
DEBU[0000] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/7148269/containers]docker.io/library/alpine:latest"
DEBU[0000] Resolving "alpine" using unqualified-search registries (/etc/containers/registries.conf)
Resolving "alpine" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/library/alpine:latest...
DEBU[0000] Copying source image //alpine:latest to destination image [vfs@/home/user/.local/share/containers/storage+/run/user/7148269/containers]docker.io/library/alpine:latest
DEBU[0000] Using registries.d directory /etc/containers/registries.d
DEBU[0000] Trying to access "docker.io/library/alpine:latest"
DEBU[0000] No credentials matching docker.io/library/alpine found in /run/user/7148269/containers/auth.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /home/user/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /home/user/.docker/config.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /home/user/.dockercfg
DEBU[0000] No credentials for docker.io/library/alpine found
DEBU[0000] No signature storage configuration found for docker.io/library/alpine:latest, using built-in default file:///home/user/.local/share/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io
DEBU[0000] GET https://registry-1.docker.io/v2/
DEBU[0000] Ping https://registry-1.docker.io/v2/ status 401
DEBU[0000] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&s...
DEBU[0000] GET https://registry-1.docker.io/v2/library/alpine/manifests/latest
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.list.v2+json"
DEBU[0001] Using blob info cache at /home/user/.local/share/containers/cache/blob-info-cache-v1.boltdb
DEBU[0001] Source is a manifest list; copying (only) instance sha256:25fad2a32ad1f6f510e528448ae1ec69a28ef81916a004d3629874104f8a7f70 for current system
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:25fad2a32...
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.v2+json"
DEBU[0001] IsRunningImageAllowed for image docker:docker.io/library/alpine:latest
DEBU[0001] Using default policy section
DEBU[0001] Requirement 0: allowed
DEBU[0001] Overall: allowed
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:c1aabb73d2339c5ebaa3681de2e9d9c18d57485045a4e311d9f8004bec208d67
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:c1aabb73d2339...
Getting image source signatures
DEBU[0001] Reading /home/user/.local/share/containers/sigstore/library/alpine@sha256=25fad2a32ad1f6f510e528448ae1ec69a28ef81916a004d3629874104f8a7f70/signature-1
DEBU[0001] Not looking for sigstore attachments: disabled by configuration
DEBU[0001] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0001] ... will first try using the original manifest unmodified
DEBU[0001] Checking if we can reuse blob sha256:31e352740f534f9ad170f75378a84fe453d6156e40700b882d737a8f4a6988a3: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true
DEBU[0001] Failed to retrieve partial blob: blob type not supported for partial retrieval
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:31e352740f534f9ad170f75378a84fe453d6156e40700b882d737a8f4a6988a3
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:31e352740f534...
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
Copying blob 31e352740f53 done
DEBU[0001] Start untar layer
ERRO[0001] While applying layer: ApplyLayer stdout: stderr: setting up pivot dir: mkdir /home/user/.local/share/containers/storage/vfs/dir/78a822fe2a2d2c84f3de4a403188c45f623017d6a4521d23047c9fbb0801794c/.pivot_root3008513360:Copying blob 31e352740f53 done
DEBU[0001] Error pulling candidate docker.io/library/alpine:latest: copying system image from manifest list: writing blob: adding layer with blob "sha256:31e352740f534f9ad170f75378a84fe453d6156e40700b882d737a8f4a6988a3": ApplyLayer stdout: stderr: setting up pivot dir: mkdir /home/user/.local/share/containers/storage/vfs/dir/78a822fe2a2d2c84f3de4a403188c45f623017d6a4521d23047c9fbb0801794c/.pivot_root3008513360: permission denied exit status 1
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:31e352740f534f9ad170f75378a84fe453d6156e40700b882d737a8f4a6988a3": ApplyLayer stdout: stderr: setting up pivot dir: mkdir /home/user/.local/share/containers/storage/vfs/dir/78a822fe2a2d2c84f3de4a403188c45f623017d6a4521d23047c9fbb0801794c/.pivot_root3008513360: permission denied exit status 1
DEBU[0001] Shutting down engines
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$ podman pull --log-level debug --root /space/containers/storage alpine
INFO[0000] podman filtering at log level debug
DEBU[0000] Called pull.PersistentPreRunE(podman pull --log-level debug --root /space/containers/storage alpine)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /space/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding run root "/run/user/7148269/containers" with "/run/containers/storage" from database
ERRO[0000] User-selected graph driver "vfs" overwritten by graph driver "overlay" from database - delete libpod local files ("/space/containers/storage") to resolve. May prevent use of images created by other tools
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /space/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /space/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/7148269/libpod/tmp
DEBU[0000] Using volume path /space/containers/storage/volumes
DEBU[0000] Using transient store: false
Error: mkdir /run/containers/storage: permission denied
DEBU[0000] Shutting down engines
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$
[user@user-vm2 opr:125]$ podman pull --log-level debug --root /space/containers/storage --runroot /space/containers/run alpine
INFO[0000] podman filtering at log level debug
DEBU[0000] Called pull.PersistentPreRunE(podman pull --log-level debug --root /space/containers/storage --runroot /space/containers/run alpine)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /space/containers/storage/libpod/bolt_state.db
ERRO[0000] User-selected graph driver "vfs" overwritten by graph driver "overlay" from database - delete libpod local files ("/space/containers/storage") to resolve. May prevent use of images created by other tools
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /space/containers/storage
DEBU[0000] Using run root /space/containers/run
DEBU[0000] Using static dir /space/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/7148269/libpod/tmp
DEBU[0000] Using volume path /space/containers/storage/volumes
DEBU[0000] Using transient store: false
Error: database storage temporary directory (runroot) "/run/containers/storage" does not match our storage temporary directory (runroot) "/space/containers/run": database configuration mismatch
DEBU[0000] Shutting down engines
[user@user-vm2 opr:125]$
1 year, 4 months
Re: Pasta-networked rootless Podman container gets Connection Refused with the host's public IP
by jklaiho@iki.fi
Thank you David and Stefano! You've clarified the matter significantly.
> >
> > Hi; I previously asked this on the Podman mailing list, but I'm not
> > sure if the issue in question is a feature of Podman or Passt (or
> > both), and I got no replies from the Podman list, so I figured I'd
> > try here as well.
>
> I actually saw an answer to that, did you miss this perhaps?
Indeed, I missed the reply on the Podman list – some weird threading glitch on my local client didn't show it as updated.
I'll have to do some further experimentation on the server in the near future. I'll return later with more details if I can't get it to work.
- JK
1 year, 4 months
"Connection refused" from inside containers to the host's public IP
by jklaiho@iki.fi
Hi all,
We have a bunch of rootless containers running as a non-privileged user on a Ubuntu 22.04 server under Podman 4.5.0.
One of them is running Browserless Chrome to render PDFs of the output of a Django-served URL of another container.
The Django container is set up so that its CSS/JS/etc. static files are stored on the host machine and served by the host's Nginx. To correctly display the styles, Chrome therefore needs to access the endpoint via the public URL of the site.
This is not working, because any connection attempts to the public IP from within any of the running containers fail with a Connection Refused error:
- - - -
$ curl -vvv https://our.nice.site
* Trying <redacted IPv6>:443...
* connect to <redacted IPv6> port 443 failed: Connection refused
* Trying <redacted IPv4>:443...
* connect to <redacted IPv4> port 443 failed: Connection refused
* Failed to connect to our.nice.site port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to our.nice.site port 443: Connection refused
- - - -
The host itself is, of course, able to access itself with the public URL/IP just fine; this only occurs inside the containers. The containers are also able to access any other public URL, just not the one pointing to the host's own public IP.
We're using pasta networking. All containers are set up with quadlet. Here's the Chrome container's quadlet generator:
- - - -
[Unit]
Description=Browserless Chrome
Wants=network-online.target
After=network-online.target
[Container]
Image=browserless/chrome:1.59.0-chrome-stable
ContainerName=browserless-chrome
Network=pasta:-t,auto,-T,auto
LogDriver=journald
[Install]
WantedBy=default.target
- - - -
All the other containers also use "Network=pasta:-t,auto,-T,auto". I tried to add --map-gw to the command line, since it seemed possibly relevant, but without success.
"Network=pasta:--map-gw,-t,auto,-T,auto" failed on container startup with this error:
Error: failed to start pasta:
Port forwarding mode 'none' conflicts with previous mode
"Network=pasta:-t,auto,-T,auto,--map-gw" started the container fine, but did not fix the Connection Refused error.
Finally, the contents of containers.conf:
- - - -
[containers]
log_driver="journald"
tz="local"
[network]
network_backend="netavark"
[engine]
runtime="crun"
- - - -
Is this a bug, a misconfiguration on my part, or an intentional security feature of Podman networking and/or Podman with pasta, specifically? Is there any way for the containers to access the hosts's public IP? If not, we'll need to arrange some kind of awkward static file serving container for use by the Chrome container, but we'd really like to avoid that.
- JL
1 year, 5 months
Wordpress container running on mac cannot create theme directory on mounted path
by Mehdi Haghgoo
Hi,
I am running the following compose file with Docker Compose backed by Podman Machine on mac (Intel):
services:
db:
image: docker.io/library/mariadb:10.5
environment:
- MYSQL_DATABASE=mydb
- MYSQL_USER=wpuser
- MYSQL_PASSWORD=mypass
- MYSQL_ROOT_PASSWORD=mypass
volumes:
- wpdb_vol:/var/lib/mysql
wp:
image: docker.io/library/wordpress:php8.2-apache
environment:
- WORDPRESS_DB_NAME=mydb
- WORDPRESS_DB_USER=wpuser
- WORDPRESS_DB_PASSWORD=mypass
- WORDPRESS_DB_HOST=db
depends_on:
- db
ports:
- 8000:80
volumes:
- .:/var/www/html
- wp_uploads:/var/www/html/wp-content/uploads
adminer:
image: docker.io/library/adminer:4.6
ports:
- 8080:8080
volumes:
wp_uploads: {}
wpdb_vol: {}
When running the containers, in WordPress admin page, I cannot install a plugin. Basically, the container is not allowed to create directories under wp-content. It fails with permission error:
"Could not create directory /var/www/html/wp-content/upgrade/oceanwp-3.4.4/oceanwp"
I tried chmod 777 on all wp-content (with -r), but it didn't help.
Not sure if this is a Podman issue or a due mechanism that needs to be properly handled. How can I fix this?
Mehdi
1 year, 5 months
exec - shell functions ?
by lejeczek
Hi guys.
How do you 'exec' your container shell functions without
going into shell interactively?
many thanks, L.
1 year, 5 months
fcontext for rootfull volumes ?
by lejeczek
Hi guys.
I map /root very often - I'd imagine many do - and I do that
with Z
What I get is quite puzzling to me, say host has it:
system_u:object_r:container_file_t:s0 bin
system_u:object_r:container_file_t:s0:c526,c622 cacert.p12
system_u:object_r:container_file_t:s0:c526,c622 kracert.p12
system_u:object_r:container_file_t:s0:c74,c78 pki
in container:
-> $ ls -Z1 bin pki
bin:
system_u:object_r:container_file_t:s0 conf
system_u:object_r:container_file_t:s0 container-config
ls: cannot open directory 'pki': Permission denied
'root' existed prior to container creation and 'pki' was
added later, & outside of container.
fcontext is not enough? SELinux says:
allow container_init_t container_file_t:dir read;
label=disable seems to be the way of it it but is that the
right way?
many thanks, L.
1 year, 5 months