# in environment ?
by lejeczek
Hi guys.
Do you use # in your envs?
I wonder if it's just me having issues with those.
To reproduce the issue, the 'ghost' web solution is quick and easy to
test:
-> $ podman run -dt ...................... \
     --env database__client=mysql \
     --env database__connection__host=11.1.0.1 \
     --env database__connection__user=ghostadm \
     --env database__connection__password='xyz#admghost' \
     --env database__connection__database=ghost_xyz \
     --env url=https://ghost.xyz
So far everything I've tried with 'database__connection__password' has
failed, both quoting and escaping.
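For what it's worth, a quick way to see whether podman itself is mangling
the value would be to echo it back from a throwaway container (a sketch;
it assumes an alpine image is available, and PW is just a stand-in name):
-> $ podman run --rm --env PW='xyz#admghost' docker.io/library/alpine \
       sh -c 'printf "%s\n" "$PW"'
If the full value prints, podman passes the '#' through fine and the
problem is more likely in how ghost/MySQL reads it, or in whatever shell,
unit file or compose wrapper launches the container.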
I often use # - does anybody have a way to make it work?
many thanks, L.
Should I run podman-based systemd services as root?
by Mark Raynsford
Hello!
I'm aware of the age-old advice of not running services as root; I've
been administering UNIX-like systems for decades now.
If you follow the advice given in, for example, this page:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_at...
... What you'll get is a redis container running as root (unless the
redis image drops privileges itself - I don't know, I've never run it).
I've set up a few production systems running services that are inside
podman containers. I'm lucky enough to be using 98% software that can
run inside completely unprivileged containers. For all of these
containers, I've run each container under its own user ID. The systemd
unit for each, for example, does something along these lines:
[Service]
Type=exec
User=_cardant
Group=_cardant
ExecStart=/usr/bin/podman run ...
However, doing things this way is a little messy. For example, if for
some reason I want to do something like `podman exec` in a container, I
have to `sudo -u _cardant podman exec ...`. `podman ps` will obviously
only show me the containers running for the current user. Additionally,
any images downloaded from the registry for each service will
effectively end up in the home directory of each service user,
complicating storage accounting somewhat. The UIDs/GIDs are yet another
thing I have to manage, even though they don't have any useful meaning
(they don't identify people, they're solely there because the
containers have to run as _something_). Containers also leak internal
UID/GID values (from the /etc/subuid ranges) into the filesystem, which
can complicate things.
Additionally, there are some containers that stubbornly make it awkward
to run as a non-root user despite not actually needing privileges. The
PostgreSQL image is a good example; you can run it as a non-root user
and it'll switch to another UID inside the container and then that
UID/GID will end up on the database files that are inevitably mounted
inside the container. You'll also have to match these unpredictable
weird UID/GIDs if you want to supply the container with TLS keys/certs,
because postgres will refuse to open them unless the UID/GID matches.
You can't get around this by telling postgres to run as UID 0; it'll
refuse, even though UID 0 inside the container isn't UID 0 outside of
it when running unprivileged.
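One workaround I've been experimenting with (only a sketch: 999 is what I
believe the postgres UID is in the official image - check the image's
/etc/passwd if unsure - and the container name, path, and password are
placeholders) is pinning the mapping with --userns=keep-id so the data
directory stays owned by the invoking service user on the host:
  $ podman run -d --name db \
      --userns=keep-id:uid=999,gid=999 \
      -e POSTGRES_PASSWORD=changeme \
      -v /srv/db:/var/lib/postgresql/data:Z \
      docker.io/library/postgres:15
(The uid=/gid= form of keep-id needs a reasonably recent podman, 4.3 or
later if I remember correctly, and :Z is only relevant with SELinux.)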
I'm running all of these services on systems that have SELinux in
enforcing mode. My understanding is that containers will all have the
container_t domain and therefore even if they all ran as root, a
compromised container would not be able to do any meaningful harm to
the system.
I'm therefore not certain whether the usual "don't run as root" advice
applies, since containers don't have the same security properties as
ordinary processes (especially when combined with SELinux).
I feel like it'd simplify things if I could safely run all of
the containers as root. At the very least, I'd be able to predict
UID/GID values inside the containers from outside!
I can't get any clear advice on how people are expected to run podman
containers in production. All of the various bits of documentation in
Linux distributions that talk about running under systemd never
bother to talk about UIDs or GIDs. Any documentation on running podman
rootless seems to only talk about it in the context of developers
running unprivileged containers on their local machines for
experimentation/development. If you set up containers via Fedora
Server's cockpit UI, you'll get containers running as root everywhere.
What is the best practice here?
--
Mark Raynsford | https://www.io7m.com
How does podman "initialize" after a reboot?
by Pratham Patel
Hello everyone,
**Disclaimer: This is a long e-mail.**
I am on NixOS (23.05), using the podman binary provided by the
distribution package. There are several issues that I am facing but
the issue that I want resolved is that _I want rootless Podman
containers started at boot_.
I won't get much into NixOS other than what is needed (i.e. no
advocacy for NixOS). NixOS, being a distribution with reproducible
builds, has a different method of storing binaries. Instead of
binaries living in `/usr/bin`, binaries actually live in
`/nix/store/<hash>-pkg-ver/bin`. Thereafter, the binaries are linked
into `/run/current-system/sw/bin`. My `PATH` (from a login shell)
looks like the following:
```
[pratham@sentinel] $ echo $PATH
/home/pratham/.local/bin:/home/pratham/bin:/run/wrappers/bin:/home/pratham/.local/share/flatpak/exports/bin:/var/lib/flatpak/exports/bin:/home/pratham/.nix-profile/bin:/etc/profiles/per-user/pratham/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
```
Since NixOS is an OS that you build from configuration files (i.e.
almost zero bash code to install, except for formatting and mounting),
there is a way to declare your Podman containers like you would in a
compose.yaml, and those containers are automatically created as
systemd services [0]. This is great! But those service files are placed
in `/etc/systemd/user`. This has an issue: the Podman container now
runs as root. I checked this by **logging in as root** and checking
the output of `podman ps` (not just `sudo podman ps`). If I wanted
rootful containers, I wouldn't be using Podman...
So, for the time being, I have resorted to writing a systemd unit file
by hand (which is stored in `$HOME/.config/systemd/user`). But the
path `/run/current-system/sw/bin` is missing from the unit's PATH. No
biggie, I can just add it using the following line under the
`[Service]` section:
```
Environment="PATH=/run/current-system/sw/bin:$PATH"
```
(This is a temporary hack and is strongly advised against, but I did
this as a troubleshooting measure, not as a solution.)
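For context, the hand-written unit looks roughly like this (a sketch with
a placeholder image; the real unit differs only in the container it runs):
```
# ~/.config/systemd/user/testing-env.service (sketch)
[Unit]
Description=Rootless Podman test container
After=network-online.target

[Service]
Type=exec
# Temporary hack: make the NixOS-provided binaries visible to the unit
Environment="PATH=/run/current-system/sw/bin:/etc/profiles/per-user/pratham/bin"
ExecStartPre=-/run/current-system/sw/bin/podman rm -f testing-env
ExecStart=/run/current-system/sw/bin/podman run --rm --name testing-env docker.io/library/alpine:latest sleep infinity
ExecStop=/run/current-system/sw/bin/podman stop -t 10 testing-env

[Install]
WantedBy=default.target
```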
But the service fails with the following log entries in journalctl:
```
Jul 11 10:46:47 sentinel podman[36673]:
time="2023-07-11T10:46:47+05:30" level=error msg="running
`/run/current-system/sw/bin/newuidmap 36686 0 1000 1 1 10000 65536`:
newuidmap: write to uid_map failed: Operation not permitted\n"
Jul 11 10:46:47 sentinel podman[36673]: Error: cannot set up namespace
using "/run/current-system/sw/bin/newuidmap": should have setuid or
have filecaps setuid: exit status 1
Jul 11 10:46:47 sentinel systemd[1317]: testing-env.service: Main
process exited, code=exited, status=125/n/a
```
I never encountered this error on Fedora or RHEL. While experimenting,
I noticed one thing: **If I run _any_ Podman command (even `podman
ps`) from my _login shell_ and then restart the Podman container's
systemd service, the service runs cleanly.**
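In case anyone wants to compare, a quick way to see which newuidmap is
being used and whether it carries the setuid bit or file capabilities
(a sketch; the paths are taken from my PATH above) would be:
```
$ ls -l /run/wrappers/bin/newuidmap /run/current-system/sw/bin/newuidmap
$ getcap /run/wrappers/bin/newuidmap /run/current-system/sw/bin/newuidmap
$ grep pratham /etc/subuid /etc/subgid
```
My suspicion is that the setuid copy lives under `/run/wrappers/bin`
(which is in my login shell's PATH but not in the PATH I gave the unit),
while `/run/current-system/sw/bin/newuidmap` is the plain, non-setuid
binary - but I have not confirmed this.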
From the _Why can't I use sudo with rootless Podman_ article [1]:
> One of the core reasons Podman requires a temporary files directory is for detecting if the system has rebooted. After a reboot, all containers are no longer running, all container filesystems are unmounted, and all network interfaces need to be recreated (among many other things). Podman needs to update its database to reflect this and perform some per-boot setup to ensure it is ready to launch containers. This is called "refreshing the state."
>
> This is necessary because Podman is not a daemon. Each Podman command is run as a new process and doesn't initially know what state containers are in. You can look in the database for an accurate picture of all your current containers and their states. Refreshing the state after a reboot is essential to making sure this picture continues to be accurate.
>
> To perform the refresh, you need a reliable way of detecting a system reboot, and early in development, the Podman team settled on using a sentinel file on a tmpfs filesystem. A tmpfs is an in-memory filesystem that is not saved after a reboot—every time the system starts, a tmpfs mount will be empty. By checking for the existence of a file on such a filesystem and creating it if it does not exist, Podman can know if it's the first time it has run since the system rebooted.
>
> The problem becomes in determining where you should put your temporary files directory. The obvious answer is /tmp, but this is not guaranteed to be a tmpfs filesystem (though it often is). Instead, by default, Podman will use /run, which is guaranteed to be a tmpfs. Unfortunately, /run is only writable by root, so rootless Podman must look elsewhere. Our team settled on the /run/user/$UID directories, a per-user temporary files directory.
This means that Podman needs some sort of "initialization" when the
system has rebooted. Apparently, due to NixOS' nature, this
"initialization" doesn't occur when Podman is invoked from a systemd
service (something is missing but I can't figure out _what_). So I
rebooted and set up an `inotifywait` job (logged in as `root`--not with
the `sudo` prefix--with the command `inotifywait /run/user/1000/
--recursive --monitor`; `XDG_RUNTIME_DIR` for user `pratham` is
`/run/user/1000`) and ran `podman ps` when I was logged in as user
`pratham`. It generated the following output:
```
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ CREATE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ OPEN pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MODIFY pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_FROM pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_TO pause.pid
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/containers/ CREATE,ISDIR overlay
/run/user/1000/containers/ OPEN,ISDIR overlay
/run/user/1000/containers/ ACCESS,ISDIR overlay
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay
/run/user/1000/containers/overlay/ CREATE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_NOWRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ CREATE metacopy()-false
/run/user/1000/containers/overlay/ OPEN metacopy()-false
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE metacopy()-false
/run/user/1000/containers/overlay/ CREATE native-diff()-true
/run/user/1000/containers/overlay/ OPEN native-diff()-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE native-diff()-true
/run/user/1000/containers/ CREATE,ISDIR overlay-containers
/run/user/1000/containers/ OPEN,ISDIR overlay-containers
/run/user/1000/containers/ ACCESS,ISDIR overlay-containers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-containers
/run/user/1000/containers/ CREATE,ISDIR overlay-locks
/run/user/1000/containers/ OPEN,ISDIR overlay-locks
/run/user/1000/containers/ ACCESS,ISDIR overlay-locks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-locks
/run/user/1000/containers/ CREATE,ISDIR networks
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/containers/ OPEN,ISDIR networks
/run/user/1000/containers/ ACCESS,ISDIR networks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR networks
/run/user/1000/libpod/tmp/ CREATE alive
/run/user/1000/libpod/tmp/ OPEN alive
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE alive
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/systemd/units/ CREATE .#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_TO invocation:dbus.service
/run/user/1000/ CREATE,ISDIR dbus-1
/run/user/1000/ OPEN,ISDIR dbus-1
/run/user/1000/ ACCESS,ISDIR dbus-1
/run/user/1000/ CLOSE_NOWRITE,CLOSE,ISDIR dbus-1
/run/user/1000/dbus-1/ OPEN,ISDIR services
/run/user/1000/dbus-1/services/ OPEN,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ CLOSE_NOWRITE,CLOSE,ISDIR services
/run/user/1000/dbus-1/services/ CLOSE_NOWRITE,CLOSE,ISDIR
/run/user/1000/systemd/ CREATE,ISDIR transient
/run/user/1000/systemd/ OPEN,ISDIR transient
/run/user/1000/systemd/ ACCESS,ISDIR transient
/run/user/1000/systemd/ CLOSE_NOWRITE,CLOSE,ISDIR transient
/run/user/1000/systemd/transient/ CREATE podman-2894.scope
/run/user/1000/systemd/transient/ OPEN podman-2894.scope
/run/user/1000/systemd/transient/ MODIFY podman-2894.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-2894.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-2894.scope
/run/user/1000/containers/ CREATE,ISDIR overlay-layers
/run/user/1000/containers/ OPEN,ISDIR overlay-layers
/run/user/1000/containers/ ACCESS,ISDIR overlay-layers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-layers
/run/user/1000/containers/overlay-layers/ CREATE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/systemd/units/ DELETE invocation:podman-2894.scope
/run/user/1000/systemd/transient/ DELETE podman-2894.scope
/run/user/1000/libpod/tmp/ OPEN pause.pid
/run/user/1000/libpod/tmp/ ACCESS pause.pid
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE pause.pid
/run/user/1000/systemd/transient/ CREATE podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ OPEN podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ MODIFY podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-pause-f50834a6.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-pause-f50834a6.scope
```
Following is the output of `podman info` on my computer:
```
[pratham@sentinel] $ podman info
host:
  arch: arm64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /run/current-system/sw/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 81.03
    systemPercent: 3.02
    userPercent: 15.94
  cpus: 4
  databaseBackend: boltdb
  distribution:
    codename: stoat
    distribution: nixos
    version: "23.05"
  eventLogger: journald
  hostname: sentinel
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 994
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
  kernel: 6.1.38
  linkmode: dynamic
  logDriver: journald
  memFree: 3040059392
  memTotal: 3944181760
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: Unknown
    path: /run/current-system/sw/bin/crun
    version: |-
      crun version 1.8.4
      commit: 1.8.4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/n8lbxja2hd766pnz89qki90na2b3g815-slirp4netns-1.2.0/bin/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 2957766656
  swapTotal: 2957766656
  uptime: 0h 5m 34.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/pratham/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 0
    stopped: 2
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/pratham/.local/share/containers/storage
  graphRootAllocated: 13539516416
  graphRootUsed: 7770832896
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 9
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/pratham/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.0
  Built: 315532800
  BuiltTime: Tue Jan 1 05:30:00 1980
  GitCommit: ""
  GoVersion: go1.20.5
  Os: linux
  OsArch: linux/arm64
  Version: 4.5.0
```
So my current question is: how do I do this initial setup manually? I
don't want to have to log into `pratham`'s login shell every time I
reboot the machine just so the Podman containers start.
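(For what it's worth, the closest thing I have found to an answer so far
is systemd "lingering", which is supposed to start a user's services at
boot without an interactive login. I have not yet verified that it also
triggers Podman's per-boot refresh on NixOS, so treat this as a sketch:)
```
# run once as root: start pratham's user manager (and user units) at boot
$ loginctl enable-linger pratham
# then, as pratham, enable the container unit
$ systemctl --user enable --now testing-env.service
```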
[0]: https://nixos.wiki/wiki/Podman#Run_Podman_containers_as_systemd_services
[1]: https://www.redhat.com/sysadmin/sudo-rootless-podman
- Pratham Patel
Installation & First Pull
by jimsaxton1@comcast.net
Hey - noob here. I installed podman (and podman desktop) on Windows 10 OS Build 19044.3208 with WSL. I can manage the podman machine (stop/start) and log in to the podman machine successfully with ssh. I enabled --log-level=debug and see no issues. I've added the proxies and registry authentication (and reviewed the auth.json / registry.json files) - I can see both files being read during operations like pull. The issue I have is that PULL times out. From the debug log, I see "https://quay.io/v2/": proxyconnect tcp, i/o timeout.
The podman installation and first PULL seem super easy, so I think I simply missed opening a port or something. Any help is appreciated. Thank you, Jim
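If it helps anyone point me in the right direction: I can run commands
inside the podman machine over ssh, so I was thinking of checking the
proxy settings and connectivity from there (a sketch; it assumes curl is
present in the machine image):
$ podman machine ssh 'env | grep -i proxy'
$ podman machine ssh 'curl -sSI https://quay.io/v2/'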
Upgrade podman to 4.6?
by Jochen Wiedmann
Hi,
I am using Podman Desktop 1.2.1 and noticed that it is using podman
4.5.1. The latest release is podman 4.6. Should I upgrade? If so, are
there any instructions on what to do? (Simply running the installer
doesn't seem to do the trick.)
Thanks,
Jochen
--
The woman was born in a full-blown thunderstorm. She probably told it
to be quiet. It probably did. (Robert Jordan, Winter's heart)
Need some help for a rather strange usecase
by Boris Behrens
Hi,
sorry if this question is bad. You are allowed to flame me for this :)
I would like to create a container which is basically connected to two
separate VLANs and does some sort of bridging between them.
I also would like to be able to assign IP addresses from inside the
container, because I would like to assign IP addresses via keepalived.
The reason I would like to do it that way is to reduce cross traffic
between hosts.
I have three hosts that are attached to a public VLAN. All three hosts
have a public IP address, which needs to be assigned to another host in
case something goes wrong.
HAProxy picks up the request on the public VLAN and forwards it to the
underlying backend, which is in the same container. This backend talks to a
storage cluster via the private VLAN.
The container host is Ubuntu 20.04 with Podman 3.4.2.
What I have done so far:
- create two additional networks [1]
- create a container [2]
But now I have the problem that I am not allowed to add an IP address from
inside the container [3].
I also don't know if I have a conceptual error in the whole thing, because
using containers as a VM replacement is a strange thing to do.
But currently I don't know a better way.
Hope someone can help me.
---
A more in-depth description of what I am trying to solve:
I have a ceph cluster that serves s3 traffic via radosgw.
radosgw talks to all the physical disks in the cluster directly, so it
handles the distribution of objects.
To do TLS termination, some basic HTTP header manipulation, and other
HTTP-related stuff, an HAProxy sits in front of the radosgw.
I don't want a public IP address directly on a storage host,
because misconfigurations happen and that is something I want to avoid.
So I thought I could spin up a container on some storage server, map the
public VLAN and the private storage VLAN into the container and combine
HAProxy and radosgw into one unit.
Now there is the problem of public availability. I want to use DNS load
balancing for the HAProxy, so every HAProxy gets its own public IP
address. But when one or more HAProxy instances fail (there are so many
things that can go south), I would like to add the IP address to another
container.
Here comes keepalived, which does VRRP from inside the containers: when
a container stops announcing that it is available, another host brings up
the IP address and starts to serve it.
And because I am struggling with even these simple tasks, I don't want to
even try k8s/k3s. Also, I think k8s/k3s have a lot of cross traffic between
the instances, which might really hurt performance.
---
[1]
$ podman network create --disable-dns --driver=macvlan -o parent=bond0.50 \
    --subnet 10.64.1.0/24 public
$ podman network create --disable-dns --driver=macvlan -o parent=bond0.43 \
    --subnet 10.64.2.0/24 management
[2]
$ podman run --detach --hostname=frontend-`hostname` --name frontend-`hostname -s` \
    --mount=type=bind,source=/opt/frontend/etc/haproxy,destination=/etc/haproxy,ro \
    --mount=type=bind,source=/opt/frontend/etc/ssl/frontend,destination=/etc/ssl/frontend,ro \
    --network=podman,public,management \
    -it ubuntu:20.04 /bin/bash
[3]
root@frontend-0cc47a6df14e:/# ip addr add 192.168.0.1/24 dev eth2
RTNETLINK answers: Operation not permitted
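One thing I have not tried yet, but suspect might be part of the answer
(sketched here, untested): giving the container CAP_NET_ADMIN so that
`ip addr add` is permitted inside it, i.e. the same run command as in [2]
plus the extra flag:
$ podman run --detach --cap-add NET_ADMIN ... \
    --network=podman,public,management \
    -it ubuntu:20.04 /bin/bash
(keepalived may additionally want NET_RAW for its VRRP traffic, but I am
not sure about that.)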
Best wishes
Boris
Need help mounting a volume, thoroughly rootless, on my host, a Mac
by Mike Spreitzer
I have podman version 4.6.0 and MacOS version 12.6.
I found https://github.com/ansible/vscode-ansible/wiki/macos and so recreated my podman machine with `podman machine init -v $HOME:$HOME`. The VM config says `security_model=none` without me having to tweak it.
I was pointed at https://www.tutorialworks.com/podman-rootless-volumes/ but that does not address the added degree of difficulty that the podman machine VM injects.
I am running podman rootless and want to run a container rootless with a host directory mounted into the container.
The simplest thing does not work; the mounted directory appears inside the container to be owned by root.
```
mspreitz@mjs12 ~ % ls -ldn $HOME/test3
drwxr-xr-x 2 501 20 64 Jul 31 23:52 /Users/mspreitz/test3
mspreitz@mjs12 ~ % podman run --rm -it --entrypoint sh -v $HOME/test3:/test3 quay.io/prometheus/prometheus
/prometheus $ id
uid=65534(nobody) gid=65534(nobody) groups=65534(nobody)
/prometheus $ ls -ldn /test3
drwxr-xr-x 2 0 65534 64 Aug 1 03:52 /test3
```
Trying a little harder gets a mysterious error message.
```
mspreitz@mjs12 ~ % podman run --rm -it --entrypoint sh "--mount=type=bind,src=$HOME/test3,dst=/test3,idmap=uids=65534-501-1;gids=65534-20-1" quay.io/prometheus/prometheus
Error: preparing container ab8859c8bc4fc5df55f319e9e17a4831734d00ad2332462d18a238d4ccb0e831 for attach: crun: mount_setattr `/Users/mspreitz/test3`: Invalid argument: OCI runtime error
```
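A diagnostic I am planning to try (a sketch; my guess is that the idmap
mount option simply is not supported on the virtualized $HOME share) is to
look at how the directory appears inside the podman machine VM, since that
is what crun actually mounts:
```
mspreitz@mjs12 ~ % podman machine ssh 'ls -ldn /Users/mspreitz/test3; mount | grep -i users'
```
I may also try the `:U` suffix on the `-v` option, which asks podman to
chown the mount to the container's user, though I do not know whether that
works across the machine's shared filesystem.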
Thanks,
Mike