[Podman] Re: "Connection refused" from inside containers to the host's public IP
by Paul Holzinger
On 21/06/2023 11:33, jklaiho(a)iki.fi wrote:
> Hi all,
>
> We have a bunch of rootless containers running as a non-privileged
> user on a Ubuntu 22.04 server under Podman 4.5.0.
>
> One of them is running Browserless Chrome to render PDFs of the output
> of a Django-served URL of another container.
>
> The Django container is set up so that its CSS/JS/etc. static files
> are stored on the host machine and served by the host's Nginx. To
> correctly display the styles, Chrome therefore needs to access the
> endpoint via the public URL of the site.
>
> This is not working, because any connection attempts to the public IP
> from within any of the running containers fail with a Connection
> Refused error:
>
> - - - -
>
> $ curl -vvv https://our.nice.site
> * Trying <redacted IPv6>:443...
> * connect to <redacted IPv6> port 443 failed: Connection refused
> * Trying <redacted IPv4>:443...
> * connect to <redacted IPv4> port 443 failed: Connection refused
> * Failed to connect to our.nice.site port 443: Connection refused
> * Closing connection 0
> curl: (7) Failed to connect to our.nice.site port 443: Connection refused
>
> - - - -
>
> The host itself is, of course, able to access itself with the public
> URL/IP just fine; this only occurs inside the containers. The
> containers are also able to access any other public URL, just not the
> one pointing to the host's own public IP.
>
> We're using pasta networking. All containers are set up with quadlet.
> Here's the Chrome container's quadlet generator:
>
> - - - -
>
> [Unit]
> Description=Browserless Chrome
> Wants=network-online.target
> After=network-online.target
>
> [Container]
> Image=browserless/chrome:1.59.0-chrome-stable
> ContainerName=browserless-chrome
>
> Network=pasta:-t,auto,-T,auto
> LogDriver=journald
>
> [Install]
> WantedBy=default.target
>
> - - - -
>
> All the other containers also use "Network=pasta:-t,auto,-T,auto". I
> tried to add --map-gw to the command line, since it seemed possibly
> relevant, but without success.
>
> "Network=pasta:--map-gw,-t,auto,-T,auto" failed on container startup
> with this error:
>
> Error: failed to start pasta:
> Port forwarding mode 'none' conflicts with previous mode
>
> "Network=pasta:-t,auto,-T,auto,--map-gw" started the container fine,
> but did not fix the Connection Refused error.
>
> Finally, the contents of containers.conf:
>
> - - - -
> [containers]
> log_driver="journald"
> tz="local"
> [network]
> network_backend="netavark"
> [engine]
> runtime="crun"
>
> - - - -
>
> Is this a bug, a misconfiguration on my part, or an intentional
> security feature of Podman networking and/or Podman with pasta,
> specifically? Is there any way for the containers to access the
> host's public IP? If not, we'll need to arrange some kind of awkward
> static file serving container for use by the Chrome container, but
> we'd really like to avoid that.
>
> - JL
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
Hi,
You are on the right track. By default pasta copies the host IP into the
container, so both have the same IP address; you therefore cannot connect
to that IP from inside the container, because it is local to the container's namespace.
With `--map-gw`, pasta maps the gateway IP to the host. With that option
you have to connect to the gateway IP instead, and pasta then forwards
the connection to the actual host namespace.
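For example, from inside the container something like this should work once
`--map-gw` is active (rough sketch; it assumes the image ships the `ip` tool,
and the gateway address will be whatever pasta configured in your container):
$ ip route show default          # note the gateway address
$ curl --resolve our.nice.site:443:<gateway address> https://our.nice.site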
---
Paul
1 year, 11 months
[Podman] Re: HELP! recover files from a deleted container
by Alvin Thompson
Hi and thanks for the suggestions,
Since this is Podman for Windows, which uses a WSL instance, I’m hopeful that not starting Podman and not poking around inside the WSL instance will preserve the data. WSL stores the EXT4 filesystem in a vhdx image, which hopefully is isolated enough from Windows. If I’m wrong about this, please let me know.
This is a work computer with rather strict controls so what I can do with it is limited. I did make a copy of the WSL disk image so that’s something. Unfortunately, I may have already overwritten the data because in a panic the first thing I did was try to copy any folder I could find with the name “container”. I was hoping the files would be unlinked and cleaned up later if space were needed. Perhaps that’s a feature suggestion.
I’ll see if I can grab another Intel computer, install VirtualBox on it, attach a copy of the image, and boot a recovery DVD with that.
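Roughly what I have in mind for inspecting the copy without touching it (untested on my side, and the file names here are just placeholders):
$ qemu-img convert -f vhdx -O raw ext4-copy.vhdx disk.raw
$ sudo mount -o ro,loop disk.raw /mnt    # read-only loop mount, so the copy is never modified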
Thanks,
Alvin
> On Sep 4, 2023, at 8:15 AM, Tobias Wendorff <tobias.wendorff(a)tu-dortmund.de> wrote:
>
> 1. Immediately stop using the system: Cease all activities and avoid any further operations on the affected system. This minimizes the risk of overwriting the data you want to recover.
>
> 2. Turn it off as soon as possible. Maybe unplug the power supply to turn it off immediately.
>
> 3. Don't boot from the disk again. Remove it if necessary.
>
> 4. Boot into a data-recovery DVD or put it on another system and mount it read-only.
>
> The more you do on the hard drive, the more likely it is that the data will be overwritten. The data is then virtually unrecoverable. Normally, however, you can recover deleted data; after all, it was not intentionally overwritten (shredded).
>
>
> Am 04.09.2023 um 12:26 schrieb Alvin Thompson:
>> Help!
>> Is there any way to recover files from a deleted container? Long story short, I found the behavior of `podman network rm -f` unexpected, and it wound up deleting most of my containers. One in particular had a month of work in it (I was using it as a development environment), and it turns out only part of it was backed up. I’m desperate!
>> This is Podman for Windows, so most of the files on the “host” are in the WSL environment. I can get into that no problem with `wsl -d podman-machine-default`.
>> As an added wrinkle, my default connection was `podman-machine-default-root`, but I was not running Podman rootful. I’m not sure this is particularly relevant.
>> grep-ing for strings which are unique to the development environment shows one hit in Windows, in %HOME%/.local/containers/podman/machine/wsl/wsldist/podman-machine-default/ext4.vhdx - which I assume is the file system for the WSL layer itself. I made a copy of it.
>> A grep within WSL itself doesn’t show any hits, so it’s possible the files were deleted as far as WSL is concerned. I tried searching for an EXT4 undelete tool, but the only one I found (extundelete) is from 10+ years ago and doesn’t appear to work anymore.
>> I haven’t stopped WSL (I’m using /tmp as a staging area) or restarted the computer.
>> I’m at wit’s end. I really don’t know where to begin or look to recover these files, which I really, really need. Any recovery suggestions (no matter how tedious) would be welcome.
>> I know it’s too late to change now, but man, the behavior of `podman network remove` is unexpected.
>> Thanks,
>> Alvin
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 8 months
[Podman] Re: podman image for nginx
by Robin Lee Powell
That's pretty weird. Just to double check,
"curl http://deb.debian.org/debian/dists/buster/InRelease" works on
the machine you're running podman from, yeah?
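If that works on the host, something like this (a rough sketch, reusing the
image and URL from your log) might help narrow down whether the problem is a
proxy or something specific to the container network:
$ curl -I http://deb.debian.org/debian/dists/buster/InRelease
$ podman run --rm debian:buster-slim apt-get update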
On Mon, Dec 04, 2023 at 01:31:33PM +0100, Matthias Apitz wrote:
>
> Hello,
>
> I'm trying to build a podman image as described here:
>
> https://docs.podman.io/en/latest/Introduction.html
>
> with the command:
>
> podman build -t nginx https://git.io/Jf8ol
>
> on SuSE LINUX SLES 15 SP5. This fails with the attached nohup log. It
> fails mostly due to this:
> ...
> Adding system user `nginx' (UID 101) ...
> Adding new user `nginx' (UID 101) with group `nginx' ...
> Not creating home directory `/nonexistent'.
> + apt-get update
> Err:1 http://deb.debian.org/debian buster InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:2 http://deb.debian.org/debian-security buster/updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:3 http://deb.debian.org/debian buster-updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Reading package lists...
> W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Connection failed [IP: 146.75.118.132 80]
> ...
>
> What can I do?
>
> Thanks
>
> matthias
>
> --
> Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/ +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> STEP 1/10: FROM debian:buster-slim
> STEP 2/10: LABEL maintainer="NGINX Docker Maintainers <docker-maint(a)nginx.com>"
> --> Using cache 19ba62c1438c1ad05d15c3f5dcc882a4e2cd637b8f558b4be990ba1ce62c05b7
> --> 19ba62c1438
> STEP 3/10: ENV NGINX_VERSION 1.17.10
> --> Using cache 765b77c099fd40532cc44abfe9fd8bedd946283b187d436a32771a9784eb778b
> --> 765b77c099f
> STEP 4/10: ENV NJS_VERSION 0.3.9
> --> Using cache 60ed91cc39904e274b9dd80912233c46da95d9f804820a1f784054e6f588dc35
> --> 60ed91cc399
> STEP 5/10: ENV PKG_RELEASE 1~buster
> --> Using cache 8f1045d10db60c79ff8907c652f60a3a2cf31aa8db273f3e1c1e61f4e80afb63
> --> 8f1045d10db
> STEP 6/10: RUN set -x && addgroup --system --gid 101 nginx && adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos "nginx user" --shell /bin/false --uid 101 nginx && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates && NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture)" && nginxPackages=" nginx=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-xslt=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-geoip=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-image-filter=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-${PKG_RELEASE} " && case "$dpkgArch" in amd64|i386) echo "deb https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && apt-get update ;; *) echo "deb-src https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && tempDir="$(mktemp -d)" && chmod 777 "$tempDir" && savedAptMark="$(apt-mark showmanual)" && apt-get update && apt-get build-dep -y $nginxPackages && ( cd "$tempDir" && DEB_BUILD_OPTIONS="nocheck parallel=$(nproc)" apt-get source --compile $nginxPackages ) && apt-mark showmanual | xargs apt-mark auto > /dev/null && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } && ls -lAFh "$tempDir" && ( cd "$tempDir" && dpkg-scanpackages . > Packages ) && grep '^Package: ' "$tempDir/Packages" && echo "deb [ trusted=yes ] file://$tempDir ./" > /etc/apt/sources.list.d/temp.list && apt-get -o Acquire::GzipIndexes=false update ;; esac && apt-get install --no-install-recommends --no-install-suggests -y $nginxPackages gettext-base && apt-get remove --purge --auto-remove -y ca-certificates && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list && if [ -n "$tempDir" ]; then apt-get purge -y --auto-remove && rm -rf "$tempDir" /etc/apt/sources.list.d/temp.list; fi
> + addgroup --system --gid 101 nginx
> Adding group `nginx' (GID 101) ...
> Done.
> + adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos nginx user --shell /bin/false --uid 101 nginx
> Warning: The home dir /nonexistent you specified can't be accessed: No such file or directory
> Adding system user `nginx' (UID 101) ...
> Adding new user `nginx' (UID 101) with group `nginx' ...
> Not creating home directory `/nonexistent'.
> + apt-get update
> Err:1 http://deb.debian.org/debian buster InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:2 http://deb.debian.org/debian-security buster/updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:3 http://deb.debian.org/debian buster-updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Reading package lists...
> W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Connection failed [IP: 146.75.118.132 80]
> W: Failed to fetch http://deb.debian.org/debian-security/dists/buster/updates/InRelease Connection failed [IP: 146.75.118.132 80]
> W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Connection failed [IP: 146.75.118.132 80]
> W: Some index files failed to download. They have been ignored, or old ones used instead.
> + apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates
> Reading package lists...
> Building dependency tree...
> Reading state information...
> Package ca-certificates is not available, but is referred to by another package.
> This may mean that the package is missing, has been obsoleted, or
> is only available from another source
>
> Package gnupg1 is not available, but is referred to by another package.
> This may mean that the package is missing, has been obsoleted, or
> is only available from another source
>
> E: Package 'gnupg1' has no installation candidate
> E: Package 'ca-certificates' has no installation candidate
> + found=
> + echo Fetching GPG key from ha.pool.sks-keyservers.net
> Fetching GPG key from ha.pool.sks-keyservers.net
> + apt-key adv --keyserver ha.pool.sks-keyservers.net --keyserver-options timeout=10 --recv-keys
> E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
> + echo Fetching GPG key from hkp://keyserver.ubuntu.com:80
> Fetching GPG key from hkp://keyserver.ubuntu.com:80
> + apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --keyserver-options timeout=10 --recv-keys
> E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
> + echo Fetching GPG key from hkp://p80.pool.sks-keyservers.net:80
> Fetching GPG key from hkp://p80.pool.sks-keyservers.net:80
> + apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --keyserver-options timeout=10 --recv-keys
> E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
> Fetching GPG key from pgp.mit.edu
> + echo Fetching GPG key from pgp.mit.edu
> + apt-key adv --keyserver pgp.mit.edu --keyserver-options timeout=10 --recv-keys
> E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
> + test -z
> + echo error: failed to fetch GPG key
> error: failed to fetch GPG key
> + exit 1
> Error: building at STEP "RUN set -x && addgroup --system --gid 101 nginx && adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos "nginx user" --shell /bin/false --uid 101 nginx && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates && NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture)" && nginxPackages=" nginx=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-xslt=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-geoip=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-image-filter=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-${PKG_RELEASE} " && case "$dpkgArch" in amd64|i386) echo "deb https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && apt-get update ;; *) echo "deb-src https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && tempDir="$(mktemp -d)" && chmod 777 "$tempDir" && savedAptMark="$(apt-mark showmanual)" && apt-get update && apt-get build-dep -y $nginxPackages && ( cd "$tempDir" && DEB_BUILD_OPTIONS="nocheck parallel=$(nproc)" apt-get source --compile $nginxPackages ) && apt-mark showmanual | xargs apt-mark auto > /dev/null && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } && ls -lAFh "$tempDir" && ( cd "$tempDir" && dpkg-scanpackages . > Packages ) && grep '^Package: ' "$tempDir/Packages" && echo "deb [ trusted=yes ] file://$tempDir ./" > /etc/apt/sources.list.d/temp.list && apt-get -o Acquire::GzipIndexes=false update ;; esac && apt-get install --no-install-recommends --no-install-suggests -y $nginxPackages gettext-base && apt-get remove --purge --auto-remove -y ca-certificates && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list && if [ -n "$tempDir" ]; then apt-get purge -y --auto-remove && rm -rf "$tempDir" /etc/apt/sources.list.d/temp.list; fi": while running runtime: exit status 1
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 5 months
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 29/01/2024 15:55, Daniel Walsh wrote:
> On 1/29/24 08:52, lejeczek via Podman wrote:
>>
>>
>> On 29/01/2024 12:04, Daniel Walsh wrote:
>>> On 1/29/24 02:35, lejeczek via Podman wrote:
>>>>
>>>>
>>>> On 28/03/2023 21:00, Chris Evich wrote:
>>>>> On 3/28/23 09:06, lejeczek via Podman wrote:
>>>>>> I think it might have something to do with the fact
>>>>>> that I changed UID for the user
>>>>>
>>>>> The files under /run/user/$UID are typically managed
>>>>> by systemd-logind. I've noticed sometimes there's a
>>>>> delay between logging out and the files being cleaned
>>>>> up. Try logging out for a minute or three and see if
>>>>> that fixes it.
>>>>>
>>>>> Also, if you have lingering enabled for the user, it
>>>>> may take a restart of particular the user.slice.
>>>>>
>>>>> Lastly, I'm not certain, but you (as root) may be able
>>>>> to `systemctl reload systemd-logind`. That's a total
>>>>> guess though.
>>>>>
>>>>>
>>>> Those parts seem very clunky - at least in up-to-date
>>>> Centos 9 stream - I have removed a user and re/created
>>>> that user in IdM and..
>>>> even after full & healthy OS reboot, containers/podman
>>>> insist:
>>>>
>>>> -> $ podman container ls -a
>>>> WARN[0000] RunRoot is pointing to a path
>>>> (/run/user/2001/containers) which is not writable. Most
>>>> likely podman will fail.
>>>> Error: default OCI runtime "crun" not found: invalid
>>>> argument
>>>>
>>>> -> $ id
>>>> uid=1107400004(podmania) gid=1107400004(podmania)
>>>> groups=1107400004(podmania)
>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>>>>
>>>>
>>>> Where/what does it persist/insist on that old,
>>>> non-existent UID - would anybody know?
>>>>
>>>> many thanks, L.
>>>> _______________________________________________
>>>> Podman mailing list -- podman(a)lists.podman.io
>>>> To unsubscribe send an email to
>>>> podman-leave(a)lists.podman.io
>>>
>>> Do you have XDG_RUNTIME_DIR pointing at it?
>>>
>> Nope, I don't think so.
>>
>> -> $ echo $XDG_RUNTIME_DIR
>> /run/user/1107400004
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> Ok you probably need to do a `podman system reset` since
> you changed the ownership of the homedir and the UID of
> the process running Podman. Podman recorded the previous
> settings in its database.
> _______________________________________________
>
Doing anything as the user seems not a viable option.
-> $ podman system reset
WARN[0000] RunRoot is pointing to a path
(/run/user/2001/containers) which is not writable. Most
likely podman will fail.
Error: default OCI runtime "crun" not found: invalid argument
forcibly:
-> $ rm -fr /home.sysop/podmania/.local/share/containers/*
helps, kind of, but the very next issue is:
-> $ podman system reset
ERRO[0000] cannot find UID/GID for user podmania: cannot
read subids - check rootless mode in man pages.
WARN[0000] Using rootless single mapping into the namespace.
This might break some images. Check /etc/subuid and
/etc/subgid for adding sub*ids if not using a network user
WARNING! This will remove:
...
I presumed - incorrectly? - that (these days) subordinate
UIDs should work when:
-> $ authselect current
Profile ID: sssd
Enabled features:
- with-sudo
- with-subid
or am I missing something?
p.s./btw - is it just me or Centos is getting increasingly
clunky, really?
1 year, 4 months
[Podman] Re: Container restart issue: Failed to attach 1 to compat systemd cgroup
by Giuseppe Scrivano
Lewis Gaul <lewis.gaul(a)gmail.com> writes:
> Hi Podman team,
>
> I came across an unexpected systemd warning when running inside a container - I emailed systemd-devel (this email summarises the thread, which
> you can find at https://lists.freedesktop.org/archives/systemd-devel/2023-January/048723....) and Lennart suggested emailing here. Any thoughts
> would be great!
>
> There are two different warnings seen in different scenarios, both cgroups related, and I believe related to each other given they both satisfy the
> points below.
>
> The first warning is seen after 'podman restart $CTR', coming from https://github.com/systemd/systemd/blob/v245/src/shared/cgroup-setup.c#L279:
> Failed to attach 1 to compat systemd cgroup
> /machine.slice/libpod-5e4ab2a36681c092f4ef937cf03b25a8d3d7b2fa530559bf4dac4079c84d0313.scope/init.scope: No such file or directory
>
> The second warning is seen on every boot when using '--cgroupns=private', coming from
> https://github.com/systemd/systemd/blob/v245/src/core/cgroup.c#L2967:
> Couldn't move remaining userspace processes, ignoring: Input/output error
> Failed to create compat systemd cgroup /system.slice: No such file or directory
> ...
>
> Both warnings are seen together when restarting a container using private cgroup namespace.
>
> To summarise:
> - The warnings are seen when running the container on a Centos 8 host, but not on an Ubuntu 20.04 host
> - It is assumed this issue is specific to cgroups v1, based on the warning messages
> - Disabling SELinux on the host with 'setenforce 0' makes no difference
> - Seen with systemd v245 but not with v230
> - Seen with '--privileged' and in non-privileged with '--cap-add sys_admin'
> - Changing the cgroup driver/manager doesn't seem to have any effect
> - The same is seen with docker except when running privileged the first warning becomes a fatal error after hitting "Failed to open pin file: No such file
> or directory" (coming from https://github.com/systemd/systemd/blob/v245/src/core/cgroup.c#L2972) and the container exits (however docker doesn't
> claim to support systemd)
I am afraid you are using a combination that is not well tested.
systemd takes different actions depending on what capabilities you give
it. I am not sure how CAP_SYS_ADMIN would affect it. My suggestion is to
avoid giving the systemd container more capabilities than it needs, since
in this case you don't want systemd to manage the whole system.
Have you considered using cgroupv2? cgroup delegation works much better
there, and it is safe.
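For example, on a cgroup v2 host something along these lines (only a sketch
based on your reproducer, not a tested recommendation) should be enough for
systemd inside the container, without --privileged:
podman run -d --name ubuntu --cgroupns=private --systemd=always ubuntu-systemd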
Giuseppe
> Some extra details copied from the systemd email thread:
> - On first boot PID 1 can be found in /sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/init.scope/cgroup.procs, whereas when the container
> restarts the 'init.scope/' directory does not exist and PID 1 is instead found in the parent (container root) cgroup
> /sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/cgroup.procs (also reflected by /proc/1/cgroup). This is strange because systemd must be
> the one to create this cgroup dir in the initial boot, so I'm not sure why it wouldn't on subsequent boot.
> - I confirmed that the container has permissions to create the dir by executing a 'mkdir' in /sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/
> inside the container after the restart, so I have no idea why systemd is not creating the 'init.scope/' dir. I notice that inside the container's systemd
> cgroup mount 'system.slice/' does exist, but 'user.slice/' also does not (both exist on normal boot).
>
> This should be reproducible using the following:
> cat << EOF > Dockerfile
> FROM ubuntu:20.04
> RUN apt-get update -y && apt-get install systemd -y && ln -s /lib/systemd/systemd /sbin/init
> ENTRYPOINT ["/sbin/init"]
> EOF
> podman build . --tag ubuntu-systemd
> podman run -it --name ubuntu --privileged --cgroupns private ubuntu-systemd
> podman restart ubuntu
>
> Thanks,
> Lewis
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
2 years, 4 months
[Podman] Re: RunRoot & mistaken IDs
by Daniel Walsh
On 1/29/24 10:21, lejeczek via Podman wrote:
>
>
> On 29/01/2024 15:55, Daniel Walsh wrote:
>> On 1/29/24 08:52, lejeczek via Podman wrote:
>>>
>>>
>>> On 29/01/2024 12:04, Daniel Walsh wrote:
>>>> On 1/29/24 02:35, lejeczek via Podman wrote:
>>>>>
>>>>>
>>>>> On 28/03/2023 21:00, Chris Evich wrote:
>>>>>> On 3/28/23 09:06, lejeczek via Podman wrote:
>>>>>>> I think it might have something to do with the fact that I
>>>>>>> changed UID for the user
>>>>>>
>>>>>> The files under /run/user/$UID are typically managed by
>>>>>> systemd-logind. I've noticed sometimes there's a delay between
>>>>>> logging out and the files being cleaned up. Try logging out for
>>>>>> a minute or three and see if that fixes it.
>>>>>>
>>>>>> Also, if you have lingering enabled for the user, it may take a
>>>>>> restart of particular the user.slice.
>>>>>>
>>>>>> Lastly, I'm not certain, but you (as root) may be able to
>>>>>> `systemctl reload systemd-logind`. That's a total guess though.
>>>>>>
>>>>>>
>>>>> Those parts seem very clunky - at least in up-to-date Centos 9
>>>>> stream - I have removed a user and re/created that user in IdM and..
>>>>> even after full & healthy OS reboot, containers/podman insist:
>>>>>
>>>>> -> $ podman container ls -a
>>>>> WARN[0000] RunRoot is pointing to a path
>>>>> (/run/user/2001/containers) which is not writable. Most likely
>>>>> podman will fail.
>>>>> Error: default OCI runtime "crun" not found: invalid argument
>>>>>
>>>>> -> $ id
>>>>> uid=1107400004(podmania) gid=1107400004(podmania)
>>>>> groups=1107400004(podmania)
>>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>>>>>
>>>>> Where/what does it persist/insist on that old, non-existent UID -
>>>>> would anybody know?
>>>>>
>>>>> many thanks, L.
>>>>> _______________________________________________
>>>>> Podman mailing list -- podman(a)lists.podman.io
>>>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>>>
>>>> Do you have XDG_RUNTIME_DIR pointing at it?
>>>>
>>> Nope, I don't think so.
>>>
>>> -> $ echo $XDG_RUNTIME_DIR
>>> /run/user/1107400004
>>> _______________________________________________
>>> Podman mailing list -- podman(a)lists.podman.io
>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>> Ok you probably need to do a `podman system reset` since you changed
>> the ownership of the homedir and the UID of the process running
>> Podman. Podman recorded the previous settings in its database.
>> _______________________________________________
>>
> Doing anything as the user, seems not as viable option.
>
> -> $ podman system reset
> WARN[0000] RunRoot is pointing to a path (/run/user/2001/containers)
> which is not writable. Most likely podman will fail.
> Error: default OCI runtime "crun" not found: invalid argument
>
> forcibly:
> -> $ rm -fr /home.sysop/podmania/.local/share/containers/*
> helps, kind of, for very next issue is:
>
> -> $ podman system reset
> ERRO[0000] cannot find UID/GID for user podmania: cannot read subids -
> check rootless mode in man pages.
> WARN[0000] Using rootless single mapping into the namespace. This
> might break some images. Check /etc/subuid and /etc/subgid for adding
> sub*ids if not using a network user
> WARNING! This will remove:
> ...
>
> I presumed - incorrectly? - that (these days) subordinate UIDs should
> work when:
> -> $ authselect current
> Profile ID: sssd
> Enabled features:
> - with-sudo
> - with-subid
>
> or am I missing something?
>
> p.s./btw - is it just me or Centos is getting increasingly clunky,
> really?
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
Don't know if the remote /etc/subuid and /etc/subgid are working correctly.
Is there a test program to list the contents of subuid?
getent subuid USER?
1 year, 4 months
[Podman Desktop]🦭 Welcome to the Podman Desktop Early Adopters Program!
by Máirín Duffy
Hi,
Podman Desktop team here. 🦭
Just a couple of months ago, we released Podman Desktop 1.0 as generally
available for Mac, Windows and Linux
<https://developers.redhat.com/articles/2023/05/23/podman-desktop-now-gene...>.
(More on that below.) We're now kicking off our early adopter program,
which you applied to. We'd like to invite you to join the Podman Desktop
community!
What does it mean to be an early adopter?
Many of you mentioned that you enjoy using Podman Desktop and would like to
help us to make it even better. As an early adopter, you will have the
opportunity to contribute by joining discussions and providing constructive
feedback on Podman Desktop's upcoming features and plans before they are
released. This will involve receiving occasional emails from us, when we
have fresh things to share with you.
We need your help right now!
These are five things we're working on right now that we would love your
ideas and feedback on:
1. Easily create container-based applications via compose syntax - Take a look at early UI mockups <https://github.com/containers/podman-desktop/discussions/2881> for UX improvements to our compose capability, and let us know what you think!
2. Inspect objects in Kubernetes deployment environments - We will expand our Kubernetes (K8s) capabilities with a new K8s dashboard and additional K8s objects support so you can better understand and compare K8s environments. Review our early ideas and tell us what would help you. <https://github.com/containers/podman-desktop/discussions/3297>
3. Install and configure additional features more easily - Check out our designs <https://github.com/containers/podman-desktop/discussions/3244> for an upcoming onboarding feature to provide guided walkthroughs to help you install and configure new container capabilities.
4. Find answers to questions quickly and easily when stuck - Podman Desktop's documentation <https://podman-desktop.io/docs/intro> is expanded and improved; what are we missing? What would you like to see us cover? Let us know <https://github.com/containers/podman-desktop/discussions/3319>.
What's New in Podman Desktop
Since its inception, Podman Desktop has focused on providing a
user-friendly interface for managing and working with containers, as well
as providing simple predictability when deploying those containers on
Kubernetes environments. Here's what Podman Desktop 1.0 has to offer:
- Container engine installation and updates - Podman Desktop takes care of installing your container engines and keeping them up-to-date on your local developer environment, providing you easy GUI-based configuration.
- Forget rewriting your Docker tools to work with Podman - Simply configure the compatibility mode with Docker, which maps the Docker socket to Podman, enabling all your tools to just work with Podman.
- Multiple container engines, one interface - Podman also provides compatibility with Lima and Docker.
- Push-button container deployment to Kubernetes - Native Kubernetes support provides a simple transition for containers to Kubernetes. Spin up a local Kubernetes cluster with Kind or minikube, play Kubernetes YAML on those environments, or generate Kubernetes YAML from pods and containers you've already created. Easily create pods and generate Kubernetes YAML with the click of a button.
- Extensible functionality - Choose only the container engines, Kubernetes providers, and functionality you want to use. We recently introduced two extensions related to working with Red Hat OpenShift, and we have a community-contributed minikube extension <https://podman-desktop.io/docs/kubernetes/minikube>!
Sorry, that’s a long summary of everything we have been working on - but we
hope that gets you as excited as we are!
Get started with the Podman Desktop community
We encourage you to actively engage in discussions. Your fresh perspective
as an early adopter helps us notice overlooked things.
- Join Podman Desktop Discussions in GitHub <https://github.com/containers/podman-desktop/discussions>: Connect with other community members in our community discussions <https://github.com/containers/podman-desktop/discussions>. Share your ideas, ask questions, and let us know if something doesn’t make sense. Especially if you are new to the community, your fresh perspective helps us notice overlooked things.
- Review the documentation <https://podman-desktop.io/docs/intro> for possible improvements: If you come across any missing information or you believe Podman Desktop should be included in other open source projects' documentation, start a discussion thread <https://github.com/containers/podman-desktop/discussions/new/choose>.
- Spread the word: Help us grow our open source community by sharing your positive experiences and tutorials on social media and relevant forums.
Thanks a lot for your interest in Podman Desktop! Your support really means
a lot to us and we are looking forward to interacting more with you, along
with building the tool! If you have any questions or you are new to open
source contributions and need help, please don't hesitate to reach out to
the team via email <podman-desktop-eap-owner(a)lists.podman.io>.
Thanks again!
Máirín Duffy on behalf of the Podman Desktop team 🦭
------------------------------
You are receiving this email because you signed up for Podman Desktop Early
Adopter Program
<https://docs.google.com/forms/d/e/1FAIpQLSdrV7ek1oUSvNRwI3nBVBb8aBLUddbwX...>.
If you need to remove your data from this survey, contact the Podman
Desktop team via email <podman-desktop-eap-owner(a)lists.podman.io>. If you
don’t want to be notified about this program anymore, you can unsubscribe
from this list
<podman-desktop-eap-owner(a)lists.podman.io?subject=unsubscribe>.
1 year, 10 months
[Podman] Re: RunRoot & mistaken IDs
by Erik Sjölund
There is a tool in Fedora:
getsubids
$ man -k getsubids
getsubids (1) - get the subordinate id ranges for a user
$ grep $USER: /etc/subuid
test:100000:65536
$ getsubids $USER
0: test 100000 65536
$ grep $USER: /etc/subgid
test:200000:65536
$ getsubids -g $USER
0: test 200000 65536
$ rpm -qf /usr/bin/getsubids
shadow-utils-subid-4.14.0-2.fc40.aarch64
$
On Mon, Jan 29, 2024 at 5:05 PM Daniel Walsh <dwalsh(a)redhat.com> wrote:
>
> On 1/29/24 10:21, lejeczek via Podman wrote:
> >
> >
> > On 29/01/2024 15:55, Daniel Walsh wrote:
> >> On 1/29/24 08:52, lejeczek via Podman wrote:
> >>>
> >>>
> >>> On 29/01/2024 12:04, Daniel Walsh wrote:
> >>>> On 1/29/24 02:35, lejeczek via Podman wrote:
> >>>>>
> >>>>>
> >>>>> On 28/03/2023 21:00, Chris Evich wrote:
> >>>>>> On 3/28/23 09:06, lejeczek via Podman wrote:
> >>>>>>> I think it might have something to do with the fact that I
> >>>>>>> changed UID for the user
> >>>>>>
> >>>>>> The files under /run/user/$UID are typically managed by
> >>>>>> systemd-logind. I've noticed sometimes there's a delay between
> >>>>>> logging out and the files being cleaned up. Try logging out for
> >>>>>> a minute or three and see if that fixes it.
> >>>>>>
> >>>>>> Also, if you have lingering enabled for the user, it may take a
> >>>>>> restart of particular the user.slice.
> >>>>>>
> >>>>>> Lastly, I'm not certain, but you (as root) may be able to
> >>>>>> `systemctl reload systemd-logind`. That's a total guess though.
> >>>>>>
> >>>>>>
> >>>>> Those parts seem very clunky - at least in up-to-date Centos 9
> >>>>> stream - I have removed a user and re/created that user in IdM and..
> >>>>> even after full & healthy OS reboot, containers/podman insist:
> >>>>>
> >>>>> -> $ podman container ls -a
> >>>>> WARN[0000] RunRoot is pointing to a path
> >>>>> (/run/user/2001/containers) which is not writable. Most likely
> >>>>> podman will fail.
> >>>>> Error: default OCI runtime "crun" not found: invalid argument
> >>>>>
> >>>>> -> $ id
> >>>>> uid=1107400004(podmania) gid=1107400004(podmania)
> >>>>> groups=1107400004(podmania)
> >>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
> >>>>>
> >>>>> Where/what does it persist/insist on that old, non-existent UID -
> >>>>> would anybody know?
> >>>>>
> >>>>> many thanks, L.
> >>>>> _______________________________________________
> >>>>> Podman mailing list -- podman(a)lists.podman.io
> >>>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
> >>>>
> >>>> Do you have XDG_RUNTIME_DIR pointing at it?
> >>>>
> >>> Nope, I don't think so.
> >>>
> >>> -> $ echo $XDG_RUNTIME_DIR
> >>> /run/user/1107400004
> >>> _______________________________________________
> >>> Podman mailing list -- podman(a)lists.podman.io
> >>> To unsubscribe send an email to podman-leave(a)lists.podman.io
> >>
> >> Ok you probably need to do a `podman system reset` since you changed
> >> the ownership of the homedir and the UID of the process running
> >> Podman. Podman recorded the previous settings in its database.
> >> _______________________________________________
> >>
> > Doing anything as the user, seems not as viable option.
> >
> > -> $ podman system reset
> > WARN[0000] RunRoot is pointing to a path (/run/user/2001/containers)
> > which is not writable. Most likely podman will fail.
> > Error: default OCI runtime "crun" not found: invalid argument
> >
> > forcibly:
> > -> $ rm -fr /home.sysop/podmania/.local/share/containers/*
> > helps, kind of, for very next issue is:
> >
> > -> $ podman system reset
> > ERRO[0000] cannot find UID/GID for user podmania: cannot read subids -
> > check rootless mode in man pages.
> > WARN[0000] Using rootless single mapping into the namespace. This
> > might break some images. Check /etc/subuid and /etc/subgid for adding
> > sub*ids if not using a network user
> > WARNING! This will remove:
> > ...
> >
> > I presumed - incorrectly? - that (these days) subordinate UIDs should
> > work when:
> > -> $ authselect current
> > Profile ID: sssd
> > Enabled features:
> > - with-sudo
> > - with-subid
> >
> > or am I missing something?
> >
> > p.s./btw - is it just me or Centos is getting increasingly clunky,
> > really?
> > _______________________________________________
> > Podman mailing list -- podman(a)lists.podman.io
> > To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> Don't know if the remote /etc/subuid and /etc/subgid is working correctly.
>
> Is there a test program to list the contents of subuid?
>
> getent subuid USER?
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 4 months
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 29/01/2024 16:27, Daniel Walsh wrote:
> On 1/29/24 10:21, lejeczek via Podman wrote:
>>
>>
>> On 29/01/2024 15:55, Daniel Walsh wrote:
>>> On 1/29/24 08:52, lejeczek via Podman wrote:
>>>>
>>>>
>>>> On 29/01/2024 12:04, Daniel Walsh wrote:
>>>>> On 1/29/24 02:35, lejeczek via Podman wrote:
>>>>>>
>>>>>>
>>>>>> On 28/03/2023 21:00, Chris Evich wrote:
>>>>>>> On 3/28/23 09:06, lejeczek via Podman wrote:
>>>>>>>> I think it might have something to do with the fact
>>>>>>>> that I changed UID for the user
>>>>>>>
>>>>>>> The files under /run/user/$UID are typically managed
>>>>>>> by systemd-logind. I've noticed sometimes there's a
>>>>>>> delay between logging out and the files being
>>>>>>> cleaned up. Try logging out for a minute or three
>>>>>>> and see if that fixes it.
>>>>>>>
>>>>>>> Also, if you have lingering enabled for the user, it
>>>>>>> may take a restart of particular the user.slice.
>>>>>>>
>>>>>>> Lastly, I'm not certain, but you (as root) may be
>>>>>>> able to `systemctl reload systemd-logind`. That's a
>>>>>>> total guess though.
>>>>>>>
>>>>>>>
>>>>>> Those parts seem very clunky - at least in up-to-date
>>>>>> Centos 9 stream - I have removed a user and
>>>>>> re/created that user in IdM and..
>>>>>> even after full & healthy OS reboot,
>>>>>> containers/podman insist:
>>>>>>
>>>>>> -> $ podman container ls -a
>>>>>> WARN[0000] RunRoot is pointing to a path
>>>>>> (/run/user/2001/containers) which is not writable.
>>>>>> Most likely podman will fail.
>>>>>> Error: default OCI runtime "crun" not found: invalid
>>>>>> argument
>>>>>>
>>>>>> -> $ id
>>>>>> uid=1107400004(podmania) gid=1107400004(podmania)
>>>>>> groups=1107400004(podmania)
>>>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>>>>>>
>>>>>>
>>>>>> Where/what does it persist/insist on that old,
>>>>>> non-existent UID - would anybody know?
>>>>>>
>>>>>> many thanks, L.
>>>>>> _______________________________________________
>>>>>> Podman mailing list -- podman(a)lists.podman.io
>>>>>> To unsubscribe send an email to
>>>>>> podman-leave(a)lists.podman.io
>>>>>
>>>>> Do you have XDG_RUNTIME_DIR pointing at it?
>>>>>
>>>> Nope, I don't think so.
>>>>
>>>> -> $ echo $XDG_RUNTIME_DIR
>>>> /run/user/1107400004
>>>> _______________________________________________
>>>> Podman mailing list -- podman(a)lists.podman.io
>>>> To unsubscribe send an email to
>>>> podman-leave(a)lists.podman.io
>>>
>>> Ok you probably need to do a `podman system reset` since
>>> you changed the ownership of the homedir and the UID of
>>> the process running Podman. Podman recorded the
>>> previous settings in its database.
>>> _______________________________________________
>>>
>> Doing anything as the user, seems not as viable option.
>>
>> -> $ podman system reset
>> WARN[0000] RunRoot is pointing to a path
>> (/run/user/2001/containers) which is not writable. Most
>> likely podman will fail.
>> Error: default OCI runtime "crun" not found: invalid
>> argument
>>
>> forcibly:
>> -> $ rm -fr /home.sysop/podmania/.local/share/containers/*
>> helps, kind of, for very next issue is:
>>
>> -> $ podman system reset
>> ERRO[0000] cannot find UID/GID for user podmania: cannot
>> read subids - check rootless mode in man pages.
>> WARN[0000] Using rootless single mapping into the
>> namespace. This might break some images. Check
>> /etc/subuid and /etc/subgid for adding sub*ids if not
>> using a network user
>> WARNING! This will remove:
>> ...
>>
>> I presumed - incorrectly? - that (these days) subordinate
>> UIDs should work when:
>> -> $ authselect current
>> Profile ID: sssd
>> Enabled features:
>> - with-sudo
>> - with-subid
>>
>> or am I missing something?
>>
>> p.s./btw - is it just me or Centos is getting
>> increasingly clunky, really?
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> Don't know if the remote /etc/subuid and /etc/subgid is
> working correctly.
>
> Is there a test program to list the contents of subuid?
>
> getent subuid USER?
> _______________________________________________
>
I think it's all good - perhaps it's me getting clunky - sub ids work.
If you hit the above "issue", read up on IdM/FreeIPA - it seems
that _subordinate_ IDs are not "fully" implemented there yet(?)
There is a tool:
-> $ /usr/libexec/ipa/ipa-subids --group=containers
Found 2 user(s) without subordinate ids
Processing user 'podmania' (1/2)
Processing user 'appownia' (2/2)
Updated 2 user(s)
The ipa-subids command was successful
1 year, 4 months
[Podman] How does podman "initialize" after a reboot?
by Pratham Patel
Hello everyone,
**Disclaimer: This is a long e-mail.**
I am on NixOS (23.05), using the podman binary provided by the
distribution package. There are several issues that I am facing but
the issue that I want resolved is that _I want rootless Podman
containers started at boot_.
I won't get much into NixOS other than what is needed (i.e. no
advocacy for NixOS). NixOS, being a distribution with reproducible
builds, has a different method of storing binaries. Instead of
binaries living in `/usr/bin`, binaries actually live in
`/nix/store/<hash>-pkg-ver/bin`. Thereafter, the binaries are linked
into `/run/current-system/sw/bin`. My `PATH` (from a login shell)
looks like the following:
```
[pratham@sentinel] $ echo $PATH
/home/pratham/.local/bin:/home/pratham/bin:/run/wrappers/bin:/home/pratham/.local/share/flatpak/exports/bin:/var/lib/flatpak/exports/bin:/home/pratham/.nix-profile/bin:/etc/profiles/per-user/pratham/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
```
Since NixOS is an OS that you can build from configuration files (i.e.
almost zero bash code to install, except for formatting and mounting),
there is a way to declare your Podman containers like you would in a
compose.yaml, and those containers are automatically created as
systemd services [0]. This is great! But those service files are placed
in `/etc/systemd/user`. This has an issue: the Podman container now
runs as root. I checked this by **logging in as root** and checking
the output of `podman ps` (not just `sudo podman ps`). If I wanted
rootful containers, I wouldn't be using Podman...
So, for the time being, I have resorted to writing a systemd unit file
by hand (which is stored in `$HOME/.config/systemd/user`). But the
path `/run/current-system/sw/bin` is missing from the unit's PATH. No
biggie, I can just add it using the following line under the
`[Service]` section:
```
Environment="PATH=/run/current-system/sw/bin:$PATH"
```
(This is a temporary hack and is strongly advised against, but I did
this as a troubleshooting measure, not as a solution.)
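For reference, a trimmed-down version of what that unit roughly looks like
(the image and command below are placeholders, not what I actually run; the
Environment line is the hack described above):
```
[Unit]
Description=testing-env container
Wants=network-online.target
After=network-online.target

[Service]
Environment="PATH=/run/current-system/sw/bin:$PATH"
Restart=on-failure
# Placeholder container; the real unit runs a different image and command.
ExecStart=/run/current-system/sw/bin/podman run --rm --name testing-env docker.io/library/alpine:latest sleep infinity
ExecStop=/run/current-system/sw/bin/podman stop -t 10 testing-env

[Install]
WantedBy=default.target
```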
But the service fails with the following log entries in journalctl:
```
Jul 11 10:46:47 sentinel podman[36673]:
time="2023-07-11T10:46:47+05:30" level=error msg="running
`/run/current-system/sw/bin/newuidmap 36686 0 1000 1 1 10000 65536`:
newuidmap: write to uid_map failed: Operation not permitted\n"
Jul 11 10:46:47 sentinel podman[36673]: Error: cannot set up namespace
using "/run/current-system/sw/bin/newuidmap": should have setuid or
have filecaps setuid: exit status 1
Jul 11 10:46:47 sentinel systemd[1317]: testing-env.service: Main
process exited, code=exited, status=125/n/a
```
I never encountered this error on Fedora or RHEL. While experimenting,
I noticed one thing: **If I run _any_ Podman command (even `podman
ps`) from my _login shell_ and then restart the Podman container's
systemd service, the service runs cleanly.**
From the _Why can't I use sudo with rootless Podman_ article [1]:
> One of the core reasons Podman requires a temporary files directory is for detecting if the system has rebooted. After a reboot, all containers are no longer running, all container filesystems are unmounted, and all network interfaces need to be recreated (among many other things). Podman needs to update its database to reflect this and perform some per-boot setup to ensure it is ready to launch containers. This is called "refreshing the state."
>
> This is necessary because Podman is not a daemon. Each Podman command is run as a new process and doesn't initially know what state containers are in. You can look in the database for an accurate picture of all your current containers and their states. Refreshing the state after a reboot is essential to making sure this picture continues to be accurate.
>
> To perform the refresh, you need a reliable way of detecting a system reboot, and early in development, the Podman team settled on using a sentinel file on a tmpfs filesystem. A tmpfs is an in-memory filesystem that is not saved after a reboot—every time the system starts, a tmpfs mount will be empty. By checking for the existence of a file on such a filesystem and creating it if it does not exist, Podman can know if it's the first time it has run since the system rebooted.
>
> The problem becomes in determining where you should put your temporary files directory. The obvious answer is /tmp, but this is not guaranteed to be a tmpfs filesystem (though it often is). Instead, by default, Podman will use /run, which is guaranteed to be a tmpfs. Unfortunately, /run is only writable by root, so rootless Podman must look elsewhere. Our team settled on the /run/user/$UID directories, a per-user temporary files directory.
This means that Podman needs some sort of "initialization" when the
system has rebooted. Apparently, due to NixOS' nature, this
"initialization" doesn't occur when Podman is invoked from a systemd
service (something is missing but I can't figure out _what_). So I
rebooted and set up an `inotifywait` job (logged in as `root`--not with
the `sudo` prefix--with the command `inotifywait /run/user/1000/
--recursive --monitor`; `XDG_RUNTIME_DIR` for user `pratham` is
`/run/user/1000`) and ran `podman ps` when I was logged in as user
`pratham`. It generated the following output:
```
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ CREATE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ OPEN pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MODIFY pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_FROM pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_TO pause.pid
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/containers/ CREATE,ISDIR overlay
/run/user/1000/containers/ OPEN,ISDIR overlay
/run/user/1000/containers/ ACCESS,ISDIR overlay
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay
/run/user/1000/containers/overlay/ CREATE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_NOWRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ CREATE metacopy()-false
/run/user/1000/containers/overlay/ OPEN metacopy()-false
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE metacopy()-false
/run/user/1000/containers/overlay/ CREATE native-diff()-true
/run/user/1000/containers/overlay/ OPEN native-diff()-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE native-diff()-true
/run/user/1000/containers/ CREATE,ISDIR overlay-containers
/run/user/1000/containers/ OPEN,ISDIR overlay-containers
/run/user/1000/containers/ ACCESS,ISDIR overlay-containers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-containers
/run/user/1000/containers/ CREATE,ISDIR overlay-locks
/run/user/1000/containers/ OPEN,ISDIR overlay-locks
/run/user/1000/containers/ ACCESS,ISDIR overlay-locks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-locks
/run/user/1000/containers/ CREATE,ISDIR networks
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/containers/ OPEN,ISDIR networks
/run/user/1000/containers/ ACCESS,ISDIR networks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR networks
/run/user/1000/libpod/tmp/ CREATE alive
/run/user/1000/libpod/tmp/ OPEN alive
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE alive
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/systemd/units/ CREATE .#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_TO invocation:dbus.service
/run/user/1000/ CREATE,ISDIR dbus-1
/run/user/1000/ OPEN,ISDIR dbus-1
/run/user/1000/ ACCESS,ISDIR dbus-1
/run/user/1000/ CLOSE_NOWRITE,CLOSE,ISDIR dbus-1
/run/user/1000/dbus-1/ OPEN,ISDIR services
/run/user/1000/dbus-1/services/ OPEN,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ CLOSE_NOWRITE,CLOSE,ISDIR services
/run/user/1000/dbus-1/services/ CLOSE_NOWRITE,CLOSE,ISDIR
/run/user/1000/systemd/ CREATE,ISDIR transient
/run/user/1000/systemd/ OPEN,ISDIR transient
/run/user/1000/systemd/ ACCESS,ISDIR transient
/run/user/1000/systemd/ CLOSE_NOWRITE,CLOSE,ISDIR transient
/run/user/1000/systemd/transient/ CREATE podman-2894.scope
/run/user/1000/systemd/transient/ OPEN podman-2894.scope
/run/user/1000/systemd/transient/ MODIFY podman-2894.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-2894.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-2894.scope
/run/user/1000/containers/ CREATE,ISDIR overlay-layers
/run/user/1000/containers/ OPEN,ISDIR overlay-layers
/run/user/1000/containers/ ACCESS,ISDIR overlay-layers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-layers
/run/user/1000/containers/overlay-layers/ CREATE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/systemd/units/ DELETE invocation:podman-2894.scope
/run/user/1000/systemd/transient/ DELETE podman-2894.scope
/run/user/1000/libpod/tmp/ OPEN pause.pid
/run/user/1000/libpod/tmp/ ACCESS pause.pid
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE pause.pid
/run/user/1000/systemd/transient/ CREATE podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ OPEN podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ MODIFY podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-pause-f50834a6.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-pause-f50834a6.scope
```
Following is the output of `podman info` on my computer:
```
[pratham@sentinel] $ podman info
host:
arch: arm64
buildahVersion: 1.30.0
cgroupControllers:
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: Unknown
path: /run/current-system/sw/bin/conmon
version: 'conmon version 2.1.7, commit: '
cpuUtilization:
idlePercent: 81.03
systemPercent: 3.02
userPercent: 15.94
cpus: 4
databaseBackend: boltdb
distribution:
codename: stoat
distribution: nixos
version: "23.05"
eventLogger: journald
hostname: sentinel
idMappings:
gidmap:
- container_id: 0
host_id: 994
size: 1
- container_id: 1
host_id: 10000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 10000
size: 65536
kernel: 6.1.38
linkmode: dynamic
logDriver: journald
memFree: 3040059392
memTotal: 3944181760
networkBackend: netavark
ociRuntime:
name: crun
package: Unknown
path: /run/current-system/sw/bin/crun
version: |-
crun version 1.8.4
commit: 1.8.4
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities:
CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_
CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: ""
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable:
/nix/store/n8lbxja2hd766pnz89qki90na2b3g815-slirp4netns-1.2.0/bin/slirp4netns
package: Unknown
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.4
swapFree: 2957766656
swapTotal: 2957766656
uptime: 0h 5m 34.00s
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- docker.io
- quay.io
store:
configFile: /home/pratham/.config/containers/storage.conf
containerStore:
number: 2
paused: 0
running: 0
stopped: 2
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/pratham/.local/share/containers/storage
graphRootAllocated: 13539516416
graphRootUsed: 7770832896
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 9
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/pratham/.local/share/containers/storage/volumes
version:
APIVersion: 4.5.0
Built: 315532800
BuiltTime: Tue Jan 1 05:30:00 1980
GitCommit: ""
GoVersion: go1.20.5
Os: linux
OsArch: linux/arm64
Version: 4.5.0
```
So my current question is how do I do this initial setup manually? I
don't want to log into `pratham`'s login shell every time I have to
reboot my machine for the Podman containers to start.
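One thing I still plan to try, in case it is relevant (untested on NixOS so
far): enabling systemd lingering for my user, so that the user's systemd
instance - and therefore /run/user/1000 - is brought up at boot without an
interactive login:
```
# as root; 'pratham' is my user
loginctl enable-linger pratham
```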
[0]: https://nixos.wiki/wiki/Podman#Run_Podman_containers_as_systemd_services
[1]: https://www.redhat.com/sysadmin/sudo-rootless-podman
- Pratham Patel
1 year, 10 months