[Podman] Re: Follow-up: Rootless storage usage
by Daniel Walsh
Are there any config files in ~/.config/containers?
podman system reset
should remove everything, and from then on Podman should use rootless
overlay.
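(For anyone following along, Dan's suggestion can be run and verified in one go - a sketch; note that `podman system reset` is destructive and removes all of the user's images, containers, and volumes:

  ls ~/.config/containers/     # a leftover storage.conf here overrides the system default
  podman system reset          # wipe rootless storage so the driver is re-detected
  podman info --format '{{.Store.GraphDriverName}}'   # expect: overlay
)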
On 1/25/23 09:52, Михаил Иванов wrote:
> Is native overlay available in rootless mode?
> When I run podman as root there's no problem: overlayfs is picked up
> as the default in Debian. VFS is selected as the default only in rootless mode.
> Rgrds,
> On 25.01.2023 14:03, Giuseppe Scrivano wrote:
>> Reinhard Tartler <siretart(a)gmail.com> writes:
>>
>>> On Tue, Jan 24, 2023 at 2:08 PM Daniel Walsh <dwalsh(a)redhat.com> wrote:
>>>
>>> On 1/24/23 03:47, Reinhard Tartler wrote:
>>>
>>> Dan,
>>>
>>> In Debian, I've chosen to just go with the upstream defaults:
>>> https://github.com/containers/storage/blob/8428fad6d0d3c4cded8fd7702af36a...
>>>
>>> This file is installed verbatim to /usr/share/containers/storage.conf.
>>>
>>> Is there a better choice? Does Fedora/Redhat provide a default storage.conf from somewhere else?
>>>
>>> Thanks,
>>> -rt
>>>
>>> That should be fine. Fedora goes with that default as well. Does Debian support rootless overlay by default?
>>>
>>> If not, then it would fail over to VFS if fuse-overlayfs is not installed.
>>>
>>> I'm a bit confused about what you mean with that.
>>>
>>> In Debian releases that ship podman 4.x we have at least Linux kernel 6.0. The fuse-overlayfs package is installed by default, but users may opt out
>>> by configuring apt not to install "Recommends".
>>>
>>> What else is required for rootless overlay?
>>>
>>> Also, if I follow this conversation, then it appears that the default storage.conf requires modification at line 118 (uncommenting the mount_program
>>> option) in order to actually use fuse-overlayfs. I would have expected podman to use fuse-overlayfs if it happens to be installed, and to fall back to a
>>> direct mount if not. I read in Michail's email thread that this appears not to be the case, and that he had to spend a lot of effort figuring out how to
>>> install an appropriate configuration file. Maybe I'm missing something, but I wonder what we can do to improve the user experience?
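(For reference, the stanza being discussed in the default storage.conf looks like the sketch below; the exact line number varies between versions. Leaving mount_program commented out lets Podman pick native overlay where the kernel supports it; uncommenting it forces fuse-overlayfs.)

  [storage.options.overlay]
  # mount_program = "/usr/bin/fuse-overlayfs"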
>> what issue do you see if you use native overlay?
>>
>> Podman prefers native overlay if it is available, since it is faster.
>> If not, it tries fuse-overlayfs, and if that is not available, it falls back
>> to vfs.
>>
>> Could you try from a fresh storage though? If fuse-overlayfs was
>> already used, then Podman will continue using it even if native overlay
>> is available, since the storage metadata is slightly different.
>>
>> Thanks,
>> Giuseppe
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: Why do use podman machine on Mac?
by Jarkko Laiho
Macs are BSD-based, not Linux; they do not run the Linux kernel and therefore cannot run Podman (or Docker) natively.
- JK
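(For completeness: on macOS the usual workflow goes through a managed Linux VM, a minimal sketch of which follows; the podman CLI then talks to the Podman service inside that VM.

  podman machine init          # create the Linux VM (typically Fedora CoreOS based)
  podman machine start
  podman run --rm alpine echo hello   # runs inside the VM, transparently
)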
> On 7. Sep 2023, at 19.19, Mehdi Haghgoo via Podman <podman(a)lists.podman.io> wrote:
>
> The container experience with podman machine on Windows and Mac is not optimal because the containers are slow.
> Mac is a Linux-based OS. So, why can't we create native containers on it as we do on Linux?
>
> That applies to WSL too. It's kind of Linux. Why can't we create native Linux containers on it without resorting to Podman machine and podman clients?
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 28/03/2023 21:00, Chris Evich wrote:
> On 3/28/23 09:06, lejeczek via Podman wrote:
>> I think it might have something to do with the fact that
>> I changed UID for the user
>
> The files under /run/user/$UID are typically managed by
> systemd-logind. I've noticed sometimes there's a delay
> between logging out and the files being cleaned up. Try
> logging out for a minute or three and see if that fixes it.
>
> Also, if you have lingering enabled for the user, it may
> take a restart of particular the user.slice.
>
> Lastly, I'm not certain, but you (as root) may be able to
> `systemctl reload systemd-logind`. That's a total guess
> though.
>
> ---
thanks, that was the delay, yes - a bit annoying if 'usermod'
were in mass/frequent use.
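(For reference, the cleanup steps Chris describes map roughly onto the commands below - a sketch; 'someuser' and UID 1000 are placeholders, and the reload is Chris's guess above:

  sudo loginctl terminate-user someuser   # end all of the user's sessions
  sudo systemctl stop user-1000.slice     # or stop the user's slice directly
  sudo systemctl reload systemd-logind
)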
[Podman] Re: fs.mqueue.msg_max rootless problem
by Lewis Gaul
Hi,
I think this is the same thing I raised in
https://github.com/containers/podman/discussions/19737?
This seems to be a kernel limitation - I'm not sure where the mqueue limits
come from when creating a new IPC namespace, but it doesn't inherit the
limits from the parent namespace and the root user within the user
namespace does not have permissions to modify the limits. This was
supposedly fixed in a recent kernel version although I haven't tested it.
The workaround I'm currently using (requiring sudo permissions) is along
the lines of:
podman create --ipc private --name ctr_foo ...        # create, but don't start
podman init ctr_foo                                   # set up namespaces without running the workload
ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo) # PID of the container's init
sudo nsenter --target $ctr_pid --user --ipc sysctl fs.mqueue.msg_max=64  # raise the limit from outside
podman start ctr_foo
Obviously this isn't ideal, and I'd be open to alternatives...
Regards,
Lewis
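(A quick sanity check that the workaround above took effect, assuming the container from the sketch:

  podman exec ctr_foo cat /proc/sys/fs/mqueue/msg_max   # expect: 64
)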
On Mon, 27 Nov 2023 at 12:23, Daniel Walsh <dwalsh(a)redhat.com> wrote:
> On 11/27/23 02:04, Михаил Иванов wrote:
>
> Hallo,
>
> For me rootful works:
>
> island:container [master]> cat /proc/sys/fs/mqueue/msg_max
> 256
>
> $ podman run alpine ls -ld /proc/sys/fs/mqueue/msg_max
> -rw-r--r-- 1 nobody nobody 0 Nov 27 12:10
> /proc/sys/fs/mqueue/msg_max
>
> Since it is owned by real root, a rootless user cannot write to it. I
> guess we could argue this is a bug in the kernel: mqueue/msg_max should be
> owned by root of the user namespace as opposed to real root.
>
> ## Rootful:
> island:container [master]> sudo podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
> 64
>
> ## Rootless:
> island:container [master]> podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
> Error: crun: open `/proc/sys/fs/mqueue/msg_max`: Permission denied: OCI permission denied
>
> ## What rootless gets by default (changed as compared to host setting!):
> island:container [master]> podman run --rm centos cat /proc/sys/fs/mqueue/msg_max
> 10
>
> Rgrds,
>
> On 25.11.2023 20:17, Daniel Walsh wrote:
>
> On 11/25/23 10:44, Михаил Иванов wrote:
>
> Hallo,
>
> Is it possible to get podman to propagate current host fs.mqueue.msg_max
> value to rootless container? I can do that if I specify --ipc host when
> running the container, but this also exposes other ipc stuff from host
> to container, including shared memory, which I do not want.
>
> If I specify --sysctl fs.mqueue.msg_size=64 to podman it gives me
> "OCI permission denied" error, even when my host setting (256) is greater
> than requested value.
>
> Thanks,
> --
> Michael Ivanov
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> The way you attempted is correct. Might not be allowed for rootless
> containers.
>
> I attempted this in a rootful container and it blows up for me.
>
>
> podman run --sysctl fs.mqueue.msg_size=64 alpine echo hi
> Error: crun: open `/proc/sys/fs/mqueue/msg_size`: No such file or
> directory: OCI runtime attempted to invoke a command that was not found
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
[Podman] Re: fcontext for rootfull volumes ?
by lejeczek
On 14/06/2023 15:16, lejeczek via Podman wrote:
> Hi guys.
>
> I map /root very often - I'd imagine many do - and I do
> that with Z
> What I get is quite puzzling to me, say host has it:
>
> system_u:object_r:container_file_t:s0 bin
> system_u:object_r:container_file_t:s0:c526,c622 cacert.p12
> system_u:object_r:container_file_t:s0:c526,c622 kracert.p12
> system_u:object_r:container_file_t:s0:c74,c78 pki
>
> in container:
>
> -> $ ls -Z1 bin pki
> bin:
> system_u:object_r:container_file_t:s0 conf
> system_u:object_r:container_file_t:s0 container-config
> ls: cannot open directory 'pki': Permission denied
>
> 'root' existed prior to container creation and 'pki' was
> added later, & outside of container.
> fcontext is not enough? SELinux says:
>
> allow container_init_t container_file_t:dir read;
>
> label=disable seems to be the way around it, but is that
> the right way?
ah, fcontext is good enough - another tool/daemon kept
changing labels.
[Podman] Reliable service starts
by Mark Raynsford
Hello!
I'm using podman on Fedora CoreOS. The standard setup for a
podman-based service tends to look like this (according to the
documentation):
---
[Unit]
Description=looseleaf
After=network-online.target
Wants=network-online.target
[Service]
Type=exec
TimeoutStartSec=60
User=_looseleaf
Group=_looseleaf
Restart=on-failure
RestartSec=10s
Environment="_JAVA_OPTIONS=-XX:+UseSerialGC -Xmx64m -Xms64m"
ExecStartPre=-/bin/podman kill looseleaf
ExecStartPre=-/bin/podman rm looseleaf
ExecStartPre=/bin/podman pull docker.io/io7m/looseleaf:0.0.4
ExecStart=/bin/podman run \
--name looseleaf \
--volume /var/storage/looseleaf/etc:/looseleaf/etc:Z,ro \
--volume /var/storage/looseleaf/var:/looseleaf/var:Z,rw \
--publish 20000:20000/tcp \
--memory=128m \
--memory-reservation=80m \
docker.io/io7m/looseleaf:{{looseleaf_version}} \
/looseleaf/bin/looseleaf server --file /looseleaf/etc/config.json
ExecStop=/bin/podman stop looseleaf
[Install]
WantedBy=multi-user.target
---
The important line is this one:
/bin/podman pull docker.io/io7m/looseleaf:0.0.4
Unfortunately, this line can fail. That in itself isn't a problem; the
service will be restarted and it'll run again. The real problem is that
it can fail in ways that break all subsequent executions.
On new Fedora CoreOS deployments, there's often a lot of network
traffic happening on first boot as the rest of the system updates
itself, and it's not unusual for `podman pull` to fail and leave the
services permanently broken (unless someone goes in and fixes them).
This is what will typically happen:
Feb 02 20:31:05 control1.io7m.com podman[1934]: Trying to pull docker.io/io7m/looseleaf:0.0.4...
Feb 02 20:31:48 control1.io7m.com podman[1934]: time="2023-02-02T20:31:48Z" level=warning msg="Failed, retrying in 1s ... (1/3). Error: initializing source docker://io7m/looseleaf:0.0.4: pinging container registry registry-1.docker.io: Get \"https://regist>
Feb 02 20:31:50 control1.io7m.com podman[1934]: Getting image source signatures
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:9794579c486abc6811cea048073584c869db02a4d9b615eeaa1d29e9c75738b9
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:846e3b32ee5a149e3ccb99051cdb52e96e11488293cdf72ee88168c88dd335c7
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:7f516ed68e97f9655d26ae3312c2aeede3dfda2dd3d19d2f9c9c118027543e87
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:e88daf71a034bed777eda8657762faad07639a9e27c7afb719b9a117946d1b8a
Feb 02 20:32:03 control1.io7m.com systemd[1]: looseleaf.service: start-pre operation timed out. Terminating.
It'll usually happen again on the next service restart. Then, this will
tend to happen:
Feb 02 20:34:13 control1.io7m.com podman[2745]: time="2023-02-02T20:34:13Z" level=error msg="Image docker.io/io7m/looseleaf:0.0.4 exists in local storage but may be corrupted (remove the image to resolve the issue): size for layer \"13cfed814d5b083572142bc>
Feb 02 20:34:13 control1.io7m.com podman[2745]: Trying to pull docker.io/io7m/looseleaf:0.0.4...
Feb 02 20:34:14 control1.io7m.com podman[2745]: Getting image source signatures
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:9794579c486abc6811cea048073584c869db02a4d9b615eeaa1d29e9c75738b9
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:846e3b32ee5a149e3ccb99051cdb52e96e11488293cdf72ee88168c88dd335c7
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:7f516ed68e97f9655d26ae3312c2aeede3dfda2dd3d19d2f9c9c118027543e87
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:e88daf71a034bed777eda8657762faad07639a9e27c7afb719b9a117946d1b8a
Feb 02 20:34:18 control1.io7m.com podman[2745]: Copying config sha256:cce9701f3b6e34e3fc26332da58edcba85bbf4f625bdb5f508805d2fa5e62e3e
Feb 02 20:34:18 control1.io7m.com podman[2745]: Writing manifest to image destination
Feb 02 20:34:18 control1.io7m.com podman[2745]: Storing signatures
Feb 02 20:34:18 control1.io7m.com podman[2745]: Error: checking platform of image cce9701f3b6e34e3fc26332da58edcba85bbf4f625bdb5f508805d2fa5e62e3e: inspecting image: size for layer "13cfed814d5b083572142bc068ae7f890f323258135f0cffe87b04cb62c3742e" is unkno>
Feb 02 20:34:18 control1.io7m.com systemd[1]: looseleaf.service: Control process exited, code=exited, status=125/n/a
At this point, there's really nothing that can be done aside from
having a human log in and running something like "podman system reset".
These systems are supposed to be as immutable as possible, and
deployments are supposed to be automated. As it stands currently, I
can't deploy a machine without having it immediately break and require
manual intervention.
Is there some better way to handle this?
--
Mark Raynsford | https://www.io7m.com
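(One low-tech mitigation, sketched below rather than offered as a definitive fix: drop the explicit pull, or mark it ignorable with systemd's '-' prefix so a transient network failure can't poison the unit; `podman run` pulls the image itself when it is missing from local storage. See also the reply later in this digest.

  ExecStartPre=-/bin/podman pull docker.io/io7m/looseleaf:0.0.4
)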
[Podman] Re: "floating" IP with podman
by lejeczek
On 12/06/2023 17:35, Chris Evich wrote:
>
> IIRC this is called an 'alias'. I don't have a direct
> answer to your question, but I can anticipate what the
> experts will want to know:
>
> Is this a root or rootless container?
>
> Chris Evich (he/him), RHCA III
> Senior Quality Assurance Engineer
> If it ain't broke, your hammer isn't wide 'nough.
>
> On 6/12/23 05:38, lejeczek via Podman wrote:
>> Hi guys.
>>
>> Is it possible to "attach" an IP to a container with (or
>> perhaps outside of) podman but not create a separate/new
>> iface for that?
>> As if you added a "subsequent" IP to already
>> ip-configured iface.
>>
>> many thanks, L.
>>
yes, rootful.
On this/a similar topic - does 'macvlan' offer settable
metrics (it does not "inherit" the host iface's metric, as
I expected it would) or perhaps a "no-gateway" setup?
I'm on CentOS 8 with podman 4.4.1.
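(A sketch of the 'alias' approach Chris mentions, for a rootful container; the addresses and interface name are placeholders. Add a secondary IP to the existing interface, then publish the container's ports on that address only:

  ip addr add 192.168.1.50/24 dev eth0          # secondary ("alias") address, no new iface
  podman run -d -p 192.168.1.50:8080:80 nginx   # port binds to the alias address only
)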
[Podman] Re: Reliable service starts
by Mark Raynsford
On 2023-02-03T09:19:44 +0100
Valentin Rothberg <vrothberg(a)redhat.com> wrote:
> Hi Mark,
>
> Thanks for reaching out.
>
> I suggest using `podman generate systemd` to generate a systemd unit.
> There's also a new way of running Podman inside of systemd called Quadlet
> that ships with the just released Podman v4.4. A blog about that topic is
> in the pipeline.
>
> Given the complexity of running Podman in systemd, `podman generate
> systemd` and Quadlet are the only supported ways.
>
> In your case, I suggest removing `podman pull` from the service. In
> contrast to `podman pull`, `podman run` won't redundantly pull the image if
> it's already in the local storage. That will relax the network bottleneck.
Thanks, I'll look into this. The systemd unit shown in my example is
actually already generated from a template (which is then included as
part of the CoreOS ignition file). I assume I won't have to run
"podman generate systemd" on the target machine? Can I run that on my
local development machine and then upload the results to the machine
that will actually run the service?
--
Mark Raynsford | https://www.io7m.com
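(For reference, the Quadlet route Valentin mentions would turn the unit above into a .container file along these lines - a sketch for Podman >= 4.4, installed as e.g. /etc/containers/systemd/looseleaf.container, from which systemd generates looseleaf.service at boot; key names per quadlet's documentation:

  [Unit]
  Description=looseleaf
  After=network-online.target
  Wants=network-online.target

  [Container]
  Image=docker.io/io7m/looseleaf:0.0.4
  Exec=/looseleaf/bin/looseleaf server --file /looseleaf/etc/config.json
  Volume=/var/storage/looseleaf/etc:/looseleaf/etc:Z,ro
  Volume=/var/storage/looseleaf/var:/looseleaf/var:Z,rw
  PublishPort=20000:20000/tcp

  [Service]
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target
)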
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 28/03/2023 21:00, Chris Evich wrote:
> On 3/28/23 09:06, lejeczek via Podman wrote:
>> I think it might have something to do with the fact that
>> I changed UID for the user
>
> The files under /run/user/$UID are typically managed by
> systemd-logind. I've noticed sometimes there's a delay
> between logging out and the files being cleaned up. Try
> logging out for a minute or three and see if that fixes it.
>
> Also, if you have lingering enabled for the user, it may
> take a restart of particular the user.slice.
>
> Lastly, I'm not certain, but you (as root) may be able to
> `systemctl reload systemd-logind`. That's a total guess
> though.
>
>
Those parts seem very clunky - at least on up-to-date CentOS
9 Stream. I removed a user and re-created that user in
IdM, and even after a full & healthy OS reboot,
containers/podman insists:
-> $ podman container ls -a
WARN[0000] RunRoot is pointing to a path
(/run/user/2001/containers) which is not writable. Most
likely podman will fail.
Error: default OCI runtime "crun" not found: invalid argument
-> $ id
uid=1107400004(podmania) gid=1107400004(podmania)
groups=1107400004(podmania)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Where/how does it persist that old, non-existent
UID - would anybody know?
many thanks, L.
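(A guess at the mechanism, hedged accordingly: rootless Podman derives its RunRoot from XDG_RUNTIME_DIR, so a stale value carried over from a session under the old UID would produce exactly this warning. Something along these lines may help diagnose it; 2001 is the old UID from the warning above:

  echo $XDG_RUNTIME_DIR                    # should be /run/user/<current UID>
  loginctl show-user podmania -p UID -p Linger
  sudo loginctl terminate-user podmania    # then log in again for a fresh session
)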
[Podman] quay.io podman/buildah/skopeo image safety
by Chris Evich
All,
On August 23rd it was discovered that the credentials for several robot
service accounts with write access to the container images could have
leaked. Upon discovery, the credentials were invalidated. The earliest
possible leak opportunity was around March 10th, 2022.
While the investigation is ongoing, initial inspection of the images
seems to indicate it is unlikely any credentials had actually been
discovered and/or used to manipulate images. Nevertheless, out of an
abundance of caution, all possibly-affected images will be disabled.
quay.io/containers/podman : tags v3 - v4
quay.io/containers/buildah : tags v1.23.1 - v1.31.0
quay.io/containers/skopeo : tags v1.5.2 - v1.13.1
quay.io/podman/stable : tags v1.6 - v4.6.0
quay.io/podman/hello:latest SHA256 afda668e706a (<= Aug 2, 2023)
quay.io/buildah/stable : tags v1.23.3 - v1.31.0
quay.io/skopeo/stable : tags v1.3.0 - v1.13.1
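(For anyone auditing local systems, a quick sketch for comparing what is in local storage against the affected tag ranges above:

  podman images --digests quay.io/podman/stable          # local tags and their digests
  skopeo inspect docker://quay.io/podman/stable:latest   # remote manifest, once images are restored
)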
We realize this issue has the potential to impact not only direct, but
also indirect use, such as base images. The safety and integrity of
these images has taken, and must continue to take, priority. At this time, all images have
been disabled. We will restore originals and/or rebuild fresh copies
based on further safety analysis.
We expect analysis to be complete and/or known-safe images restored
before Sept. 8th. Though please keep in mind that the research is ongoing,
and the situation remains somewhat fluid. When the examination work is
complete, or if any manipulation is discovered, we will issue further
updates.
Thank you in advance for your patience and understanding.