[Podman] Re: RunRoot & mistaken IDs
by Daniel Walsh
On 1/29/24 08:52, lejeczek via Podman wrote:
>
>
> On 29/01/2024 12:04, Daniel Walsh wrote:
>> On 1/29/24 02:35, lejeczek via Podman wrote:
>>>
>>>
>>> On 28/03/2023 21:00, Chris Evich wrote:
>>>> On 3/28/23 09:06, lejeczek via Podman wrote:
>>>>> I think it might have something to do with the fact that I changed
>>>>> UID for the user
>>>>
>>>> The files under /run/user/$UID are typically managed by
>>>> systemd-logind. I've noticed sometimes there's a delay between
>>>> logging out and the files being cleaned up. Try logging out for a
>>>> minute or three and see if that fixes it.
>>>>
>>>> Also, if you have lingering enabled for the user, it may take a
>>>> restart of the user.slice in particular.
>>>>
>>>> Lastly, I'm not certain, but you (as root) may be able to
>>>> `systemctl reload systemd-logind`. That's a total guess though.
>>>>
>>>>
>>> Those parts seem very clunky - at least in up-to-date CentOS 9
>>> Stream - I have removed a user and re-created that user in IdM and...
>>> even after a full & healthy OS reboot, containers/podman insist:
>>>
>>> -> $ podman container ls -a
>>> WARN[0000] RunRoot is pointing to a path (/run/user/2001/containers)
>>> which is not writable. Most likely podman will fail.
>>> Error: default OCI runtime "crun" not found: invalid argument
>>>
>>> -> $ id
>>> uid=1107400004(podmania) gid=1107400004(podmania)
>>> groups=1107400004(podmania)
>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>>>
>>> Where/what is persisting that old, non-existent UID -
>>> would anybody know?
>>>
>>> many thanks, L.
>>> _______________________________________________
>>> Podman mailing list -- podman(a)lists.podman.io
>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>> Do you have XDG_RUNTIME_DIR pointing at it?
>>
> Nope, I don't think so.
>
> -> $ echo $XDG_RUNTIME_DIR
> /run/user/1107400004
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
OK, you probably need to do a `podman system reset`, since you changed the
ownership of the home directory and the UID of the process running Podman.
Podman recorded the previous settings in its database.
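For anyone hitting the same thing, a minimal recovery sketch (assuming a rootless setup where the user's UID changed; paths are the usual defaults and may differ on your system):

-> $ id -u                      # confirm the new UID
-> $ echo $XDG_RUNTIME_DIR      # should be /run/user/<new UID>
-> $ podman system reset        # wipes recorded state: all containers, images, settings

If the old /run/user/<old UID> directory lingers, a reboot or, as root, `loginctl terminate-user <user>` normally clears it.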
1 year, 4 months
[Podman] Re: Should I run podman-based systemd services as root?
by Daniel Walsh
On 7/28/23 09:27, Mark Raynsford via Podman wrote:
> On 2023-07-28T09:13:14 -0400
> Daniel Walsh <dwalsh(a)redhat.com> wrote:
>> The issue with running all of your containers as a non-root user is that, if
>> every container runs as the same non-root user, then the containers would be
>> able to attack that user account and every other container if they
>> were to escape confinement (SELinux).
> Hello!
>
> I read back what I wrote and realized it was a bit ambiguous. To be
> clear: I'm running each container as a separate non-root user; one user
> ID per container (not one user ID shared between all containers).
>
>> Running containers with the least privs possible is always the goal,
>> but it really is up to the application.
> This is where I'm still not entirely clear: Is running a container as
> root with SELinux and with flags such as --unprivileged=false really
> more powerful than as a regular user with the same kinds of flags?
No, a regular user is fine, and since you are doing each podman run with
a different user, you are doing it the most securely, in my opinion,
though with a great deal of extra work.
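For reference, a rough sketch of that per-container-user setup (the account name and image are made up; lingering keeps the user's containers running after logout, and you may need a real login session, e.g. machinectl shell or ssh, rather than plain sudo for the user-manager/cgroup pieces to behave):

# as root: one dedicated account per service
useradd --create-home svc-web
loginctl enable-linger svc-web

# run the container rootless as that user
sudo -iu svc-web podman run -d --name web -p 8080:80 docker.io/library/nginx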
> I haven't heard of anyone escaping SELinux confinement, although I'm
> guessing it has probably been done. I'd assume a kernel-level exploit
> would probably be required and, in that case, running under a different
> UID wouldn't help matters.
Well, none that I am aware of in the last few years, but I believe in
Defense in Depth: each security measure has a chance of failure, but the
more you combine, the less likely it is that the entire system is vulnerable.
>
> I've tried setting up machines with all of the containers running as
> root, and it's certainly a lot less of an administrative burden. I run
> my own registry so I'm not _too_ concerned about hostile images making
> it into production containers.
>
> I feel like there's a huge hole in the documentation around this
> subject, and it's really weird that no one appears to be talking about
> it. Fedora Server runs all containers as root if configured via
> Cockpit, so presumably someone at least considered the issue?
Well I do cover a lot of security concerns in my book Podman in Action,
Chapters 10 and 11.
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 10 months
[Podman] Re: Speeding up podman by using cache
by Daniel Walsh
On 10/22/24 11:04, Ganeshar, Puvi wrote:
>
> Hello Podman team,
>
> I am about to explore this option, so I just wanted to check with you all
> first, as I might be wasting my time.
>
> I am in Platform Engineering team at DirecTV, and we run Go and Java
> pipelines on Jenkins using Amazon EKS as the workers. So, the process
> is that when a Jenkins build runs, it asks the EKS for a worker
> (Kubernetes pod) and the cluster would spawn one and the new pod would
> communicate back to the Jenkins controller. We use the Jenkins
> Kubernetes pod template to configure the communication. We are
> currently running the latest LTS of podman, v5.2.2, however still
> using cgroups-v1 for now, planning to migrate early 2025 by upgrading
> the cluster to use Amazon Linux 2023 which uses cgroups-v2 as
> default. Here’s the podman configuration details that we use:
>
> host:
>
> arch: arm64
>
> buildahVersion: 1.37.2
>
> cgroupControllers:
>
> - cpuset
>
> - cpu
>
> - cpuacct
>
> - blkio
>
> - memory
>
> - devices
>
> - freezer
>
> - net_cls
>
> - perf_event
>
> - net_prio
>
> - hugetlb
>
> - pids
>
> cgroupManager: cgroupfs
>
> cgroupVersion: v1
>
> conmon:
>
> package: conmon-2.1.12-1.el9.aarch64
>
> path: /usr/bin/conmon
>
> version: 'conmon version 2.1.12, commit:
> f174c390e4760883511ab6b5c146dcb244aeb647'
>
> cpuUtilization:
>
> idlePercent: 99.22
>
> systemPercent: 0.37
>
> userPercent: 0.41
>
> cpus: 16
>
> databaseBackend: sqlite
>
> distribution:
>
> distribution: centos
>
> version: "9"
>
> eventLogger: file
>
> freeLocks: 2048
>
> hostname: podmanv5-arm
>
> idMappings:
>
> gidmap: null
>
> uidmap: null
>
> kernel: 5.10.225-213.878.amzn2.aarch64
>
> linkmode: dynamic
>
> logDriver: k8s-file
>
> memFree: 8531066880
>
> memTotal: 33023348736
>
> networkBackend: netavark
>
> networkBackendInfo:
>
> backend: netavark
>
> dns:
>
> package: aardvark-dns-1.12.1-1.el9.aarch64
>
> path: /usr/libexec/podman/aardvark-dns
>
> version: aardvark-dns 1.12.1
>
> package: netavark-1.12.2-1.el9.aarch64
>
> path: /usr/libexec/podman/netavark
>
> version: netavark 1.12.2
>
> ociRuntime:
>
> name: crun
>
> package: crun-1.16.1-1.el9.aarch64
>
> path: /usr/bin/crun
>
> version: |-
>
> crun version 1.16.1
>
> commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
>
> rundir: /run/crun
>
> spec: 1.0.0
>
> +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
>
> os: linux
>
> pasta:
>
> executable: /usr/bin/pasta
>
> package: passt-0^20240806.gee36266-2.el9.aarch64
>
> version: |
>
> pasta 0^20240806.gee36266-2.el9.aarch64-pasta
>
> Copyright Red Hat
>
> GNU General Public License, version 2 or later
>
> https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
>
> This is free software: you are free to change and redistribute it.
>
> There is NO WARRANTY, to the extent permitted by law.
>
> remoteSocket:
>
> exists: false
>
> path: /run/podman/podman.sock
>
> rootlessNetworkCmd: pasta
>
> security:
>
> apparmorEnabled: false
>
> capabilities:
> CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
>
> rootless: false
>
> seccompEnabled: true
>
> seccompProfilePath: /usr/share/containers/seccomp.json
>
> selinuxEnabled: false
>
> serviceIsRemote: false
>
> slirp4netns:
>
> executable: /usr/bin/slirp4netns
>
> package: slirp4netns-1.3.1-1.el9.aarch64
>
> version: |-
>
> slirp4netns version 1.3.1
>
> commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
>
> libslirp: 4.4.0
>
> SLIRP_CONFIG_VERSION_MAX: 3
>
> libseccomp: 2.5.2
>
> swapFree: 0
>
> swapTotal: 0
>
> uptime: 144h 6m 15.00s (Approximately 6.00 days)
>
> variant: v8
>
> plugins:
>
> authorization: null
>
> log:
>
> - k8s-file
>
> - none
>
> - passthrough
>
> - journald
>
> network:
>
> - bridge
>
> - macvlan
>
> - ipvlan
>
> volume:
>
> - local
>
> registries:
>
> search:
>
> - registry.access.redhat.com
>
> - registry.redhat.io
>
> - docker.io
>
> store:
>
> configFile: /etc/containers/storage.conf
>
> containerStore:
>
> number: 0
>
> paused: 0
>
> running: 0
>
> stopped: 0
>
> graphDriverName: overlay
>
> graphOptions:
>
> overlay.mountopt: nodev,metacopy=on
>
> graphRoot: /var/lib/containers/storage
>
> graphRootAllocated: 107352141824
>
> graphRootUsed: 23986397184
>
> graphStatus:
>
> Backing Filesystem: xfs
>
> Native Overlay Diff: "false"
>
> Supports d_type: "true"
>
> Supports shifting: "true"
>
> Supports volatile: "true"
>
> Using metacopy: "false"
>
> imageCopyTmpDir: /var/tmp
>
> imageStore:
>
> number: 1
>
> runRoot: /run/containers/storage
>
> transientStore: false
>
> volumePath: /var/lib/containers/storage/volumes
>
> version:
>
> APIVersion: 5.2.2
>
> Built: 1724331496
>
> BuiltTime: Thu Aug 22 12:58:16 2024
>
> GitCommit: ""
>
> GoVersion: go1.22.5 (Red Hat 1.22.5-2.el9)
>
> Os: linux
>
> OsArch: linux/arm64
>
> Version: 5.2.2
>
> We migrated to podman when Kubernetes deprecated docker and have been
> using podman for the last two years or so. It's working well; however,
> since we run over 500 builds a day, I am trying to explore whether I
> can speed up the podman build process by using image caching. I
> wanted to see whether using an NFS file system (Amazon FSx) as the storage
> for podman (overlay-fs) would improve podman performance, with builds
> completing much faster because the images are already downloaded on
> the NFS. Currently, podman in each pod on the EKS cluster downloads
> all the required images every time, so it is not taking advantage of
> the cached images.
>
> These are my concerns:
>
> 1. Any race conditions - podman processes colliding with each other
> during reads and writes.
> 2. Performance of I/O operations as NFS communication will be over
> the network.
>
> Have any of you tried this method before? If so, can you share any
> pitfalls that you’ve faced?
>
> Any comments / advice would be beneficial, as I need to weigh up pros
> and cons before spending time on this. Also, if it causes an outage due
> to storage failures it would block all our developers, so I will have
> to design this in a way where we can recover quickly.
>
> Thanks very much in advance and have a great day.
>
> Puvi Ganeshar | @pg925u
> Principal, Platform Engineer
> CICD - Pipeline Express | Toronto
>
>
> _______________________________________________
> Podman mailing list --podman(a)lists.podman.io
> To unsubscribe send an email topodman-leave(a)lists.podman.io
You can set up an additional store which is preloaded with images on an
NFS share, which should work fine.
Whether this improves performance or not is probably something you need
to discover.
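For the NFS idea, a sketch of what the additional store could look like on the build nodes (the mount point is just an example; additional image stores are read-only, so the share is pre-populated once from a single writer):

# /etc/containers/storage.conf
[storage]
driver = "overlay"

[storage.options]
additionalimagestores = [ "/mnt/fsx/podman-images" ]

# populate the share from one machine, e.g.:
#   podman --root /mnt/fsx/podman-images pull registry.access.redhat.com/ubi9/ubi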
If you are dealing with YUM and DNF, you might also want to play with
sharing the rpm database with the build system.
https://www.redhat.com/en/blog/speeding-container-buildah
https://www.youtube.com/watch?v=qsh7NL8H4GQ
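For the rpm/dnf side, a hedged sketch of the general idea from those links, using an overlay (`:O`) build volume so RUN steps can reuse a host-side package cache without modifying it (the host path is an example):

mkdir -p /var/cache/build-dnf
podman build -v /var/cache/build-dnf:/var/cache/dnf:O -t myimage .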
7 months, 1 week
[Podman] Re: Running x86_64-based containers on Mac computers with an Apple silicon (M1) processor
by Matthias Apitz
On Tuesday, January 16, 2024 at 07:25:33 a.m. -0500, Daniel Walsh wrote:
> On 1/16/24 06:49, Matthias Apitz wrote:
> > Hello,
> >
> > For the purpose of the Subject: there is a tutorial at IBM.com:
> >
> > https://developer.ibm.com/tutorials/running-x86-64-containers-mac-silicon...
> >
> > I've followed this tutorial, with a small exception, see below.
> >
> > The first step is to 'init' the machine with:
> >
> > $ podman machine init --image-path ~/yourFedoraImageFolder/fedora-coreos-39.20231204.3.3-qemu.x86_64.qcow2.xz intel
> > Extracting compressed file: intel_fedora-coreos-39.20231204.3.3-qemu.x86_64.qco…
> > Image resized.
> > Machine init complete
> > To start your machine run:
> >
> > podman machine start intel
> >
> > Which worked fine. Now, before starting the machine, the tutorial asks to remove the following
> > lines from the config file ~/.config/containers/podman/machine/qemu/intel.json:
> >
> > "-machine",
> > "q35,accel=hvf:tcg",
> > "-cpu",
> > "host",
> >
> > These line are not there 1:1 and I removed these lines:
> >
> > $ diff .config/containers/podman/machine/qemu/intel.json .config/containers/podman/machine/qemu/intel.json.saved2
> > 6a7,12
> > > "-accel",
> > > "hvf",
> > > "-accel",
> > > "tcg",
> > > "-cpu",
> > > "host",
> > which seems to me correct, removing the HVF QEMU accelerator, which is only
> > supported on x86 chips.
> >
> > Starting the machine now with:
> >
> > $ podman machine start intel
> > Di 16 Jan 2024 12:23:33 CET
> > Starting machine "intel"
> > Waiting for VM ...
> >
> > hangs for ever and the QEMU process has 100% CPU utilization:
> >
> > top
> > PID COMMAND %CPU TIME #TH #WQ #PORT MEM PURG CMPRS PGRP
> > 10802 qemu-system- 99.6 23:09.43 8/1 0 27 163M 0B 0B 10800
> >
> > Any ideas? Thanks
> >
> > matthias
> >
> >
> Sergio any ideas?
Only for the record: I used podman and qemu as installed with
$ brew install podman
$ podman --version
podman version 4.8.3
$ /opt/homebrew/bin/qemu-system-aarch64 -version
QEMU emulator version 8.2.0
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers
Later I tried the official macOS version of podman and qemu, which was not
even able to start qemu.
--
Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
I am not at war with Russia.
Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.
1 year, 4 months
[Podman] Re: scp'ing a podman image to another host
by Jean-Baptiste Ciccolella
Is /tmp big enough to receive the image?
JB
-
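In case it helps, a quick sketch for checking that and for pointing the copy's scratch space somewhere bigger (the paths are examples; the setting shows up as imageCopyTmpDir in `podman info`):

# on the receiving host: room for the ~6 GB temporary tar?
$ df -h /tmp

# /etc/containers/containers.conf (or ~/.config/containers/containers.conf)
[engine]
image_copy_tmp_dir = "/data/tmp"

# rerun with debug logging to get more detail on where it fails
$ podman --log-level=debug image scp c87c80c0911a srap57::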
On Thu, Jan 11, 2024 at 11:48 AM Matthias Apitz <guru(a)unixarea.de> wrote:
> On Wednesday, January 10, 2024 at 10:09:27 -0500, Charlie Doern
> wrote:
>
> > You should also usually get some sort of:
> >
> > Storing signatures
> > Loaded image(s):
> >
> > after
> >
> > Writing manifest to image destination
> >
> >
> > if this doesn't show up, then the image doesn't actually get stored. I
> > remember there being some compatibility issues over certain
> > types/sizes of images w/ scp. Can you throw a `-v` in there to see if
> > it tells you anything else?
>
> I did tests in two directions:
>
> 1)
> On the source host I run:
>
> $ podman run -it docker.io/library/busybox
>
> which gave me a local additional image and I transfered this to the
> target host:
>
> $ podman images
> REPOSITORY TAG IMAGE ID CREATED
> SIZE
> localhost/suse latest c87c80c0911a 46 hours ago
> 6.31 GB
> registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago
> 123 MB
> docker.io/library/busybox latest 9211bbaa0dbd 3 weeks ago
> 4.5 MB
>
> $ podman image scp 9211bbaa0dbd srap57::
> Copying blob 82ae998286b2 done
> Copying config 9211bbaa0d done
> Writing manifest to image destination
> Loaded image:
> sha256:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f
>
> i.e. this was transfered fine and shows up on the target host as:
>
> srap57dxr1:~> podman images
> REPOSITORY TAG IMAGE ID CREATED
> SIZE
> <none> <none> b677170ada05 3 minutes ago
> 1.89 GB
> registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago
> 123 MB
> <none> <none> 9211bbaa0dbd 3 weeks ago
> 4.49 MB
>
> apitzm@srap57dxr1:~> podman run -t 9211bbaa0dbd
> / #
>
> 2)
>
> I copied over the files to build the image to the target host:
>
> apitzm@srrp02dxr1:~$ scp -rp suse srap57dxr1:.
> Dockerfile 100% 5051 1.2MB/s 00:00
> initSunRise.sh 100% 953 314.2KB/s 00:00
> postgresql.conf 100% 29KB 5.0MB/s 00:00
> testdb.dmp.gz 100% 388MB 110.0MB/s 00:03
> keyFile 100% 893 63.2KB/s 00:00
>
> and built the image there with:
>
> apitzm@srap57dxr1:~> podman build -t suse suse
> ...
> which worked also fine:
> ...
> STEP 58/59: ENTRYPOINT /usr/local/bin/start.sh
> --> 86dab7ac3e4d
> STEP 59/59: STOPSIGNAL SIGQUIT
> COMMIT suse
> --> a1ffb1f71791
> Successfully tagged localhost/suse:latest
> a1ffb1f717911b4e11aaa89d94c4959562c625b0e203dd906797e60d019cde57
>
>
> The big difference between the image 'docker.io/library/busybox' and
> mine is the size (4,5 MB ./. 6,1 GB). When I scp my big image I see in
> /tmp that the sftp-server writes there a temp. file as:
>
> ls -lh /tmp/tmp.RLHbJp9uzq
> -rw------- 1 apitzm apitzm 5.8G Jan 11 10:58 /tmp/tmp.RLHbJp9uzq
>
> and when this reaches the size of 6 GB it gets deleted
>
> 3)
> I removed all container files on the target host:
>
> srap57dxr1:/ # rm -rf /data/guru/containers/*
> srap57dxr1:/ # du -sh /data/guru/containers/
> 1.0K /data/guru/containers/
>
> and started a fresh scp:
>
> $ podman image scp c87c80c0911a srap57::
> ...
> Copying blob a5a080851ed7 done
> Copying blob 6fc7ff0cb132 done
> Copying config c87c80c091 done
> Writing manifest to image destination
>
> When the transfer has ended on the target host one can see
> 1. the big file in /tmp gets deleted
> 2. something was written below the area of the containers (which was
> empty before):
>
> srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
> -rw------- 1 apitzm apitzm 4.3G Jan 11 11:35 /tmp/tmp.5uuhYWqqQT
> srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
> -rw------- 1 apitzm apitzm 5.9G Jan 11 11:37 /tmp/tmp.5uuhYWqqQT
> srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
> ls: cannot access '/tmp/tmp.5uuhYWqqQT': No such file or directory
> srap57dxr1:/# du -sh /data/guru/containers/
> 1.1G /data/guru/containers/
>
> How can I get more messages about the failing process?
>
> matthias
>
> > On Wed, Jan 10, 2024 at 9:33 AM Matthias Apitz <guru(a)unixarea.de> wrote:
> >
> > >
> > > I have an image on RH 8.x which runs fine (containing a SuSE SLES and
> > > PostgreSQL server):
> > >
> > > $ podman images
> > > REPOSITORY TAG IMAGE ID CREATED
> > > SIZE
> > > localhost/suse latest c87c80c0911a 26 hours ago
> > > 6.31 GB
> > > registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago
> > > 123 MB
> > >
> > > I created a connection to another host as:
> > >
> > > $ podman system connection list
> > > Name URI
> > > Identity Default
> > > srap57 ssh://
> > > apitzm@srap57dxr1.dev.xxxxxx.org:22/run/user/200007/podman/podman.sock
> > > true
> > >
> > > To the other host I can SSH fine based on RSA public/private keys and
> > > podman is installed there to:
> > >
> > > $ ssh apitzm(a)srap57dxr1.dev.xxxxxx.org
> > > Last login: Wed Jan 10 14:05:12 2024 from 10.201.64.28
> > > apitzm@srap57dxr1:~> podman version
> > > Client: Podman Engine
> > > Version: 4.7.2
> > > API Version: 4.7.2
> > > Go Version: go1.21.4
> > > Built: Wed Nov 1 13:00:00 2023
> > >
> > > When I now copy over the image with:
> > >
> > > $ podman image scp c87c80c0911a srap57::
> > >
> > > it transfers the ~6 GByte (I can see them in /tmp as a big tar file of
> > > tar files) and at the end it says:
> > >
> > > ...
> > > Writing manifest to image destination
> > > $
> > >
> > > (i.e. the shell prompt is there again)
> > >
> > > But on srap57dxr1.dev.xxxxxx.org I can't see anything of the image at
> the
> > > end.
> > >
> > > What have I done wrong?
> > >
> > > Thanks
> > >
> > > matthias
> > >
> > > --
> > > Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/
> > > +49-176-38902045
> > > Public GnuPG key: http://www.unixarea.de/key.pub
> > >
> > > I am not at war with Russia. Я не воюю с Россией.
> > > Ich bin nicht im Krieg mit Russland.
> > > _______________________________________________
> > > Podman mailing list -- podman(a)lists.podman.io
> > > To unsubscribe send an email to podman-leave(a)lists.podman.io
> > >
>
> > _______________________________________________
> > Podman mailing list -- podman(a)lists.podman.io
> > To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> --
> Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> I am not at war with Russia. Я не воюю с Россией.
> Ich bin nicht im Krieg mit Russland.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 4 months
[Podman] Re: rootless buildah in toolbox container
by Debarshi Ray
Hey,
On Fri, 2023-02-03 at 18:08 -0500, Daniel Walsh wrote:
> On 2/3/23 16:08, Hendrik Haddorp wrote:
> > I also hoped that it just works and that toolbox sets up the
> > environment
> > in a way that podman/bulidah can be used. I did these steps:
> > toolbox create fedora
> > toolbox enter fedora
> > sudo dnf install buildah podman
> > buildah from scratch
> > buildah mount working-container
> > -> Error: cannot mount using driver overlay in rootless mode. You
> > need to run it in a `buildah unshare` session
>
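Regarding the quoted mount error itself, a minimal sketch of the `buildah unshare` session it asks for (rootless, overlay driver):

$ buildah unshare            # drops you into the user namespace as "root"
# mnt=$(buildah mount working-container)
# ls "$mnt"
# buildah umount working-container
# exit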
Currently, Toolbx doesn't set up the container in a way that podman(1)
and buildah(1) will just work inside it.
However, it's planned as part of an overall desire to make well-known
tools, which usually need to run on the host, work inside
containers. Imagine if you were hacking inside a Toolbx container on a
Fedora Silverblue host, and you need to use flatpak(1) or rpm-ostree(1)
to quickly do something on the host. Remembering which terminal is
running a shell on the host and which is inside a container can be
annoying in the middle of a frantic hacking run. Similarly for your use-
case, where Podman might be part of your hacking workflow.
There's some discussion here and an initial implementation here:
https://github.com/containers/toolbox/issues/145
https://github.com/containers/toolbox/pull/553
One easy way is to fake it:
$ toolbox enter
> flatpak-spawn --host podman images
...
> flatpak-spawn --host podman ps --all
...
This trick will work with many other tools:
$ toolbox enter
> flatpak-spawn --host flatpak list
...
In fact this is how Toolbx works inside Toolbx containers:
$ toolbox enter
> toolbox list
...
So, you could consider setting up some aliases inside your container to
make this more palatable.
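For example, something like this inside the Toolbx container (a sketch; adjust to taste):

# ~/.bashrc inside the container
alias podman='flatpak-spawn --host podman'
alias buildah='flatpak-spawn --host buildah'
alias flatpak='flatpak-spawn --host flatpak'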
I must admit that I haven't yet dug into the specifics of podman(1) and
buildah(1) inside containers too deeply.
> This can be handled by either setting up the toolbox to use a volume on
> ~/.local/share/containers
I saw that the Containerfile that you linked to has:
VOLUME /var/lib/containers
VOLUME /home/build/.local/share/containers
I haven't played at all with volumes that aren't bind mounts of host
paths, but I suppose this is a way to give the container a blank
~/.local/share/containers even if $HOME is a bind mount from the host?
Or does it do something else?
While Toolbx doesn't really use any namespace other than the user
namespace, I wonder if this is enough to have a feature complete
podman(1) running inside a container, or will we keep running into
corner cases and limitations caused by running inside a container.
Another consideration might be if people want to share their containers
and images across all their Toolbx containers or not.
Cheers,
Rishi
2 years, 3 months
[Podman] Re: Speeding up podman by using cache
by Ganeshar, Puvi
Dan,
Thanks and apologies for the delay, I have been away.
We mainly use podman for building the Go and Java artifact images during the package stage of our Jenkins pipelines, which get deployed to production.
Currently I am doing the image caching on the worker nodes of our EKS cluster, and when I look at the size of the overlay directory after a couple of days on a worker node, it is over 50 GB. So I think exploring the NFS concept would be worth it, to see how much it would speed up the builds (or slow them down due to reads/writes over the network). I am also concerned about network latency, as currently there is no network involved because the images sit on the nodes that podman runs on.
I will also test the UID issue that you talked about.
Thanks again for your help and advice, much appreciated.
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
From: Daniel Walsh <dwalsh(a)redhat.com>
Date: Friday, October 25, 2024 at 8:33 AM
To: Ganeshar, Puvi <puvi.ganeshar(a)directv.com>, podman(a)lists.podman.io <podman(a)lists.podman.io>
Subject: Re: [Podman] Re: Speeding up podman by using cache
On 10/24/24 11:25, Ganeshar, Puvi wrote:
Dan,
Thanks for coming back to me on this.
If I use an NFS store (with read & write) as Podman’s storage, do you anticipate any race conditions when multiple podman processes are reading and writing at the same time? Do I need to implement any locking mechanism like what they do in relational databases?
Yum and DNF should not be a big issue, as we don’t build them every day, and we use distroless for the Go microservices while Java is built on a custom base image with all deps already included.
Thanks again.
I don't think so. We already have locking built into the podman database and NFS storage. Once the container is running, Podman is not going to do anything. In detach mode podman exits.
Podman is only writing to storage and working with locks when the container is created and when images are pulled.
The key issue with NFS and storage is that if Podman needs to create a file with a UID other than the user's UID, then the NFS server will not allow Podman to do a chown.
From its point of view, it sees dwalsh chowning a file to a non-dwalsh UID.
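A rough illustration of that failure mode (the NFS mount path is hypothetical): inside the rootless user namespace a chown maps to a real on-disk chown to a subordinate UID, which an NFS server will typically refuse with EPERM:

$ podman unshare chown 1000:1000 /mnt/fsx/graphroot/somefile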
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
From: Daniel Walsh <dwalsh(a)redhat.com>
Date: Wednesday, October 23, 2024 at 11:10 AM
To: podman(a)lists.podman.io <podman(a)lists.podman.io>
Subject: [Podman] Re: Speeding up podman by using cache
On 10/22/24 11:04, Ganeshar, Puvi wrote:
Hello Podman team,
I am about to explore this option, so I just wanted to check with you all first, as I might be wasting my time.
I am in Platform Engineering team at DirecTV, and we run Go and Java pipelines on Jenkins using Amazon EKS as the workers. So, the process is that when a Jenkins build runs, it asks the EKS for a worker (Kubernetes pod) and the cluster would spawn one and the new pod would communicate back to the Jenkins controller. We use the Jenkins Kubernetes pod template to configure the communication. We are currently running the latest LTS of podman, v5.2.2, however still using cgroups-v1 for now, planning to migrate early 2025 by upgrading the cluster to use Amazon Linux 2023 which uses cgroups-v2 as default. Here’s the podman configuration details that we use:
host:
arch: arm64
buildahVersion: 1.37.2
cgroupControllers:
- cpuset
- cpu
- cpuacct
- blkio
- memory
- devices
- freezer
- net_cls
- perf_event
- net_prio
- hugetlb
- pids
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.1.12-1.el9.aarch64
path: /usr/bin/conmon
version: 'conmon version 2.1.12, commit: f174c390e4760883511ab6b5c146dcb244aeb647'
cpuUtilization:
idlePercent: 99.22
systemPercent: 0.37
userPercent: 0.41
cpus: 16
databaseBackend: sqlite
distribution:
distribution: centos
version: "9"
eventLogger: file
freeLocks: 2048
hostname: podmanv5-arm
idMappings:
gidmap: null
uidmap: null
kernel: 5.10.225-213.878.amzn2.aarch64
linkmode: dynamic
logDriver: k8s-file
memFree: 8531066880
memTotal: 33023348736
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.12.1-1.el9.aarch64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.12.1
package: netavark-1.12.2-1.el9.aarch64
path: /usr/libexec/podman/netavark
version: netavark 1.12.2
ociRuntime:
name: crun
package: crun-1.16.1-1.el9.aarch64
path: /usr/bin/crun
version: |-
crun version 1.16.1
commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20240806.gee36266-2.el9.aarch64
version: |
pasta 0^20240806.gee36266-2.el9.aarch64-pasta
Copyright Red Hat
GNU General Public License, version 2 or later
https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: false
path: /run/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.3.1-1.el9.aarch64
version: |-
slirp4netns version 1.3.1
commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 0
swapTotal: 0
uptime: 144h 6m 15.00s (Approximately 6.00 days)
variant: v8
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 107352141824
graphRootUsed: 23986397184
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Supports shifting: "true"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 1
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 5.2.2
Built: 1724331496
BuiltTime: Thu Aug 22 12:58:16 2024
GitCommit: ""
GoVersion: go1.22.5 (Red Hat 1.22.5-2.el9)
Os: linux
OsArch: linux/arm64
Version: 5.2.2
We migrated to podman when Kubernetes deprecated docker and have been using podman for the last two years or so. It's working well; however, since we run over 500 builds a day, I am trying to explore whether I can speed up the podman build process by using image caching. I wanted to see whether using an NFS file system (Amazon FSx) as the storage for podman (overlay-fs) would improve podman performance, with builds completing much faster because the images are already downloaded on the NFS. Currently, podman in each pod on the EKS cluster downloads all the required images every time, so it is not taking advantage of the cached images.
These are my concerns:
1. Any race conditions - podman processes colliding with each other during reads and writes.
2. Performance of I/O operations as NFS communication will be over the network.
Have any of you tried this method before? If so, can you share any pitfalls that you’ve faced?
Any comments / advice would be beneficial, as I need to weigh up pros and cons before spending time on this. Also, if it causes an outage due to storage failures it would block all our developers, so I will have to design this in a way where we can recover quickly.
Thanks very much in advance and have a great day.
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
_______________________________________________
Podman mailing list -- podman(a)lists.podman.io
To unsubscribe send an email to podman-leave(a)lists.podman.io
You can set up an additional store which is preloaded with images on an NFS share, which should work fine.
Whether this improves performance or not is probably something you need to discover.
If you are dealing with YUM and DNF, you might also want to play with sharing the rpm database with the build system.
https://www.redhat.com/en/blog/speeding-container-buildah
https://www.youtube.com/watch?v=qsh7NL8H4GQ
6 months, 3 weeks
[Podman] Container restart issue: Failed to attach 1 to compat systemd cgroup
by Lewis Gaul
Hi Podman team,
I came across an unexpected systemd warning when running inside a container
- I emailed systemd-devel (this email summarises the thread, which you can
find at
https://lists.freedesktop.org/archives/systemd-devel/2023-January/048723....)
and Lennart suggested emailing here. Any thoughts would be great!
There are two different warnings seen in different scenarios, both cgroups
related, and I believe related to each other given they both satisfy the
points below.
The first warning is seen after 'podman restart $CTR', coming from
https://github.com/systemd/systemd/blob/v245/src/shared/cgroup-setup.c#L279:
Failed to attach 1 to compat systemd cgroup
/machine.slice/libpod-5e4ab2a36681c092f4ef937cf03b25a8d3d7b2fa530559bf4dac4079c84d0313.scope/init.scope:
No such file or directory
The second warning is seen on every boot when using '--cgroupns=private',
coming from
https://github.com/systemd/systemd/blob/v245/src/core/cgroup.c#L2967:
Couldn't move remaining userspace processes, ignoring: Input/output error
Failed to create compat systemd cgroup /system.slice: No such file or
directory
...
Both warnings are seen together when restarting a container using private
cgroup namespace.
To summarise:
- The warnings are seen when running the container on a Centos 8 host, but
not on an Ubuntu 20.04 host
- It is assumed this issue is specific to cgroups v1, based on the warning
messages
- Disabling SELinux on the host with 'setenforce 0' makes no difference
- Seen with systemd v245 but not with v230
- Seen with '--privileged' and in non-privileged with '--cap-add sys_admin'
- Changing the cgroup driver/manager doesn't seem to have any effect
- The same is seen with docker except when running privileged the first
warning becomes a fatal error after hitting "Failed to open pin file: No
such file or directory" (coming from
https://github.com/systemd/systemd/blob/v245/src/core/cgroup.c#L2972) and
the container exits (however docker doesn't claim to support systemd)
Some extra details copied from the systemd email thread:
- On first boot PID 1 can be found in
/sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/init.scope/cgroup.procs,
whereas when the container restarts the 'init.scope/' directory does not
exist and PID 1 is instead found in the parent (container root) cgroup
/sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/cgroup.procs
(also reflected by /proc/1/cgroup). This is strange because systemd must be
the one to create this cgroup dir in the initial boot, so I'm not sure why
it wouldn't on subsequent boot.
- I confirmed that the container has permissions to create the dir by
executing a 'mkdir' in
/sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/ inside the
container after the restart, so I have no idea why systemd is not creating
the 'init.scope/' dir. I notice that inside the container's systemd cgroup
mount 'system.slice/' does exist, but 'user.slice/' also does not (both
exist on normal boot).
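For anyone reproducing this, the checks described above boil down to roughly the following (cgroups v1; <ctr-id> as above):

# inside the container, after `podman restart`:
cat /proc/1/cgroup
ls /sys/fs/cgroup/systemd/        # init.scope/ and user.slice/ missing after restart

# on the host:
cat /sys/fs/cgroup/systemd/machine.slice/libpod-<ctr-id>.scope/cgroup.procs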
This should be reproducible using the following:
cat << EOF > Dockerfile
FROM ubuntu:20.04
RUN apt-get update -y && apt-get install systemd -y && ln -s /lib/systemd/systemd /sbin/init
ENTRYPOINT ["/sbin/init"]
EOF
podman build . --tag ubuntu-systemd
podman run -it --name ubuntu --privileged --cgroupns private ubuntu-systemd
podman restart ubuntu
Thanks,
Lewis
2 years, 4 months
[Podman] Re: systemctl status during podman build
by Chris Evich
Yeah, this is annoying, but the environment during a build is typically a
lot different than when running. During a build, there's basically no
way to predict what podman arguments will be used to run the image,
volumes mounted, user-namespace stuff, etc.
I'm not a systemd expert, but I believe you can "mimic" the effects of
`systemctl enable...` with some symlinking.
Looking at my Fedora system (your case will probably vary a little), I
think what you want in your Containerfile is something like:
RUN ln -s /lib/systemd/system/httpd.service \
/etc/systemd/system/multi-user.target.wants/
Maybe there's a cleaner way to do this with some `systemctl enable ...`
options. I'd love to know about them if anyone else has smart in
systemd stuffs :D
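One way to sanity-check the enablement during the build, without the D-Bus connection to a running systemd that `systemctl status` needs, is to test for the symlink that `systemctl enable` creates - a sketch, path per a WantedBy=multi-user.target unit:

RUN systemctl enable httpd
RUN test -L /etc/systemd/system/multi-user.target.wants/httpd.service \
    && echo "httpd is enabled"

This works during a build because enable only touches the filesystem, while status has to talk to PID 1.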
---
Chris Evich (he/him), RHCA III
Senior Quality Assurance Engineer
If there's a "hard-way", I'm the first one to implement it.
On 9/20/23 15:19, etc(a)balosh.net wrote:
> Hi, a question that I don't understand;
> I'd be grateful for an explanation or a reference to what I should read to get it.
>
> Why, during a Podman build, does `systemctl enable httpd` work
> but `systemctl status httpd` not work?
>
> Dockerfile not working:
>
> ```
> FROM registry.access.redhat.com/ubi8/ubi-init
> RUN yum -y install httpd; yum clean all;
> RUN systemctl enable httpd;
> RUN systemctl status httpd;
> ```
>
> output of `podman build .`:
>
> ```
> Build output:
> STEP 1/4: FROM registry.access.redhat.com/ubi8/ubi-init
> STEP 2/4: RUN yum -y install httpd; yum clean all;
> --> Using cache 02f6efde590f9fec989c04a01a661d2650b462aeb8e61ad3c0e00aae1b16b1ef
> --> 02f6efde590f
> STEP 3/4: RUN systemctl enable httpd;
> --> Using cache 4f85f566fdee4fd8f5e8058dbf39c5ec9be95a4879d4d9a8c7a77f5b9cadf8a7
> --> 4f85f566fdee
> STEP 4/4: RUN systemctl status httpd;
> System has not been booted with systemd as init system (PID 1). Can't operate.
> Failed to connect to bus: Host is down
> ```
>
> But!
> If I exec into the container when it is running, both of them work.
>
> Working Dockerfile:
>
> ```
> FROM registry.access.redhat.com/ubi8/ubi-init
> RUN yum -y install httpd; yum clean all;
> RUN systemctl enable httpd;
> ```
>
> command:
> `podman build . -t x ; podman run -d --name x x ; podman exec -ti x bash -c "systemctl status httpd"`
> runs with success
>
> ```
> STEP 1/3: FROM registry.access.redhat.com/ubi8/ubi-init
> STEP 2/3: RUN yum -y install httpd; yum clean all;
> --> Using cache 02f6efde590f9fec989c04a01a661d2650b462aeb8e61ad3c0e00aae1b16b1ef
> --> 02f6efde590f
> STEP 3/3: RUN systemctl enable httpd;
> --> Using cache 4f85f566fdee4fd8f5e8058dbf39c5ec9be95a4879d4d9a8c7a77f5b9cadf8a7
> COMMIT x
> --> 4f85f566fdee
> Successfully tagged localhost/x:latest
> 4f85f566fdee4fd8f5e8058dbf39c5ec9be95a4879d4d9a8c7a77f5b9cadf8a7
> 214ee56866fc0e7d71b6d152749bdcb65d4e5aadb95dafcebb5661ee20770619
> [root@214ee56866fc /]# systemctl status httpd
> ● httpd.service - The Apache HTTP Server
> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
> Active: active (running) since Tue 2023-09-19 20:07:22 UTC; 6s ago
> Docs: man:httpd.service(8)
> Main PID: 30 (httpd)
> Status: "Started, listening on: port 80"
> Tasks: 213 (limit: 1638)
> Memory: 22.3M
> CGroup: /system.slice/httpd.service
> ├─30 /usr/sbin/httpd -DFOREGROUND
> ├─34 /usr/sbin/httpd -DFOREGROUND
> ├─35 /usr/sbin/httpd -DFOREGROUND
> ├─36 /usr/sbin/httpd -DFOREGROUND
> └─37 /usr/sbin/httpd -DFOREGROUND
>
>
> versions:
> podman version 4.6.2
> on macos ventura 13.5.2
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 8 months
[Podman] Re: Speeding up podman by using cache
by Daniel Walsh
On 10/24/24 11:25, Ganeshar, Puvi wrote:
>
> Dan,
>
> Thanks for coming back to me on this.
>
> If I use an NFS store (with read & write) as Podman’s storage, do you
> anticipate any race conditions when multiple podman processes are reading
> and writing at the same time? Do I need to implement any locking
> mechanism like what they do in relational databases?
>
> Yum and DNF should not be a big issue, as we don’t build them every
> day, and we use distroless for the Go microservices while Java is built on
> a custom base image with all deps already included.
>
> Thanks again.
>
I don't think so. We already have locking built into the podman database
and NFS storage. Once the container is running, Podman is not going to
do anything. In detach mode podman exits.
Podman is only writing to storage and working with locks when the
container is created and when images are pulled.
The key issue with NFS and storage is that if Podman needs to create a
file with a UID other than the user's UID, then the NFS server will not allow
Podman to do a chown.
From its point of view, it sees dwalsh chowning a file to a non-dwalsh UID.
> Puvi Ganeshar | @pg925u
> Principal, Platform Engineer
> CICD - Pipeline Express | Toronto
>
> *From: *Daniel Walsh <dwalsh(a)redhat.com>
> *Date: *Wednesday, October 23, 2024 at 11:10 AM
> *To: *podman(a)lists.podman.io <podman(a)lists.podman.io>
> *Subject: *[Podman] Re: Speeding up podman by using cache
>
> On 10/22/24 11:04, Ganeshar, Puvi wrote:
>
> Hello Podman team,
>
> I am about explore this option so just wanted to check with you
> all before as I might be wasting my time.
>
> I am in Platform Engineering team at DirecTV, and we run Go and
> Java pipelines on Jenkins using Amazon EKS as the workers. So,
> the process is that when a Jenkins build runs, it asks the EKS for
> a worker (Kubernetes pod) and the cluster would spawn one and the
> new pod would communicate back to the Jenkins controller. We use
> the Jenkins Kubernetes pod template to configure the
> communication. We are currently running the latest LTS of podman,
> v5.2.2, however still using cgroups-v1 for now, planning to
> migrate early 2025 by upgrading the cluster to use Amazon Linux
> 2023 which uses cgroups-v2 as default. Here’s the podman
> configuration details that we use:
>
> host:
>
> arch: arm64
>
> buildahVersion: 1.37.2
>
> cgroupControllers:
>
> - cpuset
>
> - cpu
>
> - cpuacct
>
> - blkio
>
> - memory
>
> - devices
>
> - freezer
>
> - net_cls
>
> - perf_event
>
> - net_prio
>
> - hugetlb
>
> - pids
>
> cgroupManager: cgroupfs
>
> cgroupVersion: v1
>
> conmon:
>
> package: conmon-2.1.12-1.el9.aarch64
>
> path: /usr/bin/conmon
>
> version: 'conmon version 2.1.12, commit:
> f174c390e4760883511ab6b5c146dcb244aeb647'
>
> cpuUtilization:
>
> idlePercent: 99.22
>
> systemPercent: 0.37
>
> userPercent: 0.41
>
> cpus: 16
>
> databaseBackend: sqlite
>
> distribution:
>
> distribution: centos
>
> version: "9"
>
> eventLogger: file
>
> freeLocks: 2048
>
> hostname: podmanv5-arm
>
> idMappings:
>
> gidmap: null
>
> uidmap: null
>
> kernel: 5.10.225-213.878.amzn2.aarch64
>
> linkmode: dynamic
>
> logDriver: k8s-file
>
> memFree: 8531066880
>
> memTotal: 33023348736
>
> networkBackend: netavark
>
> networkBackendInfo:
>
> backend: netavark
>
> dns:
>
> package: aardvark-dns-1.12.1-1.el9.aarch64
>
> path: /usr/libexec/podman/aardvark-dns
>
> version: aardvark-dns 1.12.1
>
> package: netavark-1.12.2-1.el9.aarch64
>
> path: /usr/libexec/podman/netavark
>
> version: netavark 1.12.2
>
> ociRuntime:
>
> name: crun
>
> package: crun-1.16.1-1.el9.aarch64
>
> path: /usr/bin/crun
>
> version: |-
>
> crun version 1.16.1
>
> commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
>
> rundir: /run/crun
>
> spec: 1.0.0
>
> +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
>
> os: linux
>
> pasta:
>
> executable: /usr/bin/pasta
>
> package: passt-0^20240806.gee36266-2.el9.aarch64
>
> version: |
>
> pasta 0^20240806.gee36266-2.el9.aarch64-pasta
>
> Copyright Red Hat
>
> GNU General Public License, version 2 or later
>
> https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
>
> This is free software: you are free to change and
> redistribute it.
>
> There is NO WARRANTY, to the extent permitted by law.
>
> remoteSocket:
>
> exists: false
>
> path: /run/podman/podman.sock
>
> rootlessNetworkCmd: pasta
>
> security:
>
> apparmorEnabled: false
>
> capabilities:
> CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
>
> rootless: false
>
> seccompEnabled: true
>
> seccompProfilePath: /usr/share/containers/seccomp.json
>
> selinuxEnabled: false
>
> serviceIsRemote: false
>
> slirp4netns:
>
> executable: /usr/bin/slirp4netns
>
> package: slirp4netns-1.3.1-1.el9.aarch64
>
> version: |-
>
> slirp4netns version 1.3.1
>
> commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
>
> libslirp: 4.4.0
>
> SLIRP_CONFIG_VERSION_MAX: 3
>
> libseccomp: 2.5.2
>
> swapFree: 0
>
> swapTotal: 0
>
> uptime: 144h 6m 15.00s (Approximately 6.00 days)
>
> variant: v8
>
> plugins:
>
> authorization: null
>
> log:
>
> - k8s-file
>
> - none
>
> - passthrough
>
> - journald
>
> network:
>
> - bridge
>
> - macvlan
>
> - ipvlan
>
> volume:
>
> - local
>
> registries:
>
> search:
>
> - registry.access.redhat.com
>
> - registry.redhat.io
>
> - docker.io
>
> store:
>
> configFile: /etc/containers/storage.conf
>
> containerStore:
>
> number: 0
>
> paused: 0
>
> running: 0
>
> stopped: 0
>
> graphDriverName: overlay
>
> graphOptions:
>
> overlay.mountopt: nodev,metacopy=on
>
> graphRoot: /var/lib/containers/storage
>
> graphRootAllocated: 107352141824
>
> graphRootUsed: 23986397184
>
> graphStatus:
>
> Backing Filesystem: xfs
>
> Native Overlay Diff: "false"
>
> Supports d_type: "true"
>
> Supports shifting: "true"
>
> Supports volatile: "true"
>
> Using metacopy: "false"
>
> imageCopyTmpDir: /var/tmp
>
> imageStore:
>
> number: 1
>
> runRoot: /run/containers/storage
>
> transientStore: false
>
> volumePath: /var/lib/containers/storage/volumes
>
> version:
>
> APIVersion: 5.2.2
>
> Built: 1724331496
>
> BuiltTime: Thu Aug 22 12:58:16 2024
>
> GitCommit: ""
>
> GoVersion: go1.22.5 (Red Hat 1.22.5-2.el9)
>
> Os: linux
>
> OsArch: linux/arm64
>
> Version: 5.2.2
>
> We migrated to podman when Kubernetes deprecated docker and have
> been using podman for the last two years or so. Its working well,
> however since we run over 500 builds a day, I am trying to explore
> whether I can speed up the podman build process by using image
> caching. I wanted to see if I use an NFS file system (Amazon FSX)
> as the storage for podman (overlay-fs) would it improve podman
> performance by the builds completing much faster as of the already
> downloaded images on the NFS. Currently, podman in each pod on
> the EKS cluster would download all the required images every time
> so not taking advantage of the cached images.
>
> These are my concerns:
>
> 1. Any race conditions, a podman processes colliding with each
> other during read and write.
> 2. Performance of I/O operations as NFS communication will be
> over the network.
>
> Have any of you tried this method before? If so, can you share
> any pitfalls that you’ve faced?
>
> Any comments / advice would be beneficial as I need to weigh up
> pros and cons before spending time on this. Also, if it causes
> outage due to storage failures it would block all our developers;
> so, I will have to design this in a way where we can recover quickly.
>
> Thanks very much in advance and have a great day.
>
> Puvi Ganeshar | @pg925u
> Principal, Platform Engineer
> CICD - Pipeline Express | Toronto
> Image
>
>
>
> _______________________________________________
>
> Podman mailing list --podman(a)lists.podman.io
>
> To unsubscribe send an email topodman-leave(a)lists.podman.io
>
> You can setup an additional store which is preloaded with Images on an
> NFS share, which should work fine.
>
> Whether this improves performance or not is probably something you
> need to discover.
>
> If you are dealing with YUM and DNF, you might also want to play with
> sharing of the rpm database with the build system.
>
> https://www.redhat.com/en/blog/speeding-container-buildah
> <https://urldefense.com/v3/__https:/www.redhat.com/en/blog/speeding-contai...>
>
> https://www.youtube.com/watch?v=qsh7NL8H4GQ
> <https://urldefense.com/v3/__https:/www.youtube.com/watch?v=qsh7NL8H4GQ__;...>
>
7 months