I have run podman both as root and rootless with --log-level debug. Rootless reports the
following (I show only the beginning of the logs, where, as I understand, the decision
between vfs and overlay is made):
island:container [master]> podman run --name tstsys --detach --init --log-level debug docker.example.com/test/tstsys-devel-rl9
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --name tstsys --detach --init --log-level debug docker.example.com/test/tstsys-devel-rl9)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/tstusr/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/tstusr/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1007/containers
DEBU[0000] Using static dir /home/tstusr/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1007/libpod/tmp
DEBU[0000] Using volume path /home/tstusr/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 25
DEBU[0000] Pulling image docker.example.com/test/tstsys-devel-rl9 (policy: missing)
DEBU[0000] Looking up image "docker.example.com/test/tstsys-devel-rl9" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9:latest" ...
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9:latest" ...
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9" ...
DEBU[0000] Loading registries configuration "/home/tstusr/.config/containers/registries.conf"
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Attempting to pull candidate docker.example.com/test/tstsys-devel-rl9:latest for docker.example.com/test/tstsys-devel-rl9
DEBU[0000] parsed reference into "[vfs@/home/tstusr/.local/share/containers/storage+/run/user/1007/containers]docker.example.com/test/tstsys-devel-rl9:latest"
Trying to pull docker.example.com/test/tstsys-devel-rl9:latest...
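As a quick cross-check of which driver each mode actually ends up with (independent of the debug logs), podman info can be queried directly; the Store fields below are from podman's documented info output:

```
# rootless
podman info --format '{{.Store.GraphDriverName}} {{.Store.GraphRoot}}'
# root
sudo podman info --format '{{.Store.GraphDriverName}} {{.Store.GraphRoot}}'
```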
Here is the same for the root podman attempt:
root@island:~# podman run --name tstsys --detach --init --log-level debug docker.example.com/test/tstsys-devel-rl9
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --name tstsys --detach --init --log-level debug docker.example.com/test/tstsys-devel-rl9)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] overlay: test mount with multiple lowers succeeded
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] overlay: test mount indicated that metacopy is not being used
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
DEBU[0000] Successfully loaded network podman: &{podman 2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9 bridge cni-podman0 2022-05-13 11:05:41.687042184 +0300 MSK [{{{10.88.0.0 ffff0000}} 10.88.0.1 <nil>}] false false false map[] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 1 networks
DEBU[0000] Podman detected system restart - performing state refresh
INFO[0000] Setting parallel job count to 25
DEBU[0000] Pulling image docker.example.com/test/tstsys-devel-rl9 (policy: missing)
DEBU[0000] Looking up image "docker.example.com/test/tstsys-devel-rl9" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9:latest" ...
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9:latest" ...
DEBU[0000] Trying "docker.example.com/test/tstsys-devel-rl9" ...
DEBU[0000] Loading registries configuration "/root/.config/containers/registries.conf"
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Attempting to pull candidate docker.example.com/test/tstsys-devel-rl9:latest for docker.example.com/test/tstsys-devel-rl9
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage]docker.example.com/test/tstsys-devel-rl9:latest"
Trying to pull docker.example.com/test/tstsys-devel-rl9:latest...
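So rootless podman settles on vfs while root podman settles on overlay, even though fuse-overlayfs is installed. For what it's worth, here is a minimal sketch of a per-user storage.conf that should force the overlay driver via fuse-overlayfs instead of relying on autodetection (the mount_program path is an assumption; check `which fuse-overlayfs`):

```
# ~/.config/containers/storage.conf (rootless; root reads /etc/containers/storage.conf)
[storage]
driver = "overlay"

[storage.options.overlay]
# assumed path to the binary from the fuse-overlayfs package
mount_program = "/usr/bin/fuse-overlayfs"
```

Since an existing store cannot be switched between drivers in place, a podman system reset would still be needed after creating the file.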
Best regards,
On 10.01.2023 15:40, Giuseppe Scrivano wrote:
Daniel Walsh <dwalsh(a)redhat.com> writes:
> On 1/9/23 14:29, Михаил Иванов wrote:
>
> I repeated everything again: podman system reset, then deleted everything in
> ~/.local/share/containers to be on the safe side and ran buildah to create an image.
> No overlay directories appeared under ~/.local/share/containers, only vfs* instead.
>
> When I run the same buildah command as root, overlay* directories do appear under
> /var/lib/containers and no vfs ones.
>
> My Debian is the latest one (sid) and it has the following fuse-related packages:
>
> ii fuse-overlayfs 1.9-1 amd64 implementation of overlay+shiftfs in FUSE for rootless containers
> ii fuse3 3.12.0-1 amd64 Filesystem in Userspace (3.x version)
> ii libfuse2:amd64 2.9.9-6 amd64 Filesystem in Userspace (library)
> ii libfuse3-3:amd64 3.12.0-1 amd64 Filesystem in Userspace (library) (3.x version)
>
> podman is 4.3.1
>
> For both root and rootless configs, podman system info reports the runc runtime
> (runc_1.1.4+ds1-1+b1_amd64 package).
>
> Regards,
>
> Giuseppe, any ideas?
Not sure why overlay or fuse-overlayfs are not picked. It would be helpful to open an
upstream issue and attach the output of `podman --log-level debug ...`, in addition to
all the other information that is requested.
> On 03.01.2023 18:25, Daniel Walsh wrote:
>
> On 12/30/22 08:35, Михаил Иванов wrote:
>
>> You could do a podman system reset and then remove all content from the storage with
>> rm -rf ~/.local/share/containers
>> to make sure there is nothing hidden there,
> But that's almost exactly what I did:
>> I just purged the whole storage using podman system reset.
>> I verified that ~/.local/share/containers became empty
>> (only the bolt database remained, using about 200 MB of space).
> I'm using whatever storage was provided by the default podman install
> (Debian sid/bookworm); podman is 4.3.1. How can I reconfigure it to a
> different type? I assumed this has to be done in storage.conf, but
> this file is not present anywhere at all.
>
> I have no idea why Debian would be choosing VFS, unless this is an older version of
> Debian that did not support rootless overlay. You could try installing fuse-overlayfs
> and doing another reset; then podman info should show you using overlay with
> fuse-overlayfs.
>
> Best regards,
> On 29.12.2022 15:04, Daniel Walsh wrote:
>
> On 12/27/22 13:19, Михаил Иванов wrote:
>
> Hello again,
> I just purged the whole storage using podman system reset.
> I verified that ~/.local/share/containers became empty
> (only the bolt database remained, using about 200 MB of space).
> I have run 2 containers from docker.io: ibmcom/db2 and ibmcom/db2console.
> podman system df reports space usage of 4 GB by images and 6 MB by containers.
> The podman images command shows consistent values (two images, 2.83 GB + 1.21 GB).
> The system df command shows that 32 GB is used on the ~/.local/share/containers filesystem.
> du shows that all this space is located under ~/.local/share/containers/storage/vfs/dir.
> This directory contains 32 subdirs: 11 subdirs of 1.1 GB each, 6 subdirs of 2.6 GB each,
> and the rest take anywhere from 94 MB to 646 MB.
> When I try diff -rw on directories of the same size, I see only reports about missing
> symlink files, but never real file differences.
> No real activity was performed with podman apart from running these two containers.
> I am running podman 4.3.1 on Debian bookworm (kernel 6.0.8).
>
> What is wrong?
>
> I guess the first question I would have for you is: why are you using VFS storage,
> and not overlay or fuse-overlayfs?
>
> Could there be other containers or storage that is unaccounted for? Did you do
> any podman builds, or use Buildah?
>
> ```
> $ podman ps --all
> CONTAINER ID  IMAGE                            COMMAND           CREATED     STATUS                 PORTS  NAMES
> 2e7070eacb0f  docker.io/library/alpine:latest  touch /dan/walsh  6 days ago  Exited (0) 6 days ago         nervous_noether
> $ podman ps --all --external
> CONTAINER ID  IMAGE                            COMMAND           CREATED         STATUS                 PORTS  NAMES
> 2e7070eacb0f  docker.io/library/alpine:latest  touch /dan/walsh  6 days ago      Exited (0) 6 days ago         nervous_noether
> 9478fed6d8db  docker.io/library/alpine:latest  buildah           14 seconds ago  Storage                       alpine-working-container
> ```
>
> You could do a podman system reset and then remove all content from the storage with
>
> rm -rf ~/.local/share/containers
>
> to make sure there is nothing hidden there.
>
> Other than that I am not sure what could be showing the difference in storage.
>
> Best regards,
> --
> On 23.12.2022 19:43, Михаил Иванов wrote:
>
> Hello,
> I notice a disk space discrepancy when running rootless podman containers.
> I use a dedicated fs for podman storage, mounted at ~/.local/share/containers.
> df and du show consistent used disk space:
> island:named [master]> df -h ~/.local/share/containers
> /dev/mapper/sys-containers 117G 84G 32G 73% ~/.local/share/containers
>
> island:named [master]> sudo du -sh ~/.local/share/containers/storage/{vfs,volumes}
> 74G /home/ivans/.local/share/containers/storage/vfs
> 11G /home/ivans/.local/share/containers/storage/volumes
> island:named [master]>
> But space usage shown by podman system df is about 44% less than reported above:
>
> island:named [master]> podman system df
> TYPE           TOTAL   ACTIVE   SIZE      RECLAIMABLE
> Images         32      5        39.49GB   25.96GB (66%)
> Containers     7       7        1.85GB    0B (0%)
> Local Volumes  2       2        10.83GB   0B (0%)
>
> Volume space is practically the same; it's the vfs space (where, as I understand,
> images and containers are located) that differs.
> I also run buildah as the same user, but buildah ls shows nothing.
> I have run podman system prune, but it reclaimed 0 bytes.
> So is this extra space usage expected? Or is something wrong with my storage?
> Thanks and regards,
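One note on the numbers above: with the vfs driver each image layer is stored as a complete copy of the filesystem tree (there is no copy-on-write), so layers shared between images are duplicated on disk, and du on storage/vfs/dir will normally report considerably more than podman system df. A per-image breakdown makes this easier to see; podman's verbose df output lists shared versus unique sizes:

```
# verbose breakdown, per image / container / volume
podman system df -v
```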