[Podman] Re: HELP! recover files from a deleted container
by Robin Lee Powell
If you have the image, and you know what the data you're looking for
looks like (i.e. it's text you can search for), try just loading the
image in a hex editor and searching for it.
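For example, against the copy of the ext4.vhdx, something along these lines can locate a hit and carve out the region around it for inspection (a rough sketch; the search string, OFFSET_IN_BYTES and sizes are placeholders you would adjust):

  # print the decimal byte offset of every occurrence of a unique string
  strings -t d ext4.vhdx | grep 'someUniqueString'
  # or: grep -abo 'someUniqueString' ext4.vhdx

  # carve out ~32 MB around one of the offsets for a closer look
  dd if=ext4.vhdx of=chunk.bin bs=1M skip=$((OFFSET_IN_BYTES / 1048576)) count=32
  strings -n 8 chunk.bin | less

It's crude, but for plain-text content (source code, configs) it can get data back even when the filesystem metadata is gone.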
On Mon, Sep 04, 2023 at 09:05:37AM -0400, Alvin Thompson wrote:
> Hi and thanks for the suggestions,
>
> Since this is Podman for Windows, which uses a WSL instance, I'm hopeful that not starting Podman or touching anything inside the WSL instance will preserve the data should recovery be needed. WSL stores the EXT4 filesystem in a vhdx image, which hopefully is isolated enough from Windows. If I'm wrong about this, please let me know.
>
> This is a work computer with rather strict controls so what I can do with it is limited. I did make a copy of the WSL disk image so that’s something. Unfortunately, I may have already overwritten the data because in a panic the first thing I did was try to copy any folder I could find with the name “container”. I was hoping the files would be unlinked and cleaned up later if space were needed. Perhaps that’s a feature suggestion.
>
> I’ll see if I can grab another Intel computer, install VirtualBox on it, attach a copy of the image, and boot a recovery DVD with that.
>
> Thanks,
> Alvin
>
>
> > On Sep 4, 2023, at 8:15 AM, Tobias Wendorff <tobias.wendorff(a)tu-dortmund.de> wrote:
> >
> > 1. Immediately stop using the system: Cease all activities and avoid any further operations on the affected system. This minimizes the risk of overwriting the data you want to recover.
> >
> > 2. Turn it off as soon as possible. Maybe unplug the power supply to turn it off immediately.
> >
> > 3. Don't boot from the disk again. Remove it if necessary.
> >
> > 4. Boot into a data-recovery DVD or put it on another system and mount it read-only.
> >
> > The more you do on the hard drive, the more likely it is that the data will be overwritten, at which point it is virtually unrecoverable. Normally, however, deleted data can be recovered, since it was not intentionally overwritten (shredded).
> >
> >
> > Am 04.09.2023 um 12:26 schrieb Alvin Thompson:
> >> Help!
> >> Is there any way to recover files from a deleted container? Long story short, I found the behavior of `podman network rm -f` unexpected, and it wound up deleting most of my containers. One in particular had a month of work in it (I was using it as a development environment), and it turns out only part of it was backed up. I’m desperate!
> >> This is Podman for Windows, so most of the files on the “host” are in the WSL environment. I can get into that no problem with `wsl -d podman-machine-default`.
> >> As an added wrinkle, my default connection was `podman-machine-default-root`, but I was not running Podman rootful. I’m not sure this is particularly relevant.
> >> grep-ing for strings which are unique to the development environment shows one hit in Windows, in %HOME%/.local/containers/podman/machine/wsl/wsldist/podman-machine-default/ext4.vhdx - which I assume is the file system for the WSL layer itself. I made a copy of it.
> >> A grep within WSL itself doesn’t show any hits, so it’s possible the files were deleted as far as WSL is concerned. I tried searching for an EXT4 undelete tool, but the only one I found (extundelete) is from 10+ years ago and doesn’t appear to work anymore.
> >> I haven’t stopped WSL (I’m using /tmp as a staging area) or restarted the computer.
> >> I’m at wit’s end. I really don’t know where to begin or look to recover these files, which I really, really need. Any recovery suggestions (no matter how tedious) would be welcome.
> >> I know it’s too late to change now, but man, the behavior of `podman network remove` is unexpected.
> >> Thanks,
> >> Alvin
> >> _______________________________________________
> >> Podman mailing list -- podman(a)lists.podman.io
> >> To unsubscribe send an email to podman-leave(a)lists.podman.io
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: Container health check from another container
by Михаил Иванов
Hallo Daniel, thanks for the advice. I tried to run the pod using quadlet,
but I hit the same problem. It seems that when started under quadlet, all
containers belonging to the pod are started up without checking whether
the other containers are already in a healthy state or not.
Is there a way to specify container dependencies inside a podman/kubernetes pod?
I would rather not run Oracle in a separate container and define the dependencies
using the systemd After/Requires options. I prefer to keep the Oracle container in a
single pod with the application, so that it is not necessary to map the Oracle port
to a different value for each application.
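The only in-pod workaround I can think of that would survive `podman kube play` / quadlet is to do the waiting inside the pod itself, by wrapping the application command so it blocks until the Oracle port answers on localhost (both containers share the pod's network namespace). A rough, untested sketch; the image names, port and start command are placeholders, and it assumes nc is available in the app image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: app-with-oracle
  spec:
    containers:
    - name: oracle
      image: my-oracle-image          # placeholder
    - name: app
      image: my-app-image             # placeholder
      command: ['sh', '-c', 'until nc -z localhost 1521; do sleep 2; done; exec /start-app']

It is not a real dependency declaration, but it avoids mapping the Oracle port differently per application.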
Best regards,
--
Michael Ivanov
On 21.11.2023 17:39, Daniel Walsh wrote:
> On 11/21/23 09:33, Михаил Иванов wrote:
>> Hallo Valentin, the actual use case is to wait until the oracle container
>> is in a healthy state and only then allow access to it from the other
>> container. Currently I loop on the oracle container's health check before
>> running the second container. This approach is possible when I run the
>> containers from a shell script, but probably will not work in e.g. kubernetes.
>> Rgrds,
> This sounds exactly like a systemd use case, have you investigated
> using quadlet for this?
>> On 21.11.2023 11:49, Valentin Rothberg wrote:
>>> Thanks for reaching out, Michael.
>>>
>>> On Tue, Nov 21, 2023 at 9:45 AM Михаил Иванов <ivans(a)isle.spb.ru> wrote:
>>>
>>> Hi, is it possible to run health check on a container from another container in same pod?
>>>
>>>
>>> The answer is probably no, but I want to make sure to understand
>>> your use case. Can you elaborate on it in more detail?
>>>
>>> --
>>>
>>> Michael Ivanov
>>>
>>> _______________________________________________
>>> Podman mailing list -- podman(a)lists.podman.io
>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>>
>>>
>>> _______________________________________________
>>> Podman mailing list -- podman(a)lists.podman.io
>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: HELP! recover files from a deleted container
by Valentin Rothberg
Hi Alvin,
I am really sorry about the data loss.
Unfortunately, there is no magic `podman recover` feature that would bring
the data back. The behavior of `network rm` is documented but I sympathize
that it's not helpful in your situation.
I feel bad that I cannot help you much.
Good luck,
Valentin
On Mon, Sep 4, 2023 at 12:28 PM Alvin Thompson <alvin(a)thompsonlogic.com>
wrote:
> Help!
>
> Is there any way to recover files from a deleted container? Long story
> short, I found the behavior of `podman network rm -f` unexpected, and it
> wound up deleting most of my containers. One in particular had a month of
> work in it (I was using it as a development environment), and it turns out
> only part of it was backed up. I’m desperate!
>
> This is Podman for Windows, so most of the files on the “host” are in the
> WSL environment. I can get into that no problem with `wsl -d
> podman-machine-default`.
>
> As an added wrinkle, my default connection was
> `podman-machine-default-root`, but I was not running Podman rootful.
> I’m not sure this is particularly relevant.
>
> grep-ing for strings which are unique to the development environment shows
> one hit in Windows, in
> %HOME%/.local/containers/podman/machine/wsl/wsldist/podman-machine-default/ext4.vhdx
> - which I assume is the file system for the WSL layer itself. I made a copy
> of it.
>
> A grep within WSL itself doesn’t show any hits, so it’s possible the
> files were deleted as far as WSL is concerned. I tried searching for an
> EXT4 undelete tool, but the only one I found (extundelete) is from 10+
> years ago and doesn’t appear to work anymore.
>
> I haven’t stopped WSL (I’m using /tmp as a staging area) or restarted the
> computer.
>
> I’m at wit’s end. I really don’t know where to begin or look to recover
> these files, which I really, really need. Any recovery suggestions (no
> matter how tedious) would be welcome.
>
> I know it’s too late to change now, but man, the behavior of `podman
> network remove` is unexpected.
>
> Thanks,
> Alvin
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
[Podman] Re: fs.mqueue.msg_max rootless problem
by Михаил Иванов
Sorry Lewis, but without the --user option to nsenter it also does not work properly.
nsenter does not give an error in this case and reports the correct msg_max value,
but after the container is started msg_max is still 10 inside the container.
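One thing I still want to verify is whether the namespace nsenter modified is really the one the started container ends up in, roughly like this (reusing your ctr_foo example):

  ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo)
  sudo readlink /proc/$ctr_pid/ns/ipc              # the IPC namespace nsenter joins
  podman exec ctr_foo readlink /proc/self/ns/ipc   # the IPC namespace the container actually uses

In my case the container runs inside a pod, so the IPC namespace is presumably the infra container's, which may be why targeting the application container's PID does not help.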
Best regards,
On 29.11.2023 23:48, Lewis Gaul wrote:
> For the record I made one small mistake - the user namespace should
> not be entered.
>
> [centos@localhost ~]$ podman create --rm -it --name ctr_foo --ipc
> private busybox
> 9e9addf1ffaf88933c277c4f6cf1983cb68e69e23778da432f6a9d1b6a0d2ee6
> [centos@localhost ~]$ podman init ctr_foo
> ctr_foo
> [centos@localhost ~]$ ctr_pid=$(podman inspect -f '{{.State.Pid}}'
> ctr_foo)
> [centos@localhost ~]$ sudo nsenter --target $ctr_pid --ipc sysctl
> fs.mqueue.msg_max=64
> fs.mqueue.msg_max = 64
> [centos@localhost ~]$ podman start -a ctr_foo
> / # sysctl fs.mqueue
> fs.mqueue.msg_default = 10
> fs.mqueue.msg_max = 64
> fs.mqueue.msgsize_default = 8192
> fs.mqueue.msgsize_max = 8192
> fs.mqueue.queues_max = 256
>
> But yes I understand this isn't always going to be a suitable
> approach, I think the fix needs to be in the kernel (and I'm now
> unclear whether it has been fixed or not since Giuseppe said in
> the "mqueue msg_max in rootless container" email thread that nothing
> has changed in v6.7).
>
> Regards,
> Lewis
>
> On Wed, 29 Nov 2023 at 19:02, Михаил Иванов <ivans(a)isle.spb.ru> wrote:
>
> Hallo, thanks for advice!
>
> But sorry, for me it did not work:
>
> podman create --name ctest --pod test --ipc private --cap-add=SYS_PTRACE --init --replace test-image
> container=99425540b8e3544409e4086cf1a44b04cf9f402f1d7505f807324dce71eb2373
> podman init test
> test
> podman inspect -f '{{.State.Pid}}' test
> pid=2157674
> sudo nsenter --target 2157674 --user --ipc sysctl fs.mqueue.msg_max=64
> sysctl: permission denied on key "fs.mqueue.msg_max"
>
> Anyway, even if it would work, this method would not be appropriate in my case,
> since eventually my containers should be run from quadlet (which in turn uses
> podman kube play). Shell is used only during development.
>
> Best regards,
>
> On 29.11.2023 18:10, Lewis Gaul wrote:
>> Hi,
>>
>> I think this is the same thing I raised in
>> https://github.com/containers/podman/discussions/19737?
>>
>> This seems to be a kernel limitation - I'm not sure where the
>> mqueue limits come from when creating a new IPC namespace, but it
>> doesn't inherit the limits from the parent namespace and the root
>> user within the user namespace does not have permissions to
>> modify the limits. This was supposedly fixed in a recent kernel
>> version although I haven't tested it.
>>
>> The workaround I'm currently using (requiring sudo permissions)
>> is along the lines of:
>> podman create --ipc private --name ctr_foo ...
>> podman init ctr_foo
>> ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo)
>> sudo nsenter --target $ctr_pid --user --ipc sysctl
>> fs.mqueue.msg_max=64
>> podman start ctr_foo
>>
>> Obviously this isn't ideal, and I'd be open to alternatives...
>>
>> Regards,
>> Lewis
>>
>> On Mon, 27 Nov 2023 at 12:23, Daniel Walsh <dwalsh(a)redhat.com> wrote:
>>
>> On 11/27/23 02:04, Михаил Иванов wrote:
>>> Hallo,
>>>
>>> For me rootful works:
>>>
>>> island:container [master]> cat /proc/sys/fs/mqueue/msg_max
>>> 256
>>
>> $ podman run alpine ls -ld /proc/sys/fs/mqueue/msg_max
>> -rw-r--r-- 1 nobody nobody 0 Nov 27 12:10
>> /proc/sys/fs/mqueue/msg_max
>>
>> Since it is owned by real root, a rootless user can not write
>> to it. I guess we could argue this is a bug in the kernel:
>> mqueue/msg_max should be owned by root of the user namespace
>> as opposed to real root.
>>
>>> ## Rootful:
>>> island:container [master]> sudo podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
>>> 64
>>>
>>> ## Rootless:
>>> island:container [master]> podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
>>> Error: crun: open `/proc/sys/fs/mqueue/msg_max`: Permission denied: OCI permission denied
>>>
>>> ## What rootless gets by default (changed as compared to host setting!):
>>> island:container [master]> podman run --rm centos cat /proc/sys/fs/mqueue/msg_max
>>> 10
>>>
>>> Rgrds,
>>> On 25.11.2023 20:17, Daniel Walsh wrote:
>>>> On 11/25/23 10:44, Михаил Иванов wrote:
>>>>> Hallo,
>>>>> Is it possible to get podman to propagate current host fs.mqueue.msg_max
>>>>> value to rootless container? I can do that if I specify --ipc host when
>>>>> running the container, but this also exposes other ipc stuff from host
>>>>> to container, including shared memory, which I do not want.
>>>>>
>>>>> If I specify --sysctl fs.mqueue.msg_size=64 to podman it gives me
>>>>> "OCI permission denied" error, even when my host setting (256) is greater
>>>>> than requested value.
>>>>> Thanks,
>>>>> --
>>>>> Michael Ivanov
>>>>>
>>>>> _______________________________________________
>>>>> Podman mailing list -- podman(a)lists.podman.io
>>>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>>>
>>>> The way you attempted is correct. Might not be allowed for
>>>> rootless containers.
>>>>
>>>> I attempted this in a rootful container and it blows up for me.
>>>>
>>>>
>>>> podman run --sysctl fs.mqueue.msg_size=64 alpine echo hi
>>>> Error: crun: open `/proc/sys/fs/mqueue/msg_size`: No such
>>>> file or directory: OCI runtime attempted to invoke a
>>>> command that was not found
>>>>
>>>>
>>>> _______________________________________________
>>>> Podman mailing list -- podman(a)lists.podman.io
>>>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>>
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: podman build stops with NO SPACE left on device
by Daniel Walsh
On 7/23/24 06:58, Matthias Apitz wrote:
> Hello,
>
> I'm creating a podman container on RedHat 8.1 which should run our
> application server on SuSE SLES15 SP6. The build was fine, but a second
> build to add some more components stops with the following details:
>
> $ podman -v
> podman version 4.9.4-rhel
>
> $ podman build -t sles15-sp6 suse
>
> suse/Dockerfile:
>
> FROM registry.suse.com/bci/bci-base:15.6
> LABEL maintainer="Matthias Apitz <guru(a)unixarea.de>"
> ...
>
> #
> # sisis-pap
> #
> RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install
>
> ...
> Installation beendet.
> Hinweise zum weiteren Vorgehen entnehmen Sie bitte
> der Freigabemitteilung FGM-sisis-pap-V7.3.htm
> Installation erfolgreich beendet
>
> (the 4 German lines are coming out at the end of the above script
> './install'; i.e. the software of the tar archive was unpacked and
> installed fine, but the error is while writing the container after this
> step to disk)
>
> Error: committing container for step {Env:[PATH=/bin:/usr/bin:/usr/local/bin] Command:run Args:[cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install] Flags:[] Attrs:map[] Message:RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install Heredocs:[] Original:RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install}: copying layers and metadata for container "a11a6ce841891057fb53dfa276d667a938764a6a63e9374b61385f0012532aa0": writing blob: adding layer with blob "sha256:a0b630090f1fb5cae0e1ec48e5498021be8e609563859d8cebaf0ba75b89e21d": processing tar file(write /home/sisis/install/sisis-pap/usr/local/sisis-pap/pgsql-14.1/share/locale/fr/LC_MESSAGES/pg_test_fsync-14.mo: no space left on device): exit status 1
>
> $ podman images
> REPOSITORY TAG IMAGE ID CREATED SIZE
> <none> <none> 4ea3a0a7bd94 27 minutes ago 2.85 GB
> localhost/sles15-sp6 latest 0874a5469069 About an hour ago 6.31 GB
> registry.suse.com/bci/bci-base 15.6 0babc7595746 12 days ago 130 MB
>
> $ ls -l .local/share/containers
> lrwxrwxrwx 1 root root 24 Aug 18 2023 .local/share/containers -> /appdata/guru/containers
>
> $ env | grep TMP
> TMPDIR=/home/apitzm/.local/share/containers/tmp
>
> apitzm@srrp02dxr1:~$ df -kh /appdata/
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/vga-appdata 98G 83G 11G 89% /appdata
>
> The container would again need 6.31 GB, maybe a bit more, but not 11 GB.
>
> Why it is complaining?
>
> matthias
>
>
>
Are you using vfs or overlay?
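A quick way to check, and to see where the space is going (a sketch; run it as the same user that does the build):

  podman info --format '{{.Store.GraphDriverName}}'   # prints "vfs" or "overlay"
  podman system df                                     # space used by images, containers and volumes

vfs keeps a full copy of the filesystem for every layer and container, so it needs far more space than overlay.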
[Podman] Re: ro sysfs doesn't affect spc_t?
by Peter Hunt
I did manage to get a container running with the changes in
https://github.com/containers/container-selinux/pull/291. I am not sure why
the AVC messages weren't coming through, but reinstalling fixed the problem
for me somehow.
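In case anyone else hits the silent-denial situation: one thing worth trying is temporarily disabling dontaudit rules, which can hide the interesting AVCs (a sketch):

  sudo semodule -DB                  # rebuild policy with dontaudit rules disabled
  # reproduce the failure, then:
  sudo ausearch -m avc -ts recent
  sudo semodule -B                   # restore the dontaudit rules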
Thanks!
On Thu, Dec 21, 2023 at 12:42 PM Daniel Walsh <dwalsh(a)redhat.com> wrote:
> On 12/19/23 16:25, Peter Hunt wrote:
>
> Hey team,
>
> I've got some odd behavior on a podman in Openshift use case I am trying
> to figure out. I am trying to run podman in openshift without privilege,
> extra capabilities and ideally a custom SELinux label that isn't `spc_t`. I
> have managed to adapt the `container_engine_t` type to get past any
> denials, but now I'm hitting an issue where the sysfs of the container is
> read only:
>
> I am running with this yaml:
> ```
> apiVersion: v1
> kind: Pod
> metadata:
> name: no-priv
> annotations:
> io.kubernetes.cri-o.Devices: "/dev/fuse"
>
> spec:
> containers:
> - name: no-priv-rootful
> image: quay.io/podman/stable
> args:
> - sleep
> - "1000000"
> securityContext:
> runAsUser: 1000
> seLinuxOptions:
> type: "container_engine_t"
> ```
> and using a container-selinux based on
> https://github.com/haircommander/container-selinux/tree/engine_t-improvem...
>
> when I run this container, and then run podman inside, I get this error:
>
> ```
> $ oc exec -ti pod/no-priv-rootful -- bash
> [podman@no-priv-rootful /]$ podman run ubi8 ls
> WARN[0005] Path "/run/secrets/etc-pki-entitlement" from
> "/etc/containers/mounts.conf" doesn't exist, skipping
> Error: crun: set propagation for `sys`: Permission denied: OCI permission
> denied
> ```
>
> What I find odd, and what is the subject of this email, is that when I
> adapt the selinux label to be "spc_t":
> ```
> type: "spc_t"
> ```
>
> the container runs fine. There are no denials in AVC when I run
> `container_engine_t`, but clearly something is different. Can anyone help
> me identify what is happening?
>
> Thanks
> Peter
> --
>
> Peter Hunt, RHCE
>
> They/Them or He/Him
>
> Senior Software Engineer, Openshift
>
> Red Hat <https://www.redhat.com>
> <https://www.redhat.com>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> Have you tried it in permissive mode?
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
[Podman] Re: How does podman set rootfs ownership to root when using --userns keep-id ?
by Daniel Walsh
On 5/4/23 04:29, Paul Holzinger wrote:
> Hi Fabio,
>
> My understanding is that the image is copied and chown-ed to the
> correct uids when running rootless.
> There is also the concept of idmapped mounts in the kernel but the
> kernel only allows this as root at the moment.
>
> Paul
>
> On Thu, May 4, 2023 at 8:56 AM Fabio <fabio(a)redaril.me> wrote:
>
> Hi all,
>
> I'm trying to understand some of the internals of namespace-based
> Linux
> containers and I'm kindly asking you for help.
>
> When launching `podman run -it --rm -v ~/Downloads:/dwn
> docker.io/library/ubuntu
> /bin/bash`, the inside user is root. That is
> expected, and without any surprise the /proc/self/uid_map is:
> 0 1000 1
> 1 100000 65536
>
> When launching `podman run -it --rm -v ~/Downloads:/dwn --userns
> keep-id
> docker.io/library/ubuntu
> /bin/bash` instead, the /proc/self/uid_map is:
> 0 1 1000
> 1000 0 1
> 1001 1001 64536
>
> If I'm understanding it well, in the latter case there is a double
> mapping: to keep host UID and GID, podman fires two user namespaces,
> where the inner namespace maps its IDs to the outer namespace, which
> finally maps to the host (that is, 1000 -> 0 -> 1000 again).
>
Correct.
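One way to look at the two mappings side by side (the numbers below are from Fabio's setup and will differ per system):

  podman unshare cat /proc/self/uid_map
           0       1000          1
           1     100000      65536

  podman run --rm --userns keep-id docker.io/library/ubuntu cat /proc/self/uid_map
           0          1       1000
        1000          0          1
        1001       1001      64536

Reading them together: inner UID 0 maps to outer UID 1 (host 100000), which is why the image content has to be copied and chown-ed in storage to show up as root, while inner UID 1000 maps to outer UID 0 (the host user), which is why the bind-mounted volume shows up as myuser.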
>
> The mechanism I don't get is how podman manages to make the rootfs
> owned
> by root inside the inner namespace, while assigning volumes to the
> unprivileged inner user:
> dr-xr-xr-x. 1 root root 18 May 4 06:33 .
> dr-xr-xr-x. 1 root root 18 May 4 06:33 ..
> lrwxrwxrwx. 1 root root 7 Mar 8 02:05 bin -> usr/bin
> drwxr-xr-x. 1 root root 0 Apr 18 2022 boot
> [...]
> drwxr-xr-x. 1 myuser 1000 2.1K May 3 15:07 dwn
>
> What is the algorithm here? I have a feeling there is some clever
> combination of syscalls here I don't get. When I tried to
> reproduce this
> double namespace situation, the rootfs of the inner namespace was all
> owned by 1000, not 0.
>
> Thank you so so much for your time if you're willing to help me,
> Fabio.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: Follow-up: Rootless storage usage
by Михаил Иванов
Is native overlay available in rootless mode?
When I run podman as root there's no problem: overlayfs is picked up
as the default on Debian. VFS is selected as the default only in rootless mode.
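For reference, this is roughly how I check which driver rootless podman picked and whether it is native overlay or fuse-overlayfs (field names may vary slightly by podman version):

  podman info --format '{{.Store.GraphDriverName}}'
  podman info --format '{{.Store.GraphOptions}}'   # an overlay.mount_program entry pointing at fuse-overlayfs means it is not native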
Rgrds,
On 25.01.2023 14:03, Giuseppe Scrivano wrote:
> Reinhard Tartler <siretart(a)gmail.com> writes:
>
>> On Tue, Jan 24, 2023 at 2:08 PM Daniel Walsh<dwalsh(a)redhat.com> wrote:
>>
>> On 1/24/23 03:47, Reinhard Tartler wrote:
>>
>> Dan,
>>
>> In Debian, I've chosen to just go with the upstream defaults:
>> https://github.com/containers/storage/blob/8428fad6d0d3c4cded8fd7702af36a...
>>
>> This file is installed verbatim to /usr/share/containers/storage.conf.
>>
>> Is there a better choice? Does Fedora/Redhat provide a default storage.conf from somewhere else?
>>
>> Thanks,
>> -rt
>>
>> That should be fine. Fedora goes with that default as well. Does debian support rootless overlay by default?
>>
>> If not then it would fail over to VFS if fuse-overlayfs is not installed.
>>
>> I'm a bit confused about what you mean with that.
>>
>> In Debian releases that ship podman 4.x we have at least Linux kernel 6.0. The fuse-overlayfs package is installed by default, but users may opt to not
>> install it by configuring apt to not install "Recommends" by default.
>>
>> What else is required for rootless overlay?
>>
>> Also, if I follow this conversation, then it appears that the default storage.conf requires modification in line 118 (to uncomment the mount_program
>> option) in order to actually use fuse-overlayfs. I would have expected podman to use fuse-overlayfs if it happens to be installed, and fallback to direct
>> mount if not. I read Michail's email thread that this appears to be not the case and he had to spend a lot of effort figuring out how to install an
>> appropriate configuration file. Maybe I'm missing something, but I wonder what we can do to improve the user experience?
> what issue do you see if you use native overlay?
>
> Podman prefers native overlay if it is available, since it is faster.
> If not, it tries fuse-overlays and if it is not available, it falls back
> to vfs.
>
> Could you try from a fresh storage though? If fuse-overlayfs was
> already used, then Podman will continue using it even if native overlay
> is available, since the storage metadata is slightly different.
>
> Thanks,
> Giuseppe
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
[Podman] Re: After reboot, Container not responding to connection requests
by Valentin Rothberg
Hi Jacques,
Thanks for reaching out.
Are you always running the service as root? Can you share the logs of this
service?
Since you're running Podman in systemd, you may be interested in looking
into Quadlet [1] [2].
[1] https://www.redhat.com/sysadmin/quadlet-podman
[2] https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html
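For your container, a Quadlet unit would look roughly like this (an untested sketch based on the options in your generated unit below; for a rootful setup drop it into /etc/containers/systemd/symhsm_agent.container and run `systemctl daemon-reload`):

  [Unit]
  Description=Symantec HSM agent container
  Wants=network-online.target
  After=network-online.target

  [Container]
  Image=localhost/symantec_hsm_agent:2.1_269362
  ContainerName=symhsm_agent
  PublishPort=8080:8080
  PublishPort=8082:8082
  PublishPort=8443:8443
  Volume=/opt/podman/:/usr/local/luna

  [Service]
  Restart=on-failure
  TimeoutStopSec=70

  [Install]
  WantedBy=multi-user.target

Quadlet generates the systemd service for you at daemon-reload time, which tends to avoid hand-rolled ordering problems like the one you are seeing.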
On Thu, Aug 31, 2023 at 10:39 PM Jacques Jessen <jacques.jessen(a)gmail.com>
wrote:
> Running Podman as root and created a container for Symantec's HSM Agent.
>
> When manually started, it reports as working:
>
>
> [root@PoC ~]# podman ps
> CONTAINER ID  IMAGE                                    COMMAND          CREATED        STATUS        PORTS                                                                   NAMES
> b53be5503ca7  localhost/symantec_hsm_agent:2.1_269362  catalina.sh run  4 minutes ago  Up 4 minutes  0.0.0.0:8080->8080/tcp, 0.0.0.0:8082->8082/tcp, 0.0.0.0:8443->8443/tcp  symhsm_agent
>
> [root@PoC ~]# podman stats
> ID            NAME          CPU %  MEM USAGE / LIMIT  MEM %  NET IO           BLOCK IO      PIDS  CPU TIME      AVG CPU %
> b53be5503ca7  symhsm_agent  3.55%  216MB / 4.112GB    5.25%  1.93kB / 1.09kB  249.2MB / 0B  29    3.759969275s  3.55%
>
>
> You can successfully access the 8080, 8082, 8443 ports with a browser.
>
> However, if the server is rebooted, Podman will still show the same results as
> above (i.e. it claims to be working), but from a browser you will be told:
>
>
> ERR_CONNECTION_TIMED_OUT
>
>
> If you manually Stop and Start the container, you can successfully access
> the 8080, 8082, 8443 ports with a browser.
>
> Given there's no change in the configuration, this feels like there's a
> timing issue with the initial start. I used the output Podman provided
> to create the service file:
>
>
> [root@PoC ~]# podman generate systemd --new --name symhsm_agent
> # container-symhsm_agent.service
> # autogenerated by Podman
>
> [Unit]
> Description=Podman container-symhsm_agent.service
> Documentation=man:podman-generate-systemd(1)
> Wants=network-online.target
> After=network-online.target
> RequiresMountsFor=%t/containers
>
> [Service]
> Environment=PODMAN_SYSTEMD_UNIT=%n
> Restart=on-failure
> TimeoutStopSec=70
> ExecStart=/usr/bin/podman run \
> --cidfile=%t/%n.ctr-id \
> --cgroups=no-conmon \
> --rm \
> --sdnotify=conmon \
> --replace \
> -d \
> --name symhsm_agent \
> -p 8443:8443 \
> -p 8082:8082 \
> -p 8080:8080 \
> -v /opt/podman/:/usr/local/luna symantec_hsm_agent:2.1_269362
> ExecStop=/usr/bin/podman stop \
> --ignore -t 10 \
> --cidfile=%t/%n.ctr-id
> ExecStopPost=/usr/bin/podman rm \
> -f \
> --ignore -t 10 \
> --cidfile=%t/%n.ctr-id
> Type=notify
> NotifyAccess=all
>
> [Install]
> WantedBy=default.target
>
>
> Having to manually log in and restart the container kind of defeats the
> purpose.
>
> Thoughts and feedback appreciated.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
[Podman] Re: 'system reset' makes things weird - ?
by Paul Holzinger
To clarify, as I am the guy who wrote that code:
> The existence of any config. file in /etc/cni/net.d tells podman to use
> the "old" CNI networking system.
This is not correct. If you only have the default "podman" CNI config file,
it will be ignored by the backend detection, which will then choose netavark.
The reason you see the warning is that the code first tries to init the
CNI backend to check how many networks are configured.
This init also validates the files and thus throws the warning. Once we
know that you only have the default config, we configure
netavark and store this decision in the `defaultNetworkBackend` file
in the graphroot. So that is why the warning is only displayed
for the first command.
As mentioned, the correct way to fix this is to either remove the config
file or configure the network backend in containers.conf so that podman does not go
through the auto-detection logic (see the sketch below). You could also install
containernetworking-plugins to make the warning go away, but this isn't
needed because you use netavark.
Basically the whole idea behind this logic was to support upgrades from 3.X
to 4.0 without forcing netavark on existing setups, while new installs
should default to netavark.
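For completeness, the containers.conf override looks roughly like this (system-wide in /etc/containers/containers.conf, or per-user in ~/.config/containers/containers.conf):

  [network]
  network_backend = "netavark"

and you can confirm which backend is active with:

  podman info --format '{{.Host.NetworkBackend}}'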
---
Paul
On Tue, May 9, 2023 at 10:31 PM Chris Evich <cevich(a)redhat.com> wrote:
> On 5/9/23 14:30, lejeczek via Podman wrote:
> > It seems that this: /etc/cni/net.d/87-podman.conflist is some remainer
> > of some previous installation
>
> Ahh, that explains perfectly what you're seeing. You're right, it must
> be a leftover from a previous setup, using an older version of podman.
> The existence of any config. file in /etc/cni/net.d tells podman to use
> the "old" CNI networking system. When the directory is empty, you get
> the "new" netavark system. The two are _NOT_ compatible, forward or
> backward with each other.
>
> When upgrading (and system reset-ing) podman doesn't touch these files -
> It does the "safe" thing, and assumes there could be some valuable
> settings inside. It's up to you (when upgrading / switching systems) to
> migrate the configuration manually (if applicable).
>
> Unfortunately, this is not made very clear by the error messages and
> behaviors. However, what you've found is working as intended -
> moving/removing those files THEN resetting will put you on the new
> netavark system - where you'll no longer see the errors.
>
> Hope that helps.
>
> ---
> Chris Evich (he/him), RHCA III
> Senior Quality Assurance Engineer
> If it ain't broke, your hammer isn't wide 'nough.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>