shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
but if i have a directory with nothing but a Containerfile, i get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying context of current directory:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
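for reference, a fully explicit sketch of the same thing, using only the
documented -f/--file option plus an explicit context -- assuming the
Containerfile really is sitting in the current directory:
$ podman build -f ./Containerfile .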
thoughts?
rday
1 week, 4 days
mqueue msg_max in rootless container
by Michael Ivanov
Hallo!
I'm trying to run my application in a rootless Podman container and I have stumbled
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
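(A minimal sketch of one possible direction, assuming the application can
tolerate sharing the host IPC namespace -- POSIX message queues belong to the
IPC namespace -- and with a placeholder image name:
podman run --rm --ipc=host myimage cat /proc/sys/fs/mqueue/msg_max
This is only an illustration, not something verified here.)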
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
1 year
Exposing ports automatically
by Jorge Fábregas
Hi,
Newbie question... I'm playing around with a rootless container based on:
docker.io/library/mysql
If I run this image (without specifying ports or volumes) it will create
a volume automatically. I ran "inspect" on the image and I see the volume
is defined there. I also noticed the ExposedPorts entry as well, but podman
didn't publish these. Why would it create the volume but not expose the
ports (if they're available and above 1024)?
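Just to make the question concrete, a sketch of publishing them by hand
(other required options such as the root password and volumes are omitted,
and the host port is an arbitrary example):
$ podman run -P docker.io/library/mysql               # -P/--publish-all maps each ExposedPort to a random high host port
$ podman run -p 3306:3306 docker.io/library/mysql     # or pick the mapping explicitly (3306 > 1024, so fine rootless)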
Thanks,
Jorge
3 years, 3 months
Namespaced users in Host's passwd?
by Jorge Fábregas
Hi,
I have a test system running various rootless containers. Do people
create the namespaced users on the host as well (with, say,
/sbin/nologin) just so that when you use commands like "ps" you
immediately know who it is? For example: jorge-daemon, jorge-apache,
etc... The downside - I assume - is that once you start your containers
in another order, things will get messed up.
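Something like this is what I had in mind -- just a sketch, reusing the
made-up names from above:
# useradd --system --shell /sbin/nologin --no-create-home jorge-apache
# useradd --system --shell /sbin/nologin --no-create-home jorge-daemon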
Thanks,
Jorge
3 years, 3 months
Re: Podman on Redhat
by Scott McCarty
Kent,
That is correct, you cannot log into a system as root, then use su -
$USER to become a user and expect rootless Podman to work. You must log in as the
user account you want to use so that the correct login variables get set.
When you first asked, I thought you were logging in as a user, then using
su or su - to become root.
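A quick sketch of what I mean by login variables (the user name and UID are
just examples): after a real login, e.g. via ssh, logind creates the session
and sets things like XDG_RUNTIME_DIR:
$ echo $XDG_RUNTIME_DIR
/run/user/1001
After "su - someuser" from a root shell there is typically no logind session,
XDG_RUNTIME_DIR is unset, and rootless Podman cannot find its runtime
directory or the systemd user instance.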
Email is a really bad forum for trying to answer these questions while
global support is also trying to answer them. It creates a lot of confusion
and chatter between a bunch of different teams trying to solve your problem.
I don't know what you are trying to do with systemd so I'm going to have to
encourage you to continue to work that through support.
Best Regards
Scott M
On Mon, Aug 23, 2021, 2:55 PM Collins, Kent <Robert.Collins(a)bnsf.com> wrote:
> The Redhat support site updated today with the following:
>
> So the documentation is wrong and the podman tool does not work if you su
> or sudo.
>
>
>
> RedHat support site is updated as below today.
>
> Resolution
>
> · It is currently *not supported by Red Hat* to use rootless
> podman via any means other than using ssh to access the user you
> intend to execute podman as. Using su or su - is not a currently
> supported mechanism of rootless podman.
>
> · The following steps show a *completely unsupported method* of
> working around this, and as every user environment has the potential to be
> different, cannot be guaranteed to work or even troubleshot by Red Hat
> Support at this time.
>
> · A Request for Feature Enhancement
> <https://bugzilla.redhat.com/show_bug.cgi?id=1996757> has been opened to
> allow for Rootless Podman to execute with su or su - to a non-root user.
>
> https://access.redhat.com/solutions/6204862
>
>
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
>
> *From:* Scott McCarty [mailto:smccarty@redhat.com]
> *Sent:* Thursday, August 19, 2021 5:21 AM
> *To:* Collins, Kent <Robert.Collins(a)BNSF.com>
> *Cc:* jeremy.valcourt(a)gmail.com; Walsh, Daniel <dwalsh(a)redhat.com>;
> podman mailing list <podman(a)lists.podman.io>
> *Subject:* Re: [Podman] Re: Podman on Redhat
>
>
>
> *EXTERNAL EMAIL*
>
> Kent,
>
> That at least gives me a hair to work with. It sounds like this was a
> RHEL 8.0 or 8.1 box which was upgraded to RHEL 8.4. In those early versions
> of RHEL, there were still some manual steps to get rootless working.
>
>
>
> In RHEL 8.4 rootless should work quite well with no extra steps necessary.
> We've done a lot of work to make sure it works out of the box.
>
>
>
> In addition to the upgrade problem, I suspect your corporate standard
> might make security changes which could make rootless more fragile.
>
>
>
> Do you have permissions to add a new user? If so, could you add a test
> user and try to run your command with that?
>
>
>
> This would give us a baseline to ensure that it's not something in the
> default configuration of your user account.
>
>
>
> Best Regards
>
> Scott M
>
>
>
>
>
>
>
> On Wed, Aug 18, 2021, 3:48 PM Collins, Kent <Robert.Collins(a)bnsf.com>
> wrote:
>
> Hi
>
> The Unix setup was correct already. No issues.
>
>
>
> If you do not setup the subuid and subgid files you get the error below.
>
>
>
> ERRO[0000] cannot find UID/GID for user b000980: No subuid ranges found
> for user "b000980" in /etc/subuid - check rootless mode in man pages.
>
> WARN[0000] using rootless single mapping into the namespace. This might
> break some images. Check /etc/subuid and /etc/subgid for adding sub*ids
>
> Error: stat /db/admin/rest/images/db2rest.tar: permission denied
>
>
>
> So all the steps were done perfectly following ( Steps 1-3 were done )
>
>
> https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tu...
>
>
>
> So far no luck getting podman to work.
>
>
>
> I ran the failing command using debug
>
>
>
> DEBU[0000] Workdir "/opt/ibm/dbrest" resolved to host path
> "/home/db2rest1/.local/share/containers/storage/overlay/719f222c5894b8b113d90bae2d0a64dffba8b3303bc0513617e3176bf6ea6200/merged/opt/ibm/dbrest"
>
> DEBU[0000] Not modifying container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/passwd
>
> DEBU[0000] Not modifying container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/group
>
> DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode
> subscription
>
> DEBU[0000] Setting CGroups for container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c to
> user.slice:libpod:9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
>
> DEBU[0000] Created OCI spec for container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c at
> /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/config.json
>
> DEBU[0000] /usr/bin/conmon messages will be logged to syslog
>
> DEBU[0000] running conmon: /usr/bin/conmon
> args="[--api-version 1 -c
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -u
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -r
> /usr/bin/runc -b
> /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata
> -p
> /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/pidfile
> -n db2rest_dsn08d --exit-dir /tmp/runtime-u1/libpod/tmp/exits
> --socket-dir-path /tmp/runtime-u1/libpod/tmp/socket -s -l
> k8s-file:/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/ctr.log
> --log-level debug --syslog -t --conmon-pidfile
> /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/conmon.pid
> --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg
> /home/db2rest1/.local/share/containers/storage --exit-command-arg --runroot
> --exit-command-arg /tmp/runtime-u1/containers --exit-command-arg
> --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager
> --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg
> /tmp/runtime-u1/libpod/tmp --exit-command-arg --runtime --exit-command-arg
> runc --exit-command-arg --storage-driver --exit-command-arg overlay
> --exit-command-arg --storage-opt --exit-command-arg
> overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg
> --events-backend --exit-command-arg file --exit-command-arg --syslog
> --exit-command-arg container --exit-command-arg cleanup --exit-command-arg
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c]"
>
> INFO[0000] Running conmon under slice user.slice and unitName
> libpod-conmon-9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c.scope
>
> DEBU[0000] Received: -1
>
> DEBU[0000] Cleaning up container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] Tearing down network namespace at
> /tmp/runtime-u1/netns/cni-13220a15-ad73-aec3-3ef7-7f7a08eb50f0 for
> container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] unmounted container
> "9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c"
>
> DEBU[0000] ExitCode msg: "time=\"2021-08-18t14:22:06-05:00\" level=error
> msg=\"read unix @->/run/systemd/private: read: connection reset by
> peer\"\ntime=\"2021-08-18t14:22:06-05:00\" level=error
> msg=\"container_linux.go:367: starting container process caused:
> process_linux.go:340: applying cgroup configuration for process caused:
> read unix @->/run/systemd/private: read: connection reset by peer\": oci
> runtime error"
>
> Error: OCI runtime error: time="2021-08-18T14:22:06-05:00" level=error
> msg="read unix @->/run/systemd/private: read: connection reset by peer"
>
> time="2021-08-18T14:22:06-05:00" level=error msg="container_linux.go:367:
> starting container process caused: process_linux.go:340: applying cgroup
> configuration for process caused: read unix @->/run/systemd/private: read:
> connection reset by peer"
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
>
> *From:* Collins, Kent
> *Sent:* Wednesday, August 18, 2021 9:16 AM
> *To:* jeremy.valcourt(a)gmail.com; dwalsh(a)redhat.com
> *Cc:* podman(a)lists.podman.io
> *Subject:* Podman on Redhat
>
>
>
> So far running Podman ( non-Root ) on Redhat has been a horrible
> experience. It seems to take very little to break Podman.
>
>
>
> From breaking when using su or sudo to the directory length issue, these
> simple normal Unix everyday operations seem to be difficult for development
> of podman.
>
>
>
> I am trying to run a very simple API container using Podman as non-root
> and at this point I cannot start any containers.
>
>
>
> On top of that, workarounds found in searching for solutions also never
> work.
>
>
>
> For example these two work arounds do not work.
>
>
>
> export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus
>
>
>
> systemd-run --scope --user $SHELL
>
>
>
> I will admit I am not a Podman expert. My goal in using Podman over
> Docker should not require it. It only needs to perform basic container
> operations. Stop/start/rm/run/load
>
>
>
> Any help to get this working would be appreciated.
>
>
>
> ==> podman --version
>
> podman version 3.0.2-dev
>
>
>
> x /etc/*ease[1]: NAME="Red Hat Enterprise Linux" x
>
> x /etc/*ease[2]: VERSION="8.4 (Ootpa)" x
>
> x /etc/*ease[3]: ID="rhel" x
>
> x /etc/*ease[4]: ID_LIKE="fedora"
>
>
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
>
3 years, 4 months
Podman on Fedora 34
by Pavel Sosin
I can confirm that Podman 3.2 on a Fedora 34 workstation does work. I had
the "luck" to reinstall everything from scratch after an attempt to upgrade a
machine to Windows 11 completely destroyed one of the laptops in my home.
Yes, it works, and the role of the user's systemd manager in cgroup management
is clear to me: the user's systemd must be activated by pam/logind to
create a consistent environment, create the user's services, and
initialize the user's session, including /run/user, ... But the documentation is
confusing anyway: the in-line comments in
/usr/share/containers/containers.conf, without a single example, can't
replace real documentation.
TCP sockets in IPv6 local networks: does the gateway IP as network manager
matter?
"#Path to look for a valid OCI runtime": crun, runc, runcd, kata, ??? There are
no explanations or links to the documentation. In the Kata documentation,
Podman is not mentioned as a container manager, only Kubernetes and Docker.
What are the differences?
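An example of the sort of setting those in-line comments describe would
already help -- this is only a sketch, assuming crun is the runtime that is
actually installed:
# in a copy of containers.conf (e.g. ~/.config/containers/containers.conf)
[engine]
runtime = "crun"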
3 years, 4 months
Re: Podman on Redhat
by Scott McCarty
Kent,
That at least gives me a hair to work with. It sounds like this was a
RHEL 8.0 or 8.1 box which was upgraded to RHEL 8.4. In those early versions
of RHEL, there were still some manual steps to get rootless working.
In RHEL 8.4 rootless should work quite well with no extra steps necessary.
We've done a lot of work to make sure it works out of the box.
In addition to the upgrade problem, I suspect your corporate standard might
make security changes which could make rootless more fragile.
Do you have permissions to add a new user? If so, could you add a test user
and try to run your command with that?
This would give us a baseline to ensure that it's not something in the
default configuration of your user account.
Best Regards
Scott M
On Wed, Aug 18, 2021, 3:48 PM Collins, Kent <Robert.Collins(a)bnsf.com> wrote:
> Hi
>
> The Unix setup was correct already. No issues.
>
>
>
> If you do not setup the subuid and subgid files you get the error below.
>
>
>
> ERRO[0000] cannot find UID/GID for user b000980: No subuid ranges found
> for user "b000980" in /etc/subuid - check rootless mode in man pages.
>
> WARN[0000] using rootless single mapping into the namespace. This might
> break some images. Check /etc/subuid and /etc/subgid for adding sub*ids
>
> Error: stat /db/admin/rest/images/db2rest.tar: permission denied
>
>
>
> So all the steps were done perfectly following ( Steps 1-3 were done )
>
>
> https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tu...
>
>
>
> So far no luck getting podman to work.
>
>
>
> I ran the failing command using debug
>
>
>
> DEBU[0000] Workdir "/opt/ibm/dbrest" resolved to host path
> "/home/db2rest1/.local/share/containers/storage/overlay/719f222c5894b8b113d90bae2d0a64dffba8b3303bc0513617e3176bf6ea6200/merged/opt/ibm/dbrest"
>
> DEBU[0000] Not modifying container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/passwd
>
> DEBU[0000] Not modifying container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/group
>
> DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode
> subscription
>
> DEBU[0000] Setting CGroups for container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c to
> user.slice:libpod:9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
>
> DEBU[0000] Created OCI spec for container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c at
> /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/config.json
>
> DEBU[0000] /usr/bin/conmon messages will be logged to syslog
>
> DEBU[0000] running conmon: /usr/bin/conmon
> args="[--api-version 1 -c
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -u
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -r
> /usr/bin/runc -b
> /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata
> -p
> /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/pidfile
> -n db2rest_dsn08d --exit-dir /tmp/runtime-u1/libpod/tmp/exits
> --socket-dir-path /tmp/runtime-u1/libpod/tmp/socket -s -l
> k8s-file:/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/ctr.log
> --log-level debug --syslog -t --conmon-pidfile
> /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/conmon.pid
> --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg
> /home/db2rest1/.local/share/containers/storage --exit-command-arg --runroot
> --exit-command-arg /tmp/runtime-u1/containers --exit-command-arg
> --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager
> --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg
> /tmp/runtime-u1/libpod/tmp --exit-command-arg --runtime --exit-command-arg
> runc --exit-command-arg --storage-driver --exit-command-arg overlay
> --exit-command-arg --storage-opt --exit-command-arg
> overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg
> --events-backend --exit-command-arg file --exit-command-arg --syslog
> --exit-command-arg container --exit-command-arg cleanup --exit-command-arg
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c]"
>
> INFO[0000] Running conmon under slice user.slice and unitName
> libpod-conmon-9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c.scope
>
> DEBU[0000] Received: -1
>
> DEBU[0000] Cleaning up container
> 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] Tearing down network namespace at
> /tmp/runtime-u1/netns/cni-13220a15-ad73-aec3-3ef7-7f7a08eb50f0 for
> container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
>
> DEBU[0000] unmounted container
> "9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c"
>
> DEBU[0000] ExitCode msg: "time=\"2021-08-18t14:22:06-05:00\" level=error
> msg=\"read unix @->/run/systemd/private: read: connection reset by
> peer\"\ntime=\"2021-08-18t14:22:06-05:00\" level=error
> msg=\"container_linux.go:367: starting container process caused:
> process_linux.go:340: applying cgroup configuration for process caused:
> read unix @->/run/systemd/private: read: connection reset by peer\": oci
> runtime error"
>
> Error: OCI runtime error: time="2021-08-18T14:22:06-05:00" level=error
> msg="read unix @->/run/systemd/private: read: connection reset by peer"
>
> time="2021-08-18T14:22:06-05:00" level=error msg="container_linux.go:367:
> starting container process caused: process_linux.go:340: applying cgroup
> configuration for process caused: read unix @->/run/systemd/private: read:
> connection reset by peer"
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
>
> *From:* Collins, Kent
> *Sent:* Wednesday, August 18, 2021 9:16 AM
> *To:* jeremy.valcourt(a)gmail.com; dwalsh(a)redhat.com
> *Cc:* podman(a)lists.podman.io
> *Subject:* Podman on Redhat
>
>
>
> So far running Podman ( non-Root ) on Redhat has been a horrible
> experience. It seems to take very little to break Podman.
>
>
>
> From breaking when using su or sudo to the directory length issue, these
> simple normal Unix everyday operations seem to be difficult for development
> of podman.
>
>
>
> I am trying to run a very simple API container using Podman as non-root
> and at this point I cannot start any containers.
>
>
>
> On top of that, workarounds found in searching for solutions also never
> work.
>
>
>
> For example these two work arounds do not work.
>
>
>
> export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus
>
>
>
> systemd-run --scope --user $SHELL
>
>
>
> I will admit I am not a Podman expert. My goal in using Podman over
> Docker should not require it. It only needs to perform basic container
> operations. Stop/start/rm/run/load
>
>
>
> Any help to get this working would be appreciated.
>
>
>
> ==> podman --version
>
> podman version 3.0.2-dev
>
>
>
> x /etc/*ease[1]: NAME="Red Hat Enterprise Linux" x
>
> x /etc/*ease[2]: VERSION="8.4 (Ootpa)" x
>
> x /etc/*ease[3]: ID="rhel" x
>
> x /etc/*ease[4]: ID_LIKE="fedora"
>
>
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
3 years, 4 months
Re: Podman on Redhat
by Collins, Kent
Hi
The Unix setup was correct already. No issues.
If you do not setup the subuid and subgid files you get the error below.
ERRO[0000] cannot find UID/GID for user b000980: No subuid ranges found for user "b000980" in /etc/subuid - check rootless mode in man pages.
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids
Error: stat /db/admin/rest/images/db2rest.tar: permission denied
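(For reference, the shape of the entries that error is looking for -- the
range below is an arbitrary example, not our real configuration:
$ grep b000980 /etc/subuid /etc/subgid
/etc/subuid:b000980:100000:65536
/etc/subgid:b000980:100000:65536
created with something like "usermod --add-subuids 100000-165535
--add-subgids 100000-165535 b000980", followed by "podman system migrate".)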
So all the steps were done perfectly following ( Steps 1-3 were done )
https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tu...
So far no luck getting podman to work.
I ran the failing command using debug
DEBU[0000] Workdir "/opt/ibm/dbrest" resolved to host path "/home/db2rest1/.local/share/containers/storage/overlay/719f222c5894b8b113d90bae2d0a64dffba8b3303bc0513617e3176bf6ea6200/merged/opt/ibm/dbrest"
DEBU[0000] Not modifying container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/passwd
DEBU[0000] Not modifying container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting CGroups for container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c to user.slice:libpod:9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c at /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -u 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -r /usr/bin/runc -b /home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata -p /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/pidfile -n db2rest_dsn08d --exit-dir /tmp/runtime-u1/libpod/tmp/exits --socket-dir-path /tmp/runtime-u1/libpod/tmp/socket -s -l k8s-file:/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/db2rest1/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/runtime-u1/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /tmp/runtime-u1/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c.scope
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] Tearing down network namespace at /tmp/runtime-u1/netns/cni-13220a15-ad73-aec3-3ef7-7f7a08eb50f0 for container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] unmounted container "9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c"
DEBU[0000] ExitCode msg: "time=\"2021-08-18t14:22:06-05:00\" level=error msg=\"read unix @->/run/systemd/private: read: connection reset by peer\"\ntime=\"2021-08-18t14:22:06-05:00\" level=error msg=\"container_linux.go:367: starting container process caused: process_linux.go:340: applying cgroup configuration for process caused: read unix @->/run/systemd/private: read: connection reset by peer\": oci runtime error"
Error: OCI runtime error: time="2021-08-18T14:22:06-05:00" level=error msg="read unix @->/run/systemd/private: read: connection reset by peer"
time="2021-08-18T14:22:06-05:00" level=error msg="container_linux.go:367: starting container process caused: process_linux.go:340: applying cgroup configuration for process caused: read unix @->/run/systemd/private: read: connection reset by peer"
Kent Collins
Office: 817.352.0251 | Enterprise Information Management | Cell: 817.879.7764
Data Solutions Architect/Scientist – Published Author and Conference Speaker
“Death and life are in the power of the tongue: and they that love it shall eat the fruit thereof.”
Prov 18:21
From: Collins, Kent
Sent: Wednesday, August 18, 2021 9:16 AM
To: jeremy.valcourt(a)gmail.com; dwalsh(a)redhat.com
Cc: podman(a)lists.podman.io
Subject: Podman on Redhat
So far running Podman ( non-Root ) on Redhat has been a horrible experience. It seems to take very little to break Podman.
From breaking when using su or sudo to the directory length issue, these simple normal Unix everyday operations seem to be difficult for development of podman.
I am trying to run a very simple API container using Podman as non-root and at this point I cannot start any containers.
On top of that, workarounds found in searching for solutions also never work.
For example these two work arounds do not work.
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus
systemd-run --scope --user $SHELL
I will admit I am not a Podman expert. My goal in using Podman over Docker should not require it. It only needs to perform basic container operations. Stop/start/rm/run/load
Any help to get this working would be appreciated.
==> podman --version
podman version 3.0.2-dev
x /etc/*ease[1]: NAME="Red Hat Enterprise Linux" x
x /etc/*ease[2]: VERSION="8.4 (Ootpa)" x
x /etc/*ease[3]: ID="rhel" x
x /etc/*ease[4]: ID_LIKE="fedora"
Kent Collins
Office: 817.352.0251 | Enterprise Information Management | Cell: 817.879.7764
Data Solutions Architect/Scientist – Published Author and Conference Speaker
“Death and life are in the power of the tongue: and they that love it shall eat the fruit thereof.”
Prov 18:21
3 years, 4 months
Re: Podman on Redhat
by Scott McCarty
Kent,
We'd be happy to help, but I can't quite discern what you're trying to
do. I've never seen these workarounds that you mention, so I don't know
what they are trying to work around. When you say "simple API service," I
think of a web service, but maybe you are trying to share a Unix socket?
As for su and sudo breaking, I have never seen that happen in RHEL with
Podman. I'd be happy to do a remote session to dig into what you're trying
to do.
Best Regards
Scott M
On Wed, Aug 18, 2021 at 10:16 AM Collins, Kent <Robert.Collins(a)bnsf.com>
wrote:
> So far running Podman ( non-Root ) on Redhat has been a horrible
> experience. It seems to take very little to break Podman.
>
>
>
> From breaking when using su or sudo to the directory length issue, these
> simple normal Unix everyday operations seem to be difficult for development
> of podman.
>
>
>
> I am trying to run a very simple API container using Podman as non-root
> and at this point I cannot start any containers.
>
>
>
> On top of that, workarounds found in searching for solutions also never
> work.
>
>
>
> For example these two work arounds do not work.
>
>
>
> export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus
>
>
>
> systemd-run --scope --user $SHELL
>
>
>
> I will admit I am not a Podman expert. My goal in using Podman over
> Docker should not require it. It only needs to perform basic container
> operations. Stop/start/rm/run/load
>
>
>
> Any help to get this working would be appreciated.
>
>
>
> ==> podman --version
>
> podman version 3.0.2-dev
>
>
>
> x /etc/*ease[1]: NAME="Red Hat Enterprise Linux" x
>
> x /etc/*ease[2]: VERSION="8.4 (Ootpa)" x
>
> x /etc/*ease[3]: ID="rhel" x
>
> x /etc/*ease[4]: ID_LIKE="fedora"
>
>
>
>
>
>
>
> *Kent Collins*
>
> Office: 817.352.0251 | Enterprise Information Management | Cell:
> 817.879.7764
>
> Data Solutions Architect/Scientist – Published Author and Conference
> Speaker
>
>
> “Death and life *are* in the power of the tongue: and they that love it
> shall eat the fruit thereof.”
>
> Prov 18:21
>
>
>
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--
--
18 ways to differentiate open source products from upstream suppliers:
https://opensource.com/article/21/2/differentiating-products-upstream-sup...
--
Scott McCarty
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
3 years, 4 months