Kent,
That at least gives me something to work with. It sounds like this was a
RHEL 8.0 or 8.1 box which was upgraded to RHEL 8.4. In those early versions
of RHEL, there were still some manual steps to get rootless working.
In RHEL 8.4, rootless should work quite well with no extra steps necessary;
we've done a lot of work to make sure it works out of the box.
In addition to the upgrade problem, I suspect your corporate standard build
might make security changes which could make rootless more fragile.
Do you have permissions to add a new user? If so, could you add a test user
and try to run your command with that?
This would give us a baseline to ensure that it's not something in the
default configuration of your user account.
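As a quick sanity check alongside that, a couple of commands (a sketch, nothing Podman-specific) can show whether the current login has a proper systemd user session. Rootless podman tends to break when the session was entered via su or sudo rather than a real login:

```shell
# Rootless podman needs XDG_RUNTIME_DIR to point at the user's systemd-managed
# runtime dir (normally /run/user/<uid>). Sessions entered via su/sudo often
# leave it unset or pointing at the wrong user's directory.
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-unset}"
echo "uid=$(id -u)"
```

If the directory in XDG_RUNTIME_DIR doesn't match the uid shown, that alone can explain rootless failures.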
Best Regards
Scott M
On Wed, Aug 18, 2021, 3:48 PM Collins, Kent <Robert.Collins(a)bnsf.com> wrote:
Hi
The Unix setup was correct already. No issues.
If you do not set up the subuid and subgid files you get the error below.
ERRO[0000] cannot find UID/GID for user b000980: No subuid ranges found
for user "b000980" in /etc/subuid - check rootless mode in man pages.
WARN[0000] using rootless single mapping into the namespace. This might
break some images. Check /etc/subuid and /etc/subgid for adding sub*ids
Error: stat /db/admin/rest/images/db2rest.tar: permission denied
So all the steps (Steps 1-3) were done, following:
https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tu...
So far no luck getting podman to work.
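For reference, a quick way to confirm the ranges from that tutorial are actually in place for a given account is a plain grep (no root needed):

```shell
# Rootless podman needs subordinate UID/GID ranges for the user in
# /etc/subuid and /etc/subgid, e.g. "b000980:100000:65536".
grep "^$(id -un):" /etc/subuid /etc/subgid 2>/dev/null \
  || echo "no subuid/subgid ranges for $(id -un)"
# After adding or changing ranges, "podman system migrate" makes an existing
# rootless podman installation pick them up.
```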
I ran the failing command using debug
DEBU[0000] Workdir "/opt/ibm/dbrest" resolved to host path
"/home/db2rest1/.local/share/containers/storage/overlay/719f222c5894b8b113d90bae2d0a64dffba8b3303bc0513617e3176bf6ea6200/merged/opt/ibm/dbrest"
DEBU[0000] Not modifying container
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/passwd
DEBU[0000] Not modifying container
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode
subscription
DEBU[0000] Setting CGroups for container
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c to
user.slice:libpod:9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c at
/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon
args="[--api-version 1 -c
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -u
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c -r
/usr/bin/runc -b
/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata
-p
/tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/pidfile
-n db2rest_dsn08d --exit-dir /tmp/runtime-u1/libpod/tmp/exits
--socket-dir-path /tmp/runtime-u1/libpod/tmp/socket -s -l
k8s-file:/home/db2rest1/.local/share/containers/storage/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/ctr.log
--log-level debug --syslog -t --conmon-pidfile
/tmp/runtime-u1/containers/overlay-containers/9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c/userdata/conmon.pid
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg
/home/db2rest1/.local/share/containers/storage --exit-command-arg --runroot
--exit-command-arg /tmp/runtime-u1/containers --exit-command-arg
--log-level --exit-command-arg debug --exit-command-arg --cgroup-manager
--exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg
/tmp/runtime-u1/libpod/tmp --exit-command-arg --runtime --exit-command-arg
runc --exit-command-arg --storage-driver --exit-command-arg overlay
--exit-command-arg --storage-opt --exit-command-arg
overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg
--events-backend --exit-command-arg file --exit-command-arg --syslog
--exit-command-arg container --exit-command-arg cleanup --exit-command-arg
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c]"
INFO[0000] Running conmon under slice user.slice and unitName
libpod-conmon-9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c.scope
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container
9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] Tearing down network namespace at
/tmp/runtime-u1/netns/cni-13220a15-ad73-aec3-3ef7-7f7a08eb50f0 for
container 9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c
DEBU[0000] unmounted container
"9a7e3a501de24c4afb08f105817d3e443a7f381cd7441b7b861b64ad3b77677c"
DEBU[0000] ExitCode msg: "time=\"2021-08-18t14:22:06-05:00\" level=error
msg=\"read unix @->/run/systemd/private: read: connection reset by
peer\"\ntime=\"2021-08-18t14:22:06-05:00\" level=error
msg=\"container_linux.go:367: starting container process caused:
process_linux.go:340: applying cgroup configuration for process caused:
read unix @->/run/systemd/private: read: connection reset by peer\": oci
runtime error"
Error: OCI runtime error: time="2021-08-18T14:22:06-05:00" level=error
msg="read unix @->/run/systemd/private: read: connection reset by peer"
time="2021-08-18T14:22:06-05:00" level=error msg="container_linux.go:367:
starting container process caused: process_linux.go:340: applying cgroup
configuration for process caused: read unix @->/run/systemd/private: read:
connection reset by peer"
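For what it's worth, that "read unix @->/run/systemd/private" error usually means podman asked systemd to create a cgroup scope but no systemd user instance was reachable, which is typical after entering the account with su or sudo. A rough check, assuming systemd and loginctl are available on the box:

```shell
# If there is no systemd user instance for this user, cgroup-manager=systemd
# cannot create the container's scope and rootless podman fails exactly like this.
systemctl --user is-active default.target 2>/dev/null \
  || echo "no systemd user instance reachable"
# Lingering lets a user instance exist without an interactive login:
loginctl show-user "$(id -un)" 2>/dev/null | grep -i '^Linger=' || true
```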
*Kent Collins*
Office: 817.352.0251 | Enterprise Information Management | Cell:
817.879.7764
Data Solutions Architect/Scientist – Published Author and Conference
Speaker
“Death and life *are* in the power of the tongue: and they that love it
shall eat the fruit thereof.”
Prov 18:21
*From:* Collins, Kent
*Sent:* Wednesday, August 18, 2021 9:16 AM
*To:* jeremy.valcourt(a)gmail.com; dwalsh(a)redhat.com
*Cc:* podman(a)lists.podman.io
*Subject:* Podman on Redhat
So far, running Podman (non-root) on Red Hat has been a horrible
experience. It seems to take very little to break Podman.
From breaking under su or sudo to the directory-length issue, these
simple, everyday Unix operations seem to be difficult for Podman to
handle.
I am trying to run a very simple API container using Podman as non-root
and at this point I cannot start any containers.
On top of that, the workarounds I found while searching for solutions
never work either.
For example, these two workarounds do not work:
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus
systemd-run --scope --user $SHELL
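For context on the first of those workarounds: it just reconstructs the per-user session-bus address from the UID, which only helps if a systemd user instance actually owns that socket. A sketch of what it does:

```shell
# Build the per-user D-Bus session socket path the same way the workaround does.
uid=$(id -u)
addr="unix:path=/run/user/${uid}/bus"
echo "$addr"
# If /run/user/<uid>/bus does not exist, exporting DBUS_SESSION_BUS_ADDRESS
# cannot fix anything; the session was likely entered via su or sudo.
```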
I will admit I am not a Podman expert, but my use of Podman over
Docker should not require expertise. It only needs to perform basic
container operations: stop/start/rm/run/load.
Any help to get this working would be appreciated.
==> podman --version
podman version 3.0.2-dev
==> /etc/*ease:
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
_______________________________________________
Podman mailing list -- podman(a)lists.podman.io
To unsubscribe send an email to podman-leave(a)lists.podman.io