feasible to upgrade podman on CentOS 8 to current version?
by Robert P. J. Day
I just upgraded a CentOS box to CentOS 8, and I can see that the
version of podman is (unsurprisingly) a bit dated:
$ podman --version
podman version 1.0.2-dev
compared to my Fedora 30 system:
$ podman --version
podman version 1.6.1
Is it feasible to try to download and build from source to get the
latest version on my CentOS system, or would that just be more trouble
than it's worth?
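For reference, the rough recipe I have in mind (untested, and the package
names and build tags below will likely need adjusting on CentOS 8, which
for instance has no btrfs headers) is something like:
$ sudo dnf install -y golang git make gcc glib2-devel gpgme-devel \
    libassuan-devel libseccomp-devel libselinux-devel device-mapper-devel
$ git clone https://github.com/containers/libpod
$ cd libpod
$ make BUILDTAGS="selinux seccomp exclude_graphdriver_btrfs"
$ sudo make install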
rday
--
========================================================================
Robert P. J. Day Ottawa, Ontario, CANADA
http://crashcourse.ca
Twitter: http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================
port bindings are not yet supported by rootless containers
by Álvaro Castillo
Hello,
I am interested in running a container with port redirects. I was trying to run an nginx container with port redirects like 80:1024, 80:1200, 80:81...
But it always gives me the same error:
port bindings are not yet supported by rootless containers
My OS is CentOS 8, but I've tried with Fedora 31 beta and the same thing happens.
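From what I have read, newer podman versions forward rootless ports through
slirp4netns as long as the host-side port is unprivileged (>= 1024), so what
I am trying to get working is roughly this (8080 is just an example):
$ podman run -d --name web -p 8080:80 nginx
$ curl http://localhost:8080/
# and, if a low host port is really needed, the unprivileged range can be
# lowered system-wide (as root, and with care):
$ sudo sysctl net.ipv4.ip_unprivileged_port_start=80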
Can you help me?
Thanks.
podman references
by Álvaro Castillo
Hello all,
I am a newbie here, and I'm interested in getting more information about how Podman works with Kubernetes without using Docker. Books, PDFs, articles, howtos...
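For example, I would like to understand workflows like this one, where a
container is round-tripped through Kubernetes YAML (this is my reading of the
docs, so please correct me if the commands are wrong):
$ podman run -d --name web -p 8080:80 nginx
# export the running container as a Kubernetes Pod definition
$ podman generate kube web > web.yaml
# recreate the same workload from that YAML on another podman host
$ podman play kube web.yaml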
Greetings!
rootless podman group credentials limited to users primary group?
by eae@us.ibm.com
Scenario: a rootless user with primary and secondary group membership starts a container with a mounted filesystem.
Expected behavior: the group credentials of the podman container respect the result of newgrp run before starting the container.
Actual behavior: the group credentials used for access are always those of the primary group.
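A sketch of what we are doing (user, group, and path names are made up):
$ id
uid=1000(alice) gid=1000(alice) groups=1000(alice),2000(project)
$ newgrp project
$ podman run --rm -v /gpfs/project:/data alpine id
# inside the container the process carries only the primary group,
# so group-based access to /data is denied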
[varlink] how to do exec using podman-remote?
by Francesco Romani
Hi all,
I'm using podman-remote and I'm trying to exec commands in containers.
It seems to fail, but I don't really understand why.
I'm on Fedora 30 (with the latest updates); I tried podman 1.6.1 (from RPMs)
but also 1.6.2 built from source:
$ podman version
Version: 1.6.2
RemoteAPI Version: 1
Go Version: go1.12.10
OS/Arch: linux/amd64
Ultimately, I'd need to exec commands from a Go program, but for now
experimenting with the command line is fine; I can't even make this work :)
Here's what I tried:
$ varlink call -m unix:/run/podman/io.podman/io.podman.CreateContainer
'{"create":{"args":["fedora:30", "/bin/sleep", "10h"]}}'
{
"container":
"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
}
$ varlink call -m unix:/run/podman/io.podman/io.podman.StartContainer
'{"name":"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"}'
{
"container":
"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
}
The container IS running now:
root 7836 0.0 0.0 77876 1756 ? Ssl 09:11 0:00
/usr/bin/conmon --api-version 1 -s -c
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -u
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -r
/usr/bin/runc -b
/var/lib/containers/storage/overlay-containers/c24e28054f89c0a
root 7848 0.0 0.0 2320 684 ? Ss 09:11 0:00 \_
/bin/sleep 10h
So I do:
$ varlink call -m unix:/run/podman/io.podman/io.podman.ExecContainer '{
"opts": { "name":
"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853",
"tty": true, "privileged": true, "cmd": ["/bin/date"] } }'
Call failed with error: io.podman.ErrorOccurred
{
"reason": "client must use upgraded connection to exec"
}
So I downloaded go-varlink-cmd
(https://github.com/varlink/go-varlink-cmd) and patched it to support the
upgraded connection (on the client side)[1], but it doesn't look much better:
$ ~/bin/go-varlink-cmd call -upgrade
unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name":
"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853",
"tty": true, "privileged": true, "cmd": ["/bin/date"] } }'
recv -> 0 # return value
retval -> map[string]interface {}(nil) # what I got as the answer
{} # answer translated to JSON
No luck with minimal command line either:
$ ~/bin/go-varlink-cmd call -upgrade
unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name":
"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853",
"cmd": ["/bin/date"] } }'
recv -> 0
retval -> map[string]interface {}(nil)
{}
Just wondering: do I need to set something when I create the
container? If so, the docs aren't crystal clear :\
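As a cross-check I also plan to try the podman-remote CLI itself, which
should perform the connection upgrade on its own (assuming exec over the
remote client is actually wired up in 1.6.x):
# container ID shortened for readability
$ podman-remote exec c24e28054f89 /bin/date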
I tried to look at the logs but can't make much sense of them. Here are the
logs for podman-remote on my system, with the log level increased to debug:
Oct 28 09:21:09 myhost.lan systemd[1]: Started Podman Remote API Service.
Oct 28 09:21:09 myhost.lan audit[1]: SERVICE_START pid=1 uid=0
auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0
msg='unit=io.podman comm="systemd" exe="/usr/lib/systemd/systemd"
hostname=? addr=? terminal=? res=success'
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using varlink socket:
unix:/run/podman/io.podman"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="using conmon:
\"/usr/bin/conmon\""
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Initializing boltdb
state at /var/lib/containers/storage/libpod/bolt_state.db"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using graph driver
overlay"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using graph root
/var/lib/containers/storage"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using run root
/var/run/containers/storage"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using static dir
/var/lib/containers/storage/libpod"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using tmp dir
/var/run/libpod"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Using volume path
/var/lib/containers/storage/volumes"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Set libpod namespace
to \"\""
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="[graphdriver] trying
provided driver \"overlay\""
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated
that overlay is supported"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated
that metacopy is being used"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated
that native-diff is not being used"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=warning msg="Not using native
diff for overlay, this may cause degraded performance for building
images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="backingFs=extfs,
projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Initializing event
backend journald"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="using runtime
\"/usr/bin/runc\""
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=warning msg="Error initializing
configured OCI runtime crun: no valid executable found for OCI runtime
crun: invalid argument"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=info msg="Found CNI network
podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Creating new exec
session in container
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 with
session id 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=error msg="ExecContainer failed
to HANG-UP on
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853: write
unix /run/podman/io.podman->@: write: broken pipe"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=error msg="Exec Container err:
write unix /run/podman/io.podman->@: write: broken pipe"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="/usr/bin/conmon
messages will be logged to syslog"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="running conmon:
/usr/bin/conmon" args="[--api-version 1 -s -c
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -u
6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c -r
/usr/bin/runc -b />
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="disabling SD notify"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=info msg="Running conmon under
slice machine.slice and unitName
libpod-conmon-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853.scope"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=warning msg="Failed to add conmon
to systemd sandbox cgroup: Unit
libpod-conmon-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853.scope
already exists."
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Attaching to container
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 exec
session 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="connecting to socket
/var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.B2039Z}
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: attach sock path:
/var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: addr{sun_family=AF_UNIX,
sun_path=/var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach}
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: ctl fifo path:
/var/lib/containers/storage/overlay-containers/c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853/userdata/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/ctl
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: terminal_ctrl_fd: 18
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ndebug>: sending attach message to parent
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ndebug>: sent attach message to parent
Oct 28 09:21:09 myhost.lan conmon[8108]: conmon c24e28054f89c0a0ac9c
<ndebug>: exec with attach is waiting for start message from parent
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Received: 0"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: Accepted connection 20
Oct 28 09:21:09 myhost.lan conmon[8108]: conmon c24e28054f89c0a0ac9c
<ndebug>: exec with attach got start message from parent
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: about to accept from console_socket_fd: 14
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: about to recvfd from connfd: 21
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ninfo>: console = {.name = '/dev/ptmx8 09:21:09 conmon: conmon
c24e28054f89c0a0ac9c <ninfo>: about to recvfd from connfd: 21
'; .fd = 14}
Oct 28 09:21:09 myhost.lan systemd[2088]:
run-runc-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853-runc.xJTNZd.mount:
Succeeded.
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ndebug>: couldn't find cb for pid 8121
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ndebug>: container status and pid were found prior to callback being
registered. calling manually
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<ndebug>: container PID: 8121
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<nwarn>: Failed to open cgroups file: /proc/8121/cgroup
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<nwarn>: Failed to get memory cgroup path. Container may have exited
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Received: 8121"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<nwarn>: stdio_input read failed Input/output error
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<nwarn>: Failed to write to socket
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c
<error>: Unable to send container stderr message to parent Bad file
descriptor
Oct 28 09:21:09 myhost.lan podman[8087]: 2019-10-28 09:21:09.632987596
+0100 CET m=+0.247438867 container exec
c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853
(image=docker.io/library/fedora:30, name=sad_thompson)
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=debug msg="Successfully started
exec session
6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c in
container c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
Oct 28 09:21:09 myhost.lan podman[8087]:
time="2019-10-28T09:21:09+01:00" level=error msg="write unix
/run/podman/io.podman->@: write: broken pipe"
I'm out of ideas and can't find anything in the docs. Apologies if I missed
something; please feel free to point me there.
Any help or comment would be appreciated.
Thanks and best regards!
--
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
Sharing blob-info-cache-v1.boltdb across multiple machines
by eae@us.ibm.com
We have a cluster of machines where /home is a remote gluster mount. Running podman rootless nicely solves the problem of accessing the remote filesystem with user credentials. Since remote filesystems do not currently support namespaces, podman is run with --root, --runroot, and --tmpdir set to be /tmp/$USER. All works well on the first client machine, but an image pulled successfully on one machine will fail to pull on a second. For example, on the second machine:
$ podman run --rm -it ubuntu
Trying to pull docker.io/library/ubuntu...Getting image source signatures
Copying blob c58094023a2e done
Copying blob 079b6d2a1e53 done
Copying blob 11048ebae908 done
Copying blob 22e816666fd6 done
Copying config cf0f3ca922 done
Writing manifest to image destination
Storing signatures
ERRO[0168] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
ERRO[0200] Error pulling image ref //ubuntu:latest: Error committing the finished image: error adding layer with blob "sha256:22e816666fd6516bccd19765947232debc14a5baf2418b2202fd67b3807b6b91": ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
Failed
Trying to pull registry.fedoraproject.org/ubuntu...ERRO[0200] Error pulling image ref //registry.fedoraproject.org/ubuntu:latest: Error initializing source docker://registry.fedoraproject.org/ubuntu:latest: Error reading manifest latest in registry.fedoraproject.org/ubuntu: manifest unknown: manifest unknown
Failed
Trying to pull quay.io/ubuntu...ERRO[0201] Error pulling image ref //quay.io/ubuntu:latest: Error initializing source docker://quay.io/ubuntu:latest: Error reading manifest latest in quay.io/ubuntu: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
Failed
Trying to pull registry.centos.org/ubuntu...ERRO[0201] Error pulling image ref //registry.centos.org/ubuntu:latest: Error initializing source docker://registry.centos.org/ubuntu:latest: Error reading manifest latest in registry.centos.org/ubuntu: manifest unknown: manifest unknown
Failed
Error: unable to pull ubuntu: 4 errors occurred:
* Error committing the finished image: error adding layer with blob "sha256:22e816666fd6516bccd19765947232debc14a5baf2418b2202fd67b3807b6b91": ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
* Error initializing source docker://registry.fedoraproject.org/ubuntu:latest: Error reading manifest latest in registry.fedoraproject.org/ubuntu: manifest unknown: manifest unknown
* Error initializing source docker://quay.io/ubuntu:latest: Error reading manifest latest in quay.io/ubuntu: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
* Error initializing source docker://registry.centos.org/ubuntu:latest: Error reading manifest latest in registry.centos.org/ubuntu: manifest unknown: manifest unknown
Our guess is that this is happening because blob-info-cache-v1.boltdb is in the shared /home filesystem.
Is there a suggested approach to running rootless podman on multiple machines with a shared /home directory?
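In case it is relevant, the kind of invocation we are considering (an
untested sketch, and whether the shared cache is really what triggers the
lchown failure is an open question) keeps the XDG directories on node-local
storage as well, since rootless podman appears to derive its cache location
from them:
$ export XDG_DATA_HOME=/tmp/$USER/.local/share
$ export XDG_CONFIG_HOME=/tmp/$USER/.config
$ podman --root /tmp/$USER/containers/storage \
    --runroot /tmp/$USER/containers/runroot \
    --tmpdir /tmp/$USER/libpod/tmp \
    run --rm -it ubuntu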
Thanks,
Eddie
rootless centos + gpu problem? plz advise.
by Lou DeGenaro
Script started on Mon 14 Oct 2019 10:51:34 AM CDT
[eae@ducc-saicluster-gpu-801 ~]$ podman info
host:
BuildahVersion: 1.9.0
Conmon:
package: podman-1.4.4-4.el7.centos.x86_64
path: /usr/libexec/podman/conmon
version: 'conmon version 0.3.0, commit: unknown'
Distribution:
distribution: '"centos"'
version: "7"
MemFree: 52633743360
MemTotal: 63154479104
OCIRuntime:
package: runc-1.0.0-65.rc8.el7.centos.x86_64
path: /usr/bin/runc
version: 'runc version spec: 1.0.1-dev'
SwapFree: 2146758656
SwapTotal: 2146758656
arch: amd64
cpus: 8
hostname: ducc-saicluster-gpu-801.sl.cloud9.ibm.com
kernel: 3.10.0-1062.1.2.el7.x86_64
os: linux
rootless: true
uptime: 145h 40m 2.43s (Approximately 6.04 days)
registries:
blocked: null
insecure: null
search:
- registry.access.redhat.com
- docker.io
- registry.fedoraproject.org
- quay.io
- registry.centos.org
store:
ConfigFile: /home/eae/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
GraphRoot: /tmp/eae/containers/storage
GraphStatus: {}
ImageStore:
number: 2
RunRoot: /run/user/13642
VolumePath: /tmp/eae/containers/storage/volumes
[eae@ducc-saicluster-gpu-801 ~]$ cat .config/containers/libpod.conf
volume_path = "/tmp/eae/containers/storage/volumes"
image_default_transport = "docker://"
runtime = "runc"
conmon_path = ["/usr/libexec/podman/conmon",
"/usr/local/lib/podman/conmon", "/usr/bin/conmon", "/usr/sbin/conmon",
"/usr/local/bin/conmon", "/usr/local/sbin/conmon"]
conmon_env_vars =
["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
cgroup_manager = "cgroupfs"
init_path = "/usr/libexec/podman/catatonit"
static_dir = "/tmp/eae/containers/storage/libpod"
tmp_dir = "/run/user/13642/libpod/tmp"
max_log_size = -1
no_pivot_root = false
cni_config_dir = "/etc/cni/net.d/"
cni_plugin_dir = ["/usr/libexec/cni", "/usr/lib/cni", "/usr/local/lib/cni",
"/opt/cni/bin"]
infra_image = "k8s.gcr.io/pause:3.1"
infra_command = "/pause"
enable_port_reservation = true
label = true
network_cmd_path = ""
num_locks = 2048
events_logger = "journald"
EventsLogFilePath = ""
detach_keys = "ctrl-p,ctrl-q"
hooks_dir = ["/usr/share/containers/oci/hooks.d"]
[runtimes]
runc = ["/usr/bin/runc", "/usr/sbin/runc", "/usr/local/bin/runc",
"/usr/local/sbin/runc", "/sbin/runc", "/bin/runc",
"/usr/lib/cri-o-runc/sbin/runc"]
[eae@ducc-saicluster-gpu-801 ~]$ cat /usr/share/containers/oci/hooks.d/oci-nvidia-hook.json
{
"version": "1.0.0",
"hook": {
"path": "/usr/bin/nvidia-container-toolkit",
"args": ["nvidia-container-toolkit", "prestart"],
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
]
},
"when": {
"always": true,
"commands": [".*"]
},
"stages": ["prestart"]
}
[eae@ducc-saicluster-gpu-801 ~]$ podman --log-level=debug run --rm
nvidia/cuda nvidia-smi
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at
/tmp/eae/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /tmp/eae/containers/storage
DEBU[0000] Using run root /run/user/13642
DEBU[0000] Using static dir /tmp/eae/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/13642/libpod/tmp
DEBU[0000] Using volume path /tmp/eae/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
DEBU[0000] parsed reference into "[vfs@
/tmp/eae/containers/storage+/run/user/13642]docker.io/nvidia/cuda:latest"
DEBU[0000] parsed reference into
"[vfs@/tmp/eae/containers/storage+/run/user/13642]@946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] exporting opaque data as blob
"sha256:946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] parsed reference into
"[vfs@/tmp/eae/containers/storage+/run/user/13642]@946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] exporting opaque data as blob
"sha256:946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] parsed reference into
"[vfs@/tmp/eae/containers/storage+/run/user/13642]@946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] Got mounts: []
DEBU[0000] Got volumes: []
DEBU[0000] Using slirp4netns netmode
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 0 for container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8
DEBU[0000] parsed reference into
"[vfs@/tmp/eae/containers/storage+/run/user/13642]@946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0000] exporting opaque data as blob
"sha256:946e78c7b2984354477ae4b75bf519940f4df648c092564d1d9c83ea8c92c8f3"
DEBU[0009] created container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8"
DEBU[0009] container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8" has work
directory
"/tmp/eae/containers/storage/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata"
DEBU[0009] container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8" has run
directory
"/run/user/13642/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata"
DEBU[0009] New container created
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8"
DEBU[0009] container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8" has
CgroupParent
"/libpod_parent/libpod-75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8"
DEBU[0009] Not attaching to stdin
DEBU[0009] mounted container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8" at
"/tmp/eae/containers/storage/vfs/dir/dfce96c6e34e6c12aad6da967a42c56db04a30664bf8c6a081ee5efb1dcb7b19"
DEBU[0009] Created root filesystem for container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8 at
/tmp/eae/containers/storage/vfs/dir/dfce96c6e34e6c12aad6da967a42c56db04a30664bf8c6a081ee5efb1dcb7b19
DEBU[0009] /etc/system-fips does not exist on host, not mounting
FIPS mode secret
DEBU[0009] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0009] added hook
/usr/share/containers/oci/hooks.d/oci-nvidia-hook.json
DEBU[0009] hook oci-nvidia-hook.json matched; adding to stages
[prestart]
DEBU[0009] Created OCI spec for container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8 at
/tmp/eae/containers/storage/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata/config.json
DEBU[0009] /usr/libexec/podman/conmon messages will be logged to
syslog
DEBU[0009] running conmon: /usr/libexec/podman/conmon args="[-c
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8
-u 75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8 -n
flamboyant_pare -r /usr/bin/runc -b
/tmp/eae/containers/storage/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata
-p
/run/user/13642/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata/pidfile
--exit-dir /run/user/13642/libpod/tmp/exits --conmon-pidfile
/run/user/13642/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata/conmon.pid
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg
/tmp/eae/containers/storage --exit-command-arg --runroot --exit-command-arg
/run/user/13642 --exit-command-arg --log-level --exit-command-arg debug
--exit-command-arg --cgroup-manager --exit-command-arg cgroupfs
--exit-command-arg --tmpdir --exit-command-arg /run/user/13642/libpod/tmp
--exit-command-arg --runtime --exit-command-arg runc --exit-command-arg
--storage-driver --exit-command-arg vfs --exit-command-arg container
--exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8
--socket-dir-path /run/user/13642/libpod/tmp/socket -l
k8s-file:/tmp/eae/containers/storage/vfs-containers/75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8/userdata/ctr.log
--log-level debug --syslog]"
WARN[0009] Failed to add conmon to cgroupfs sandbox cgroup: error
creating cgroup for blkio: mkdir /sys/fs/cgroup/blkio/libpod_parent:
permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0010] Received container pid: -1
DEBU[0010] Cleaning up container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8
DEBU[0010] Network is already cleaned up, skipping...
DEBU[0010] unmounted container
"75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8"
DEBU[0010] Cleaning up container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8
DEBU[0010] Network is already cleaned up, skipping...
DEBU[0010] Container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8 storage is
already unmounted, skipping...
DEBU[0010] Container
75bb8e197bea3d0c56f5060ab5e1388a1bdcab354e9820bd5554d3bf273a54d8 storage is
already unmounted, skipping...
ERRO[0010] container_linux.go:345: starting container process
caused "process_linux.go:430: container init caused \"process_linux.go:413:
running prestart hook 0 caused \\\"error running hook: exit status 1,
stdout: , stderr: nvidia-container-cli: mount error: open failed:
/sys/fs/cgroup/devices/user.slice/devices.allow: permission
denied\\\\n\\\"\""
: OCI runtime error
Script done on Mon 14 Oct 2019 10:53:23 AM CDT
Re: rootless centos + gpu problem? plz advise.
by Edward Epstein
We discovered that the Ubuntu 18.04 machine had a configuration change to
get rootless working with nvidia:
"no-cgroups = true" was set in /etc/nvidia-container-runtime/config.toml
Unfortunately this config change did not work on CentOS 7, but it did
change the rootless error to:
nvidia-container-cli: initialization error: cuda error: unknown error\\\\n
\\\"\""
This config change breaks podman running as root, with the error:
Failed to initialize NVML: Unknown Error
Interestingly, root on Ubuntu gets the same error even though rootless
works.
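For reference, the fragment we changed looks roughly like this (quoted from
memory, so the surrounding keys may differ between nvidia-container-runtime
versions):
$ cat /etc/nvidia-container-runtime/config.toml
...
[nvidia-container-cli]
no-cgroups = true
...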
Trouble with mounting /sys/fs/cgroup from host to podman container
by Per Lundberg
Hi,
While trying to use podman for a current task where I am testing an
Ansible role being developed with the Molecule testing framework
(https://molecule.readthedocs.io), I am running into some trouble
with the /sys/fs/cgroup mounting.
All the containers I'm trying to run with this volume mounted into the
container (to allow systemd to run inside podman) give me errors like these:
$ podman run -d --name centos7 --privileged=True --volume
/sys/fs/cgroup:/sys/fs/cgroup:rw --tty=True molecule_local/centos:7
/sbin/init
Error: container_linux.go:345: starting container process caused
"process_linux.go:430: container init caused \"rootfs_linux.go:58:
mounting \\\"/sys/fs/cgroup\\\" to rootfs
\\\"/home/per/.local/share/containers/storage/vfs/dir/41434e3e7d6979474c6a4829745acba3d124189037c7fef34455594823a91a2c\\\"
at
\\\"/home/per/.local/share/containers/storage/vfs/dir/41434e3e7d6979474c6a4829745acba3d124189037c7fef34455594823a91a2c/sys/fs/cgroup\\\"
caused \\\"operation not permitted\\\"\"": OCI runtime permission denied
error
$ podman run -d --name ubuntu16.04 --privileged=True --volume
/sys/fs/cgroup:/sys/fs/cgroup:rw --tty=True molecule_local/ubuntu:16.04
/sbin/init
Error: container_linux.go:345: starting container process caused
"process_linux.go:430: container init caused \"rootfs_linux.go:58:
mounting \\\"/sys/fs/cgroup\\\" to rootfs
\\\"/home/per/.local/share/containers/storage/vfs/dir/ff8d8b2f47aacc6a30522091aad2cad6e81d9f0cc011d7e1fb1f09b62bc7210b\\\"
at
\\\"/home/per/.local/share/containers/storage/vfs/dir/ff8d8b2f47aacc6a30522091aad2cad6e81d9f0cc011d7e1fb1f09b62bc7210b/sys/fs/cgroup\\\"
caused \\\"operation not permitted\\\"\"": OCI runtime permission denied
error
$ podman run --log-opt debug -d --name ubuntu18.04 --privileged=True
--volume /sys/fs/cgroup:/sys/fs/cgroup:rw --tty=True
molecule_local/ubuntu:18.04 /sbin/init
Error: container_linux.go:345: starting container process caused
"process_linux.go:430: container init caused \"rootfs_linux.go:58:
mounting \\\"/sys/fs/cgroup\\\" to rootfs
\\\"/home/per/.local/share/containers/storage/vfs/dir/0740c9c17a0fe13542746fbd248e0c3cb35aaf7c965e56cac5875840b2aab235\\\"
at
\\\"/home/per/.local/share/containers/storage/vfs/dir/0740c9c17a0fe13542746fbd248e0c3cb35aaf7c965e56cac5875840b2aab235/sys/fs/cgroup\\\"
caused \\\"operation not permitted\\\"\"": OCI runtime permission denied
error
Running "podman run -it molecule_local/centos:7" works fine.
Any ideas? I guess I could run podman with sudo (and this might be required
for this particular use case), but if possible, I'd prefer to avoid it.
This is, by the way, on Debian buster/bullseye, with the 1.6.1-1~ubuntu19.04~ppa3
package installed. Since there was no native Debian package available, I
added the Ubuntu PPA and was hoping (*cough*) that it would work
reasonably well...
Kernel is 4.19.0-6-amd64.
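One thing I have not tried yet (just an idea from the man page, and Molecule
generates the run command itself) is letting podman prepare the cgroup and
tmpfs mounts for an init payload via --systemd, instead of bind-mounting the
host's /sys/fs/cgroup:
$ podman run -d --name centos7 --systemd=true --tty molecule_local/centos:7 /sbin/init
Whether that helps with rootless podman on the vfs driver is another question.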
Best regards,
--
Per Lundberg