shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
but if i have a directory with nothing but a Containerfile, i get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying context of current directory:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
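For reference, the invocations that do work can be made explicit. A minimal sketch (the -f/--file flag is documented in podman-build(1); paths are illustrative):

```shell
# Explicit context directory - the form that succeeds above:
podman build .

# Equivalent, naming the Containerfile explicitly with -f/--file
# (useful when the file lives outside the context directory):
podman build -f Containerfile .
```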
thoughts?
rday
1 week, 2 days
mqueue msg_max in rootless container
by Michael Ivanov
Hallo!
I'm trying to run my application in a rootless podman container, and I have
stumbled on the following problem: my program needs /proc/sys/fs/mqueue/msg_max
to be at least 256, but inside the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
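Not part of the original mail, but one commonly suggested direction, sketched here as an assumption: the fs.mqueue.* sysctls are per IPC namespace, so a container that shares the host's IPC namespace inherits the host's msg_max (at the cost of IPC isolation). The image name is illustrative:

```shell
# Check the value on the host first:
cat /proc/sys/fs/mqueue/msg_max

# fs.mqueue.* is per IPC namespace; --ipc=host shares the host's IPC
# namespace, so the container sees the host's msg_max (trading away
# IPC isolation between host and container):
podman run --rm --ipc=host alpine cat /proc/sys/fs/mqueue/msg_max
```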
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
1 year
Overriding run root "/tmp/containers-user-1253/containers" with "/home/myuser/rundir/containers" from database
by GHui Wu
I set "export XDG_RUNTIME_DIR=", so it should use /tmp/containers-user-1253/containers.
But sometimes it uses /home/myuser/rundir/containers instead.
In the debug log, there is "Overriding run root "/tmp/containers-user-1253/containers" with "/home/myuser/rundir/containers" from database".
Why does this happen?
time="2023-02-14T16:05:51+08:00" level=debug msg="Initializing boltdb state at /tmp/1253/share/containers/storage/libpod/bolt_state.db"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using conmon from $PATH: "/home/myuser/.local/container/bin/conmon""
time="2023-02-14T16:05:51+08:00" level=debug msg="Initializing boltdb state at /tmp/1253/share/containers/storage/libpod/bolt_state.db"
time="2023-02-14T16:05:51+08:00" level=debug msg="Overriding run root "/tmp/containers-user-1253/containers" with "/home/myuser/rundir/containers" from database"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using graph driver overlay"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using graph root /tmp/1253/share/containers/storage"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using run root /home/myuser/rundir/containers"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using static dir /tmp/1253/share/containers/storage/libpod"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using tmp dir /tmp/podman-run-1253/libpod/tmp"
time="2023-02-14T16:05:51+08:00" level=debug msg="Using volume path /tmp/1253/share/containers/storage/volumes"
time="2023-02-14T16:05:51+08:00" level=info msg="podman filtering at log level trace"
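An editor's reading, not from the thread itself: the "from database" wording suggests podman recorded the run root in its BoltDB state file the first time it ran, and later invocations prefer the recorded value over the current environment. Assuming that is the cause, the effective value can be inspected and, destructively, reset (the Go template key is assumed from `podman info`'s JSON layout):

```shell
# Show the run root podman is actually using:
podman info --format '{{.Store.RunRoot}}'

# Destructive: removes all containers, images, and the recorded state,
# so the next invocation picks up the current environment again:
podman system reset
```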
1 year, 9 months
Error: container creation timeout: internal libpod error
by GHui Wu
When I run the following command to start a container, I get the error "Error: container creation timeout: internal libpod error".
podman --log-level=debug run --rm --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host --privileged -v ./Dmyjob:/Dmyjob:z -w /Dmyjob abacus:3.1.0 ./myjob.sh
The error log is as follows.
Error: container creation timeout: internal libpod error
Error: container creation timeout: internal libpod error
Error: container creation timeout: internal libpod error
Error: container creation timeout: internal libpod error
time="2023-01-24T11:15:16+08:00" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-01-24T11:15:16+08:00" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-01-24T11:15:16+08:00" level=warning msg="lstat : no such file or directory"
time="2023-01-24T11:15:16+08:00" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuset/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f: device or resource busy"
time="2023-01-24T11:15:16+08:00" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/memory/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f: device or resource busy"
time="2023-01-24T11:15:16+08:00" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/cpuset/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f: device or resource busy"
time="2023-01-24T11:15:16+08:00" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/memory/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f: device or resource busy"
time="2023-01-24T11:15:16+08:00" level=error msg="Failed to remove paths: map[cpuset:/sys/fs/cgroup/cpuset/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f memory:/sys/fs/cgroup/memory/slurm/uid_7809/job_2186496/step_3/7be1f8e2a2af45e2fdff36b7a845a122986699a623891f6b13f73e331d6ea96f]"
time="2023-01-24T11:15:16+08:00" level=error msg="container not running"
time="2023-01-24T11:15:16+08:00" level=error msg="forwarding signal 15 to container 3adc0ebf2a42635b72d586e359422038bf1a05b10e908a22c8409dbe2868bd9b: error sending signal to container 3adc0ebf2a42635b72d586e359422038bf1a05b10e908a22c8409dbe2868bd9b: `/public3/home/scb6724/.local/container/sbin/runc kill 3adc0ebf2a42635b72d586e359422038bf1a05b10e908a22c8409dbe2868bd9b 15` failed: exit status 1"
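Not from the thread: the failing rmdir paths sit under a Slurm job's cgroup hierarchy, and the "cannot toggle freezer: cgroups not configured for container" warnings suggest podman is contending with the scheduler for cgroup ownership. One thing worth trying, as an assumption rather than a confirmed fix, is telling podman not to create or manage cgroups at all (--cgroups=disabled is a documented podman-run option):

```shell
# Leave the Slurm job's cgroups alone; podman/runc will not create,
# freeze, or remove any cgroup for this container:
podman run --rm --cgroups=disabled --env-host --env-file envfile \
    --userns=keep-id --network=host --pid=host --ipc=host --privileged \
    -v ./Dmyjob:/Dmyjob:z -w /Dmyjob abacus:3.1.0 ./myjob.sh
```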
1 year, 10 months
Error: failed to connect to container's attach socket: /xxxxx/attach: no such file or directory
by GHui Wu
I start podman with the following command.
podman --log-level=debug run --rm --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host --privileged -v ./Dmyjob:/Dmyjob:z -w /Dmyjob abacus:3.1.0 ./myjob.sh
But sometimes it outputs the error "Error: failed to connect to container's attach socket: /xxxxx/attach: no such file or directory".
time="2023-02-24T15:56:41+08:00" level=debug msg="Successfully cleaned up container ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4"
time="2023-02-24T15:56:41+08:00" level=error msg="Storage for container ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4 has been removed"
time="2023-02-24T15:56:41+08:00" level=debug msg="Removing all exec sessions for container ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4"
time="2023-02-24T15:56:41+08:00" level=debug msg="Container ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4 storage is already unmounted, skipping..."
time="2023-02-24T15:56:41+08:00" level=info msg="Storage for container ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4 already removed"
time="2023-02-24T15:56:41+08:00" level=debug msg="ExitCode msg: \"failed to connect to container's attach socket: /export/myuser/11727/1253/share/containers/storage/overlay-containers/ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4/userdata/attach: no such file or directory\""
Error: failed to connect to container's attach socket: /export/myuser/11727/1253/share/containers/storage/overlay-containers/ab2d81ec95dd0025a588ad016fb2b79db8c4e1617f0b808f756a3d25d4fca2a4/userdata/attach: no such file or directory
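Not from the thread itself, but a hedged guess from the paths in the log: if many jobs on a shared filesystem use the same storage and run root, concurrent `--rm` cleanups can race each other and remove another invocation's attach socket. Podman's global --root and --runroot options can give each job private state; the directory layout and the $SLURM_JOB_ID variable are assumptions for illustration:

```shell
# Per-job storage and run root, so concurrent jobs cannot race on
# each other's container state (paths are illustrative):
podman --root /tmp/$USER/$SLURM_JOB_ID/storage \
       --runroot /tmp/$USER/$SLURM_JOB_ID/run \
       run --rm -v ./Dmyjob:/Dmyjob:z -w /Dmyjob abacus:3.1.0 ./myjob.sh
```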
1 year, 10 months
runc start b714ab8d87af579aea314ee36964e49a807ab5025099ff3b3e140df715bf914a failed: exit status 1
by GHui Wu
I start podman with the following command.
podman --log-level=debug run --rm --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host --privileged -v ./Dmyjob:/Dmyjob:z -w /Dmyjob abacus:3.1.0 ./myjob.sh
But sometimes it outputs the error "runc start xxxxx failed: exit status 1".
time="2023-02-24T14:29:21+08:00" level=error msg="remove /export/myuser/14234/podman-run-1253/runc/b714ab8d87af579aea314ee36964e49a807ab5025099ff3b3e140df715bf914a/exec.fifo: no such file or directory"
time="2023-02-24T14:29:21+08:00" level=error msg="remove /export/myuser/14234/podman-run-1253/runc/58a04ffff99f6769136a36676ff69e3631f29e8353fc155b4a211c6a496ab58d/exec.fifo: no such file or directory"
time="2023-02-24T14:29:21+08:00" level=debug msg="unable to remove container b714ab8d87af579aea314ee36964e49a807ab5025099ff3b3e140df715bf914a after failing to start and attach to it"
time="2023-02-24T14:29:21+08:00" level=debug msg="unable to remove container 58a04ffff99f6769136a36676ff69e3631f29e8353fc155b4a211c6a496ab58d after failing to start and attach to it"
time="2023-02-24T14:29:21+08:00" level=debug msg="ExitCode msg: \"`/export/home/myuser/.local/container/sbin/runc start b714ab8d87af579aea314ee36964e49a807ab5025099ff3b3e140df715bf914a` failed: exit status 1\""
time="2023-02-24T14:29:21+08:00" level=debug msg="ExitCode msg: \"`/export/home/myuser/.local/container/sbin/runc start 58a04ffff99f6769136a36676ff69e3631f29e8353fc155b4a211c6a496ab58d` failed: exit status 1\""
Error: `/export/home/myuser/.local/container/sbin/runc start 58a04ffff99f6769136a36676ff69e3631f29e8353fc155b4a211c6a496ab58d` failed: exit status 1
Error: `/export/home/myuser/.local/container/sbin/runc start b714ab8d87af579aea314ee36964e49a807ab5025099ff3b3e140df715bf914a` failed: exit status 1
1 year, 10 months
Problems with routing in rootless podman
by Henrik Jacobsson
Hello.
We are running our application in rootless podman.
After some random time (ranging from a couple of hours to a couple of weeks),
we lose network connectivity into the container.
Everything seems to work fine from inside the container out to the rest of the
world (yum/dnf, ping, curl), but it looks like the routing stops working
when someone connects from the outside.
I set up a netcat listener (nc -lv) and connected to it on localhost (worked
fine) and on the tap interface (long delays, if the packet ever returned). I
also set up a tcpdump in a third screen; output below.
bash-4.4$ podman --version
podman version 4.2.0
bash-4.4$ uname -a
Linux podman-container 5.4.17-2136.315.5.el8uek.x86_64 #2 SMP Wed Dec 21
19:38:18 PST 2022 x86_64 x86_64 x86_64 GNU/Linux
bash-4.4$ cat /etc/os-release
NAME="Oracle Linux Server"
VERSION="8.7"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.7"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.7"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:7:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.7
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.7
# Testing communication using 'localhost' inside the container - works as expected
[root@NC-Test_podman-container /]# nc -lv 10370
Listening on 0.0.0.0 10370
Connection received on localhost 47218
ping from server
ping from client
[root@NC-Test_podman-container /]# nc -v localhost 10370
nc: connect to localhost (::1) port 10370 (tcp) failed: Connection refused
Connection to localhost (127.0.0.1) 10370 port [tcp/*] succeeded!
ping from server
ping from client
# Testing communication using hostname - "some" packets arrive, but only after a random delay of about 30-600 seconds
[root@NC-Test_podman-container /]# nc -lv 10370
Listening on 0.0.0.0 10370
server
Connection received on podman-container 59258
client
[root@NC-Test_podman-container /]# nc -v podman-container 10370
Connection to podman-container (10.11.12.102) 10370 port [tcp/*] succeeded!
client
server
[root@NC-Test_podman-container base_domain]# tcpdump -vv -X host podman-container and port 10370
dropped privs to tcpdump
tcpdump: listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:41:49.080602 IP (tos 0x0, ttl 64, id 61404, offset 0, flags [DF], proto
TCP (6), length 47)
podman-container.56372 > podman-container-oob.10370: Flags [P.], cksum
0xdb21 (correct), seq 2129174302:2129174309, ack 1071210498, win 65480,
length 7
0x0000: 4500 002f efdc 4000 4006 7df1 0a00 0264 E../..@.@.}....d
0x0010: 0a31 b666 dc34 2882 7ee8 9f1e 3fd9 6002 .1.f.4(.~...?.`.
0x0020: 5018 ffc8 db21 0000 636c 6965 6e74 0a P....!..client.
12:41:49.080783 IP (tos 0x0, ttl 64, id 48821, offset 0, flags [none],
proto TCP (6), length 40)
podman-container-oob.10370 > podman-container.56372: Flags [.], cksum
0x2039 (correct), seq 1, ack 7, win 65535, length 0
0x0000: 4500 0028 beb5 0000 4006 ef1f 0a31 b666 E..(....@....1.f
0x0010: 0a00 0264 2882 dc34 3fd9 6002 7ee8 9f25 ...d(..4?.`.~..%
0x0020: 5010 ffff 2039 0000 P....9..
12:42:28.673431 IP (tos 0x0, ttl 64, id 49091, offset 0, flags [none],
proto TCP (6), length 40)
podman-container-oob.10370 > podman-container.51394: Flags [F.], cksum
0xf92e (correct), seq 946730519, ack 2284994989, win 65535, length 0
0x0000: 4500 0028 bfc3 0000 4006 ee11 0a31 b666 E..(....@....1.f
0x0010: 0a00 0264 2882 c8c2 386d f617 8832 41ad ...d(...8m...2A.
0x0020: 5011 ffff f92e 0000 P.......
12:42:28.673436 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP
(6), length 40)
podman-container.51394 > podman-container-oob.10370: Flags [R], cksum
0x27c1 (correct), seq 2284994989, win 0, length 0
0x0000: 4500 0028 0000 4000 4006 6dd5 0a00 0264 E..(..@.@.m....d
0x0010: 0a31 b666 c8c2 2882 8832 41ad 0000 0000 .1.f..(..2A.....
0x0020: 5004 0000 27c1 0000 P...'...
12:44:28.693154 IP (tos 0x0, ttl 64, id 49943, offset 0, flags [none],
proto TCP (6), length 47)
podman-container-oob.10370 > podman-container.56372: Flags [P.], cksum
0xcadb (correct), seq 1:8, ack 7, win 65535, length 7
0x0000: 4500 002f c317 0000 4006 eab6 0a31 b666 E../....@....1.f
0x0010: 0a00 0264 2882 dc34 3fd9 6002 7ee8 9f25 ...d(..4?.`.~..%
0x0020: 5018 ffff cadb 0000 7365 7276 6572 0a P.......server.
12:44:28.693174 IP (tos 0x0, ttl 64, id 61405, offset 0, flags [DF], proto
TCP (6), length 40)
podman-container.56372 > podman-container-oob.10370: Flags [.], cksum
0x2070 (correct), seq 7, ack 8, win 65473, length 0
0x0000: 4500 0028 efdd 4000 4006 7df7 0a00 0264 E..(..@.@.}....d
0x0010: 0a31 b666 dc34 2882 7ee8 9f25 3fd9 6009 .1.f.4(.~..%?.`.
0x0020: 5010 ffc1 2070 0000 P....p..
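An editor's debugging sketch, under stated assumptions: rootless podman 4.2 forwards inbound ports through user-mode helper processes (conventionally named slirp4netns and rootlessport; that these are in use here is an assumption, not confirmed by the thread). When outbound traffic still works but inbound connections stall, the first thing to check is whether those helpers are still alive for the container's owner:

```shell
# Are the user-mode network helpers still running for this user?
pgrep -u "$USER" -a slirp4netns
pgrep -u "$USER" -a rootlessport

# If a helper has died, restarting the affected container recreates it:
# podman restart <container>
```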
Kind regards
//Henrik
1 year, 10 months