mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a rootless podman container and I have stumbled
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
Best regards,
--
Михаил Иванов | E-mail: ivans(a)isle.spb.ru
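One possible workaround, sketched on the assumption that the application can share the host's IPC namespace: the fs.mqueue.* sysctls are per IPC namespace, so a container started with the host's IPC namespace sees the host's msg_max rather than the namespaced default of 10.
$ podman run --ipc=host <image>
(<image> is a placeholder.) With a private IPC namespace the rootless user would additionally need permission to write the namespaced sysctl, which is what the "OCI permission denied" error above runs into.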
12 months
podman container storage backup
by Michael Ivanov
Greetings,
I make periodic backups of my laptop where I use some podman containers.
To perform a backup I just invoke rsync to copy my /home/xxxx/.local/share/containers
directory to an NFS-mounted filesystem.
The containers are running, but quiescent; no real activity occurs.
Is this a correct way to back up, or is there anything special about the
container directory to be taken into account? As far as I understand,
some hash-named subdirectories are shared between different containers
and images using a special kind of mount; can this lead to duplicate
copies or inconsistencies?
The underlying filesystem is btrfs.
Thanks,
--
Михаил Иванов | E-mail: ivans(a)isle.spb.ru
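A hedged note on the rsync approach: the storage tree under ~/.local/share/containers may rely on hard links and extended attributes, and copying it while containers are running risks an inconsistent snapshot. A sketch of a more faithful copy (standard rsync flags; /mnt/backup stands in for the NFS mount):
$ podman stop --all
$ rsync -aHAX --delete ~/.local/share/containers/ /mnt/backup/containers/
-H preserves hard links, -A and -X preserve ACLs and extended attributes; whether the NFS server actually stores xattrs is worth verifying separately.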
2 years, 1 month
Newbie network test?
by Loren Amelang
I've installed Podman on a Raspberry Pi Zero W, and want to test the
network connection. I found this:
pi@raspberrypi:~/tripod $ podman pull docker.io/library/httpd
pi@raspberrypi:~/tripod $ podman run -dt -p 8088:80/tcp docker.io/library/httpd
54a005199e38260bae58a6a5437dd0fbde62f2f596b25d928fb346328cfc9e73
pi@raspberrypi:~/tripod $
I chose "8088:80" because incoming port 8080 is already in use and
working on that Pi. Is that a valid choice?
It seems to run, but closes itself in under a minute:
pi@raspberrypi:~/tripod $ podman ps -a
CONTAINER ID  IMAGE                    COMMAND           CREATED         STATUS                       PORTS                 NAMES
54a005199e38  docker.io/library/httpd  httpd-foreground  26 minutes ago  Exited (139) 26 minutes ago  0.0.0.0:8088->80/tcp  hopeful_buck
787779b20e96  docker.io/library/httpd  httpd-foreground  18 seconds ago  Exited (139) 16 seconds ago  0.0.0.0:8088->80/tcp  pedantic_yonath
pi@raspberrypi:~/tripod $
pi@raspberrypi:~/tripod $ podman logs -l
pi@raspberrypi:~/tripod $
pi@raspberrypi:~/tripod $ podman top -l
Error: top can only be used on running containers
pi@raspberrypi:~/tripod $
slirp4netns is already the newest version (1.0.1-2).
Could someone please suggest what to check next?
Loren
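A hedged suggestion on what to check next: exit status 139 normally means the process died with SIGSEGV (128 + 11). On a Pi Zero W, which is ARMv6, one plausible cause is an image variant built for a newer ARM revision, so checking the architecture of the pulled image and running it in the foreground may narrow things down:
$ podman image inspect --format '{{.Os}}/{{.Architecture}}' docker.io/library/httpd
$ podman run --rm -it -p 8088:80/tcp docker.io/library/httpd
Running interactively like this should at least show httpd's own output before the crash.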
2 years, 4 months
IPv6 only listener in rootless container
by Hendrik Haddorp
Hi,
I created a container with a service that only listens for
tcp6 requests, so it is IPv6 only. When starting the container rootless on
Fedora 36 with podman 4.1.0 I'm unable to connect to my service. However,
when I start my service to listen just for tcp4 requests I can connect
to it using both IPv4 and IPv6. So it looks like the port forwarding done by
podman always forwards the traffic as IPv4. Is that correct? I could
not find any documentation on that. The problem does, however, look a bit like
https://github.com/containers/podman/issues/14491. That issue states
that there is a proxy (rootlessport) running that does the forwarding.
Are there more details available on how exactly this is done?
Ideally I would like a service that uses just tcp4
or tcp6 to only be accessible via tcp4 or tcp6. Podman should simply
keep the protocol when forwarding traffic and not translate it.
regards,
Hendrik
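A hedged pointer: with rootless networking the published port is bound on the host by the rootlessport proxy, and the host-side address can be chosen explicitly in -p, including an IPv6 address. A sketch, assuming the service listens on container port 8080 (<image> is a placeholder):
$ podman run -d -p '[::]:8080:8080' <image>
Whether rootlessport then keeps the connection IPv6 end to end depends on the podman version, so this is something to verify rather than a guaranteed fix; the upstream issue linked above tracks exactly that behaviour.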
2 years, 5 months
Run VASP with Podman
by ugiwgh@qq.com
When I run hello_c, it runs successfully, but vasp_std fails.
When I run vasp_std on the host (mpirun -np 20 ./vasp_std), it runs successfully.
$ podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -v /tmp/podman-mpirun:/tmp/podman-mpirun:z -v ./bulk:/bulk -w /bulk 4756bc6d2be0 mpirun -np 20 ./hello_c
Hello, world, I am 13 of 20, (Open MPI v4.1.1, package: Open MPI root@6831ad9910ad Distribution, ident: 4.1.1, repo rev: v4.1.1, Apr 24, 2021, 112)
Hello, world, I am 16 of 20, (Open MPI v4.1.1, package: Open MPI root@6831ad9910ad Distribution, ident: 4.1.1, repo rev: v4.1.1, Apr 24, 2021, 112)
... ... ... ...
Hello, world, I am 9 of 20, (Open MPI v4.1.1, package: Open MPI root@6831ad9910ad Distribution, ident: 4.1.1, repo rev: v4.1.1, Apr 24, 2021, 112)
$ podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -v /tmp/podman-mpirun:/tmp/podman-mpirun:z -v ./bulk:/bulk -w /bulk 4756bc6d2be0 mpirun -np 20 ./vasp_std
running on 20 total cores
distrk: each k-point on 20 cores, 1 groups
... ... ... ...
LDA part: xc-table for Pade appr. of Perdew
POSCAR, INCAR and KPOINTS ok, starting setup
[ga2210.para.bscc:106293] Read -1, expected 5184, errno = 1
[ga2210.para.bscc:106294] Read -1, expected 5184, errno = 1
[ga2210.para.bscc:106296] Read -1, expected 5184, errno = 1
[ga2210.para.bscc:106300] Read -1, expected 5184, errno = 1
[ga2210.para.bscc:106309] Read -1, expected 5184, errno = 1
... ... ... ...
VASP version: vasp.5.4.4.18Apr17-6-g9f103f2a3.
OpenMPI version: 4.1.1
$ podman version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
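A hedged reading of the failure: the "Read -1, expected 5184, errno = 1" lines are Open MPI's vader (shared-memory) BTL getting EPERM from its CMA single-copy path (process_vm_readv), which is a common symptom inside containers. A sketch of a workaround is to disable the single-copy mechanism; since --env-host forwards the host environment into the container, exporting the MCA parameter before the run should be enough:
$ export OMPI_MCA_btl_vader_single_copy_mechanism=none
$ podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -v /tmp/podman-mpirun:/tmp/podman-mpirun:z -v ./bulk:/bulk -w /bulk 4756bc6d2be0 mpirun -np 20 ./vasp_std
Whether this is the whole story behind the vasp_std failure is uncertain, but it removes one known container-specific error source.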
2 years, 5 months
cgroups not configured for container
by ugiwgh@qq.com
I get a warning when I add "--pid=host".
How can I make this warning go away?
OS: 8.3.2011
Podman: 2.2.1
[rsync@rsyncdk2 ~]$ podman run --rm --pid=host fb7ad16314ee sleep 3
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
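A hedged note: the "cannot toggle freezer" warning comes from the OCI runtime trying to use the cgroup freezer for the container; when the container has no cgroup of its own (typical for rootless containers on cgroups v1, and apparently triggered here together with --pid=host) the runtime only warns and carries on, so the message should be harmless. A quick way to see which cgroup manager and version are in use:
$ podman info | grep -i cgroup
Moving to a cgroups v2 host or a newer Podman than 2.2.1 is the usual way the warning disappears, but that is an assumption to verify rather than a confirmed fix.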
2 years, 5 months
Run OpenMPI with Podman
by ugiwgh@qq.com
I have set "btl_tcp_if_include = ib0" in ~/.openmpi/mca-params.conf.
When I run mpirun without podman, it works correctly.
When I run mpirun with podman, the "btl_tcp_if_include = ib0" setting is not picked up. But when I add "--mca btl_tcp_if_include ib0" to the command line, it takes effect.
$ mpirun -np 128 --hostfile mfile --mca btl_vader_single_copy_mechanism 3 --mca orte_tmpdir_base /tmp/podman-mpirun podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -v /tmp/podman-mpirun:/tmp/podman-mpirun:z -v ./bulk:/bulk -w /bulk 4756bc6d2be0 vasp_std
running on 128 total cores
[ga2211][[19883,1],64][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[19883,1],88]
... ... It hangs here. ... ...
VASP version: vasp.5.4.4.18Apr17-6-g9f103f2a3.
OpenMPI version: 4.1.1
$ podman version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
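A hedged explanation of why the file is ignored: in this invocation mpirun runs on the host and the MPI ranks start inside the container, where ~/.openmpi/mca-params.conf is not visible unless the home directory is mounted in. Open MPI also reads MCA parameters from OMPI_MCA_* environment variables, and --env-host forwards the host environment into the container, so a sketch of an equivalent to the config file is:
$ export OMPI_MCA_btl_tcp_if_include=ib0
(or add the same variable to the envfile passed with --env-file). That should behave like the working --mca command-line option.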
2 years, 5 months
Invitation to discussion: Is Podman for Linux workstations like Podman for servers or like Podman for Windows WSL and Mac?
by Pavel Sosin
Podman doesn't work on Fedora Workstation with a GNOME-based GUI. That's a
fact. But how can an OCI container's cgroup machinery work in the forest
of cgroups created by GNOME and all the GUI applications? Every GUI session has
its own cgroup tree managed by systemd. It is very hard to
decide on a parent cgroup for the created container, and no resource
allocation fits every scenario. There are too many variants:
user session, gnome-terminal, gnome-terminal tab, etc. The client-server
nature of the GNOME GUI will keep the question of which side is responsible for
resource allocation open forever. The Windows WSL and Mac implementations of
Podman follow a different approach - creation of a singleton VM running a
pre-installed Podman. Fedora workstations have a built-in capability to run
VMs via machined and machinectl. The resource limits of such a VM can be
easily tuned. Inside the VM, OCI containers can be easily managed if the VM
is systemd-based, like Fedora's images. User sessions in this VM are plain
remote sessions opened by the user (core or core-less).
Such a VM can easily be deployed to a Fedora workstation as an ISO copied to a btrfs volume.
I tried to run a minimalistic smoke test and got a workable Podman, but a
QEMU machine inside a Fedora workstation on Intel looks slightly exotic. Maybe this
approach is more practical?
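As a hedged aside, newer Podman releases package this singleton-VM idea as the "podman machine" command, which is also available on Linux. A sketch (flag names as in the podman-machine man pages; the resource numbers are arbitrary examples):
$ podman machine init --cpus 2 --memory 4096
$ podman machine start
The machine registers a remote connection (typically named podman-machine-default) that the regular podman client can target with --connection, which is roughly the model the WSL and Mac ports use.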
2 years, 5 months
podman intel mpi container
by ugiwgh@qq.com
I run podman with Intel MPI, but something goes wrong.
Any help will be appreciated.
--GHui
$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2021.5 Build 20211102 (id: 9279b7d62)
Copyright 2003-2021, Intel Corporation.
$ mpirun -np 2 podman run --env-host --env-file envfile --network=host --pid=host --ipc=host -w /exports -v .:/exports:z centos/ /exports/mpitest
[cli_0]: write_line error; fd=9 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=9 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Abort(1090575) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(143):
MPID_Init(1221)......:
MPIR_pmi_init(130)...: PMI_Get_appnum returned -1
[cli_0]: write_line error; fd=9 buf=:cmd=abort exitcode=1090575
:
system msg for write_line failure : Bad file descriptor
Attempting to use an MPI routine before initializing MPICH
[cli_1]: write_line error; fd=10 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_1]: Unable to write to PMI_fd
[cli_1]: write_line error; fd=10 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Abort(1090575) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(143):
MPID_Init(1221)......:
MPIR_pmi_init(130)...: PMI_Get_appnum returned -1
[cli_1]: write_line error; fd=10 buf=:cmd=abort exitcode=1090575
:
system msg for write_line failure : Bad file descriptor
Attempting to use an MPI routine before initializing MPICH
^C[mpiexec(a)ja0909.para.bscc] Sending Ctrl-C to processes as requested
[mpiexec(a)ja0909.para.bscc] Press Ctrl-C again to force abort
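A hedged reading of these errors: Intel MPI's Hydra launcher hands each rank a PMI file descriptor (fd 9 and fd 10 above), but podman does not pass inherited descriptors beyond stdio into the container, so PMI_Init cannot reach the launcher and MPI startup aborts. One thing to try, as a sketch, is podman's --preserve-fds option, which passes N additional descriptors (3 through N+2) through to the container; the count has to cover whatever fd Hydra picked:
$ mpirun -np 2 podman run --preserve-fds=8 --env-host --env-file envfile --network=host --pid=host --ipc=host -w /exports -v .:/exports:z centos/ /exports/mpitest
podman will refuse to start if any descriptor in that range is not actually open, so the value may need adjusting, and whether Hydra's PMI protocol then works end to end inside the container is something to verify.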
2 years, 5 months