flooded with - Couldn't stat device /dev/char/10:200: No such file or directory
by lejeczek
Hi guys.
I'd like to ask about some error messages the journal gets flooded with, namely:
...
SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Couldn't stat device /dev/char/10:200: No such file or directory
Couldn't stat device /dev/char/10:200: No such file or directory
Started libcontainer container 391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.
SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Couldn't stat device /dev/char/10:200: No such file or directory
libpod-238a14cc41bdb2826850c00907c249e43b0b3333c0a344f99920adddff5c38e3.scope: Succeeded.
libpod-238a14cc41bdb2826850c00907c249e43b0b3333c0a344f99920adddff5c38e3.scope: Consumed 168ms CPU time
libpod-391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.scope: Succeeded.
libpod-391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.scope: Consumed 150ms CPU time
Couldn't stat device /dev/char/10:200: No such file or directory
Started libcontainer container 391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.
SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Couldn't stat device /dev/char/10:200: No such file or directory
Couldn't stat device /dev/char/10:200: No such file or directory
Started libcontainer container 238a14cc41bdb2826850c00907c249e43b0b3333c0a344f99920adddff5c38e3.
SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Couldn't stat device /dev/char/10:200: No such file or directory
libpod-391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.scope: Succeeded.
libpod-391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.scope: Consumed 153ms CPU time
Couldn't stat device /dev/char/10:200: No such file or directory
Started libcontainer container 391b1013c06ea5abe461d9474ec3b8f2c8e902e9d4b0e0cbf5ea8b8b0394541f.
SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Couldn't stat device /dev/char/10:200: No such file or directory
libpod-238a14cc41bdb2826850c00907c249e43b0b3333c0a344f99920adddff5c38e3.scope: Succeeded.
libpod-238a14cc41bdb2826850c00907c249e43b0b3333c0a344f99920adddff5c38e3.scope: Consumed 160ms CPU time
...
Two questions, really:
a) How critical are these messages?
b) How can the problem be fixed? (The obvious one.)
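For context, major:minor 10:200 is the TUN/TAP device (/dev/net/tun) on common kernels; a minimal way to check the device node on the host (assuming a stock setup):

$ grep -w misc /proc/devices   # major 10 is the "misc" character-device class
$ ls -l /dev/net/tun           # minor 200 is the TUN/TAP device

If /dev/net/tun exists on the host but not inside the container, the journal message appears to be a warning from systemd's device handling rather than a fatal error.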
many thanks, L
DIY networking for rootless containers/pods
by Rudolf Vesely
Hi Podman Developers and Users,
Thank you very much for Podman and related tools. It's a fantastic project.
I'm trying to convert my current container host VPS into a number of rootless pods and I'm thinking about the pod networking. Some pods will need to be able to communicate with each other (for example, HAProxy has to be able to connect to both WordPress and Nextcloud) and some don't (WordPress and Nextcloud don't need to talk to each other). Following the security principle of least privilege, pods that don't need to communicate shouldn't be allowed to.
The obvious solution is to use the default slirp4netns setup and listen (publish a port) on 127.0.0.1, or maybe on a dedicated private IP created by "ip link add name something type dummy". That means that, for example, WordPress will listen on 8080 and Nextcloud on 8081 (more info in Brent Baude's article https://www.redhat.com/sysadmin/container-networking-podman).
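For reference, a minimal sketch of that default approach (image names and ports here are only illustrative):

$ podman run -d -p 127.0.0.1:8080:80 docker.io/library/wordpress
$ podman run -d -p 127.0.0.1:8081:80 docker.io/library/nextcloud

Each container then gets its own slirp4netns instance, and only the explicitly published loopback ports are reachable from the host.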
As Dan Walsh often mentions in his Podman presentations, one of the best things about Podman is that it's not just one tool - it's Podman/libpod, Buildah, Skopeo, CRI-O, runc - and they all do one thing and do it well, which enables me to try some DIY networking.
DIY:
==========================================================
### create bridge using "ip"
$ sudo ip link add name bridge1 type bridge
$ sudo ip link set dev bridge1 up
$ sudo ip address add 10.11.22.1/24 dev bridge1
### or by "systemd-networkd"
$ sudo systemctl --now enable systemd-networkd
$ cat << EOF | sudo tee /etc/systemd/network/bridge1.netdev
[NetDev]
Name=bridge1
Kind=bridge
EOF
$ cat << EOF | sudo tee /etc/systemd/network/bridge1.network
[Match]
Name=bridge1
[Network]
Address=10.11.22.1/24
EOF
### run rootless container
$ sudo mkdir /test-www
$ echo "Hello, World!" | sudo tee /test-www/index.html
$ cont_id=$(podman run --net=none -d --volume=/test-www:/usr/share/nginx/html docker://docker.io/library/nginx:latest)
$ [[ ${cont_id} =~ ^[0-9a-f]{64}$ ]] &&
printf '%s\n' "OK: \"${cont_id}\""
> OK: "5811ac2e25dec942fd22c2e83657d103bbce199aa7775d7f4d10bf5c53af4778"
$ net_ns_name="cont-${cont_id}"
$ cont_pc_id=$(podman inspect -f '{{.State.Pid}}' "${cont_id}")
$ [[ ! -d /var/run/netns ]] &&
sudo mkdir -v /var/run/netns
$ sudo ln -sfTv "/proc/${cont_pc_id}/ns/net" "/var/run/netns/${net_ns_name}"
> '/var/run/netns/cont-5811ac2e25dec942fd22c2e83657d103bbce199aa7775d7f4d10bf5c53af4778' -> '/proc/1217/ns/net'
$ ip netns list
> cont-5811ac2e25dec942fd22c2e83657d103bbce199aa7775d7f4d10bf5c53af4778
$ sudo ip link add veth300 type veth peer name veth300p
$ sudo ip link set dev veth300 master bridge1
$ sudo ip link set veth300p netns "${net_ns_name}"
$ sudo ip -netns "${net_ns_name}" link set veth300p name eth0 # optional: rename peer in namespace
$ sudo ip link set dev veth300 up
$ sudo ip -netns "${net_ns_name}" link set dev eth0 up
$ sudo ip -netns "${net_ns_name}" address add 10.11.22.50/24 dev eth0
$ sudo ip -netns "${net_ns_name}" route add default via 10.11.22.1
### to make it work, the host has to have routing enabled
$ sudo sysctl -w net.ipv4.ip_forward=1
### and nftables configured for NAT ("eth0" in the rules below is the host's public interface)
$ sudo nft add table ip nat
$ sudo nft add chain ip nat nat-prerouting "{ type nat hook prerouting priority -100; policy accept; }"
$ sudo nft add chain ip nat nat-postrouting "{ type nat hook postrouting priority 100; policy accept; }"
$ sudo nft add rule ip nat nat-prerouting iifname "eth0" tcp dport { 80, 8080, 8081 } counter dnat to 10.11.22.50
$ sudo nft add rule ip nat nat-postrouting oifname "eth0" counter masquerade
### and to test that the container can go out
$ podman exec -it "${cont_id}" curl https://1.1.1.1/
> <a lot of html>
### and to access the container (the web server)
$ curl http://<container host public IP>/
> Hello, World!
==========================================================
For those that don't want to read the code:
1. create a bridge
2. run the container without slirp4netns (--net=none) => that means it has only the loopback interface
3. create a network namespace for the container process
4. create a virtual ethernet pair (veth), attach one end to the new bridge and move the other into the new network namespace
5. make it work by assigning IP addresses and a default route in the new namespace, enabling routing on the host, and setting up NAT in the host firewall
Note: At this moment this is not possible for pods since pods in the current stable version of Podman don't support --net=none. But that will change in 3.0: https://github.com/containers/podman/issues/9165, https://github.com/mheon/libpod/commit/6bd3a6bcabda682243f531bacf3659b95d..., https://github.com/containers/podman/releases/tag/v3.0.0-rc3.
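Once 3.0 lands, the pod variant should look roughly like this (a sketch, assuming the pod-level --network flag from the linked release works the same way it does for containers):

$ podman pod create --name test-pod --network none
$ podman run -d --pod test-pod --volume=/test-www:/usr/share/nginx/html docker://docker.io/library/nginx:latest

The VETH steps above would then target the pod's infra container PID instead of a single container's.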
Thank you Matthew Heon!
The benefits I get by doing this:
1. Rootless containers - no need to run rootful for this.
2. Easy to firewall - for example, interfaces in one bridge can reach interfaces in another bridge but not the other way around (see the sketch after this list)
3. Easy to understand and visualize
4. Can be integrated with VLANs, Open vSwitch VXLANs, and anything else that uses bridges (QEMU VMs, ...)
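A minimal sketch of the one-way firewalling from point 2, using nftables (the second bridge, bridge2, is hypothetical; assumes routed traffic between the two bridges):

$ sudo nft add table ip filter
$ sudo nft add chain ip filter fwd "{ type filter hook forward priority 0; policy drop; }"
### allow new connections only from bridge1 to bridge2, plus replies back
$ sudo nft add rule ip filter fwd iifname "bridge1" oifname "bridge2" counter accept
$ sudo nft add rule ip filter fwd iifname "bridge2" oifname "bridge1" ct state established,related counter accept
### with a drop policy, remember to also accept the bridge <-> uplink ("eth0") path used for NAT above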
Could you please tell me whether this is a good idea?
Thank you.
Kind regards,
Rudolf Vesely
Rootless podman - access to volume (host directory mount) owned by different user
by geert@kobaltwit.be
Hi,
I have a shared directory "test" owned by "share" (and group "share"). "share" is also a real user id on the host system. All users that need access to this directory have been made members of the "share" group. This works fine on the host.
Now I need to set up a rootless container that will run an application requiring read and write access to that directory as well. That rootless container will be started by several users on the host (well, on multiple hosts really, but that's not relevant to this particular issue - I'm currently testing on a single host).
I have tried countless variations but I can't make it work.
My last attempt consists of setting up a container using this Dockerfile (simplified to only present the essence):
FROM registry.fedoraproject.org/fedora:32
RUN groupadd -g 1000 user1
RUN groupadd -g 1001 shared
RUN useradd -u1000 -g1000 -G1001 user1
RUN useradd -u1001 -g1001 shared
USER user1
So two users are defined inside the container, and their uids and gids match those of the host. "user1" is also made a member of the "shared" group with the intention of making the shared directory accessible to "user1". This mirrors the permissions on the host.
Running ls on the directory inside the container results in this output:
$ podman run -it --net=host -v ./test:/home/test:z --userns=host localhost/test-img ls -ld /home/test
drwxrwx---. 2 root nobody 4096 Feb 8 18:12 /home/test
Outside of the container this directory has the following ownerships:
$ ls -ld test/
drwxrwx---. 2 user1 share 4096 8 feb 19:12 test/
So some uid and gid remapping is going on, and with ownership of root:nobody my container user can't access the test directory. I would want the directory to have the same ownership inside the container, but I don't know how to get there. I thought the "--userns=host" option was for that purpose, but it still remaps the user and group for the test directory. I have also tried "--userns=keep-id", but that makes no difference. Note that if I log in to the host as user "share" and run the container (changing the default container user to "share" as well), the /home/test directory is accessible inside the container.
How can I prevent podman from remapping the ownership of that mounted volume, or if that's not possible, what is the proper way to provide shared access to a mounted volume for a different user?
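One approach that might work here is keeping the invoking user's supplementary groups inside the container; a minimal sketch, assuming a crun runtime and a Podman version that supports keep-groups:

$ podman run -it -v ./test:/home/test:z --group-add keep-groups localhost/test-img ls -ld /home/test

With keep-groups the group can still display as "nobody" (it has no mapping in the user namespace), but the process keeps its host group membership, so reads and writes may succeed anyway.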
Thank you,
Geert
P.S. For completeness, this experiment is with a simple local directory. In the final setup that "test" directory would be replaced with a locally mounted NFS share. I did some experiments already and I can access NFS shares, but the same owner and group remapping prevents me from accessing that NFS share when running the container rootless as a user that's not the owner of that share. I hope the solution to my experiment will also fix it for an NFS share.
Testing Podman 3.0 rc on WSL Fedora33 distro - networking remark
by Pavel Sosin
Both rootful and rootless networking are OK. For the "combined" scenario (a rootful server publishing to host port 8080 and a rootless client), the following detail, relevant for every VM with a non-fixed IP, should be taken into account: the VM has an /etc/hosts file where both the VM's IPv4 and IPv6 addresses are defined as "localhost", so curl localhost:8080 from the rootless container will work. Has anybody thought about the IPv6 scenario? In some countries and areas, ISPs are required to deploy an IPv6 stack on customer demand.
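A quick way to test both stacks explicitly (curl's -4/-6 switches force the address family):

$ getent hosts localhost            # shows which addresses "localhost" resolves to
$ curl -4 http://localhost:8080/    # force IPv4
$ curl -6 http://localhost:8080/    # force IPv6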
Build image from Pod Yaml like in docker-compose
by bobtruhla@seznam.cz
Hi Everybody,
I'm switching from Docker to Podman on my VPS and I'm trying to convert all my docker-compose files to Pod Yaml.
I know that `podman play kube` is supposed to consume only Yaml generated by `podman generate kube` and not user-written Yaml. But it works just fine, as demonstrated for example here: https://www.redhat.com/sysadmin/compose-podman-pods
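For example (my-pod.yaml being a hand-written file like the ones below):

$ podman play kube my-pod.yaml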
I found the `Kompose` project, and I know that Podman 3.0 is supposed to support actual docker-compose files, but it's very clear to me that Pod Yaml is the right way to go.
The only thing I can't reproduce in Pod Yaml is `build: .` like this:
--------------------------
version: '3'
services:
web:
build: .
...
db:
image: mariadb
...
--------------------------
In other words this will not work:
--------------------------
apiVersion: v1
kind: Pod
metadata:
labels:
app: my-pod
name: my-pod
status: {}
spec:
restartPolicy: Always
containers:
- name: web
build: .
...
- name: db
image: mariadb
...
--------------------------
So the only way is to create a `Containerfile`, run `podman build .`, and then define the Yaml like this:
--------------------------
apiVersion: v1
kind: Pod
metadata:
labels:
app: my-pod
name: my-pod
status: {}
spec:
restartPolicy: Always
containers:
- name: web
image: sha256:307e5ce57d57472b6392f5027e0aa69c1090cd312e3429afdbd950d0d1fbae15
...
- name: db
image: mariadb
...
--------------------------
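One way to avoid hard-coding the image digest is to tag the build and reference the tag (the name is just an example):

--------------------------
$ podman build -t localhost/my-web:latest .
--------------------------

and then use `image: localhost/my-web:latest` in the web container spec.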
Could you please tell me whether there is a way to build an image from Pod Yaml like you can with docker-compose?
Thank you.
Kind regards,
Bobes T.
Podman running on WSL in sysadmin site article.
by Pavel Sosin
Regarding Brent Baude's older article Podman-windows-wsl2
<https://www.redhat.com/sysadmin/podman-windows-wsl2>
I installed Arkane Systems' genie, the cross-distro systemd manager for WSL, and don't need all these adjustments. Most importantly, Podman can run with its default configuration of cgroups manager, log driver, etc. The bonus is real root and non-root user logins, unlike the fake WSL option -u <user>. The current Microsoft WSL Linux kernel is 5.4, which is much better than the 4.9 of a year ago.
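For anyone trying the same, the genie workflow is roughly (flags per the genie README; verify against your version):

$ genie -i    # initialize the systemd "bottle"
$ genie -s    # open a shell inside the bottle
$ systemctl is-system-running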
Podman 3 on WSL can be very useful once the CNI issues are solved.
2.0 → 3.0 migration guide?
by Marcin Zajączkowski
Hi. I wonder if there is any migration guide from 2.0 to 3.0 available?
I would like to know if there are any "common steps" that should be
performed when upgrading Podman, but I couldn't find any (and the release
notes are quite extensive).
I've just upgraded from 2.2.1 to 3.0.0-0.1.rc1.fc33 and only restarted
the pod with the new Podman version. It's a simple pod with just one
service container exposing two ports, running in rootless mode. It
started correctly, but after a while I noticed that the ports were not
exposed at all. I recreated the pod and the container with the new
Podman (mounting the same local/host directory) and it works fine.
However, I wonder if it is necessary to recreate the pod/container after
the 2.x to 3.x migration?
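For illustration, the recreation looked roughly like this (pod name, ports and volume are placeholders for the real setup):

$ podman pod stop mypod && podman pod rm mypod
$ podman pod create --name mypod -p 8080:80 -p 8443:443
$ podman run -d --pod mypod -v /srv/data:/data myimage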
I can provide the commands used to create the original pod/container, if needed.
Marcin
--
https://blog.solidsoft.pl/ - Working code is not enough