podman container storage backup
by Michael Ivanov
Greetings,
I make periodic backups of my laptop where I use some podman containers.
To perform a backup I just invoke rsync to copy my /home/xxxx/.local/share/containers
directory to an NFS-mounted filesystem.
The containers are running but quiescent; no real activity occurs.
Is this a correct way to back up, or is there anything special about the
container directory that has to be taken into account? As far as I understand,
some hash-named subdirectories are shared between different containers
and images using a special kind of mount. Can this lead to duplicate
copies or inconsistencies?
The underlying filesystem is btrfs.
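One possible refinement (just a sketch, not validated): take a read-only btrfs snapshot first, so that rsync copies a frozen view of the storage. This assumes the containers directory is, or sits inside, a btrfs subvolume; the snapshot path and backup target below are made up.
# snapshot the storage read-only, copy the snapshot, then drop it
$ btrfs subvolume snapshot -r /home/xxxx/.local/share/containers /home/xxxx/containers-snap
$ rsync -aHAX --numeric-ids /home/xxxx/containers-snap/ backuphost:/backups/containers/
$ btrfs subvolume delete /home/xxxx/containers-snap
The -H/-A/-X flags try to preserve hardlinks, ACLs and extended attributes, which container storage may depend on.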
Thanks,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
7 months, 2 weeks
cgroups not configured for container
by ugiwgh@qq.com
I get the following warnings when I add "--pid=host".
How can I make these warnings go away?
OS: 8.3.2011
Podman: 2.2.1
[rsync@rsyncdk2 ~]$ podman run --rm --pid=host fb7ad16314ee sleep 3
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
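One way to check the cgroup setup on the host, which is usually the first thing to look at for this warning (generic commands, not a confirmed fix):
$ podman info | grep -i cgroup       # cgroup manager and version podman sees
$ stat -f -c %T /sys/fs/cgroup       # "cgroup2fs" means cgroups v2, "tmpfs" usually means v1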
11 months, 2 weeks
How to track down an IP address that is causing errors in an container
by Christopher.Miller@gd-ms.com
This is with Podman v3.4.2 on RHEL 8.1.
I have an IP address (10.88.0.49) that I don't recognize in a Grafana container log. This IP address isn't tied to any other containers that I'm running as rootful.
I'm getting errors tied to this IP address. I can only view the Grafana UI from my RHEL8 workstation; when others try to access the UI, they get a banner screen stating that Grafana isn't able to load its application file.
This is the error message: ERRO[03-28|12:22:46] Error writing to response logger=context err="write tcp 10.88.0.49:3000 -> 10.88.0.1:43250: write: broken pipe"
I'm not using a container orchestrator at this time; we're just piloting Prometheus/Grafana to see how it works out in our environment.
These are the containers and their IP addresses (all run as rootful):
Prometheus - 10.88.0.26
Prometheus Node Exporter - 10.88.0.25
Nexus - 10.88.0.34
Grafana - 10.88.0.53
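For reference, something like this lists the name and IP address of every container, including stopped ones, in case 10.88.0.49 belonged to an earlier instance of one of these containers (addresses get reassigned when a container is recreated); that is only a guess. I believe 10.88.0.1 is the gateway of the default rootful podman network.
# name and IP of every container, including stopped ones (rootful)
$ sudo podman ps -aq | xargs -r sudo podman inspect --format '{{.Name}} {{.NetworkSettings.IPAddress}}'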
Thanks
Chris Miller
Altron INC.
703-814-7647
Christopher.miller(a)altroninc.com
Christopher.Miller(a)gd-ms.com
1 year, 2 months
podman-remote client - experimental, or ready for production use?
by Adam Cmiel
Hello!
On my Fedora 35, I noticed that the podman-remote package (version 3.4.4) warns not to use it in production yet.
$ dnf info podman-remote
...
Description : Remote client for managing podman containers.
:
: This experimental remote client is under heavy development. Please do not
: run podman-remote in production.
:
: podman-remote uses the version 2 API to connect to a podman client to
: manage pods, containers and container images. podman-remote supports ssh
: connections as well.
Is there any difference between podman-remote and podman --remote in this regard? Or are both equally experimental?
Has this changed in later releases? Could the 4.x versions be considered ready for production use?
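For context, both forms can be pointed at the same named connection, which may make the comparison clearer (sketch; the host name and socket path are made up):
$ podman system connection add buildhost ssh://user@buildhost.example.com/run/user/1000/podman/podman.sock
$ podman --remote ps      # regular podman binary talking to the remote API
$ podman-remote ps        # remote-only binary, same API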
1 year, 2 months
podman and DBUS_SESSION_BUS_ADDRESS
by Michael Traxler
Hello,
when I try to build an image I get the following error message:
% podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael
STEP 1/2: FROM opensuse/tumbleweed
STEP 2/2: RUN zypper ref
error running container: error from /usr/bin/runc creating container for [/bin/sh -c zypper ref]: time="2022-03-24T12:56:21+01:00" level=warning msg="unable to get oom kill count" error="openat2 /sys/fs/cgroup/system.slice/runc-buildah-buildah282650677.scope/memory.events: no such file or directory"
time="2022-03-24T12:56:21+01:00" level=error msg="runc create failed: unable to start container process: unable to apply cgroup configuration: unable to start unit \"runc-buildah-buildah282650677.scope\" (properties [{Name:Description Value:\"libcontainer container buildah-buildah282650677\"} {Name:Slice Value:\"system.slice\"} {Name:Delegate Value:true} {Name:PIDs Value:@au [17389]} {Name:MemoryAccounting Value:true} {Name:CPUAccounting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Interactive authentication required."
: exit status 1
Error: error building at STEP "RUN zypper ref": error while running runtime: exit status 1
My build file is:
FROM opensuse/tumbleweed
RUN zypper ref
If I then unset DBUS_SESSION_BUS_ADDRESS, everything works as expected:
% echo $DBUS_SESSION_BUS_ADDRESS
unix:abstract=/tmp/dbus-GR9LL799YH,guid=37ca9dd6f1faeea14747aad2623af1ba
% unset DBUS_SESSION_BUS_ADDRESS
% podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael
STEP 1/2: FROM opensuse/tumbleweed
STEP 2/2: RUN zypper ref
...
COMMIT opensuse/tumbleweed_michael
--> 5ea2b965db6
Successfully tagged localhost/opensuse/tumbleweed_michael:latest
5ea2b965db6412368929e52d8c34e4574cc84feef2f2e7563c1f9225a60bb8b8
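A related workaround sketch: limit the unset to the single command instead of the whole shell:
% env -u DBUS_SESSION_BUS_ADDRESS podman build -f tumbleweed_michael.txt -t opensuse/tumbleweed_michael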
Is it expected that DBUS_SESSION_BUS_ADDRESS has to be unset?
Greetings,
Michael
1 year, 2 months
Get cgroup in rootless container
by Carl Hörberg
When running a rootless container, how can one identify the cgroup in use from inside the container? /proc/self/cgroup is "0::/", but the "real" cgroup is something like "user-1000.slice/user@1000.service/user.slice/libpod-7126f828cd4389ca0a9e29a94e78af39c91f51f3f892a799cb7f3eeff067d1bd.scope/container".
One hacky way to find it out is to look at /run/.containerenv and get the container id there, then look for a directory in /sys/fs/cgroup named libpod-$containerid.scope, but is there a more straightforward way?
In the end I would like to be able to read "memory.max" and "memory.current" in the cgroup dir.
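For completeness, here is the hacky approach above as a small script. It assumes /run/.containerenv is populated with an id="..." line and that the container's cgroup is visible somewhere under /sys/fs/cgroup, neither of which seems guaranteed on every setup:
# read the container id from /run/.containerenv (key="value" shell-style lines)
cid=$(. /run/.containerenv 2>/dev/null && echo "$id")
# locate the libpod scope for this container in the cgroup tree
cgdir=$(find /sys/fs/cgroup -type d -name "libpod-${cid}.scope" 2>/dev/null | head -n 1)
cat "${cgdir}/container/memory.max" "${cgdir}/container/memory.current"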
1 year, 2 months
Error: writing blob: adding layer with blob ***** lsetxattr /: operation not supported
by ugiwgh@qq.com
I use Lustre as the graphRoot filesystem on CentOS 7.9, but it reports an "operation not supported" error.
$ podman4 pull quay.io/centos/centos:centos7.9.2009
WARN[0000] Network file system detected as backing store. Enforcing overlay option `force_mask="700"`. Add it to storage.conf to silence this warning
Trying to pull quay.io/centos/centos:centos7.9.2009...
Getting image source signatures
Copying blob 2d473b07cdd5 done
Error: writing blob: adding layer with blob "sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b43f2553732b55df3bc": Error processing tar file(exit status 1): lsetxattr /: operation not supported
$ podman4 version
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.16.13
Git Commit: c99f9f1b6960b98158b7f5fc4e6b5ac1a10b3542
Built: Wed Mar 23 18:33:41 2022
OS/Arch: linux/amd64
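A storage.conf sketch along the lines the warning suggests (untested). The force_mask line is literally what the warning asks for; the graphroot path is only an example of moving storage off Lustre onto a local filesystem, since the lsetxattr failure looks like missing xattr support in the backing filesystem:
# /etc/containers/storage.conf (or ~/.config/containers/storage.conf for rootless)
[storage]
driver = "overlay"
# example path on a local disk instead of Lustre
graphroot = "/var/lib/containers/storage"

[storage.options.overlay]
force_mask = "700"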
1 year, 2 months
Error setting up pivot dir
by ugiwgh@qq.com
I run podman on a Lustre filesystem.
When I run "podman load -i centos7.9.2009.tar.gz" to import an image,
the following error is output.
------------------------------------
Getting image source signatures
Copying blob 174f56854903 done
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: Error setting up pivot dir: mkdir /public1/home/wugh/.local/share/containers/storage/vfs/dir/174f5685490326fc0a1c0f5570b8663732189b327007e47ff13d2ca59673db02/.pivot_root527142738: permission denied
Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
------------------------------------
I think there is something that needs to be supported on Lustre.
$ podman version
Version: 3.3.1
API Version: 3.3.1
Go Version: go1.16.13
Git Commit: 08e1bd24196d92e1b377d4d38480581cfa9bf7ac-dirty
Built: Tue Mar 15 15:02:28 2022
OS/Arch: linux/amd64
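A possible direction (untested sketch): point rootless storage at a local filesystem instead of the Lustre-backed home directory via ~/.config/containers/storage.conf. The path below is only an example:
# ~/.config/containers/storage.conf
[storage]
driver = "vfs"
# local scratch disk instead of the Lustre-backed home directory
graphroot = "/local/scratch/wugh/containers/storage"
After changing this, "podman system reset" wipes the old state and starts using the new location.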
1 year, 2 months
Anyone seen Podman Exit Code 1 Error when pulling multiple images?
by Christopher.Miller@gd-ms.com
Running Podman v3.4.2, and when trying to pull multiple images from an on-prem registry we are seeing the following error message: Podman Exit Code 1
Along with the following error output:
ERRO[0000] Error refreshing volume 0d947f52c097215a516f417e1df5a1fdbf1014743a656ca2d8a8f039d226ad1c: error acquiring lock 3 for volume 0d947f52c097215a516f417e1df5a1fdbf1014743a656ca2d8a8f039d226ad1c: file exists
Right now if we log out of quay via CLI and then log back in, the errors stop.
A quick search finds this; I'm not sure if I'm on the correct path or not:
https://docs.podman.io/en/latest/markdown/podman-container-exists.1.html
If so, these containers do exist in our on-prem registry.
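Possibly related (untested): lock errors like this can reportedly be cleared by renumbering podman's lock allocations while nothing is running, e.g.:
# stop everything first, then renumber the lock allocations
$ podman stop -a
$ podman system renumber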
Thanks
Chris Miller
Altron INC.
703-814-7647
Christopher.miller(a)altroninc.com
Christopher.Miller(a)gd-ms.com
1 year, 2 months
DHCP lease from physical network not working for container - network is down error
by Christopher.Miller@gd-ms.com
We're trying to pilot Prometheus services as a container in our enclave (along with the node exporter and Grafana).
This is with podman 3.4.2 on RHEL 8.1. I'm using the following URL as a reference to try to set up DHCP services from the physical network for the Prometheus container. We are doing it this way so anyone on the network with a web browser can reach the UI.
https://www.redhat.com/sysadmin/leasing-ips-podman
I set up a .conflist file under /etc/cni/net.d and created the following file, 91-prometheus.conflist (I just gave it a generic name; I wasn't sure if there is a naming convention). It defines a network named prod_network:
{
  "cniVersion": "0.4.0",
  "name": "prod_network",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eno1",
      "ipam": {
        "type": "dhcp"
      }
    }
  ]
}
I enabled and started the following .socket unit:
[user_a@computer_a net.d]$ sudo systemctl list-unit-files --type=socket | grep -i "podman"
io.podman.dhcp.socket enabled
[user_a@computer_a net.d]$ sudo systemctl status io.podman.dhcp.socket
● io.podman.dhcp.socket - DHCP Client for CNI
Loaded: loaded (/usr/lib/systemd/system/io.podman.dhcp.socket; enabled; vendor preset: disabled)
Active: active (running) since Fri 2022-02-25 13:41:44 EST; 1 weeks 3 days ago
Listen: /run/cni/dhcp.sock (Stream)
CGroup: /system.slice/io.podman.dhcp.socket
Feb 25 13:41:44 computer_a systemd[1]: Listening on DHCP Client for CNI.
[user_a@computer_a net.d]$ sudo systemctl is-enabled io.podman.dhcp.socket
enabled
[user_a@computer_a net.d]$ sudo systemctl status io.podman.dhcp.service
● io.podman.dhcp.service - DHCP Client CNI Service
Loaded: loaded (/usr/lib/systemd/system/io.podman.dhcp.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-03-07 15:11:45 EST; 1h 35min ago
Main PID: 49378 (dhcp)
Tasks: 7 (limit: 45874)
Memory: 9.1M
CGroup: /system.slice/io.podman.dhcp.service
└─49378 /usr/libexec/cni/dhcp daemon
Mar 07 15:16:19 computer_a dhcp[49378]: 2022/03/07 15:16:19 network is down
Mar 07 15:16:19 computer_a dhcp[49378]: 2022/03/07 15:16:19 retrying in 3.131274 seconds
Mar 07 15:16:32 computer_a dhcp[49378]: 2022/03/07 15:16:32 no DHCP packet received within 10s
Mar 07 15:16:32 computer_a dhcp[49378]: 2022/03/07 15:16:32 retrying in 7.313039 seconds
Mar 07 15:16:49 computer_a dhcp[49378]: 2022/03/07 15:16:49 no DHCP packet received within 10s
Mar 07 15:16:49 computer_a dhcp[49378]: 2022/03/07 15:16:49 retrying in 15.601824 seconds
Mar 07 15:17:15 computer_a dhcp[49378]: 2022/03/07 15:17:15 no DHCP packet received within 10s
Mar 07 15:17:15 computer_a dhcp[49378]: 2022/03/07 15:17:15 retrying in 32.030425 seconds
Mar 07 15:17:58 computer_a dhcp[49378]: 2022/03/07 15:17:58 no DHCP packet received within 10s
Mar 07 15:17:58 computer_a dhcp[49378]: 2022/03/07 15:17:58 retrying in 64.627280 seconds
[user_a@computer_a net.d]$ sudo systemctl is-enabled io.podman.dhcp.service
enabled
[user_a@computer_a net.d]$ sudo podman run -dit --name tcs_prometheus --net=prod_network -p 9090:9090 --privileged -v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml docker.io/bitnami/prometheus:latest
ERRO[0164] error loading cached network config: network "prod_network" not found in CNI cache
WARN[0164] falling back to loading from existing plugins on disk
The container never runs; it just shows a started status and then outputs the ERRO and WARN lines above. Since it doesn't run, I can't look at its logs.
Where is the best place to start troubleshooting this? I followed the directions from the article step by step.
Also, is there a better way to present a container running locally on my RHEL workstation to our prod network?
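Some basic checks that might help narrow this down (sketch; the network name is the one from the conflist above):
# does podman see the CNI network at all?
$ sudo podman network ls
$ sudo podman network inspect prod_network
# is the macvlan master interface up and carrying link?
$ ip -br link show eno1
# is the DHCP helper socket actually there?
$ ls -l /run/cni/dhcp.sock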
Thanks
Chris Miller
Altron INC.
703-814-7647
Christopher.miller(a)altroninc.com
Christopher.Miller(a)gd-ms.com
1 year, 2 months