Storage for container c1a8d39e5276909827451b1f53c000b5c644a3266824416a55bc381a3d84abc1 has been removed
by GHui Wu
When I run "podman run --rm --env-host --userns=keep-id --network=host --pid=host --ipc=host --privileged xxximage xxx.sh", it prints the following errors.
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container c1a8d39e5276909827451b1f53c000b5c644a3266824416a55bc381a3d84abc1 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container c2a4a75804a56bba5c10c90d60a35bb21a7f55ee3ffa9e078bc672817e7d533c has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container f41301358da8715e9443c075e7e59a73d85c5188c1dbccf7e2d7722fd059b43d has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container b4e4d087ba7726bda471e1b912efa9a64aedaf39bb2fa3e41bdc915488a40542 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 798a62751f66f10bbf4e44fa83d00a7e2b78d8036045271009cec1245d6f0464 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 935464db3cab0840b9eafb98035aae29912dc531a12d626fd97804b1575c1e8c has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 8dcf505273da9f8a487993bb80a676e070784a896667aa47980ee3b75725212a has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 23abed966bf082f454bf926cbe2af386e5698d3a1e9ca961c70fa66c0a0d2dab has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 6275e312a951db5e091c5648ffa3f2e5b88b0a55b696d6b83c55bf6144e953b1 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container d9760b91ed141e12dc326d1f2920af8a84d85cb080b7b9fedafcec61444b43ee has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 389f6e50edf5f487b8386d182b89521bda58aaab9d6565409ad576597df6f001 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container ec8d4d7a2a09f4d217f13ae15bc3d594e406dde65b6cd25de7f7803167679624 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container a13e0f3d5e94c70c08a475f27cb9becf84495d796e0aa6365ce81dd2dbbdf0f7 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container edab35249b42a319512b88b6a5df2e16b513a2ba0ccb02354a94db50080a21f1 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container d6d853edf39618831cf3a1558c9dd1cdc6e9c8a366f3fc1c34b9ec00ace8cab8 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container 4fd8d85ba941fa0b3d32322fd9601791e2337603abf07b32aa540843cfe30811 has been removed"
time="2023-02-15T10:59:55+08:00" level=error msg="Storage for container b81b9c20ab89d7063cd2bcca9eff156f6dbe3923b273607b6470ef232b8647ec has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 888c4f5fe0fa25f659f791b61c26c48a1c225cc06b0305fdb23d81dcf1810c84 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 20b8412411b019eb9a002a8c65477ebf793409a87b88df5fb539035b096593ff has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 34d30f095dd26943c9ef4bc0d7ec733c97eb0ae9af2a1bbf0476882c74b4945d has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 6e5a5218bb705b1448bed6732374ac1188dd66266bb427efef3d3c03397968d3 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 65eabb6181d89b027c9c4a1b36121fc3bd09bd463935a529be87a58c952f972e has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 139e049b7dc7bc1ff73aae79ebed110659981907e7de4ec45f952928339f724f has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 2152ff8e0adbbf04309219b6482eec1b62a50390da54a5bfe31212c20b29a9e5 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 60219f2cf9fc8ebf6130150f84385cec76ae47b8c40f0121621ebf0285be2b28 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container f50eea42d5a84f3d4967d9fefb9c6943060bcd9fa131bf28d4fffc6316ced1bc has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 2de2d56e8c6b58532d6d1f04b8ec87156f32ded1679ea9ddf0db90e8f0bd9da3 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container a37e787e54bb729139cc32ce635bc32fa36fca1afb17faffae8179042ab468bd has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 064e09108a357948539d11f6d5a771d0f5e3298c0240e0a59198eb447ae5a897 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container c5ee48c7bce0a4adb531b141f2b4c6f8d1caf603b3af1da1df73a5b6616c6c69 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 3a49cde1a7089672934c23f14117b9f1b84dfa561df1abac50460148221ad953 has been removed"
time="2023-02-15T10:59:56+08:00" level=error msg="Storage for container 51e60a1704152607aaf9ed4b3d043042665f1bb33a1b6aa1fd79a18bb418e3b5 has been removed"
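One way to investigate, sketched under the assumption that the IDs in the log are leftover container records whose backing storage was deleted out from under Podman (e.g. by a crash or by wiping the graph root by hand). Removing the stale records clears the errors; note that `podman rm --force` is destructive for any container it matches, so review the list first:

```shell
#!/bin/sh
# Cleanup sketch; the ID below is the first one from the log above, shortened.
STALE_ID="c1a8d39e5276"

if command -v podman >/dev/null 2>&1; then
  # Every container Podman still has a record of, including stopped ones:
  podman ps --all --format '{{.ID}} {{.Names}} {{.Status}}'
  # Remove one stale record (uncomment after verifying the ID):
  # podman rm --force "$STALE_ID"
fi
```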
1 year, 9 months
Improving our CI-driven, rootless, systemd user service-based container deployment with Quadlet?
by jklaiho@iki.fi
Hi all. Late last year, we managed to deploy one of our web applications as a rootless Podman container and a systemd user service on an Ubuntu 22.04 VPS, using Podman 4.2.0, and built a full GitLab CI pipeline around it.
Now Podman 4.4.0 is out, featuring Quadlet. There's no mention of the release on podman.io, let alone any docs for the Podman-integrated version of Quadlet that I can find. (Any ETA on these? I only realized it was out thanks to this mailing list.)
Below I'll describe our deployment workflow and show an annotated version of the systemd service we use. What I'd like to know is: is there a way of using Quadlet for this use case? Would it be an improvement over the current setup in terms of using systemd/Podman "optimally"?
It would especially be nice if Quadlet could give us a working Type=notify unit file, to eliminate the need for PID files. I was previously unable to get it to work (I no longer remember why), and had to use Type=forking instead.
- - - - -
Container deployment is done with an Ansible playbook, run by GitLab CI. It connects to the VPS, updates the image from our private registry, builds an entirely new container out of it with a unique suffix (appname-<imagetag>_<githash>) and templates out a unit file with a matching name (the Jinja2 template is based on 'podman generate systemd' output). The old systemd service is stopped and disabled, and the new one started and enabled.
We do this because we want deploys to be "atomic", minimizing downtime. Building a new container instead of updating the old one lets us quickly revert to the previous version if the new container is faulty somehow. (Old containers and their unit files are eventually removed with a cron job.)
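The steps above can be sketched roughly as follows (registry, app name, tag, and hash are placeholders; the real pipeline does this via Ansible and Jinja2 templating, and the execution is gated behind a DO_DEPLOY guard here so the sketch is safe to read as documentation):

```shell
#!/bin/sh
# Hypothetical names standing in for the real registry/app/tag/hash.
IMAGE="registry.example.com/appname"
IMAGE_TAG="1.2.3"
GIT_HASH="abc1234"
UNIT_NAME="appname-${IMAGE_TAG}_${GIT_HASH}"

if [ "${DO_DEPLOY:-0}" = 1 ] && command -v podman >/dev/null 2>&1; then
  podman pull "${IMAGE}:${IMAGE_TAG}"
  # New container with a unique name; the old one stays around for rollback.
  podman create --name "$UNIT_NAME" "${IMAGE}:${IMAGE_TAG}"
  # Swap the services: stop/disable the old, enable/start the new.
  systemctl --user stop appname.service || true
  systemctl --user disable appname.service || true
  systemctl --user enable --now "${UNIT_NAME}.service"
fi
```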
Here's the unit file, with some annotations about our changes:
[Unit]
Description=AppName
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
# No Environment=PODMAN_SYSTEMD_UNIT=%n clause, because we don't use podman-auto-update
Restart=on-failure
TimeoutStopSec=10
# Don't try to start unless the container this service is named after exists.
ExecCondition=/bin/bash -c "/usr/bin/podman container inspect %N -f '{{.State.Status}}' 2> /dev/null"
# Symlink the PID file to a more convenient location (not strictly necessary,
# looks nicer here in PIDFile and ExecStopPost).
ExecStartPre=/bin/bash -c "ln -sf \
$XDG_RUNTIME_DIR/containers/vfs-containers/$(podman container inspect %N -f '{{.Id}}')/userdata/conmon.pid \
$XDG_RUNTIME_DIR/.%N_conmon.pid"
# Type=notify would be nicer
Type=forking
PIDFile=%t/.%N_conmon.pid
ExecStart=/usr/bin/podman start %N
# This pattern of running 'podman stop' in both ExecStop and ExecStopPost
# is from podman-generate-systemd, but I never understood the reasoning for it.
ExecStop=/usr/bin/podman stop --ignore -t 10 %N
ExecStopPost=/usr/bin/podman stop --ignore -t 10 %N
# Clean up the PID file symlink
ExecStopPost=/bin/rm -f $XDG_RUNTIME_DIR/.%N_conmon.pid
[Install]
# Saves us from having to deal with the full unit name with the image tag and the
# git hash; the symlink to this name is replaced to point to the new unit file
# during the Ansible deployment.
Alias=appname.service
WantedBy=default.target
- - - - -
This is pretty robust for our purposes, but my systemd and overall Podman knowledge is limited, so I don't know what I could be doing better. Quadlet has a rather different philosophy overall than what we're used to, but can it be leveraged in this workflow, for CI-driven replacements of rootless containers running as systemd user services?
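On the Type=notify question: Quadlet, as integrated in Podman 4.4, reads declarative .container files (for user services, from ~/.config/containers/systemd/) and generates full service units at daemon-reload time. The generated services are Type=notify, with readiness signalled by conmon, so no PID file or forking workaround is needed. A minimal sketch, with a placeholder image name:

```ini
# ~/.config/containers/systemd/appname.container
[Unit]
Description=AppName
Wants=network-online.target
After=network-online.target

[Container]
# Hypothetical image reference
Image=registry.example.com/appname:latest
# Notify=true would pass sd-notify through to the container itself;
# by default conmon signals readiness once the container is running.

[Service]
Restart=on-failure
TimeoutStopSec=10

[Install]
WantedBy=default.target
```

Whether the unique-name-per-deploy pattern maps cleanly onto Quadlet is less obvious, since Quadlet manages container creation itself rather than starting a pre-built container.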
can't get final child's PID from pipe: EOF
by GHui Wu
When I execute "podman run xxx xxx.sh", I sometimes get the following errors.
Error: Storage for container 53a7dd0ae112ebf4d1f24c9c9cc06f0925c684c872770c7ea65f87762094f2df has been removed
Error: OCI runtime error: runc: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF
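A diagnostic sketch, under the assumption that the intermittent EOF means the runtime's first child process died before reporting its PID, which is often a process- or memory-limit problem rather than a Podman bug:

```shell
#!/bin/bash
# Check the limits the runtime's child processes run under.
NPROC_LIMIT="$(ulimit -u)"          # max user processes for this shell
echo "nproc limit: $NPROC_LIMIT"
[ -r /proc/sys/kernel/pid_max ] && cat /proc/sys/kernel/pid_max

if command -v podman >/dev/null 2>&1; then
  # Which OCI runtime and cgroup manager are actually in use?
  podman info --debug 2>/dev/null | grep -iE 'ociruntime|cgroup' || true
fi
```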
podman issue inside github actions
by Aleksandar Kostadinov
Hello, I have an issue starting a container in the background inside GitHub Actions.
The image for the container was just built by the buildah action and is not
yet pushed.
The idea is to smoke-test the image before pushing to the repository.
What I see is
> podman run -d --name=searchd --rm -u 14:0 -p 9306:9306 --platform=linux/amd64 ghcr.io/3scale/searchd:porta
> Trying to pull ghcr.io/3scale/searchd:porta...
> Error: initializing source docker://ghcr.io/3scale/searchd:porta: reading manifest porta in ghcr.io/3scale/searchd: manifest unknown
> Error: Process completed with exit code 125.
You can see the full pipeline here:
https://github.com/3scale/searchd/actions/runs/4147557472/jobs/7174577031
Thank you!
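A sketch of how to narrow this down, under the assumption that the buildah action stored the image under a different name, or in a different storage (root vs. rootless) than the one `podman run` reads from, so podman falls back to pulling and hits "manifest unknown":

```shell
#!/bin/sh
IMAGE="ghcr.io/3scale/searchd:porta"

if command -v podman >/dev/null 2>&1; then
  # What is actually in local storage?
  podman images --format '{{.Repository}}:{{.Tag}}'
  # --pull=never makes the mismatch explicit instead of attempting a pull:
  podman run -d --name=searchd --rm -u 14:0 -p 9306:9306 --pull=never \
    "$IMAGE" || echo "image not found in local storage"
fi
```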
What is rundir?
by GHui Wu
When I pull an image, I get the following error.
Login Succeeded!
Trying to pull xxx.com/name:7.0...
Getting image source signatures
Copying blob sha256:b294330b903102b99b9272e8c639135e74bc7935a32d0b8a06443b86779ed546
Copying blob sha256:e5a33881590f6c79e508c14a7a529207fbb473d352ff3ea6f02c5cc8c685502f
Copying blob sha256:d9ba75c043daa80149b5d7bfe3a8732ea56ea5f7bf7e6baf50f213352dde872f
Copying config sha256:093e23f8df9aa2891b3e87581b2fc136569db41ef01617b0c35a13bafb6a1c20
Writing manifest to image destination
Storing signatures
093e23f8df9aa2891b3e87581b2fc136569db41ef01617b0c35a13bafb6a1c20
093e23f8df9aa2891b3e87581b2fc136569db41ef01617b0c35a13bafb6a1c20
Error: chown /home/myuser/rundir/containers/overlay-containers/6c47fca6915e0e1491b1f29e3e4e1e1dd6b53620591ff65d9cc61094fc4912e0/userdata: operation not permitted
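A sketch of where to look: the "rundir" in that path is presumably the storage run root (runroot), the per-boot runtime state directory, which for rootless Podman normally lives under $XDG_RUNTIME_DIR. A chown "operation not permitted" there usually points at a runroot on a filesystem, or with ownership, that the rootless user namespace cannot chown. These commands show where runroot is configured:

```shell
#!/bin/sh
RUNTIME_DIR="${XDG_RUNTIME_DIR:-unset}"
echo "XDG_RUNTIME_DIR=$RUNTIME_DIR"

if command -v podman >/dev/null 2>&1; then
  # The run root Podman is actually using:
  podman info --format '{{.Store.RunRoot}}'
fi
# Where it may have been overridden (files may not exist; -s silences that):
grep -s runroot "$HOME/.config/containers/storage.conf" \
  /etc/containers/storage.conf || true
```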
Shipping Podman RCs in Distros
by Lokesh Mandvekar
Hi all,
Opening up an earlier rather private discussion RE: "Fedora and upstream RC
releases" to get feedback from the community, especially distro users /
maintainers other than Fedora / CentOS, including maintainers of homebrew,
snap, etc.
Podman upstream usually releases RCs before any minor version bump, and
there's approximately a month between the first RC and the .0 release.
Now Fedora has a policy of consumable updates, which says any build submitted
to the testing repos should be good enough for the stable repos.
Ref:
https://docs.fedoraproject.org/en-US/fesco/Updates_Policy/#consumable-upd...
Currently, Podman RCs aren't always 100% release-worthy, which means Fedora
can't ship an RC build even to its testing repos, and that means a month's
worth of community testing lost. The situation is similar with CentOS Stream.
So, here's what I'm hoping to know:
1. Do other distros ship Podman RCs as soon as they are available, or do you
wait until the final release?
2. Would it be easier for distros to ship RCs if upstream ensured every RC
is good enough to be released, or does the "RC" in the tag itself scare
users away? In that case we could perhaps do away with RC tagging.
Thanks,
--
Lokesh
Libera, GitLab, GitHub, Fedora: lsm5
Matrix: @lsm5:lsm5.ems.host
GPG: 0xC7C3A0DD
https://keybase.io/lsm5
Use host proxy inside container
by Mehdi Haghgoo
Hello,
I need to use a network proxy, running as a SOCKS proxy at socks5://127.0.0.1:1090 on my host system, inside a container run with Podman.
How can I tell Podman to use that proxy inside the container as well? Does Podman support this?
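A sketch, assuming rootless Podman with the default slirp4netns network: the host's loopback is not visible from the container by default, but allow_host_loopback=true exposes it at 10.0.2.2, so a proxy bound to 127.0.0.1:1090 on the host becomes socks5://10.0.2.2:1090 inside the container. Execution is gated behind a RUN_DEMO guard so the sketch is safe to read as documentation:

```shell
#!/bin/sh
PROXY_URL="socks5://10.0.2.2:1090"   # host's 127.0.0.1:1090, seen from the container

if [ "${RUN_DEMO:-0}" = 1 ] && command -v podman >/dev/null 2>&1; then
  podman run --rm \
    --network slirp4netns:allow_host_loopback=true \
    -e ALL_PROXY="$PROXY_URL" -e all_proxy="$PROXY_URL" \
    alpine sh -c 'echo "proxy inside container: $ALL_PROXY"'
fi
```

With --network=host instead, the container shares the host's network namespace, so socks5://127.0.0.1:1090 works unchanged; either way, the application inside the container still has to honor the proxy environment variables itself.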