After reboot, Container not responding to connection requests
by Jacques Jessen
I'm running Podman as root and have created a container for Symantec's HSM Agent.
When started manually, it reports as working:
[root@PoC ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b53be5503ca7 localhost/symantec_hsm_agent:2.1_269362 catalina.sh run 4 minutes ago Up 4 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:8082->8082/tcp, 0.0.0.0:8443->8443/tcp symhsm_agent
[root@PoC ~]# podman stats
ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU %
b53be5503ca7 symhsm_agent 3.55% 216MB / 4.112GB 5.25% 1.93kB / 1.09kB 249.2MB / 0B 29 3.759969275s 3.55%
You can successfully access ports 8080, 8082, and 8443 with a browser.
However, if the server is rebooted, Podman will still report the container as up, just as above, yet a browser will return:
ERR_CONNECTION_TIMED_OUT
If you manually stop and start the container, ports 8080, 8082, and 8443 become reachable from a browser again.
Given that nothing in the configuration changes, this feels like a timing issue with the initial start. I used the output Podman generates to create the service file (a drop-in sketch for testing the timing theory follows the unit file below):
[root@PoC ~]# podman generate systemd --new --name symhsm_agent
# container-symhsm_agent.service
# autogenerated by Podman
[Unit]
Description=Podman container-symhsm_agent.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
-d \
--name symhsm_agent \
-p 8443:8443 \
-p 8082:8082 \
-p 8080:8080 \
-v /opt/podman/:/usr/local/luna symantec_hsm_agent:2.1_269362
ExecStop=/usr/bin/podman stop \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
-f \
--ignore -t 10 \
--cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
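A minimal sketch of a drop-in to test the timing theory (the sleep duration is arbitrary and this is an experiment rather than a fix; note also that the unit's After=network-online.target only has an effect if a wait-online service such as NetworkManager-wait-online.service is enabled):
# /etc/systemd/system/container-symhsm_agent.service.d/10-delay.conf
[Service]
# crude delay before "podman run", to see whether simply waiting makes the
# first start after boot reachable
ExecStartPre=/bin/sleep 15
After adding the drop-in, run `systemctl daemon-reload` and reboot to re-test.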
Having to manually log in and restart the container kind of defeats the purpose.
Thoughts and feedback appreciated.
2 weeks, 6 days
quay.io podman/buildah/skopeo image safety
by Chris Evich
All,
On August 23rd it was discovered that the credentials for several robot
service accounts with write-access to the container-images could have
leaked. Upon discovery, the credentials were invalidated. The earliest
possible leak opportunity was around March 10th, 2022.
While the investigation is ongoing, initial inspection of the images
seems to indicate it is unlikely any credentials were actually
discovered and/or used to manipulate images. Nevertheless, out of an
abundance of caution, all possibly-affected images will be disabled.
quay.io/containers/podman : tags v3 - v4
quay.io/containers/buildah : tags v1.23.1 - v1.31.0
quay.io/containers/skopeo : tags v1.5.2 - v1.13.1
quay.io/podman/stable : tags v1.6 - v4.6.0
quay.io/podman/hello:latest SHA256 afda668e706a (<= Aug 2, 2023)
quay.io/buildah/stable : tags v1.23.3 - 1.31.0
quay.io/skopeo/stable : tags v1.3.0 - 1.13.1
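For reference, one way to check whether a locally cached copy falls in
the ranges above is to compare digests of the images in local storage
(the repository names here are only examples; adjust to whatever you pull):
$ podman images --digests quay.io/podman/stable
$ podman images --digests quay.io/podman/hello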
We realize this issue has the potential to impact not only direct but
also indirect use, such as base images. The safety and integrity of
these images has taken, and must take, priority. At this time, all of
the listed images have been disabled. We will restore the originals
and/or rebuild fresh copies based on further safety analysis.
We expect the analysis to be complete and/or known-safe images to be
restored before Sept. 8th, though please keep in mind the research is
ongoing and the situation remains somewhat fluid. When the examination
work is complete, or if any manipulation is discovered, we will issue
further updates.
Thank you in advance for your patience and understanding.
3 weeks, 1 day
Ansible `template` tasks and rootless podman volume content management
by Chris Evich
Hey podman community,
While exploring Ansible management of rootless podman on a remote host,
I ran into a stinky volume-contents idempotency issue. I have an
idea[0] on how to solve this, but thought I'd reach out and see if/how
others have dealt with this situation.
---
Here's the setup:
1. I'm running an Ansible playbook against a host for which I ONLY have
access to a non-root (user) account.
2. The playbook configures `quadlet` for `systemd` management of a
configuration (podman) volume and a pod with several containers in it
running services.
3. The contents of the podman volume are 10-30 configuration files,
owned by several different UIDs/GIDs within the allocated
user-namespace. For example, some files are owned by $UID:$GID, others
may be 100123:100123, and others could be 100321:100321 (depending on
the exact user-namespace allocation details).
4. Ansible uses the 'template' module to manage 10-30 configuration
files and directories destined for the rootless podman volume. Ref:
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/templ...
5. When configuration files "change", Ansible uses a handler to restart
the pod. Ref:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers...
---
The problem:
The 'template' module knows nothing about user-namespaces. Because it's
running as a regular user, it can't `chown` the files into the
user-namespace range (permission denied). So the template module is
CONSTANTLY (and needlessly) triggering the handler to restart the pod
(due to file ownership differences). Also, as you'd expect, when
`template` sets a file's UID/GID incorrectly, the containerized services
fail on restart.
---
Idea[0]: (untested) For the `template` task, set
`ansible_python_interpreter` to a wrapper script that execs `podman
unshare /usr/bin/python3 "$@"`, so the module's file operations run
inside the user namespace and are allowed to chown into the mapped
subuid/subgid range.
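A minimal sketch of such a wrapper (location and name are made up, and this is untested):
#!/bin/sh
# wrapper on the managed host: run the Ansible module code inside the
# rootless user namespace, so chown to subuid/subgid-mapped owners works
exec /usr/bin/podman unshare /usr/bin/python3 "$@"
The `template` task could then point `ansible_python_interpreter` at the wrapper via task-level `vars:`, leaving the rest of the play on the normal interpreter.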
--
Chris Evich (he/him), RHCA III
Senior Quality Assurance Engineer
If it ain't broke, your hammer isn't wide 'nough.
1 month, 1 week
Podman v4.6.1 Released!
by Ashley Cui
Hi all,
Podman v4.6.1 <https://github.com/containers/podman/releases/tag/v4.6.1> has
been released! This is a small bugfix release with a few changes.
Changes
- When looking up an image by digest, the entire repository of the
specified value is now considered. This aligns with Docker's behavior since
v20.10.20. Previously, both the repository and the tag were ignored, and
Podman looked for any image with a matching digest. Ignoring the name,
repository, and tag of the specified value can lead to security issues and
is considered harmful.
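As a hypothetical illustration (the repository name and digest below are placeholders):
# with v4.6.1 this matches only an image pulled from registry.example.com/app
# with that digest; older releases could match any local image that happened
# to carry the same digest
podman image inspect registry.example.com/app@sha256:<digest>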
Quadlet
- Quadlet now selects the first Quadlet file found when multiple
Quadlets exist with the same name.
API
- Fixed a bug in the container kill endpoint to correctly return 409
when a container is not running (#19368).
Misc
- Updated Buildah to v1.31.2
- Updated the containers/common library to v0.55.3
Feel free to try it out!
1 month, 1 week
podman slowly shows logs on windows
by Александр Илюшкин
Hey guys, I've switched from Docker to Podman and noticed that the command
podman logs <container name> runs extremely slowly.
What should be done to fix this?
--
Best regards,
А.И.
1 month, 1 week
# in environment ?
by lejeczek
Hi guys.
Do you use # in your envs?
I wonder if it's just me having issues with those.
For a quick test to reproduce the issue, the 'ghost' web app
is easy to spin up:
-> $ podman run -dt ...................... \
     --env database__client=mysql \
     --env database__connection__host=11.1.0.1 \
     --env database__connection__user=ghostadm \
     --env database__connection__password='xyz#admghost' \
     --env database__connection__database=ghost_xyz \
     --env url=https://ghost.xyz
So far everything I've tried with 'database__connection__password'
has failed, whether quoting and/or escaping.
I often use # - does anybody have a way to make it work?
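For reference, a sketch of one untested workaround: put the variable in
an env file so the shell is taken out of the picture (the file name is
arbitrary):
$ cat ghost.env
database__connection__password=xyz#admghost
$ podman run -dt ...................... --env-file ghost.env ...
(--env-file treats lines starting with # as comments, but a # inside a
value should be passed through literally.)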
many thanks, L.
1 month, 1 week
Should I run podman-based systemd services as root?
by Mark Raynsford
Hello!
I'm aware of the age-old advice of not running services as root; I've
been administering UNIX-like systems for decades now.
If you follow the advice given in, for example, this page:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_at...
... What you'll get is a redis container running as root (unless the
redis image drops privileges itself - I don't know, I've never run it).
I've set up a few production systems running services that are inside
podman containers. I'm lucky enough that 98% of the software I use can
run inside completely unprivileged containers. For all of these
containers, I've run each container under its own user ID. The systemd
unit for each, for example, does something along these lines:
[Service]
Type=exec
User=_cardant
Group=_cardant
ExecStart=/usr/bin/podman run ...
However, doing things this way is a little messy. For example, if for
some reason I want to do something like `podman exec` in a container, I
have to `sudo -u _cardant podman exec ...`. `podman ps` will obviously
only show me the containers running for the current user. Additionally,
any images downloaded from the registry for each service will
effectively end up in the home directory of each service user,
complicating storage accounting somewhat. The UIDs/GIDs are yet another
thing I have to manage, even though they don't have any useful meaning
(they don't identify people, they're solely there because the
containers have to run as _something_). Containers also leak internal
UID/GID values (from the /etc/subuid ranges) into the filesystem, which
can complicate things.
Additionally, there are some containers that stubbornly make it awkward
to run as a non-root user despite not actually needing privileges. The
PostgreSQL image is a good example; you can run it as a non-root user,
but it will switch to another UID inside the container, and that UID/GID
will then end up on the database files that are inevitably mounted into
the container. You'll also have to match these unpredictable
UID/GIDs if you want to supply the container with TLS keys/certs,
because postgres will refuse to open them unless the UID/GID matches.
You can't get around this by telling postgres to run as UID 0; it'll
refuse, even though UID 0 inside the container isn't UID 0 outside of
it when running unprivileged.
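For what it's worth, a sketch of how the host-side ownership can at
least be predicted for a given service user (the UID values below are
made-up examples; the real ones depend on the /etc/subuid allocation):
$ grep _cardant /etc/subuid
_cardant:100000:65536
$ sudo -u _cardant podman unshare cat /proc/self/uid_map
         0       1001          1
         1     100000      65536
# so a process running as UID 999 (e.g. postgres) inside the container
# shows up on the host as 100000 + (999 - 1) = 100998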
I'm running all of these services on systems that have SELinux in
enforcing mode. My understanding is that containers will all have the
container_t domain and therefore even if they all ran as root, a
compromised container would not be able to do any meaningful harm to
the system.
I'm therefore not certain the usual "don't run as root" advice applies,
since containers don't have the same security properties as ordinary
processes (especially when combined with SELinux).
I feel like it'd simplify things if I could safely run all of
the containers as root. At the very least, I'd be able to predict
UID/GID values inside the containers from outside!
I can't get any clear advice on how people are expected to run podman
containers in production. All of the various bits of documentation in
Linux distributions that talk about running under systemd never
bother to talk about UIDs or GIDs. Any documentation on running podman
rootless seems to only talk about it in the context of developers
running unprivileged containers on their local machines for
experimentation/development. If you set up containers via Fedora
Server's cockpit UI, you'll get containers running as root everywhere.
What is the best practice here?
--
Mark Raynsford | https://www.io7m.com
1 month, 2 weeks