# in environment ?
by lejeczek

Hi guys.
Do you use # in your envs?
I wonder if it's just me having issues with those.
For a test to reproduce the issue, the 'ghost' web solution is easy & quick:
-> $ podman run -dt ...................... --env 
database__client=mysql --env 
database__connection__host=11.1.0.1 --env 
database__connection__user=ghostadm --env 
database__connection__password='xyz#admghost' --env 
database__connection__database=ghost_xyz --env 
url=https://ghost.xyz
So far everything I've tried with 'database__connection__password' 
has failed, whether quoting or escaping.
I often use '#' - does anybody have a way to make it work?
many thanks, L.
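A quick way to check whether the '#' even survives the shell and podman's --env handling is to print the variable from inside a throwaway container (a sketch; the alpine image here is just an arbitrary test image):
-> $ podman run --rm --env database__connection__password='xyz#admghost' \
     docker.io/library/alpine printenv database__connection__password
If the value prints intact, the quoting is fine and the mangling most likely happens in the application's own configuration handling rather than in podman.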
                                
                         
                        
                                
2 months, 3 weeks

RunRoot & mistaken IDs
by lejeczek

Hi guys.
I experience this:
-> $ podman images
WARN[0000] RunRoot is pointing to a path 
(/run/user/1007/containers) which is not writable. Most 
likely podman will fail.
Error: creating events dirs: mkdir /run/user/1007: 
permission denied
-> $ id
uid=2001(podmania) gid=2001(podmania) groups=2001(podmania) 
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
I think it might have something to do with the fact that I 
changed the UID for this user, but why would this be?
How do I troubleshoot & fix it, ideally without a system reboot?
many thanks, L.
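For reference, a rough checklist for this situation (a sketch; it assumes the shell still carries XDG_RUNTIME_DIR from a session created under the old UID, e.g. after `su`, and that the loginctl commands are run as root):
-> $ echo $XDG_RUNTIME_DIR                       # likely still /run/user/1007
-> $ podman info --format '{{.Store.RunRoot}}'
-> $ sudo loginctl terminate-user podmania       # ends the stale sessions and their processes
-> $ sudo loginctl enable-linger podmania
After a fresh full login (so that logind creates /run/user/<new uid>), `podman system migrate` can help clear state left over from the old UID.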
                                
                         
                        
                                
3 months, 4 weeks

shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day

  "man podman-build" suggests that the context argument is optional:
  SYNOPSIS
       podman build [options] [context]
       podman image build [options] [context]
...
       If  no  context directory is specified, then Podman will assume
       the current working  directory  as  the  build  context,  which
       should contain the Containerfile.
but if i have a directory with nothing but a Containerfile, i get:
  $ podman build
  Error: no context directory specified, and no containerfile specified
  $
OTOH, specifying the current directory as the context:
  $ podman build .
  STEP 1: FROM alpine:latest
  ... etc etc ...
thoughts?
rday
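For what it's worth, the error wording ("no context directory specified, and no containerfile specified") suggests that naming either one explicitly satisfies it, so something like the following may also build from that same directory (an untested guess, not taken from the man page):
  $ podman build -f Containerfile .
  $ podman build -f ./Containerfile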
                                
                         
                        
                                
10 months, 3 weeks

mqueue msg_max in rootless container
by Michael Ivanov

Hallo!
I'm trying to run my application in a rootless podman container and I stumble
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
  Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
Best regards,
-- 
 \   / |			           |
 (OvO) |  Михаил Иванов                    |
 (^^^) |                                   |
  \^/  |      E-mail:  ivans(a)isle.spb.ru   |
  ^ ^  |                                   |
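In case it helps with the question above: the mqueue sysctls live in the IPC namespace, so one possible workaround (a sketch, and only if sharing the host's IPC namespace is acceptable for the application) is to skip the container's own IPC namespace entirely:
  $ podman run --rm --ipc=host docker.io/library/alpine cat /proc/sys/fs/mqueue/msg_max
With --ipc=host the container sees the host's mqueue settings (256 here) instead of the fresh-namespace default of 10.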
                                
                         
                        
                                
1 year, 11 months

After reboot, Container not responding to connection requests
by Jacques Jessen

Running Podman as root, I created a container for Symantec's HSM Agent.
When manually started, it reports as working:
[root@PoC ~]# podman ps
CONTAINER ID  IMAGE                                    COMMAND          CREATED        STATUS        PORTS                                                                   NAMES
b53be5503ca7  localhost/symantec_hsm_agent:2.1_269362  catalina.sh run  4 minutes ago  Up 4 minutes  0.0.0.0:8080->8080/tcp, 0.0.0.0:8082->8082/tcp, 0.0.0.0:8443->8443/tcp  symhsm_agent
[root@PoC ~]# podman stats
ID            NAME          CPU %       MEM USAGE / LIMIT  MEM %       NET IO           BLOCK IO      PIDS        CPU TIME      AVG CPU %
b53be5503ca7  symhsm_agent  3.55%       216MB / 4.112GB    5.25%       1.93kB / 1.09kB  249.2MB / 0B  29          3.759969275s  3.55%
You can successfully access the 8080, 8082, 8443 ports with a browser.
However, if the server is rebooted, Podman will show the same results as above, i.e. that the container is working, but a browser reports:
ERR_CONNECTION_TIMED_OUT
If you manually stop and start the container, you can again successfully access the 8080, 8082, 8443 ports with a browser.
Given there's no change in the configuration, this feels like a timing issue with the initial start. I used Podman's own generator to create the service file:
[root@PoC ~]# podman generate systemd --new --name symhsm_agent
# container-symhsm_agent.service
# autogenerated by Podman
[Unit]
Description=Podman container-symhsm_agent.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman run \
        --cidfile=%t/%n.ctr-id \
        --cgroups=no-conmon \
        --rm \
        --sdnotify=conmon \
        --replace \
        -d \
        --name symhsm_agent \
        -p 8443:8443 \
        -p 8082:8082 \
        -p 8080:8080 \
        -v /opt/podman/:/usr/local/luna symantec_hsm_agent:2.1_269362
ExecStop=/usr/bin/podman stop \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm \
        -f \
        --ignore -t 10 \
        --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Having to manually login and restart the container kind of defeats the purpose.
Thoughts and feedback appreciated.
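One thing that might be worth checking (a sketch; which wait-online service applies depends on the host's network stack): After=network-online.target only delays a unit at boot if a wait-online service actually implements that target, so on hosts where none is enabled the container can be started before the network is really up.
[root@PoC ~]# systemctl is-enabled NetworkManager-wait-online.service
[root@PoC ~]# systemctl is-enabled systemd-networkd-wait-online.service
[root@PoC ~]# systemctl enable NetworkManager-wait-online.service   # if the host uses NetworkManager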
                                
                         
                        
                                
2 years, 2 months

quay.io podman/buildah/skopeo image safety
by Chris Evich

All,
On August 23rd it was discovered that the credentials for several robot 
service accounts with write-access to the container-images could have 
leaked.  Upon discovery, the credentials were invalidated.  The earliest 
possible leak opportunity was around March 10th, 2022.
While the investigation is ongoing, initial inspection of the images 
seems to indicate it is unlikely any credentials were actually 
discovered and/or used to manipulate images.  Nevertheless, out of an 
abundance of caution, all possibly-affected images will be disabled.
quay.io/containers/podman : tags v3 - v4
quay.io/containers/buildah : tags v1.23.1 - v1.31.0
quay.io/containers/skopeo : tags v1.5.2 - v1.13.1
quay.io/podman/stable : tags v1.6 - v4.6.0
quay.io/podman/hello:latest SHA256 afda668e706a (<= Aug 2, 2023)
quay.io/buildah/stable : tags v1.23.3 - 1.31.0
quay.io/skopeo/stable : tags v1.3.0 - 1.13.1
We realize this issue has the potential to impact not only direct use 
but also indirect use, such as base images.  The safety and integrity 
of these images has taken, and must continue to take, priority.  At this 
time, all the images listed above have been disabled.  We will restore 
the originals and/or rebuild fresh copies based on further safety analysis.
We expect the analysis to be complete and/or known-safe images restored 
before Sept. 8th.  Please keep in mind, though, that the research is 
ongoing and the situation remains somewhat fluid.  When the examination 
work is complete, or if any manipulation is discovered, we will issue 
further updates.
Thank you in advance for your patience and understanding.
                                
                         
                        
                                
2 years, 2 months

Ansible `template` tasks and rootless podman volume content management
by Chris Evich

Hey podman community,
While exploring Ansible management of rootless podman on a remote host, 
I ran into a stinky volume-contents idempotency issue.  I have an 
idea[0] on how to solve this, but thought I'd reach out and see if/how 
others have dealt with this situation.
---
Here's the setup:
1. I'm running an Ansible playbook against a host for which I ONLY have 
access to a non-root (user) account.
2. The playbook configures `quadlet` for `systemd` management of a 
configuration (podman) volume and a pod with several containers in it 
running services.
3. The contents of the podman volume are 10-30 configuration files, 
owned by several different UIDs/GIDs within the allocated 
user-namespace. For example, some files are owned by $UID:$GID, others 
may be 100123:100123, and others could be 100321:100321 (depending on 
the exact user-namespace allocation details).
4. Ansible uses the 'template' module to manage 10-30 configuration 
files and directories destined for the rootless podman volume.  Ref: 
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/templ...
5. When configuration files "change", Ansible uses a handler to restart 
the pod.  Ref: 
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers...
---
The problem:
The 'template' module knows nothing about user-namespaces.  Because it's 
running as a regular user, it can't `chown` the files into the 
user-namespace range (permission denied).  So the template module is 
CONSTANTLY (and needlessly) triggering the handler to restart the pod 
(due to file ownership differences).  Also as you'd expect, when 
`template` sets the file's UID/GID wrong, the containerized services 
fail on restart.
---
Idea[0]: (untested) For the `template` task, set 
`ansible_python_interpreter` to a wrapper script that execs `podman 
unshare /usr/bin/python3 "$@"`.
-- 
Chris Evich (he/him), RHCA III
Senior Quality Assurance Engineer
If it ain't broke, your hammer isn't wide 'nough.
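For concreteness, a minimal sketch of the wrapper described in idea [0] (the path and filename are placeholders):
  #!/bin/bash
  # ~/.local/bin/unshare-python: point ansible_python_interpreter at this for
  # the template tasks, so the module's Python runs inside the rootless user
  # namespace and chown to the mapped UIDs/GIDs can succeed.
  exec podman unshare /usr/bin/python3 "$@"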
                                
                         
                        
                                
2 years, 2 months

Podman v4.6.1 Released!
by Ashley Cui

Hi all,
Podman v4.6.1 <https://github.com/containers/podman/releases/tag/v4.6.1> has
been released! This is a small bugfix release with a few changes.
Changes
   - When looking up an image by digest, the entire repository of the
   specified value is now considered. This aligns with Docker's behavior since
   v20.10.20. Previously, both the repository and the tag were ignored and
   Podman looked for an image with only a matching digest. Ignoring the name,
   repository, and tag of the specified value can lead to security issues and
   is considered harmful.
Quadlet
   - Quadlet now selects the first Quadlet file found when multiple
   Quadlets exist with the same name.
API
   - Fixed a bug in the container kill endpoint to correctly return 409
   when a container is not running (#19368).
Misc
   - Updated Buildah to v1.31.2
   - Updated the containers/common library to v0.55.3
Feel free to try it out!
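As a rough illustration of the by-digest lookup change above (a sketch; it assumes the image is already present locally, and the name is arbitrary):
   $ digest=$(podman image inspect --format '{{.Digest}}' quay.io/podman/stable:latest)
   $ podman image inspect "quay.io/podman/stable@${digest}"
Previously such a lookup could match any local image carrying that digest; with v4.6.1 the repository part (quay.io/podman/stable) has to match as well.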
                                
                         
                        
                                
2 years, 2 months