mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a podman rootless container and I've stumbled
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
The host where the container runs has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
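For context: fs.mqueue.* sysctls are per-IPC-namespace, and a rootless container is not allowed to raise them in the new IPC namespace it creates. One workaround people use, assuming sharing the host IPC namespace is acceptable for your application, is --ipc=host, which makes the container see the host's mqueue settings directly (image name below is just an example):

```shell
# Host value, e.g. 256
cat /proc/sys/fs/mqueue/msg_max
# Share the host IPC namespace so the container inherits the host's
# fs.mqueue.* settings instead of the namespace defaults.
podman run --rm --ipc=host alpine cat /proc/sys/fs/mqueue/msg_max
```

The trade-off is that the container then shares message queues (and other System V/POSIX IPC objects) with the host.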
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
11 months, 2 weeks
Recommended way to manage events.log file
by Dale Baley
Hi, we rely on podman events via file for our workflows. Is there a recommended way to truncate/rotate/move the events.log file without losing potential event logs while doing so? journald isn't an option.
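One common approach, assuming events_logger = "file" is set in containers.conf and logrotate is available, is to rotate with copytruncate so podman's open file descriptor stays valid across rotation. Note the honest caveat: copytruncate copies and then truncates in place, so events written in that small window can still be lost. The paths below are assumptions; point both configs at the same file:

```
# containers.conf
[engine]
events_logger = "file"
events_logfile_path = "/var/log/podman/events.log"
```

```
# /etc/logrotate.d/podman-events (sketch)
/var/log/podman/events.log {
    daily
    rotate 7
    missingok
    compress
    # Copy the file, then truncate it in place: podman keeps writing to the
    # same open fd, but events arriving between copy and truncate are lost.
    copytruncate
}
```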
Thanks in advance
3 years, 3 months
Podman's GitHub upstream branch has been renamed!
by Tom Sweeney
Hi All,
Just a quick note aimed mostly towards our contributors. The 'master'
branch on the Podman GitHub Repository
(https://github.com/containers/podman) has been renamed to 'main'. If
you have a local clone of the repository, then you should do the following:
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
Then the hardest part will be retraining the muscle memory in your
fingers to type main now! FWIW, the Buildah and Skopeo projects, along
with most of the other projects in the Containers organization on
GitHub, have also been changed.
Best Wishes,
t
3 years, 4 months
environment variables in exec session not visible?
by James Miller
Hi,
I have a problem with an environment variable that I am passing into an
exec session. The command has worked in the past, but recently I am unable
to pass an environment variable that a called executable can see properly.
If I run podman exec -e VAR='bob' -it some_cont bash -c "env", the
environment is printed out and includes the environment variable VAR='bob'.
But I cannot run podman exec .... bash -c "echo $VAR" successfully, nor in
my current situation am I able to run podman exec -e
PASSWORD="$var_I_just_read" .... bash -c "mysql -uroot -p${PASSWORD}... ".
Because the env variable PASSWORD is not present, the mysql command asks
for a password. This was certainly working previously, but doesn't seem
to function now.
I have tried a bunch of different permutations, including running the
command with real variables instead of environment variables, and it works
ok. Also, I am sure that I used to be able to run podman exec -e
SOMEVAR='Bob' -it cont_name bash -c "echo $SOMEVAR" and get Bob as output.
What am I doing wrong?
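For what it's worth, this symptom is usually host-side quoting rather than podman itself: inside double quotes the *host* shell expands $VAR before podman exec ever runs, so the container receives an already-expanded (often empty) string; single quotes pass the literal $VAR through to the shell inside the container. A minimal sketch of the difference, with plain bash standing in for the container's shell:

```shell
unset VAR
# Double quotes: the OUTER shell expands $VAR (unset here) before the child
# shell runs, so the child just echoes an empty string.
out_double=$(env VAR=bob bash -c "echo $VAR")
# Single quotes: the literal string $VAR reaches the child shell, which has
# VAR=bob in its environment and expands it itself.
out_single=$(env VAR=bob bash -c 'echo $VAR')
printf 'double: [%s]\nsingle: [%s]\n' "$out_double" "$out_single"
# prints:
# double: []
# single: [bob]
```

The same applies to podman exec ... bash -c '...': use single quotes (or escape the dollar sign) whenever the variable should be expanded inside the container rather than on the host.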
MTIA, James
--
James Stewart Miller Bsc(hons) Psych.
3 years, 4 months
run podman without isolation
by Hendrik Haddorp
Hi,
I want to run a build job inside a podman container. This is done only
to have better control over which tools, and which versions of those
tools, are installed. I'm not interested in any isolation or security and
would ideally like my user id, groups, and so on to stay the same as on
the host. So far things look quite promising when using these flags:
--cgroups=disabled
--net=host
--annotation=run.oci.keep_original_groups=1
--security-opt label=disable
Is there any easier / better way to achieve this kind of thin "isolation"?
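One flag worth adding for the "same user id as on the host" goal is --userns=keep-id, which maps your host UID/GID to the same values inside the rootless container. A sketch combining it with the flags above (image name and build command are placeholders):

```shell
# "Thin isolation" rootless invocation: host network, no cgroup setup,
# SELinux labeling disabled, original supplementary groups kept, and the
# host UID/GID preserved inside the container.
podman run --rm -it \
    --cgroups=disabled \
    --net=host \
    --userns=keep-id \
    --annotation=run.oci.keep_original_groups=1 \
    --security-opt label=disable \
    -v "$PWD:$PWD" -w "$PWD" \
    my-build-image make
```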
regards,
Hendrik
3 years, 4 months
Trouble with Podman secrets with v 3.2
by James Miller
Hi, I have podman v3.2 installed, but am having difficulty with the new
secrets --type=env.
I can create the secret from a file, but podman secret create secret_name
--env=true $env_name fails.
When I create the secret from a file, no matter whether it is JSON or a
simple variable='thing', when I create the container using the command
podman run -dit --secret=secret_name,type=env --name=container_name
image_id
and then exec into the running container with podman exec -it
container_name bash, there is no environment variable named secret_name.
Am I missing something?
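One thing that trips people up: with type=env, the secret's *contents* become the value of an environment variable *named after the secret* (or after target=..., on versions that support it). So a secret file containing variable='thing' yields $secret_name with that whole string as its value, not a variable named variable. A sketch of the flow (names and value are placeholders):

```shell
# Create the secret from stdin ('-') instead of a file.
printf 'supersecret' | podman secret create db_password -
# type=env exposes it as an environment variable named after the secret.
podman run -dit --secret=db_password,type=env --name=mycont image_id
# Single quotes so the HOST shell doesn't expand $db_password prematurely.
podman exec -it mycont bash -c 'echo "$db_password"'
```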
Regards
James
3 years, 4 months
rootless podman, docker-credential-gcloud, and snaps
by Ioan Rogers
Hi,
I'm on Ubuntu, and I've recently encountered an issue when trying to use rootless podman with the docker-credential-gcloud helper installed via snap.
This works fine when using the official google-cloud-sdk apt packages, and it used to work with snap packages until last October.
Here's what I see now:
```
$ podman pull gcr.io/private/image
Trying to pull gcr.io/private/image...
2021/02/01 13:19:17.474248 cmd_run.go:994: WARNING: cannot create user data directory: cannot create "/root/snap/google-cloud-sdk/166": mkdir /root/snap: permission denied
cannot create user data directory: /root/snap/google-cloud-sdk/166: Permission denied
error getting credentials - err: exit status 1, out: ``
Error: unable to pull gcr.io/private/image: Error initializing source docker://gcr.io/private/image:latest: error getting username and password: error getting credentials - err: exit status 1, out: ``
```
So it looks like the credential helper is being executed as root now. I'm not sure in which component the problem lies, or where I should file an issue.
Any pointers would be appreciated.
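As a stopgap while the snap confinement issue gets sorted out, one workaround is to bypass the credential helper entirely and log in with a short-lived token; oauth2accesstoken is the documented username for access-token auth against gcr.io (this assumes gcloud itself runs fine as your user):

```shell
# Log in to gcr.io directly with a short-lived gcloud access token, so podman
# reads credentials from its own auth file instead of invoking the
# snap-confined docker-credential-gcloud helper.
gcloud auth print-access-token | \
    podman login --username oauth2accesstoken --password-stdin gcr.io
podman pull gcr.io/private/image
```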
Thanks
Ioan Rogers
3 years, 4 months
Podman restore container failed
by Ali Hamieh
Hi,
When I migrated a podman container from a Google Cloud RHEL 8.3 VM to a
local RHEL 8.3 VM, I got the following error when restoring (Podman uses
CRIU for checkpoint and restore):
persmision:1: Error (criu/files-reg.c:2182): File . has bad mode 040755
(expect 040555)
(00.343223) 1: Error (criu/files.c:1357): Can't open root
(00.343659) Error (criu/cr-restore.c:1560): 163252 exited, status=1
(00.343707) Warn (criu/cr-restore.c:2469): Unable to wait 163252: No child
processes
(00.343974) mnt: Switching to new ns to clean ghosts
(00.344242) Error (criu/cr-restore.c:2483): Restoring FAILED.
And from the local RHEL 8.3 VM to the Google Cloud RHEL 8.3 VM, I got:
bad mode 040555 (expect 040755)
So the same bad mode, in reverse.
Any ideas on how to work around this? It doesn't need to be a permanent fix.
The container is a podman container: quay.io/adrianreber/counter
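Not a fix, but a diagnostic sketch under the assumption that the two hosts simply create the container root with different modes (0755 vs 0555): compare the mode of the merged rootfs on each side, and align the restore target with what the checkpoint recorded before retrying. The container name is a placeholder:

```shell
# Mount the container and inspect its root's mode
# (rootless: run these inside `podman unshare`).
root=$(podman mount counter)   # prints the merged rootfs path
stat -c '%a %n' "$root"        # e.g. "755 ..." on one host, "555 ..." on the other
# If it differs from the mode CRIU recorded at checkpoint time, aligning it
# before the restore attempt, e.g. chmod 555 "$root", may let CRIU proceed.
podman unmount counter
```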
--
Best regards,
Ali Hamieh, PhD
LinkedIn: https://www.linkedin.com/in/ali-hamieh-phd/
ResearchGate: https://www.researchgate.net/profile/Ali_Hamieh
3 years, 4 months