rootless container needs to access private key stored on host
by Mikhaël MYARA
Dear all,
I am working on a Podman container for Postfix + Dovecot. On my host, the
encryption keys (including the private key) are stored in
/etc/letsencrypt/live/xxxxx.xxx/, and these keys have to be used by
both Postfix and Dovecot.
However, the /etc/letsencrypt/live folder is only accessible by
root, so when I share the /etc/letsencrypt folder using the -v
option, the container has no access to the live folder. Of course, if I
do awful things like chmod 777 on /etc/letsencrypt/live, everything
works, but that is obviously not the right way to do it.
I would like to know how to avoid this chmod 777 while
working with a rootless container. Can I map the volume as root
(and if so, is it a good idea)? Should I play with groups on the host
(e.g. a group called something like "encrypters" that contains only root
and the user that runs the container)? Or should a root process copy
the keys?
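To make the permissions idea concrete, a variant of it that avoids touching
the container at all would be an ACL on the host granting read access to just
the user that runs the rootless container (a sketch, untested; "mike" is a
placeholder for that user):

  # grant the rootless user read/traverse access to the certificate trees
  # (/etc/letsencrypt/live only holds symlinks; the real files are in archive)
  sudo setfacl -R -m u:mike:rX /etc/letsencrypt/live /etc/letsencrypt/archive
  # default ACL on the directories so files created by future renewals inherit it
  sudo setfacl -R -d -m u:mike:rX /etc/letsencrypt/live /etc/letsencrypt/archive
  # then mount the tree read-only into the container
  podman run -v /etc/letsencrypt:/etc/letsencrypt:ro ...

Since container root maps to my own host user in rootless mode, the host-side
ACL should be all the container needs in order to read the files.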
I have also seen the "--secret" option for podman, but I did not
understand whether it would solve my problem. Please also note that the
Let's Encrypt keys are regenerated regularly, since the certificates
are only valid for a limited time.
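In case --secret is the right tool, my (possibly wrong) understanding is that
it would look roughly like this, with the caveat that the secrets would have
to be removed and recreated after every renewal ("example.com" and the secret
names are placeholders):

  # store the current key material as podman secrets, as the rootless user
  sudo cat /etc/letsencrypt/live/example.com/privkey.pem   | podman secret create mail-privkey -
  sudo cat /etc/letsencrypt/live/example.com/fullchain.pem | podman secret create mail-fullchain -
  # mount them into the container; they appear under /run/secrets/<name> by default
  podman run --secret mail-privkey --secret mail-fullchain ...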
If there is a guideline somewhere about this topic, please point me to it.
My host is Ubuntu 22.04, and the Podman version is 3.4.4. I don't use
SELinux for now.
Thanks a lot,
Mike
delayed autostart of containers - ?
by lejeczek
Hi guys.
Would you know how, if it is possible at all, to delay the
autostart of a container?
My specific scenario is one in which containers
auto-started by systemd reside under a network mount point
which is mounted at a later stage by ha/pcs (so the containers fail
to start at boot).
I'd hope that it's doable without extra, "external"
scripts/tools.
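To illustrate what I mean, the closest I have come up with so far is a systemd
drop-in for the generated container unit, roughly like this (untested sketch;
/net/containers and container-foo.service are placeholders, and I am assuming
that a failed ExecStartPre counts as a failure for Restart= purposes):

  # /etc/systemd/system/container-foo.service.d/10-wait-for-mount.conf
  [Unit]
  # only helps if the mount is a systemd .mount unit; probably not the case with ha/pcs
  RequiresMountsFor=/net/containers
  # don't give up because of repeated failed start attempts at boot
  StartLimitIntervalSec=0

  [Service]
  # fail fast if the path is not mounted yet, then let systemd retry
  ExecStartPre=/usr/bin/mountpoint -q /net/containers
  Restart=on-failure
  RestartSec=15s

...but that is already the kind of extra glue I was hoping to avoid.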
many thanks, L
don't understand how IP works in rootless mode
by Mikhaël MYARA
Dear all,
I started with Docker a few weeks ago and learned about the security issues
that come from the root daemon. I saw that Podman was close to Docker (and
it is true, my Dockerfiles worked without modification) and solved this
security issue.
With Podman, things work well as long as I use my images/containers
in root mode, using sudo. However, nothing works in user mode.
I guess that, for security reasons, it would be far better to run
containers in user mode, but I cannot understand how it works.
In root mode, typing "ip a" shows an eth0 network interface with an
IP address, and when I use this IP with the relevant port from outside
the container (i.e. from the host OS), it works.
In rootless mode, the same command shows a tap0 interface instead,
with another IP on what I guess is another subnet.
Now, if I force the use of the podman network (in rootless mode)
with --network podman, I get an eth0 interface on the same
subnet as in root mode. It seems to correspond to the cni-podman0
bridge on the host OS.
However, when I run:
telnet 10.88.0.02 8080
from inside the podman container, it works, whereas from the host OS it does
not, even though the interface responds to ping from the host.
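For reference, what I have been trying looks roughly like this (the port and
image name are placeholders). From what I have read so far, in rootless mode
the container network is a user-mode network (slirp4netns), so the container
IP is not reachable from the host and a published port is the expected way in,
but please correct me if that is wrong:

  # rootless: publish the port instead of relying on the container IP
  podman run -d -p 8080:8080 myimage
  # then, from the host
  telnet 127.0.0.1 8080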
Can someone help ?
Regards,
Mike
uidmap/gidmap in Pod Yaml
by Rudolf Vesely
Hello Everybody,
I often run containers with the following mapping:
podman run --uidmap 0:1:1000 --gidmap 0:1:1000 --uidmap 1000:0:1 --gidmap 1000:0:1 --uidmap 1001:1002:64535 --gidmap 1001:1002:64535 --name foo -it localhost/bar bash
The reason is that the "bar" image is built from a Containerfile that defines a user "foobar", and that user is configured to run the processes in containers started from the image ("USER foobar" in the Containerfile).
The mapping above makes sure that the foobar user has the same ID on the container host as the user that runs the containers on that host (my user account).
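In case it clarifies what I am after, this is how I read my own mapping
(assuming rootless operation, where the "host" side of each
container_id:host_id:count triple indexes into the IDs available to my
account: 0 is my own UID/GID, 1..N are my subordinate IDs):

  # container IDs 0-999        -> my subordinate IDs 1-1000 (container root stays unprivileged on the host)
  # container ID 1000 (foobar) -> my own UID/GID
  # container IDs 1001-65535   -> the rest of my subordinate range
  podman run \
    --uidmap 0:1:1000 --gidmap 0:1:1000 \
    --uidmap 1000:0:1 --gidmap 1000:0:1 \
    --uidmap 1001:1002:64535 --gidmap 1001:1002:64535 \
    --name foo -it localhost/bar bash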
Could you please tell me whether it is possible to specify such a uidmap/gidmap in a Pod YAML?
I tried to run a container with the given uidmap/gidmap in a Pod and then ran "podman generate kube abc", but that did not produce any extra properties that would configure the uidmap/gidmap.
Thank you.
Kind regards,
Rudolf Vesely
Issues with added on/committed to docker.io/bioconductor/bioconductor_docker images
by Johannes Graumann
Hi,
In a bid for reproducible/revisitable data analysis I have been
operating as follows for a while (first with Docker, now with Podman):
1) Get the Bioconductor-provided image, which ships an RStudio Server
installation and all OS-level libraries potentially required to install
R packages from CRAN & Bioconductor:
> podman pull docker.io/bioconductor/bioconductor_docker
2) Work:
2.a) > podman run -d -p 127.0.0.1:8787:8787 -v /tmp:/tmp -e ROOT=TRUE
-e DISABLE_AUTH=TRUE docker.io/bioconductor/bioconductor_docker
2.b) open a browser, go to 127.0.0.1:8787 and analyze data using
RStudio/Bioconductor, etc.
3) After finishing: (attempt to) save the entire toolchain:
3.a) > podman commit <CONTAINER ID> <PRIVATEREGISTRY/PROJECTID:latest>
3.b) > podman push <PRIVATEREGISTRY/PROJECTID:latest>
This has served me extremely well. In my field, people frequently come
back with minor adaptation requests YEARS after the original analysis
(usually plotting prior to publication), and I can just pull the image
and provide them with those edits in the context of EXACTLY the same
toolchain, irrespective of where Linux, libraries, R, R packages, my
own in-house code, etc. have moved in the meantime.
Recently, however, I have had issues with not being able to connect to
the RStudio server in containers started from images retrieved from our
private registry. The container will start fine, but no matter what I do,
RStudio never starts/is never accessible, and I have not succeeded in
debugging this behavior.
When going through step 3.a) today, I noticed the following warnings and
was wondering whether I may have overlooked them before and whether they
might be at the root of my failure to access RStudio after a
commit/push/pull cycle:
> podman push WARN[0024] archive: skipping
> "/home/user/.local/share/containers/storage/overlay/a1785f29a13373408
> 1f82f42b53ba8e20a7b0e2d8d2fd3cdc69303d4c681aa96/merged/run/rstudio-
> server/rstudio-rserver/session-server-rpc.socket" since it is a
> socket
> WARN[0024] archive: skipping
> "/home/user/.local/share/containers/storage/overlay/a1785f29a13373408
> 1f82f42b53ba8e20a7b0e2d8d2fd3cdc69303d4c681aa96/merged/run/rstudio-
> server/rstudio-rsession/rstudio-d" since it is a socket
Is this likely at the root of my woes, and how might it be fixed?
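For what it is worth, the debugging I am planning next looks roughly like this
(a sketch; PRIVATEREGISTRY/PROJECTID is the placeholder used above):

  # check what entrypoint/command the committed image actually carries
  podman inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' PRIVATEREGISTRY/PROJECTID:latest
  # start it the same way as the upstream image and watch the startup logs
  podman run -d -p 127.0.0.1:8787:8787 -e DISABLE_AUTH=TRUE PRIVATEREGISTRY/PROJECTID:latest
  podman logs -f <CONTAINER ID>

If anybody spots something obviously wrong with that approach, I am all ears.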
Sincerely, Joh
Rootless container with --uidmap: root loses privileges inside the container
by jklaiho@iki.fi
I've had quite a lot of success with running rootless Podman containers in an Ubuntu 22.04 Vagrant box. They're able to connect to services running on the host, and by using the --uidmap parameter, I've been able to make the container user write to bound volumes from the host with the privileges of the non-root host user that runs the service.
One last hurdle remains: I have a container running as a systemd user service as a non-root user, but internally the container runs as root. I'm using --uidmap 0:0:1 so that when the container's root user writes to bound host volumes, on the host they appear to have been created by the non-root service user.
What surprised me is that when this UID mapping is in place, the root user seems to lose root privileges inside the container. I was trying to install redis-tools to debug a Redis connection issue inside the running container, and ran 'apt update' as the container root user. This failed with errors:
E: setgroups 65534 failed - setgroups (22: Invalid argument)
E: setegid 65534 failed - setegid (22: Invalid argument)
E: seteuid 100 failed - seteuid (22: Invalid argument)
rm: cannot remove '/var/cache/apt/archives/partial/*.deb': Permission denied
Reading package lists... Done
W: chown to _apt:root of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (22: Invalid argument)
W: chown to _apt:root of directory /var/lib/apt/lists/auxfiles failed - SetupAPTPartialDirectory (22: Invalid argument)
E: setgroups 65534 failed - setgroups (22: Invalid argument)
E: setegid 65534 failed - setegid (22: Invalid argument)
E: seteuid 100 failed - seteuid (22: Invalid argument)
E: Method gave invalid 400 URI Failure message: Failed to setgroups - setgroups (22: Invalid argument)
E: Method gave invalid 400 URI Failure message: Failed to setgroups - setgroups (22: Invalid argument)
E: Method http has died unexpectedly!
E: Sub-process http returned an error code (112)
If I run the container without the --uidmap parameter, this command starts working again, but naturally I lose the user mapping I described above.
Honestly, I'm probably able to rebuild the image that the container uses so that its application runs as a non-root user (and then I'll just use e.g. --uidmap 1000:0:1, which I've found to work elsewhere), but I'm clearly missing something about how UID mapping interacts with an in-container root user, because I don't understand what exactly is causing these errors. Any ideas?
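For completeness, my current guess (unverified) is that --uidmap 0:0:1 maps
only a single UID into the user namespace, so the IDs apt wants to switch to
(such as _apt, UID 100, or group 65534) simply do not exist inside the
container, and the setgroups/setegid/seteuid calls fail with EINVAL. The
variant I intend to try next maps the rest of the range as well ("myimage" is
a placeholder):

  # container root -> my own UID/GID (intermediate ID 0 in rootless mode),
  # container IDs 1-65536 -> my subordinate UID/GID range
  podman run \
    --uidmap 0:0:1 --gidmap 0:0:1 \
    --uidmap 1:1:65536 --gidmap 1:1:65536 \
    myimage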