shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
but if I have a directory with nothing but a Containerfile, I get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying the current directory as the context works:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
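For reference, a sketch of the invocations that do work, assuming a Containerfile in the current directory (paths illustrative):

```shell
# Explicit context directory: podman finds ./Containerfile on its own
podman build .

# Or name the file and the context separately
podman build -f ./Containerfile .
```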
thoughts?
rday
1 week, 4 days
image signing
by Hendrik Haddorp
Hi,
is OpenPGP the only image signing option supported by podman /
skopeo, or are there others? Using OpenPGP works quite well for me
so far but in the end we are trying to sign an image using an IBM 4765
crypto card and so far have not figured out how this can play together.
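For context, the GPG flow that works can be sketched as follows (key ID, image names, and registry are illustrative):

```shell
# Sign at push time with a key from the default GPG keyring
skopeo copy --sign-by you@example.com \
    containers-storage:localhost/myimage:latest \
    docker://registry.example.com/myimage:latest

# The podman equivalent
podman push --sign-by you@example.com \
    localhost/myimage:latest registry.example.com/myimage:latest
```

Both tools delegate the actual signing to GnuPG, so hardware-backed keys would have to be made visible through the GPG keyring rather than through podman/skopeo themselves.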
thanks,
Hendrik
3 years, 9 months
can not run ubi7-init systemd container, fedora systemd container works fine
by Jan Hutař
Hello!
I have an issue running a "ubi7-init" based container. When I base my
container on "fedora", it works fine:
$ cat Containerfile
FROM fedora
RUN dnf -y install httpd; dnf clean all; systemctl enable httpd
EXPOSE 80
CMD [ "/sbin/init" ]
and then:
$ sudo podman build -f Containerfile
$ sudo podman run -ti -p 80:80 20185593d0f96c4dee56e351eae4754cdd429679c1b645dae1b6f24880ce33fc
systemd v246.6-3.fc33 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified)
[...]
[ OK ] Started The Apache HTTP Server.
[...]
But when I try the same with a ubi7-init based container (or rhel7-init):
$ cat Containerfile
FROM registry.access.redhat.com/ubi7/ubi-init
RUN echo -e "[repo1]\nname=repo1\nbaseurl=http://repos.example.com/RHEL-7/7.9/Server/x8..." >/etc/yum.repos.d/repo1.repo; yum -y install httpd; yum clean all; systemctl enable httpd
EXPOSE 80
CMD [ "/sbin/init" ]
it fails:
$ sudo podman run -ti -p 80:80 d872b16b8d0f9718c60420e3569cb4d5ddd16053fb72903e70d7b62ba3f34964
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
And same with privileged:
$ sudo podman run -ti -p 80:80 --privileged=true d872b16b8d0f9718c60420e3569cb4d5ddd16053fb72903e70d7b62ba3f34964
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
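A likely explanation (editor's note, not confirmed on this system): Fedora 33 defaults to the unified cgroup v2 hierarchy, while the systemd shipped in ubi7/rhel7 (v219) predates cgroup v2 support, so it tries and fails to mount the old cgroup v1 layout. A hedged workaround sketch is to boot the Fedora host back into the legacy hierarchy:

```shell
# Switch the host back to the cgroup v1 hierarchy (requires a reboot);
# this is the kernel argument Fedora documents for cgroup v2 opt-out
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```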
I have these versions:
$ rpm -q fedora-release-common podman
fedora-release-common-33-3.noarch
podman-2.2.1-1.fc33.x86_64
$ sudo podman version
Version: 2.2.1
API Version: 2.1.0
Go Version: go1.15.5
Built: Tue Dec 8 15:37:50 2020
OS/Arch: linux/amd64
Please, any idea on what I'm doing wrong?
Thank you in advance and happy new year!
Regards,
Jan
--
Jan Hutar Performance Engineering
jhutar(a)redhat.com Red Hat, Inc.
3 years, 11 months
Getting Docker Discourse running with Podman
by Philip Rhoades
People,
I can run the discourse image with docker, export the container and
import it as an image into podman.
The script that manages docker discourse containers is:
/var/discourse/launcher
and is attached. It would be good if it were possible to just replace
all the occurrences of "docker" with "podman", fix version numbers, etc.,
and be able to use the script - but can any gurus see dockerisms in the
script that will cause podman gotchas for this idea?
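As a starting point, a naive first pass could be done on a scratch copy of the script and then reviewed by hand (the path is the one from your post; `\b` word boundaries keep longer tokens intact):

```shell
# Rewrite a copy of the launcher, swapping the docker CLI for podman,
# then eyeball the diff for docker-specific bits (daemon socket checks,
# docker-compose calls, version probing) that need more than a rename
cp /var/discourse/launcher /tmp/launcher.podman
sed -i 's/\bdocker\b/podman/g' /tmp/launcher.podman
diff -u /var/discourse/launcher /tmp/launcher.podman
```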
Thanks,
Phil.
--
Philip Rhoades
PO Box 896
Cowra NSW 2794
Australia
E-mail: phil(a)pricom.com.au
3 years, 11 months
Single pod,multiple networks
by fugkco
Hello all,
I have a pod that has a service running that has to run on a VPN. I've been able to make this setup work and I'm able to access the service on my local network too.
The setup is:
> podman pod create --name=mypod --share net -p 8080:8080
> podman run -d --name=vpn --cap-add=NET_ADMIN --device /dev/net/tun --restart unless-stopped openvpn
> podman run -d --name=myservice --restart unless-stopped myservice
I've now figured out that the container `myservice` may also need a non-vpn connection. Therefore I'd like to add an additional nic to the container, that _isn't_ running over the VPN.
Is there a solution at all for this?
Failing that, I can setup a small proxy within the same pod that I can point `myservice` to. Would it be possible to ensure said proxy doesn't run over the VPN?
Note, I'm aware that I could potentially run aforementioned proxy on a separate pod, and then point myservice to the proxy pod, though I'd like to avoid that if possible.
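One avenue worth trying, if your podman is new enough to have `podman network connect` (it landed around 2.2, rootful only at first): create a second, non-VPN network and attach it in addition to the default one. Names here are illustrative, and since `--share net` means every container in the pod shares the infra container's network namespace, the extra interface ends up shared by the whole pod:

```shell
# Create a second network that does not route through the VPN
podman network create novpn

# Attach the pod's infra container to it as an additional interface
podman network connect novpn <infra-container-id>
```

Which traffic actually leaves via the new interface then depends on the routing table the VPN container sets up inside the shared namespace.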
Happy to provide additional clarifications.
Thanks
3 years, 11 months
Podman Deployment Hardening Patterns
by Andrew G. Dunn
Greetings, thanks for this awesome tool and growing community!
We've been deploying podman, using systemd with podman directly (instances,
and pods), or podman kube. We've been talking internally about a couple of topics
related to hardening, and have been struggling to find a place to initiate
discussion, as it's a bit meta in nature. The mailing list looks like the
right place!
Everything proposed here comes down to personal preference, but the reason we
wanted to share our discussion with the community is to explore what the sane
defaults should be for users of podman.
# rootless, rootless as non-root
Brian Smith via this video [0] uses the terminology "rootless-podman-as-non-
root". We understand that (likely naively) to be "shifting twice":
- once from root on metal, to user on metal
- once more, from root and the user colliding, to root in the container being
remapped off the user (using subuid/subgid)
Discussion here [1] shows that if you were to attempt to use systemd-sysusers
or systemd-tmpfiles to package something (using podman), you'd not be able to
set up the subuid/subgid mappings. Poettering goes on to point out that
subuid/subgid as implemented has flaws [2].
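For readers following along, the double shift relies on per-user subordinate ID ranges; a config fragment sketch (user name and values illustrative):

```
# /etc/subuid and /etc/subgid: <user>:<first subordinate id>:<count>
appsvc:100000:65536
```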
We've been deploying as "rootless-podman-as-non-root" but have recently been
considering removing the subuid/subgid configurations as the podman instances
for us are already running under "system" users (e.g. `/sbin/nologin`). We
can't seem to grasp the specific advantage when doing a "systems" deployment,
and there is a distinct disadvantage when having to deal with file permissions
that are uid/gid shifted (pressing one to use more permissive permissions than
what would typically be necessary).
We realize that "rootless-podman-as-non-root" is valuable for things like
toolbox [3], where the example would be a non-root user wanting to run a
container mapped once again off their namespaces (e.g. a browser or something
of high risk).
# systemd unit hardening
systemd itself has a _lot_ of hardening features available [4], one can make a
unit wrapping podman and then examine it via `systemd-analyze security
unit.service`. As podman has a `podman generate systemd` it'd be extremely
interesting to have some discussion on how these features of systemd could be
enumerated/used by default.
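To make the idea concrete, a drop-in sketch for a podman-wrapping unit; the directives are from systemd.exec(5), but which ones a given container tolerates is workload-specific, so treat these as illustrative rather than recommended defaults:

```
# /etc/systemd/system/myservice.service.d/hardening.conf (name illustrative)
# Score the result with: systemd-analyze security myservice.service
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
```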
# seccomp/eBPF/selinux
There is already some documentation on generating seccomp profiles [5], as
well as udica [6]. These seem to be very powerful tools to create instance
specific isolation for deployments. We're very interested in these, but we're
wondering how to practically apply these techniques for something that is a
complex monolithic container (e.g. gitlab).
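For reference, the udica workflow from its README can be sketched as (profile and container names illustrative):

```shell
# Generate a tailored SELinux policy from a running container's mounts/ports
podman inspect gitlab | udica gitlab_policy

# Install the generated module alongside udica's base templates
semodule -i gitlab_policy.cil /usr/share/udica/templates/base_container.cil

# Re-run the container confined by the new type
podman run --security-opt label=type:gitlab_policy.process <image>
```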
# Questions
Does the podman community have a line to the systemd community to talk about
leveraging subuid/subgid? Is there a more systemd-focused formalism for
accomplishing this shift?
What does the "shifting twice" accomplish for you when deploying a system
style service? (as opposed to the other hardening options mentioned above)
Will the podman community consider systemd to be a "first class" deployment,
and split that style of deployment into "systems" and "users", where on
"systems" we can go far deeper into the expected defaults/patterns (toolbox
handling the "users" use case well)?
Would someone working on selinux/udica consider a complex container use case
(e.g. keycloak from RedHat itself, or gitlab as an upstream partner) for the
generation of profiles?
What are the patterns with generating profiles with udica? Would it be most
reasonable to generate these profiles on a test system, generating a profile
each time you instance the container, then deploying those profiles to
production?
We're mainly just wanting to hear from folks who are deploying podman as to
how they are using these tools, and what other tooling/techniques may be out
there that we could be looking at. Thanks for considering the inquiry!
[0]: http://www.youtube.com/watch?v=ZgXpWKgQclc&t=7m40s
[1]: https://github.com/systemd/systemd/issues/13717#issuecomment-711167021
[2]: https://github.com/systemd/systemd/issues/13717#issuecomment-539476282
[3]: https://github.com/containers/toolbox
[4]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
[5]: https://podman.io/blogs/2019/10/15/generate-seccomp-profiles.html
[6]: https://github.com/containers/udica
4 years
broken networking for published ports
by Brian Fallik
Hi,
I must have jinxed myself when I emailed this list a few days ago about how
well Podman had been working for me. Earlier today I let Gnome Software
Center update my Fedora 33 system. After the update grafana alerted me
about an unreachable service and I confirmed that both of my podman
services had fallen off the network.
Podman runs two sets of containers on this machine:
* a Prometheus pod containing several containers for prometheus, grafana,
and nginx; the pod publishes port 443/tcp on the host ("-p 443")
* a CoreDNS container; this container exposes port 53/udp and 9153/tcp
("-p 10.100.10.5:53:53/udp -p 9153")
and suddenly none of these ports were accessible over network or even
locally on the host.
After some fumbling I realized that some of the ports weren't being
published like they used to:
# podman ps
CONTAINER ID  IMAGE                             COMMAND               CREATED        STATUS            PORTS                    NAMES
fa71bff884bc  docker.io/coredns/coredns:latest  -conf /root/Coref...  4 seconds ago  Up 4 seconds ago  0.0.0.0:34595->9153/tcp  coredns
f034c62577a2  docker.io/prom/prometheus:latest  --config.file=/et...  12 hours ago   Up 12 hours ago   0.0.0.0:37683->443/tcp   prometheus
You can see that podman is listening on 34595 instead of 9153. This port
seems to be randomly assigned each time I restart the container.
I can workaround the above TCP issue by specifying both src and dest ports,
e.g. "-p 9153:9153". I scanned the recent release notes, open github
issues, and some docs but can't understand why "-p 9153" suddenly stopped
working like it had been. Any ideas?
The bigger problem is that the UDP port for DNS is completely broken. I
intentionally publish 53 to a specific IP so that CoreDNS only handles
lookups from the external interface but "-p 10.100.10.5:53:53" doesn't work
anymore:
# dig @10.100.10.5 coredns.io
...
;; connection timed out; no servers could be reached
and I don't see any evidence of the UDP mapping at all in podman or netstat:
# netstat -aun | grep 10.100.10.5
udp   0   0   10.100.10.5:68      10.100.10.1:67       ESTABLISHED
udp   0   0   10.100.10.5:41443   172.217.10.227:443   ESTABLISHED
udp   0   0   10.100.10.5:58091   142.250.64.106:443   ESTABLISHED
udp   0   0   10.100.10.5:46088   142.250.64.110:443   ESTABLISHED
udp   0   0   10.100.10.5:58834   172.217.197.189:443  ESTABLISHED
# podman port -a | grep -v tcp
#
I'm not 100% sure either of these commands would be expected to show the UDP
mapping, but DNS lookups are broken and I don't know how to fix this.
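One thing worth double-checking (editor's sketch, addresses taken from the post): without an explicit `/udp` suffix, `-p` publishes TCP only, so the DNS mapping would need:

```shell
podman run -d --name coredns \
    -p 10.100.10.5:53:53/udp \
    -p 9153:9153 \
    docker.io/coredns/coredns:latest
```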
I'm not sure what was upgraded earlier today that might have caused this
behavior change. I also haven't seen any relevant errors in any of the
obvious logs.
# podman --version
podman version 2.2.1
Any help would be appreciated!
Thx,
brian
4 years
working with manifest lists
by Hendrik Haddorp
Hi,
I'm trying to create a manifest list, so a multi arch image. I started
by using skopeo to copy busybox:1.32 for two different architectures to
my local container storage. I gave them locally different tags. After
that I tried to create a manifest list using "podman manifest create". The
first issue I noticed is that
http://docs.podman.io/en/latest/markdown/podman-manifest-create.1.html
shows a different syntax to what I get in podman 2.1.1, as I can not
specify more than one image in the command. Anyhow, so I first create
the list with the first image and then use "podman manifest add" to add
the second image. Doing a "podman images" command I see that my manifest
list image size increased a bit but is still just a few KB. Pushing the
manifest list now results in a failure. Should those commands work, or am
I missing something? I would also have expected that I can create the
manifest list with skopeo, ideally as part of a copy call, but I didn't
find anything on that in skopeo. Is there a better approach for
combining multiple images to one?
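For comparison, the workflow as it exists in podman 2.x can be sketched like this (local tags and registry are illustrative):

```shell
# Create the (initially empty) manifest list
podman manifest create mylist

# Add one image per architecture
podman manifest add mylist localhost/busybox:1.32-amd64
podman manifest add mylist localhost/busybox:1.32-arm64

# Push the list and all referenced images to a registry
podman manifest push --all mylist docker://registry.example.com/busybox:1.32
```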
thanks,
Hendrik
4 years
Podman API v1.0 (varlink) and libpod.conf removal notice
by Tom Sweeney
In preparation for the Podman v3.0 release, targeted for delivery on
Fedora 33 in late January and other distributions soon thereafter, the
varlink library that supported the Podman v1.0 API has been removed from
the upstream codebase. In addition to that, the libpod.conf file that
was the original file used for holding Podman configuration variables and
values has also been removed from the upstream codebase.
Starting with Podman v2.0, a new RESTful API was provided to
replace the v1.0 API. Around that same time, the containers.conf file
was created to take the place of the libpod.conf file. Any changes to
the libpod.conf file that you have made should be transferable to the
containers.conf file.
For more information, please see the recent post on podman.io
(https://podman.io/blogs/2020/12/11/remove-varlink-libpod-conf-notice.html).
Thanks to all in the community who helped Podman reach this milestone.
4 years
podman systemd demo
by Ed Haynes
I put together a small podman systemd demo for one of my customers and
would be happy for comments or suggestions. It's here:
https://github.com/edhaynes/podman_systemd_demo
In my case the customer is pretty new to both podman and also the idea of
using systemd to manage things so I wanted to keep it pretty simple and
spell things out. Let me know what you think -
Ed
podman_systemd_demo
Showing podman integration with Systemd to manage lifecycle of container
For this project I created a vm based on fedora33 to act as a sandbox. Go
into the fedora vm and git clone this project to run locally.
Purpose is to show how podman can easily use systemd to manage lifecycle of
a container. Think of a small edge device, too small to run kubernetes, but
you would like to run containerized applications on it so that you can
isolate application dependencies from the OS. The OS is minimal and just
enough to run containers, but you would like for containers to restart if
they crash and also restart automatically on reboot. For this example I'm
running redis, an in-memory key value database as an example.
This demo should be run as root - in Fedora:
sudo su -
There are 3 scripts.
"launch_redis_container.sh" will pull the redis container, then set
appropriate SELinux permissions. The containerized redis server is launched
and mapped to the normal redis networking ports. Then 'podman generate
systemd' creates a systemd unit file based on this container which is
enabled and started. Now your containerized database is running and systemd
is managing its lifecycle as a normal linux service.
At this point the status of the redis-server will be shown (press "q" to
get out).
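The core of what that first script automates can be sketched as (flags per podman 2.x; the unit file name follows podman's `container-<name>.service` convention):

```shell
# Run redis, then let podman write a unit file for the running container
podman run -d --name redis -p 6379:6379 redis
podman generate systemd --name --files --restart-policy=always redis

# Install and enable the generated unit so systemd owns the lifecycle
mv container-redis.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-redis.service
```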
"test_redis_container.sh" exercises the redis database api by setting a
value and then retrieving it to show the database is working. The database
is then killed using pkill and you're shown how systemd creates a new
container to replace it and also the recovered database is working. The
systemd unit file also specifies that the container restarts at startup, so if
you like, restart the VM and verify the database is still working.
cleanup.sh stops the redis-server, disables the service, and cleans up the
systemd unit file and the container so you can run this demo again from the
top if you like.
--
Ed Haynes
SOLUTIONS ARCHITECT
Red Hat <https://www.redhat.com/>
ehaynes(a)redhat.com *M: (978)-551-0057 *
TRIED. TESTED. TRUSTED.
4 years