Update on Podman V2
A few weeks ago, we made an announcement about the development of
Podman V2. In the announcement, we mentioned that the state of
upstream code would be jumbled for a while and that we would be
temporarily disabling many of our CI/CD tests. The upstream
development team has been hard at work, and we are starting to see that
work pay off.
Today, we are very excited to announce:
- The local Podman v2 client is complete. It is passing all of its
  rootful and rootless system and integration tests.
- The CI/CD tests have been re-enabled upstream and are run with each
  pull request submission. We are now hard at work finishing up some of
  the core podman-remote functions. Once those functions are complete,
  we can begin to run our podman-remote system and integration tests
  to catch any regressions.
- We have re-enabled the autobuilds for Podman v2 in Fedora Rawhide. As
  mentioned above, the Podman remote client is not complete, so that
  binary is temporarily being removed from the RPM. It will be re-added
  when the remote client is complete. Similarly, the Windows and macOS
  clients are also not being compiled or tested; this will resume once
  the remote client for Linux is complete.
We encourage you to pull the latest upstream Podman code and exercise
it with your use cases to help us protect against regressions from
Podman v1. We hope to make a full Podman v2.0 release in several
weeks, once we are confident it is stable. We look forward to hearing
what you think, and please do not hesitate to raise issues and comments
on this in our [GitHub repository](
https://github.com/containers/libpod/issues), our Freenode IRC channel
`#podman`, or to the Podman mailing list.
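For anyone who wants to exercise the in-progress v2 code, a typical from-source build looks roughly like the following. This is a sketch: the `make binaries` target name comes from the repository's Makefile and may change while v2 is in flux, and the usual container build dependencies (Go, gcc, gpgme, btrfs and device-mapper headers) are assumed to be installed.

```shell
# Fetch the current upstream tree and build a local podman binary.
git clone https://github.com/containers/libpod
cd libpod
make binaries        # builds bin/podman from the checked-out tree
./bin/podman version # sanity-check the freshly built client
```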
We’re very excited to bring Podman v2.0 to you, as it offers a lot more
flexibility through its new REST API and adds several
enhancements to the existing commands. If your project builds on top
of Podman, we would especially love to have you test this new version
out so we can ensure complete compatibility with Podman v1.0 and
address any issues found ASAP.
Note: This announcement was first released to the Podman mailing
list. If you are not yet a member of that community, please join us by
sending an email to [podman-join@lists.podman.io](mailto:
podman-join@lists.podman.io?subject=subscribe) with the word
“subscribe” as the subject.
(moving thread back to list, excuse the lack of trimming)
On 5/26/20 1:18 PM, Felder, Christian wrote:
>> On 26. May 2020, at 19:43, Gordon Messmer <gordon.messmer@gmail.com> wrote:
>> On 5/26/20 1:51 AM, Felder, Christian wrote:
>>> When using podman run -p … DNAT rules in the forward chain are
>>> automatically created for allowing traffic to the container/pod.
>> I think you might be mixing up two different things. When I run
>> "podman run -p" I see a new rule in the PREROUTING chain of the "nat"
>> table. I don't see any rules in the FORWARD chain of the "filter" table.
> Sorry. You’re right and I mixed things up. Indeed there is a new rule
> in PREROUTING and this is the rule which bypasses the INPUT chain.
> I cannot explicitly configure ports on the INPUT chain as the packets
> are forwarded to the CNI-HOSTPORT-DNAT target directly.
Because you're doing DNAT, you should be looking at the FORWARD chain,
not the INPUT chain. As far as I can tell, testing on my system, the
FORWARD chain requires an explicit rule to allow external access to the
published port.
I do have a system that doesn't require an explicit rule, because the
destination network is part of firewalld's "trusted" zone. It seems
likely that you also have a rule that is allowing forwarded traffic.
Can you post the complete output of "iptables -L -nv" and "iptables -L
-nv -t nat" somewhere that we can view them, if you don't see the rule
now? (ip6tables if you're testing connections from an external host
over IPv6, of course)
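To capture the relevant state in one go, something like the following should show both the port-forwarding entries and the CNI chain mentioned earlier in the thread (run as root; on an IPv6 host, substitute ip6tables):

```shell
# Dump the filter-table FORWARD chain and the nat-table chains that
# CNI's portmap plugin installs for published ports.
iptables -nvL FORWARD
iptables -nvL -t nat PREROUTING
iptables -nvL -t nat CNI-HOSTPORT-DNAT
```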
>> There are no DNAT rules in the FORWARD chain.
> I mixed this up with adding an explicit rule to the top of the FORWARD
> chain as described in this firewalld issue regarding docker.
> e.g. firewall-cmd --direct --add-passthrough ipv4 -I FORWARD 1 -s
> 10.88.2.0/24 -p tcp --dport 636 -j DROP
> but this wouldn’t help either. Let’s forget about that ;-)
>>> Unfortunately this bypasses the INPUT chain, which is usually used to
>>> explicitly allow external traffic for a specific service/port.
>>> Using podman run -p … the port is accessible world-wide, though.
>> That isn't true on my system, as far as I can tell.
>>> One solution is to just bind to the loopback interface using -p
>>> 127.0.0.1:XXX:XXX which will ensure that the port is just available
>>> on the
>>> host system but on the other hand this does not allow using ssh
>>> tunnelling for authorised external access.
>> Why wouldn't it allow ssh tunneling?
> You’re right I can tunnel traffic to it, e.g. using
> ssh -M -S ~/.ssh/ssh-ldap3 -f root@centos-8 -L 6636:localhost:636 -N
> which would allow me to access the container from my system on port
> 6636 which runs on port 636 on the host centos-8.
> My conclusion:
> It’s probably the easiest option to publish to loopback if I want to
> limit access just to the host, whereas publishing to all interfaces
> would allow external traffic without explicitly configuring the
> firewall, because of the PREROUTING mechanism.
> I can still ssh-tunnel traffic to that service when binding to loopback.
> Imho people may not be aware of the INPUT chain being bypassed.
I hope this message finds you guys well. I’ve a question regarding CNI and podman run’s publish flag (-p).
When using podman run -p … DNAT rules in the forward chain are automatically created to allow traffic to the container/pod.
Unfortunately this bypasses the input chain, which is usually used to explicitly allow external traffic for a specific service/port.
Using podman run -p … the port is world-wide accessible though.
One solution is to just bind to the loopback interface using -p 127.0.0.1:XXX:XXX, which ensures that the port is only available on the
host system; on the other hand, this does not allow using ssh tunnelling for authorised external access.
What are best practices for having a container's/pod’s port exposed to the host while having explicit control over whether it should be
accessible world-wide or not?
Just note I am using podman on CentOS 8 (podman-1.6.4-4.module_el8.1.0+298+41f9343a.src.rpm)
Thanks in advance.
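Putting the pieces of the thread above together, a minimal sketch of the loopback-only pattern might look like this (image name and ports are placeholders, not from the original messages):

```shell
# Publish only on the host's loopback interface, so the PREROUTING DNAT
# rule matches 127.0.0.1 and the port is not reachable from outside.
podman run -d --name ldap -p 127.0.0.1:636:636 my-ldap-image

# Authorized external users can still reach it through an ssh tunnel,
# run from the remote client machine:
#   ssh -L 6636:localhost:636 root@centos-8
```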
Is OpenPGP the only image signing option supported by podman /
skopeo, or are there other options? Using OpenPGP works quite well for me
so far, but in the end we are trying to sign an image using an IBM 4765
crypto card and so far have not figured out how this can play together.
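As far as I know, simple signing with a GnuPG key is what skopeo exposes directly; a typical invocation looks like this (registry and image names are placeholders). Whether the key can live on a hardware token is a question for the GnuPG setup rather than for skopeo itself:

```shell
# Copy an image to a registry, attaching a signature made with the given
# GnuPG identity. skopeo calls out to GnuPG, so a hardware-backed key
# would have to be usable through gpg for this to work.
skopeo copy --sign-by user@example.com \
    containers-storage:localhost/myimage:latest \
    docker://registry.example.com/myimage:latest
```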
I can run the discourse image with docker, export the container and
import it as an image into podman.
The script that manages docker discourse containers is attached. It
would be good if it were possible to just replace all the occurrences of
"docker" with "podman", fix version numbers, etc., and be able to use
the script - but can any gurus see dockerisms in the script that will
cause podman gotchas for this idea?
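As a rough first pass at that idea, the substitution itself can be sketched like so (a hypothetical helper, not part of the real launcher script, which would still need a manual read-through):

```shell
# Rewrite occurrences of the docker CLI name to podman. The word-boundary
# match avoids mangling identifiers such as "dockerfile", but strings like
# "docker.io" would still be rewritten and need manual review.
swap_cli() { sed -E 's/\bdocker\b/podman/g' "$1"; }

# Demo on a stand-in snippet rather than the real launcher script:
printf 'docker run --name dockerfile-demo app\n' > /tmp/launcher-snippet
swap_cli /tmp/launcher-snippet   # prints: podman run --name dockerfile-demo app
```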
I got a really good question on my blog, and I'm wondering if
anybody else has started down this path? In a nutshell, the Docker CIS
benchmark looks for files in certain places, etc., so it's really targeted
towards Docker, but I don't see any reason why we couldn't take it and
build an equivalent for Podman.
Not sure when I might have time to tackle this, but figured it was worth
seeing if anybody had started any work around this?
Moving Wordpress, Mediawiki and Request Tracker into containers:
Using Azure Pipelines with Red Hat Universal Base Image and Quay.io:
Hi podman team,
I wanted to try out Fedora CoreOS for a couple of upcoming projects so I
installed it on bare metal and logged in via ssh. I can start a container
detached (as my logged in user) and then verify that the server is running
but when I logout of the ssh session, the container stops. From looking at
the logs, it appears that the container process is getting SIGTERM, which I
assume means the container was stopped gracefully. But by what? How do I
stop this behavior? If I detach a container, I would like it to outlive my
session. This doesn’t happen when I sudo to root and start the container,
only when running as the non-root user. Any suggestions?
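A likely explanation, given systemd's defaults, is that systemd-logind kills the user's processes when the last session ends. Enabling "lingering" for the account (the `core` user here is an assumption, since it is the default Fedora CoreOS login) lets rootless containers outlive the ssh session:

```shell
# Allow the user's processes (including rootless podman containers)
# to keep running after logout. Run as root, or via a polkit prompt.
loginctl enable-linger core

# Confirm the setting took effect (expect: Linger=yes):
loginctl show-user core --property=Linger
```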