shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
But if I have a directory with nothing but a Containerfile, I get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying the current directory as the context works:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
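FWIW, the long form that names the file explicitly (with the context still
given) should behave the same way:
$ podman build -f ./Containerfile .
so, unless I'm misreading things, the only invocation that fails is the bare
"podman build" with everything defaulted.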
thoughts?
rday
Podman hanging on start, ps, and erroring out with slirp4netns on run
by Eric Gustavsson
Hi all,
I have a unit file generated by podman running, but as soon as I start it
there are issues with running any other command that needs to do something
with containers. podman ps, for example, is completely unresponsive and
returns nothing, even after waiting minutes. Not only that, but running
podman start x by itself will hang, as will creating new containers.
This is with Fedora 30 and kernel 5.1.8-300.fc30.x86_64
spytec@KeyraGuest1:~$ podman --version
podman version 1.7.0
spytec@KeyraGuest1:~$ podman start bitwarden -a
^C
spytec@KeyraGuest1:~$ sudo systemctl start bitwarden
^C
spytec@KeyraGuest1:~$ sudo systemctl status bitwarden
[... output omitted...]
Jan 31 13:53:14 KeyraGuest1 systemd[1]: Starting Podman container-bitwarden.service...
spytec@KeyraGuest1:~$ sudo systemctl stop bitwarden
spytec@KeyraGuest1:~$ podman ps
^C
spytec@KeyraGuest1:~$ ps auxww | grep podman
spytec 1097 0.0 0.8 62816 33808 ? S 13:52 0:00 podman
spytec 1171 0.0 1.3 681944 55064 ? Ssl 13:53 0:00 /usr/bin/podman start bitwarden
spytec 1178 0.0 1.4 755824 56680 ? Sl 13:53 0:00 /usr/bin/podman start bitwarden
spytec 1224 0.0 0.0 9360 880 pts/0 S+ 13:54 0:00 grep --color=auto podman
spytec@KeyraGuest1:~$ journalctl -u bitwarden | tail -n 5
Jan 31 13:51:50 KeyraGuest1 systemd[1]: bitwarden.service: Failed with result 'exit-code'.
Jan 31 13:51:50 KeyraGuest1 systemd[1]: Failed to start Podman container-bitwarden.service.
Jan 31 13:53:14 KeyraGuest1 systemd[1]: Starting Podman container-bitwarden.service...
Jan 31 13:54:26 KeyraGuest1 systemd[1]: bitwarden.service: Succeeded.
Jan 31 13:54:26 KeyraGuest1 systemd[1]: Stopped Podman container-bitwarden.service.
spytec@KeyraGuest1:~$ ps auxww | grep podman
spytec 1097 0.0 0.8 62816 33808 ? S 13:52 0:00 podman
spytec 1171 0.0 1.3 682008 55064 ? Ssl 13:53 0:00 /usr/bin/podman start bitwarden
spytec 1178 0.0 1.4 755824 56680 ? Sl 13:53 0:00 /usr/bin/podman start bitwarden
spytec 1235 0.0 0.0 9360 816 pts/0 S+ 13:55 0:00 grep --color=auto podman
spytec@KeyraGuest1:~$ kill 1181
spytec@KeyraGuest1:~$ kill 1097
spytec@KeyraGuest1:~$ podman ps -a
CONTAINER ID  IMAGE                                COMMAND        CREATED      STATUS   PORTS                   NAMES
baa2f3d6ed39  docker.io/bitwardenrs/server:latest  /bitwarden_rs  3 weeks ago  Created  0.0.0.0:8080->8080/tcp  bitwarden
And creating a whole new container:
spytec@KeyraGuest1:~$ podman run -d --name test postgres
Trying to pull docker.io/library/postgres...
[... output omitted...]
Writing manifest to image destination
Storing signatures
Error: slirp4netns failed: "/usr/bin/slirp4netns: unrecognized option '--netns-type=path'
Usage: /usr/bin/slirp4netns [OPTION]... PID TAPNAME
User-mode networking for unprivileged network namespaces.

-c, --configure          bring up the interface
-e, --exit-fd=FD         specify the FD for terminating slirp4netns
-r, --ready-fd=FD        specify the FD to write to when the network is configured
-m, --mtu=MTU            specify MTU (default=1500, max=65521)
--cidr=CIDR              specify network address CIDR (default=10.0.2.0/24)
--disable-host-loopback  prohibit connecting to 127.0.0.1:* on the host namespace
-a, --api-socket=PATH    specify API socket path
-6, --enable-ipv6        enable IPv6 (experimental)
-h, --help               show this help and exit
-v, --version            show version and exit"
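My guess is that podman is passing a --netns-type=path option that the
installed slirp4netns is too old to understand, i.e. a version mismatch
between the two packages. This should show what is installed here (I haven't
checked what minimum slirp4netns version podman 1.7.0 actually needs):
$ rpm -q slirp4netns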
Thanks,
Eric Gustavsson, RHCSA
He/Him/His
Software Engineer
Red Hat <https://www.redhat.com>
IM: Telegram: @SpyTec
E1FE 044A E0DE 127D CBCA E7C7 BD1B 8DF2 C5A1 5384
oddities re: explanations for libpod.conf, mounts.conf, storage.conf
by Robert P. J. Day
I was trying to summarize the possibilities for the config files:
- mounts.conf
- storage.conf
- libpod.conf
and ended up falling into a maze of twisty passages, all alike. (All of
this is based on packaged podman-1.7.0 on Fedora 31.)
First, from "man podman", those three files are explained thusly:
libpod.conf (/usr/share/containers/libpod.conf)
libpod.conf is the configuration file for all tools using libpod to
manage containers, when run as root. Administrators can override the
defaults file by creating `/etc/containers/libpod.conf`. When Podman
runs in rootless mode, the file `$HOME/.config/containers/libpod.conf`
is created and replaces some fields in the system configuration file.
mounts.conf (/usr/share/containers/mounts.conf)
The mounts.conf file specifies volume mount directories that are
automatically mounted inside containers when executing the `podman
run` or `podman start` commands. Administrators can override the
defaults file by creating `/etc/containers/mounts.conf`.
When Podman runs in rootless mode, the file
$HOME/.config/containers/mounts.conf will override the default if it exists.
Please refer to containers-mounts.conf(5) for further details.
storage.conf (/etc/containers/storage.conf)
storage.conf is the storage configuration file for all
tools using containers/storage
When Podman runs in rootless mode, the file
`$HOME/.config/containers/storage.conf` is used instead of the system
defaults.
So the first inconsistency(?) is that the first two files have a
possible default version under /usr/share/containers, while
storage.conf does not -- according to "man podman", the default file
will be in /etc/containers. It's not clear why the inconsistency exists.
The next oddity is that the first two files are contributed by two
different Fedora packages:
$ rpm -qf /usr/share/containers/libpod.conf
podman-1.7.0-2.fc31.x86_64
$ rpm -qf /usr/share/containers/mounts.conf
containers-common-0.1.40-4.fc31.x86_64
That may not sound like a big deal, but it does allow for some
inconsistency in how those two packages either supply or explain
things, which is what you see when you run "man containers-storage.conf":
Distributions often provide a /usr/share/containers/storage.conf
file to define default storage configuration. Administrators can
override this file by creating /etc/containers/storage.conf to
specify their own configuration. The storage.conf file for rootless
users is stored in the $HOME/.config/containers/storage.conf file.
so "man podman" does *not* mention the possibility of
/usr/share/containers/storage.conf, while "man
containers-storage.conf" *does*.
Finally, I thought, "oh, just RTFS" to see what is really happening,
and in libpod/config/config.go (and I am *not* a Go expert), it seems
like libpod.conf is supported under /usr/share/containers, but I
don't see where the other two files (mounts.conf, storage.conf) are
similarly supported, although I could just be misreading badly.
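For what it's worth, here is the precedence I *think* the man pages describe,
written as pseudo-Go (this is my mental model only, NOT the actual libpod
code, and it assumes the usual "os" and "path/filepath" imports):

    // Lookup order as I understand it: rootless override first, then
    // the admin override, then the distro default.
    func findConfig(name string) string {
            candidates := []string{
                    filepath.Join(os.Getenv("HOME"), ".config/containers", name),
                    filepath.Join("/etc/containers", name),
                    filepath.Join("/usr/share/containers", name),
            }
            for _, c := range candidates {
                    if _, err := os.Stat(c); err == nil {
                            return c
                    }
            }
            return "" // fall back to built-in defaults
    }

If that model is wrong for mounts.conf and storage.conf, that would explain
my confusion.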
I'm just trying to clarify the configuration file possibilities.
rday
pathless execution of iptables
by Julen Landa Alustiza
Good morning everyone
I'm using duply on my homelab backup system and noticed today that when
I include a podman run blablabla in a pre script, podman tries to call
'iptables' without any path. It turns out duply does not add /usr/sbin
to $PATH in the pre scripts' execution environment when run as root, so
the iptables call ends with an error:
Output: time="2020-01-23T02:00:02+01:00" level=error msg="Error adding network: failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
time="2020-01-23T02:00:02+01:00" level=error msg="Error while adding pod to CNI network \"podman\": failed to locate iptables: exec: \"iptables\": executable file not found in $PATH"
Error: error configuring network namespace for container fbaad57e6a9d1894624b67cb2f3e9d8483af56bf71680befabfbb85fd589e640: failed to locate iptables: exec: "iptables": executable file not found in $PATH
So I'm asking here... should we rely on $PATH to reach the iptables
executable, or hardcode /usr/sbin as its path? If an attacker managed to
alter my $PATH, could that open an attack vector, letting them substitute
a malicious iptables located in a directory with higher precedence in my
$PATH?
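For now, the workaround I'm going to try is prepending the sbin directories
at the top of the pre script itself, before podman runs:

# duply pre script: root's sbin dirs are not in $PATH here by default
export PATH=/usr/sbin:/usr/local/sbin:$PATH
podman run blablabla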
Regards,
Strange password behavior with mysql container
by Sebastiaan
Hi there,
I'm fairly new to Podman, so apologies if my question seems simple, but I seem to be experiencing some odd behavior with MySQL and no amount of googling has yielded answers.
I'm using Podman 1.7.0 on Fedora 31.
If i create a container using the following command:
podman run -dt --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=<password> mysql
it creates a running container without a problem. If I inspect the container I can also see that the MYSQL_ROOT_PASSWORD environment variable is set correctly.
Logging into the container is another story.
If I try to do: podman exec -it mysql mysql -u root -p
I get a login prompt, but entering my password results in
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Error: non zero exit code: 1: OCI runtime error
However, if I do:
podman exec -it mysql bash
and get into a bash prompt inside the container, and then do: mysql -u root -p (i.e. effectively the exact same command),
I am able to log in with the exact same password.
It seems that when I try to use podman exec to go straight to the mysql command line, it is garbling or mangling my password. Could it be something locale- or charmap-related? Or is it a bug?
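One thing I still need to try is taking the terminal out of the equation by expanding the variable inside the container instead of typing the password at the prompt, something like:
podman exec -it mysql sh -c 'exec mysql -u root -p"$MYSQL_ROOT_PASSWORD"'
(that's the same MYSQL_ROOT_PASSWORD variable the inspect output shows as set inside the container). If that logs in, then something in the exec prompt path is mangling what I type.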
I have the same issue if I try to connect using phpMyAdmin, Adminer, WordPress, etc. -- it just won't let me log in using my password (which I know does work).
Any tips on how I could get this to work? Surely people have successfully managed to get MySQL working with Podman?
Kind Regards
Sebastiaan Stoffels
systemd example for pods?
by Brian Fallik
Hi,
Does anyone have an example they can share of managing a pod in systemd?
I'm trying to follow the guidelines outlined in
https://www.redhat.com/sysadmin/podman-shareable-systemd-services
which work great for containers but now I'm unsure how to proceed for a pod.
Ultimately I'm trying to deploy prometheus alongside nginx as a reverse
proxy. Deploying both containers inside a pod seems easiest from a
networking perspective, and also sensible logically, since the two
containers are coupled into a single functional unit.
For the case of the pod, what PID does systemd track? Is it the pause
container? If so, how does that happen? `podman pod create` doesn't seem to
accept --conmon-pidfile args like `podman run` does.
I also tried using the output of `podman generate systemd`, but that seems
tied to a specific pod instance. Ideally I'd find something more generic,
like the pattern I extracted from the blog post above; a sketch of what I
mean follows.
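For concreteness, this is the shape of unit I've been sketching (untested,
and I don't know whether Type=oneshot with RemainAfterExit is even the right
approach here -- "mypod" and the port are placeholders):

[Unit]
Description=Prometheus pod
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=-/usr/bin/podman pod rm -f mypod
ExecStartPre=/usr/bin/podman pod create --name mypod -p 9090:9090
ExecStart=/usr/bin/podman pod start mypod
ExecStop=/usr/bin/podman pod stop mypod

[Install]
WantedBy=multi-user.target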
Thanks in advance,
brian
Release 1.7.1
by alexander.v.litvak@gmail.com
Is there a chance 1.7.1 will be released any time soon?
Cannot set startup script when saving image - very confused
by Dev Linux
I have a running container, and I wish to save it as an image, specifying a
startup script (a bash script) to run when it is launched.
Inside the container, before saving to image, I have created a file:
/home/data/bin/run.sh
The first line in /home/data/bin/run.sh is:
#!/bin/bash
I save the image with this command:
$podman commit --change "CMD=/home/data/bin/run.sh" <container_id> <img_name>
Then I try to run the image, instantiating a container, and I get an error:
/bin/sh: /bin/sh: cannot execute binary file
I don't understand why it is using /bin/sh. At the very top of my run.sh
I have: #!/bin/bash
---
I tried something slightly different, saving the image using the following
command, and I get the same error message:
$podman commit --change "CMD=/bin/bash /home/data/bin/run.sh" $ID $NAME
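The next thing on my list is the exec (JSON array) form of CMD, which, if I
understand Dockerfile semantics correctly, avoids the implicit /bin/sh -c
wrapping of the string form (assuming podman's --change accepts it the way a
Dockerfile would):
$podman commit --change 'CMD ["/bin/bash", "/home/data/bin/run.sh"]' <container_id> <img_name>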
Any ideas would be greatly appreciated, thank you.
Creating an image from a container: issue with <none> images created
by Dev Linux
If I have a running container, and get the ID by issuing this command:
$podman ps
CONTAINER ID  IMAGE                           COMMAND    CREATED       STATUS                 PORTS  NAMES
bc41dd5034b8  registry.redhat.io/ubi8:latest  /bin/bash  25 hours ago  Up About an hour ago          tender_curran
---
And then create an image from that container, giving it the same name as an
already existing image ($podman commit <id_of_container> <name_of_image>):
$podman commit bc41dd5034b8 ubi8-template
It appears that the previously existing image with the same name is kept,
with an entry like so:
$podman images
REPOSITORY  TAG     IMAGE ID      CREATED            SIZE
<none>      <none>  b3cb81594745  About an hour ago  1.35 GB
---
Is it OK to delete this <none> image? Or is it attached in any way to the
container that replaced it?
---
Why is it created? I simply want to destructively overwrite the image with
the same name. Is there a command line switch that will prevent these
<none> images?
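Or, failing a switch, is the expected cleanup simply this (my understanding
is that podman image prune removes exactly these dangling <none> images, but
I'd like confirmation before running it):
$podman image prune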
couple questions about "podman pause"
by Robert P. J. Day
First, what is the difference between "podman pause" and "podman
stop", particularly since "man podman-pause" seems to confuse the two:
OPTIONS
--all, -a
Pause all running containers.
EXAMPLE
... snip ...
Pause all running containers.
podman stop -a
^^^^
So anyone reading "man podman-pause" is going to think that "stop" is
a synonym, at least from that example.
In addition, "man podman-stop" doesn't help the situation by not
distinguishing between paused and stopped containers:
OPTIONS
--all, -a
Stop all running containers. This does not include paused containers.
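My working assumption -- which neither man page actually spells out -- is
that "stop" signals the container's processes (SIGTERM, then SIGKILL after a
timeout), while "pause" freezes them in place without delivering any signal:

$ podman stop -t 10 musing_knuth # SIGTERM, then SIGKILL after 10 seconds
$ podman pause musing_knuth # freeze processes; no signal is delivered
$ podman unpause musing_knuth # thaw; processes resume where they left off

If that assumption is right, the "podman stop -a" example in "man
podman-pause" is simply wrong.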
Second issue:
$ podman pause musing_knuth
Error: pause is not supported for rootless containers
$
Even if this is true, there is nothing in "man podman-pause" to
suggest that this might occur, which is definitely grounds for
confusion.
rday