shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
But if I have a directory with nothing but a Containerfile, I get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying context of current directory:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
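for the record, being fully explicit also works, so the problem seems limited to the no-argument form (a sketch; this assumes the Containerfile sits in the current directory):

```shell
# explicit Containerfile plus explicit context directory
podman build -f Containerfile .

# explicit context only; podman finds ./Containerfile on its own
podman build .
```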
thoughts?
rday
1 week, 4 days
image signing
by Hendrik Haddorp
Hi,
Is OpenPGP the only image signing option supported by podman / skopeo,
or are there others? Using OpenPGP works quite well for me so far, but
in the end we are trying to sign an image using an IBM 4765 crypto
card, and so far we have not figured out how this can play together.
thanks,
Hendrik
3 years, 9 months
Getting Docker Discourse running with Podman
by Philip Rhoades
People,
I can run the discourse image with docker, export the container and
import it as an image into podman.
The script that manages docker discourse containers is:
/var/discourse/launcher
and is attached. It would be good if it were possible to just replace
all the occurrences of "docker" with "podman", fix version numbers etc
and be able to use the script - but can any gurus see dockerisms in the
script that will cause podman gotchas for this idea?
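As a data point, the mechanical rename itself is easy to sketch; the paths here come from the message above, and the word-boundary match is my own caution so that names like "dockerd" survive (GNU sed assumed):

```shell
# Rename docker -> podman on a copy of the launcher, then review the diff.
cp /var/discourse/launcher /tmp/launcher.podman
sed -i 's/\bdocker\b/podman/g' /tmp/launcher.podman
diff /var/discourse/launcher /tmp/launcher.podman
```

The real question remains the dockerisms the substitution can't catch, of course.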
Thanks,
Phil.
--
Philip Rhoades
PO Box 896
Cowra NSW 2794
Australia
E-mail: phil(a)pricom.com.au
3 years, 11 months
concurrent podman invocations hang
by Hendrik Haddorp
Hi,
I'm using podman 2.1.1 and noticed some odd behavior. I created a test
image that takes several minutes to stop when the container is signaled
to stop, so when I call podman stop with a long timeout, the call hangs
for a few minutes until the container stops. If I use a second terminal
while the podman stop call is waiting for the container to stop, some
other podman calls hang as well: for example, a podman inspect on the
same container, or even a simple podman ps. If I use ctrl-c to kill the
earlier podman stop call, those hanging commands continue straight
away. Are things supposed to behave that way?
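A minimal reproduction of what I'm seeing looks like this (the image and container name are just placeholders):

```shell
# terminal 1: a container whose main process ignores SIGTERM
podman run -d --name slow alpine sh -c 'trap "" TERM; sleep 1000'
podman stop -t 600 slow      # blocks until the container exits or the timeout expires

# terminal 2, while terminal 1 is still blocked:
podman ps                    # hangs
podman inspect slow          # hangs, resumes once the stop is ctrl-c'd
```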
regards,
Hendrik
4 years
Fwd: podman question
by Tom Sweeney
Daniel,
Sorry about not getting back to you sooner. IDK right off the top of
my head, but I've spun this off to the Podman mailing list; I'm sure
folks monitoring that will have a thought or three.
t
-------- Forwarded Message --------
Subject: podman question
Date: Wed, 18 Nov 2020 16:26:32 -0500
From: Daniel Pivonka <dpivonka(a)redhat.com>
To: Tom Sweeney <tsweeney(a)redhat.com>
Hi Tom,
One of my coworkers pointed me to you about a podman issue I'm having.
I'm hoping you can help me or point me in the right direction.
I work on the ceph orchestration team and I'm facing an issue when
trying to deploy containers from an authenticated registry where podman
can't seem to access the registry login info.
I'm trying to run containers from systemd in a way similar to this
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_at...>
The image I'm trying to use comes from registry.redhat.io, so as a test
I ran podman login first, then started my service with this unit file:
[Unit]
Description=Redis container
[Service]
Restart=always
ExecStart=/bin/podman run --rm --ipc=host --net=host \
  --name ceph-a112bd2e-29d1-11eb-81b2-525400ea3cbb-node-exporter.vm-00 \
  --user 65534 -d \
  --conmon-pidfile /run/ceph-a112bd2e-29d1-11eb-81b2-525400ea3cbb(a)node-exporter.vm-00.service-pid \
  --cidfile /run/ceph-a112bd2e-29d1-11eb-81b2-525400ea3cbb(a)node-exporter.vm-00.service-cid \
  -e CONTAINER_IMAGE=registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5 \
  -e NODE_NAME=vm-00 \
  -v /proc:/host/proc:ro -v /sys:/host/sys:ro -v /:/rootfs:ro \
  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5 \
  --no-collector.timex
ExecStop=/usr/bin/podman stop -t 2 redis_server
[Install]
WantedBy=local.target
This is similar to the unit.run file that ceph would use for its
services. The service fails though, and the journalctl log shows that
podman was not able to pull the image because of failed authentication:
[root@vm-00 system]# journalctl -u test.service
-- Logs begin at Wed 2020-11-18 21:04:45 UTC, end at Wed 2020-11-18
21:14:22 UTC. --
Nov 18 21:14:20 vm-00 systemd[1]: Started Redis container.
Nov 18 21:14:21 vm-00 podman[9652]: 2020-11-18 21:14:21.066551744
+0000 UTC m=+0.234565900 system refresh
Nov 18 21:14:21 vm-00 podman[9652]: Trying to pull
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5...
Nov 18 21:14:21 vm-00 podman[9652]: unable to retrieve auth token:
invalid username/password: unauthorized: Please login to the Red Hat
Registry using your Customer Portal credentials. Further
instructions ca>
Nov 18 21:14:21 vm-00 podman[9652]: Error: unable to pull
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5:
unable to pull image: Error initializing source
docker://registry.redhat.io/openshift4/>
Nov 18 21:14:21 vm-00 systemd[1]: test.service: Main process exited,
code=exited, status=125/n/a
Nov 18 21:14:21 vm-00 systemd[1]: test.service: Failed with result
'exit-code'.
Nov 18 21:14:21 vm-00 systemd[1]: test.service: Service
RestartSec=100ms expired, scheduling restart.
I did a little more debugging, and it seems that systemd does not know
where the auth file is:
Nov 18 21:19:09 vm-00 systemd[1]: Started Redis container.
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Reading configuration file
\"/usr/share/containers/libpod.conf\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Merged system config
\"/usr/share/containers/libpod.conf\": &{{false false false false
false true} 0 { [] [] []} docker:// runc map[crun:[/usr/bin/crun
/usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun
/bin/crun /run/current-system/sw/bin/crun]
kata-fc:[/usr/bin/kata-fc] kata->
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using conmon: \"/usr/bin/conmon\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Initializing boltdb state at
/var/lib/containers/storage/libpod/bolt_state.db"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using graph driver overlay"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using graph root /var/lib/containers/storage"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using run root /var/run/containers/storage"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using static dir /var/lib/containers/storage/libpod"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using tmp dir /var/run/libpod"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using volume path /var/lib/containers/storage/volumes"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Set libpod namespace to \"\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="[graphdriver] trying provided driver \"overlay\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="cached value indicated that overlay is supported"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="cached value indicated that metacopy is being used"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="cached value indicated that native-diff is not
being used"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Not using native diff for overlay, this may cause
degraded performance for building images: kernel has
CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="backingFs=extfs, projectQuotaSupported=false,
useNativeDiff=false, usingMetacopy=true"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Initializing event backend journald"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Error initializing configured OCI runtime
kata-qemu: no valid executable found for OCI runtime kata-qemu:
invalid argument"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Error initializing configured OCI runtime
kata-fc: no valid executable found for OCI runtime kata-fc: invalid
argument"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="using runtime \"/usr/bin/runc\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Error initializing configured OCI runtime crun:
no valid executable found for OCI runtime crun: invalid argument"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Error initializing configured OCI runtime
kata-runtime: no valid executable found for OCI runtime
kata-runtime: invalid argument"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=info msg="Found CNI network podman (type=bridge) at
/etc/cni/net.d/87-podman-bridge.conflist"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=warning msg="Default CNI network name podman is unchangeable"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="parsed reference into
\"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="reference
\"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5\"
does not resolve to an image ID"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="parsed reference into
\"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5\""
Nov 18 21:19:09 vm-00 podman[10481]: Trying to pull
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5...
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="reference rewritten from
'registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5'
to 'registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5'"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Trying to access
\"registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Credentials not found"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Using registries.d directory
/etc/containers/registries.d for sigstore configuration"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg=" Using \"default-docker\" configuration"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg=" No signature storage configuration found for
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Looking for TLS certificates and private keys in
/etc/docker/certs.d/registry.redhat.io"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="GET https://registry.redhat.io/v2/"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Ping https://registry.redhat.io/v2/ status 401"
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="GET
https://registry.redhat.io/auth/realms/rhcc/protocol/redhat-docker-v2/aut..."
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Server response when trying to obtain an access
token: \n\"unauthorized: Please login to the Red Hat Registry using
your Customer Portal credentials. Further instructions can be found
here: https://access.redhat.com/RegistryAuthentication\""
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Accessing
\"registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5\"
failed: unable to retrieve auth token: invalid username/password:
unauthorized: Please login to the Red Hat Registry using your
Customer Portal credentials. Further instructions can be found here:
https://access.redhat.c>
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=debug msg="Error pulling image ref
//registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5:
Error initializing source
docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5:
unable to retrieve auth token: invalid username/password:
unauthorized: Please login to the Red Hat Registr>
Nov 18 21:19:09 vm-00 podman[10481]: unable to retrieve auth
token: invalid username/password: unauthorized: Please login to the
Red Hat Registry using your Customer Portal credentials. Further
instructions can be found here:
https://access.redhat.com/RegistryAuthentication
Nov 18 21:19:09 vm-00 podman[10481]: time="2020-11-18T21:19:09Z"
level=error msg="unable to pull
registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5:
unable to pull image: Error initializing source
docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5:
unable to retrieve auth token: invalid username/password:
unauthorized: Please login to the Red >
Nov 18 21:19:09 vm-00 systemd[1]: test.service: Main process exited,
code=exited, status=125/n/a
Nov 18 21:19:09 vm-00 systemd[1]: test.service: Failed with result
'exit-code'.
Nov 18 21:19:09 vm-00 systemd[1]: test.service: Service
RestartSec=100ms expired, scheduling restart.
Running 'podman login --get-login registry.redhat.io' always shows I'm
logged in, though.
Are you aware of any reason why, when running a container from
systemd, it can't access the auth file to pull the container first?
If you need any more info or want to see it happen live, I'm more than
happy to set up a meeting or something - just let me know.
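One workaround I have not verified yet (the path below is my own placeholder): write the login credentials to a fixed file and point the unit at it explicitly, since the systemd service doesn't inherit the environment of my interactive root session:

```shell
# log in once, storing credentials at a fixed, service-readable path
podman login --authfile /etc/ceph/registry-auth.json registry.redhat.io

# then, in the unit file, reference the same file, either on the command:
#   ExecStart=/bin/podman run --authfile /etc/ceph/registry-auth.json ...
# or through the environment variable podman honors:
#   [Service]
#   Environment=REGISTRY_AUTH_FILE=/etc/ceph/registry-auth.json
```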
Thank you!
-Daniel Pivonka
4 years, 1 month
Podman Community Meeting - Tues December 1, 2020 - 11:00 a.m. Eastern (UTC-5)
by Tom Sweeney
Hi All,
The agenda for the next Podman Community Meeting has just been posted:
https://podman.io/community/meeting/agenda/
The meeting is happening on Tuesday December 1, 2020 at 11:00 a.m.
Eastern (UTC-5), and the topics for the meeting are:
* Introducing Network Aliases - Matt Heon
* Podman Split Brain API - Jhon Honce
* Demo of the new containers.conf usage - Dan Walsh
* Open Forum and Discussion - All
The meeting will be held on BlueJeans and we'll record and post the
recording on podman.io. Video conference and more details on the Agenda
page.
Hope to see you there!
t
4 years, 1 month
Multi-arch with podman manifests
by Alexander Wellbrock
Hey there! Since buildx is not a thing yet for building multi-arch images on one host, I'd like to make use of the multi-arch feature of registries like docker.io and quay.io.
I'm using the podman manifest command for this purpose, but struggle to get it to work properly. I hope someone can point me in the right direction.
I got it to work (kind of) by building two images on two hosts of different architectures, then using podman manifest create to create a multi-arch manifest of both images. I then used podman manifest push to get it onto quay.io. This works and I'm able to use it, but unfortunately there is a catch right now.
If I do this multiple times, it complains that a manifest for example-image:latest is already present. So it only works if the manifest does not already exist; I can't update it with newer images. Only if I remove the :latest manifest manually beforehand does it work again, but I'd like to avoid that. Apart from that it makes sense, I suppose, because I'd rather update the manifest with new images than override the old manifest, right?
I then tried to use podman manifest add, remove, etc. Podman then complains that the manifest version is not supported. E.g. I tried to update the actual images per architecture and then add the new ones to the manifest, which I couldn't get to work. I'm highly confused by the podman manifest command man page.
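For reference, the full workflow I'm attempting looks roughly like this (image names are placeholders; the per-arch tags are assumed to have been built and pushed beforehand):

```shell
# start fresh on each release: drop any stale local manifest list
podman manifest rm quay.io/example/myimage:latest || true

# assemble the list from the per-architecture images
podman manifest create quay.io/example/myimage:latest
podman manifest add quay.io/example/myimage:latest docker://quay.io/example/myimage:amd64
podman manifest add quay.io/example/myimage:latest docker://quay.io/example/myimage:arm64

# --all pushes the referenced images together with the manifest list
podman manifest push --all quay.io/example/myimage:latest docker://quay.io/example/myimage:latest
```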
Best regards!
4 years, 1 month