shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
but if I have a directory with nothing but a Containerfile, I get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying the current directory as the context works:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
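For anyone reproducing this, a sketch of the contrast (the error message suggests it fires only when neither a context nor a Containerfile path is given):

```shell
# In a directory containing only a Containerfile:
podman build                      # errors: no context, no containerfile specified
podman build .                    # works: "." becomes the build context
podman build -f Containerfile .   # same, with the file named explicitly via -f
```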
thoughts?
rday
1 week, 5 days
RunRoot & mistaken IDs
by lejeczek
Hi guys.
I experience this:
-> $ podman images
WARN[0000] RunRoot is pointing to a path
(/run/user/1007/containers) which is not writable. Most
likely podman will fail.
Error: creating events dirs: mkdir /run/user/1007:
permission denied
-> $ id
uid=2001(podmania) gid=2001(podmania) groups=2001(podmania)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
I think it might have something to do with the fact that I
changed the UID for the user, but why would this be?
How do I troubleshoot & fix it, ideally without a system reboot?
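In case it helps: one plausible explanation is a stale login session. If the user previously had UID 1007, /run/user/1007 still belongs to the old session and the environment still points at it, while the account is now UID 2001. A sketch of what to check (paths assumed from the error above):

```shell
# Does the runtime dir match the current UID?
echo "$XDG_RUNTIME_DIR"    # the error suggests /run/user/1007 (the old UID?)
id -u                      # now 2001

# As root: drop the stale session so logind recreates /run/user/<uid>
# on the next login -- no reboot needed
loginctl terminate-user podmania

# Or, in the current shell, point podman at the right runtime dir
export XDG_RUNTIME_DIR=/run/user/$(id -u)
```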
many thanks, L.
10 months, 4 weeks
ro sysfs doesn't affect spc_t?
by Peter Hunt
Hey team,
I've got some odd behavior in a podman-in-OpenShift use case I am trying to
figure out. I am trying to run podman in OpenShift without privilege or extra
capabilities, and ideally with a custom SELinux label that isn't `spc_t`. I have
managed to adapt the `container_engine_t` type to get past any denials, but
now I'm hitting an issue where the sysfs of the container is read-only.
I am running with this yaml:
```
apiVersion: v1
kind: Pod
metadata:
  name: no-priv
  annotations:
    io.kubernetes.cri-o.Devices: "/dev/fuse"
spec:
  containers:
  - name: no-priv-rootful
    image: quay.io/podman/stable
    args:
    - sleep
    - "1000000"
    securityContext:
      runAsUser: 1000
      seLinuxOptions:
        type: "container_engine_t"
```
and using a container-selinux based on
https://github.com/haircommander/container-selinux/tree/engine_t-improvem...
when I run this container, and then run podman inside, I get this error:
```
$ oc exec -ti pod/no-priv-rootful -- bash
[podman@no-priv-rootful /]$ podman run ubi8 ls
WARN[0005] Path "/run/secrets/etc-pki-entitlement" from
"/etc/containers/mounts.conf" doesn't exist, skipping
Error: crun: set propagation for `sys`: Permission denied: OCI permission
denied
```
What I find odd, and what is the subject of this email, is that when I
adapt the selinux label to be "spc_t":
```
type: "spc_t"
```
the container runs fine. There are no AVC denials when I run with
`container_engine_t`, but clearly something is different. Can anyone help
me identify what is happening?
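One thing worth trying before digging further (a sketch): SELinux policy ships `dontaudit` rules, so a denial can block the mount-propagation call without ever reaching the AVC log. Temporarily disabling them may surface it:

```shell
# On the node, disable dontaudit rules so normally-hidden denials get logged
sudo semodule -DB
# ... reproduce the "set propagation for `sys`" failure ...
sudo ausearch -m avc -ts recent
# Restore the normal policy build afterwards
sudo semodule -B
```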
Thanks
Peter
--
Peter Hunt, RHCE
They/Them or He/Him
Senior Software Engineer, Openshift
Red Hat <https://www.redhat.com>
1 year
lsetxattr with GlusterFS ?
by lejeczek
Hi guys
I'm trying to run a container with some volumes on a GlusterFS
volume:
-> $ { export _NAME="ko.xyz"; export _PATH=/00-APKI//${_NAME}; echo; \
     mkdir -p ${_PATH}/{,root,media,files,apps,themes,images,settings,data,public}; }; \
   podman run -dt --restart=always \
     --volume ${_PATH}/config.production.json:/var/lib/ghost/config.production.json \
     --volume ${_PATH}/root:/root:z
Error: lsetxattr /00-APKI/ko.xyz/media: operation not supported
..
GF vol is:
Volume Name: APKI
Type: Replicate
Volume ID: b90bc19a-9636-44f7-9b72-453ca9713b6a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.1.1.100:/devs/00.GLUSTERs/APKI
Brick2: 10.1.1.101:/devs/00.GLUSTERs/APKI
Brick3: 10.1.1.99:/devs/00.GLUSTERs/APKI-arbiter (arbiter)
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
storage.owner-uid: 2002
storage.owner-gid: 2002
cluster.shd-max-threads: 3
features.cache-invalidation-timeout: 900
performance.cache-invalidation: on
performance.nl-cache: on
performance.nl-cache-timeout: 600
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.stat-prefetch: on
cluster.self-heal-daemon: enable
_autofs_ makes the mountpoint with:
/00-APKI
-fstype=glusterfs,capability,kernel-writeback-cache=1,acl,log-file=/var/log/glusterfs/mount.APKI.log
10.1.1.100,10.1.1.99,10.1.1.101:/APKI
I fiddled with both autofs & gluster but cannot figure it
out - would somebody know what's wrong/missing here?
I suspect it's due to SELinux labeling - which would be great to
have, naturally.
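If it is labeling: the `:z` suffix asks podman to relabel the volume via lsetxattr, and that fails when the mount doesn't pass SELinux xattrs through. Two sketches worth trying (the `selinux` glusterfs mount option is an assumption; check your client's docs):

```shell
# Option 1: don't ask podman to relabel -- drop the :z suffix
podman run -dt --volume ${_PATH}/root:/root ...

# Option 2: enable SELinux xattr support on the mount itself, e.g. add
# "selinux" to the autofs map options:
#   -fstype=glusterfs,selinux,acl,... 10.1.1.100,10.1.1.99,10.1.1.101:/APKI
```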
many thanks, L.
1 year
subuids in sync across ?
by lejeczek
Hi guys.
I thought this must be trivial & common, yet I failed to find
any info, here on the list or on the net, on:
how to keep subuid & subgid in sync across different PCs/systems?
Would it simply be a matter of keeping all the relevant /etc
bits in sync? Naturally I'm thinking of all the bits which are
important to _podman_.
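For what it's worth, a minimal sketch of what "in sync" would mean here (the username and range are examples): rootless storage is laid out using the subordinate ID ranges, so the entries should be identical on every host, and podman told about any change:

```shell
# Identical lines on every machine, in both files:
#   myuser:100000:65536
grep '^myuser:' /etc/subuid /etc/subgid

# After editing the ranges, reset the user's storage ID mappings:
podman system migrate
```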
many thanks, L.
1 year
docker registry token authentication and podman
by Михаил Иванов
Hello, not sure if this is the right place for such a question, but still.
I'm trying to set up token authentication for a docker registry and am using
podman login to test it. As per the description, podman should receive a
'401 Unauthorized' error, and the headers in the reply should contain a
'Www-Authenticate:' entry. As far as I understand, podman should then
automatically try to access the URL specified in this entry to get the token.
But it just terminates with a 401 error. I verified registry access with curl,
and I see that Www-Authenticate is present.
Should podman actually request the token automatically, or do I misunderstand this?
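For comparison, the token dance can be driven by hand with curl (a sketch; the hostnames are placeholders for your registry and auth server):

```shell
# 1. Unauthenticated probe: expect 401 plus a Www-Authenticate header
curl -si https://registry.example.com/v2/ | grep -i www-authenticate
#    e.g. Www-Authenticate: Bearer realm="https://auth.example.com/token",service="registry"

# 2. Fetch a token from the advertised realm, then retry with it
TOKEN=$(curl -s "https://auth.example.com/token?service=registry" | jq -r .token)
curl -si -H "Authorization: Bearer $TOKEN" https://registry.example.com/v2/
```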
Best regards,
--
Michael Ivanov
1 year
Podman 4.7.2 can't run imported containers by a service user. Is it a bug?
by Hans F
Hi folks,
My storage config looks like:
# /etc/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/custom/path/root/data"
rootless_storage_path = "/custom/path/$USER/data"
runroot = "/run/containers/storage"
And I have "service" users (that are not supposed to be used as normal users) with a config like:
# /etc/passwd
foobar:x:5000:100::/var/empty:/usr/sbin/nologin
I can run a container like this:
su foobar
podman run -d docker.io/library/debian:bookworm sleep infinity
but I can't import a container and run it:
podman load < /tmp/image.tar.gz
podman image ls
podman run -d 9ff9136eaaab sleep infinity
Error: mkdir /var/empty/.cache: operation not permitted
Testing this as a "normal" user (a user with a writable home directory), I noticed that Podman creates the following file:
ls -lA .cache/containers/short-name-aliases.conf.lock
-rw-r--r-- 1 me users 0 Dec 3 16:45 .cache/containers/short-name-aliases.conf.lock
Obviously that can't work with a "service" user, since it doesn't have a writable home.
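A possible workaround until that's settled (a sketch, assuming you can set environment variables for the service user): point the cache at a writable location so the lock file can be created outside /var/empty:

```shell
# Redirect podman's cache (incl. short-name-aliases.conf.lock) somewhere writable
export XDG_CACHE_HOME=/custom/path/foobar/cache
mkdir -p "$XDG_CACHE_HOME"
podman run -d 9ff9136eaaab sleep infinity
```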
Could you please advise: is this a bug? Should I create an issue on GitHub?
Thank you.
Hans
1 year