[Podman] Re: podman image for nginx
by Manish Srivastava
It seems like your build is attempting to connect to Debian repositories
(deb.debian.org) to fetch packages. That is expected: the log shows apt-get
running inside the image being built, which is based on Debian buster, so
the host's zypper repositories in /etc/zypp/repos.d/ are not involved here,
even though the host runs SUSE.
The repeated "Connection failed [IP: ... 80]" errors mean the build host
cannot reach deb.debian.org over HTTP. Check whether the host can fetch one
of those URLs directly (e.g. with curl), and whether an HTTP proxy is
required; if so, pass it to the build with --build-arg http_proxy=...
(podman forwards the standard proxy variables into build containers).
On Wed, Dec 6, 2023 at 10:06 PM Matthias Apitz <guru(a)unixarea.de> wrote:
>
> Hello,
>
> I'm trying to build a podman image as described here:
>
> https://docs.podman.io/en/latest/Introduction.html
>
> with the command:
>
> podman build -t nginx https://git.io/Jf8ol
>
> on SuSE LINUX SLES 15 SP5. This fails with the attached nohup log. It
> fails mostly due to this:
> ...
> Adding system user `nginx' (UID 101) ...
> Adding new user `nginx' (UID 101) with group `nginx' ...
> Not creating home directory `/nonexistent'.
> + apt-get update
> Err:1 http://deb.debian.org/debian buster InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:2 http://deb.debian.org/debian-security buster/updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Err:3 http://deb.debian.org/debian buster-updates InRelease
> Connection failed [IP: 146.75.118.132 80]
> Reading package lists...
> W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease
> Connection failed [IP: 146.75.118.132 80]
> ...
>
> What can I do?
>
> Thanks
>
> matthias
>
> --
> Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 4 months
[Podman] Re: Networking between KVM and containers
by Paul Holzinger
Hi Sven,
There is a dhcp plugin that you can use instead of the host-local ipam
plugin.
https://www.cni.dev/plugins/current/ipam/dhcp/
---
Paul
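A sketch of Sven's network config below switched over to the dhcp ipam type (untested; the dhcp plugin additionally needs its daemon running, as described on the page above):

```json
[
  {
    "cniVersion": "0.4.0",
    "name": "services",
    "plugins": [
      {
        "type": "macvlan",
        "master": "virbr1",
        "ipam": {
          "type": "dhcp"
        }
      }
    ]
  }
]
```

With this, DHCP requests are forwarded from the container's interface, so leases (and DNS) would come from the OPNsense VM. The daemon is typically started as `dhcp daemon` from the CNI plugins directory; the exact path varies by distro.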
On Wed, May 10, 2023 at 10:17 PM Sven Schwermer via Podman <
podman(a)lists.podman.io> wrote:
> Hi,
>
> I have a host running Debian Bullseye (Podman v3.0.1). On that host, I run
> an OPNsense VM via KVM. The goal is to create a (virtual) network
> connection between that VM and one or more Podman containers.
>
> So far, I have created a dedicated bridge network for the VM via this
> network definition:
>
> <network connections='1'>
>   <name>services</name>
>   <uuid>884d7543-91b0-4752-93b7-7efc6633d733</uuid>
>   <bridge name='virbr1' stp='on' delay='0'/>
>   <mac address='52:54:00:78:f8:79'/>
>   <ip address='192.168.50.1' netmask='255.255.255.0'>
>   </ip>
> </network>
>
> I then created this network for Podman:
>
> [
>   {
>     "cniVersion": "0.4.0",
>     "name": "services",
>     "plugins": [
>       {
>         "ipam": {
>           "gateway": "192.168.50.2",
>           "routes": [
>             { "dst": "0.0.0.0/0" }
>           ],
>           "subnet": "192.168.50.0/24",
>           "type": "host-local"
>         },
>         "master": "virbr1",
>         "type": "macvlan"
>       }
>     ]
>   }
> ]
>
> The container is started like so:
>
> podman run --network=services --ip=192.168.50.10 [...]
>
> This does work, however, it doesn't seem ideal. Is there a better way to
> achieve networking between VM and containers? Is there a way to make Podman
> actually configure networking by making DHCP requests (to the OPNsense VM)?
> That way, DNS would be configured properly as well.
>
> Any pointers are welcome 😄
>
> Thanks, Sven
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
2 years
[Podman] Re: How to build image for own jar file
by Александр Илюшкин
Hi, mate.
I believe you can use this answer on SO
https://stackoverflow.com/a/35062090 replacing `docker` with `podman`,
as podman fully supports the docker CLI and API.
So I would write a file named `Dockerfile`:
FROM openjdk:11
LABEL maintainer="t.schneider(a)getgoogleoff.me"
COPY masterpassword-gui.jar /home/masterpassword-gui.jar
CMD ["java","-jar","/home/masterpassword-gui.jar"]
(Note: COPY can only read files from the build context, i.e. the directory
you run the build in, so first copy ~/.mpw-gui/masterpassword-gui.jar next
to the Dockerfile. MAINTAINER is deprecated in favor of LABEL.)
Notice that I used FROM openjdk:11; you don't have to build your own
separate openjdk image, since official builds are already published.
Pick the tag that matches your project's JDK version:
https://hub.docker.com/_/openjdk
Build your image (image names must be lowercase):
podman build -t my-image .
Now invoke your program inside a container:
podman run --name my-program my-image
Now restart your program by restarting the container:
podman restart my-program
Your program changed? Rebuild the image:
podman rm -f my-program
podman rmi my-image
podman build -t my-image .
Additionally, we usually don't build images by hand; we use Maven or
Gradle for this.
For instance, Google created a tool called Jib, which builds OCI
images for Java programs automatically:
https://cloud.google.com/java/getting-started/jib
We also use this Maven plugin to build an image containing our
project's jar file without writing a Dockerfile at all: https://dmp.fabric8.io/
It should work the same way with both docker and podman.
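As a sketch of that fabric8 plugin route (the plugin coordinates are real; the version and image name here are placeholders I chose, so check dmp.fabric8.io for the current release):

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.43.0</version>
  <configuration>
    <images>
      <image>
        <name>my-image</name>
        <build>
          <from>openjdk:11</from>
          <assembly>
            <!-- "artifact" copies the project jar into the image under /maven -->
            <descriptorRef>artifact</descriptorRef>
          </assembly>
          <entryPoint>
            <exec>
              <arg>java</arg>
              <arg>-jar</arg>
              <arg>/maven/${project.build.finalName}.jar</arg>
            </exec>
          </entryPoint>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
```

Since the plugin speaks the Docker API, using it with podman should amount to starting the API service (podman system service) and pointing DOCKER_HOST at the podman socket.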
On Tue, 28 Nov 2023 at 02:02, Thomas <t.schneider(a)getgoogleoff.me> wrote:
>
> Hello,
>
> I have successfully built the docker image "sapmachine", a build of OpenJDK.
>
> Now I want to build my own image to run my own jar file.
> This jar file is located in ~/.mpw-gui/masterpassword-gui.jar, and with
> locally installed OpenJDK I would run this command: java -jar
> .mpw-gui/masterpassword-gui.jar
>
> Could you please advise how to build my own image for this java application?
>
> THX
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
--
Kind regards,
A.I.
1 year, 6 months
[Podman] Re: fs.mqueue.msg_max rootless problem
by Lewis Gaul
For the record I made one small mistake - the user namespace should not be
entered.
[centos@localhost ~]$ podman create --rm -it --name ctr_foo --ipc private
busybox
9e9addf1ffaf88933c277c4f6cf1983cb68e69e23778da432f6a9d1b6a0d2ee6
[centos@localhost ~]$ podman init ctr_foo
ctr_foo
[centos@localhost ~]$ ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo)
[centos@localhost ~]$ sudo nsenter --target $ctr_pid --ipc sysctl
fs.mqueue.msg_max=64
fs.mqueue.msg_max = 64
[centos@localhost ~]$ podman start -a ctr_foo
/ # sysctl fs.mqueue
fs.mqueue.msg_default = 10
fs.mqueue.msg_max = 64
fs.mqueue.msgsize_default = 8192
fs.mqueue.msgsize_max = 8192
fs.mqueue.queues_max = 256
But yes, I understand this isn't always going to be a suitable approach. I
think the fix needs to be in the kernel (and I'm now unclear whether it has
been fixed or not, since Giuseppe said in the "mqueue msg_max in rootless
container" email thread that nothing has changed in v6.7).
Regards,
Lewis
On Wed, 29 Nov 2023 at 19:02, Михаил Иванов <ivans(a)isle.spb.ru> wrote:
> Hallo, thanks for the advice!
>
> But sorry, for me it did not work:
>
> podman create --name ctest --pod test --ipc private --cap-add=SYS_PTRACE --init --replace test-image
> container=99425540b8e3544409e4086cf1a44b04cf9f402f1d7505f807324dce71eb2373
> podman init test
> test
> podman inspect -f '{{.State.Pid}}' test
> pid=2157674
> sudo nsenter --target 2157674 --user --ipc sysctl fs.mqueue.msg_max=64
> sysctl: permission denied on key "fs.mqueue.msg_max"
>
> Anyway, even if it would work, this method would not be appropriate in my case,
> since eventually my containers should be run from quadlet (which in turn uses
> podman kube play). Shell is used only during development.
>
> Best regards,
>
> On 29.11.2023 18:10, Lewis Gaul wrote:
>
> Hi,
>
> I think this is the same thing I raised in
> https://github.com/containers/podman/discussions/19737?
>
> This seems to be a kernel limitation - I'm not sure where the mqueue
> limits come from when creating a new IPC namespace, but it doesn't inherit
> the limits from the parent namespace and the root user within the user
> namespace does not have permissions to modify the limits. This was
> supposedly fixed in a recent kernel version although I haven't tested it.
>
> The workaround I'm currently using (requiring sudo permissions) is along
> the lines of:
> podman create --ipc private --name ctr_foo ...
> podman init ctr_foo
> ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo)
> sudo nsenter --target $ctr_pid --user --ipc sysctl fs.mqueue.msg_max=64
> podman start ctr_foo
>
> Obviously this isn't ideal, and I'd be open to alternatives...
>
> Regards,
> Lewis
>
> On Mon, 27 Nov 2023 at 12:23, Daniel Walsh <dwalsh(a)redhat.com> wrote:
>
>> On 11/27/23 02:04, Михаил Иванов wrote:
>>
>> Hallo,
>>
>> For me rootful works:
>>
>> island:container [master]> cat /proc/sys/fs/mqueue/msg_max
>> 256
>>
>> $ podman run alpine ls -ld /proc/sys/fs/mqueue/msg_max
>> -rw-r--r-- 1 nobody nobody 0 Nov 27 12:10
>> /proc/sys/fs/mqueue/msg_max
>>
>> Since it is owned by real root, a rootless user cannot write to it. I
>> guess we could argue this is a bug in the kernel. mqueue/msg_max should be
>> owned by root of the user namespace as opposed to real root.
>>
>> ## Rootful:
>> island:container [master]> sudo podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
>> 64
>>
>> ## Rootless:
>> island:container [master]> podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
>> Error: crun: open `/proc/sys/fs/mqueue/msg_max`: Permission denied: OCI permission denied
>>
>> ## What rootless gets by default (changed as compared to host setting!):
>> island:container [master]> podman run --rm centos cat /proc/sys/fs/mqueue/msg_max
>> 10
>>
>> Rgrds,
>>
>> On 25.11.2023 20:17, Daniel Walsh wrote:
>>
>> On 11/25/23 10:44, Михаил Иванов wrote:
>>
>> Hallo,
>>
>> Is it possible to get podman to propagate current host fs.mqueue.msg_max
>> value to rootless container? I can do that if I specify --ipc host when
>> running the container, but this also exposes other ipc stuff from host
>> to container, including shared memory, which I do not want.
>>
>> If I specify --sysctl fs.mqueue.msg_size=64 to podman it gives me
>> "OCI permission denied" error, even when my host setting (256) is greater
>> than requested value.
>>
>> Thanks,
>> --
>> Michael Ivanov
>>
>>
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>> The way you attempted is correct. Might not be allowed for rootless
>> containers.
>>
>> I attempted this in a rootful container and it blows up for me.
>>
>>
>> podman run --sysctl fs.mqueue.msg_size=64 alpine echo hi
>> Error: crun: open `/proc/sys/fs/mqueue/msg_size`: No such file or
>> directory: OCI runtime attempted to invoke a command that was not found
>>
>>
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>>
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 6 months
[Podman] Re: What to use instead of RemapUsers/RemapUid/RemapGid in Quadlet now?
by Valentin Rothberg
Thanks for reaching out!
The following commit has removed the fields from the documentation:
https://github.com/containers/podman/commit/f6a50311c56d
The fields have been deprecated in favor of the new `UserNS` field, which
maps more directly to the CLI. The old fields are still functional, but we
decided to drop them from the docs so as not to encourage their use.
Kind regards,
Valentin
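For the most common piece of the setup quoted below (container root mapped to the regular host user, the old RemapUid=0:0:1), the closest `UserNS` equivalent is probably keep-id with explicit targets. A hedged sketch of a .container fragment; I have not verified that it also reproduces the _apt-related 100:1:1 and 65534:1:1 mappings:

```ini
[Container]
# Map the invoking (host) user to uid 0 / gid 0 inside the container,
# so files written to bind mounts appear owned by the host user.
UserNS=keep-id:uid=0,gid=0
```

This corresponds to `--userns=keep-id:uid=0,gid=0` on the podman CLI (available since podman 4.3).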
On Fri, Sep 1, 2023 at 1:28 PM <jklaiho(a)iki.fi> wrote:
> I'm running a bunch of rootless Podman containers. I noticed that the
> RemapUsers, RemapUid and RemapGid options (and possibly others that I
> haven't used) disappeared from the documentation of podman-systemd.unit in
> 4.5.0.
>
> I barely and partially understood what the options did in the 4.4.0 days
> when we started using them, but got them working through trial and error.
>
> Here's what we have across the board right now in our Quadlet generators.
> They still work in 4.5.0, but I'm assuming they'll go away eventually:
>
> RemapUsers=manual
> RemapUid=0:0:1
> RemapUid=100:1:1
> RemapGid=0:0:1
> RemapGid=65534:1:1
>
> With the 0:0:1 options, the root user/group inside the containers are
> mapped to the regular (non-root) host user/group. We need this, since the
> container bind mounts volumes from the host and must appear to the host as
> the regular user while doing so.
>
> The 100:1:1 and 65534:1:1 options have to do with the special _apt user in
> Debian-based containers; apt drops privileges to that user in some
> circumstances. I couldn't tell you why remapping those are needed, but not
> having them caused problems when installing packages inside the containers.
>
> What Quadlet options in Podman >=4.5.0 would be equivalent to the above
> legacy options?
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 8 months
[Podman] Re: Follow-up: Rootless storage usage
by Giuseppe Scrivano
Reinhard Tartler <siretart(a)gmail.com> writes:
> On Tue, Jan 24, 2023 at 2:08 PM Daniel Walsh <dwalsh(a)redhat.com> wrote:
>
> On 1/24/23 03:47, Reinhard Tartler wrote:
>
> Dan,
>
> In Debian, I've chosen to just go with the upstream defaults:
> https://github.com/containers/storage/blob/8428fad6d0d3c4cded8fd7702af36a...
>
> This file is installed verbatim to /usr/share/containers/storage.conf.
>
> Is there a better choice? Does Fedora/Redhat provide a default storage.conf from somewhere else?
>
> Thanks,
> -rt
>
> That should be fine. Fedora goes with that default as well. Does debian support rootless overlay by default?
>
> If not then it would fail over to VFS if fuse-overlayfs is not installed.
>
> I'm a bit confused about what you mean by that.
>
> In Debian releases that ship podman 4.x we have at least Linux kernel 6.0. The fuse-overlayfs package is installed by default, but users may opt to not
> install it by configuring apt to not install "Recommends" by default.
>
> What else is required for rootless overlay?
>
> Also, if I follow this conversation, the default storage.conf requires a modification in line 118 (uncommenting the mount_program
> option) in order to actually use fuse-overlayfs. I would have expected podman to use fuse-overlayfs if it happens to be installed, and fall back to a direct
> mount if not. I read Michail's email thread, and it appears this is not the case: he had to spend a lot of effort figuring out how to install an
> appropriate configuration file. Maybe I'm missing something, but I wonder what we can do to improve the user experience?
what issue do you see if you use native overlay?
Podman prefers native overlay if it is available, since it is faster.
If not, it tries fuse-overlayfs, and if that is not available either, it
falls back to vfs.
Could you try from a fresh storage though? If fuse-overlayfs was
already used, then Podman will continue using it even if native overlay
is available, since the storage metadata is slightly different.
Thanks,
Giuseppe
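For reference, the mount_program setting Reinhard mentions lives in the overlay options of storage.conf; a minimal fragment (paths as in the stock file shipped with containers-storage):

```toml
# /etc/containers/storage.conf (or ~/.config/containers/storage.conf)
[storage]
driver = "overlay"

[storage.options.overlay]
# Uncomment to force fuse-overlayfs; leave commented to let podman
# pick native (kernel) overlay where the kernel supports it rootless.
# mount_program = "/usr/bin/fuse-overlayfs"
```

You can check which driver is actually in use with `podman info --format '{{.Store.GraphDriverName}}'`, and `podman system reset` gives the fresh storage Giuseppe suggests (note it deletes all images and containers).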
2 years, 4 months
[Podman] Re: docker registry token authentication and podman
by Михаил Иванов
Hallo Miloslav,
I run the registry and cesanta/docker_auth in a single pod.
The registry listens on port 5004, docker_auth on port 5005.
Ports are published with the same numbers. Access to the registry
is performed through an apache2 proxy which runs on the same
system. Access to the docker_auth service is performed directly
on port 5005 using unencrypted http.
Authentication in the registry is configured as follows:
auth:
  token:
    realm: http://regtest-auth.intern.local:5005/auth
    service: regtest-auth.intern.local
    issuer: "ACME auth server - aa8AhshuoCh5eade"
    rootcertbundle: /certs/sign-ca.pem
Corresponding part on docker auth is configured as follows:
server:
  addr: ":5005"
token:
  issuer: "ACME auth server - aa8AhshuoCh5eade"
  expiration: 900
  certificate: "/config/sign-ca.pem"
  key: "/config/ra-private.pem"
I run podman login to test authentication as follows:
island:podman [v4.7]> strace -f -o /tmp/podman.trace -s 16384 ./bin/podman --log-level debug login regtest.intern.local
INFO[0000] ./bin/podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(./bin/podman --log-level debug login regtest.intern.local)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/ivans/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/ivans/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1007/containers
DEBU[0000] Using static dir /home/ivans/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1007/libpod/tmp
DEBU[0000] Using volume path /home/ivans/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 25
DEBU[0000] Loading registries configuration "/home/ivans/.config/containers/registries.conf"
DEBU[0000] No credentials matching regtest.intern.local found in /run/user/1007/containers/auth.json
DEBU[0000] No credentials matching regtest.intern.local found in /home/ivans/.config/containers/auth.json
DEBU[0000] No credentials matching regtest.intern.local found in /home/ivans/.docker/config.json
DEBU[0000] No credentials matching regtest.intern.local found in /home/ivans/.dockercfg
DEBU[0000] No credentials for regtest.intern.local found
Username: ivans
Password:
DEBU[0028] Looking for TLS certificates and private keys in /etc/docker/certs.d/regtest.intern.local
DEBU[0028] GET https://regtest.intern.local/v2/
DEBU[0028] Ping https://regtest.intern.local/v2/ status 401
DEBU[0028] GET https://regtest.intern.local/v2/
PARSE HEADER [[Bearer realm="http://regtest-auth.intern.local:5005/auth",service="regtest-auth.intern.local"]]
VALUE: [bearer], PARAMETER: [0xc00052c3f0]
CHALLENGES: [[[1/1]0xc000a861f8]]
bearer
[realm] => [http://regtest-auth.intern.local:5005/auth]
[service] => [regtest-auth.intern.local]
DEBU[0029] error logging into "regtest.intern.local": unable to retrieve auth token: invalid username/password: unauthorized
Error: logging into "regtest.intern.local": invalid username/password
DEBU[0029] Shutting down engines
I have added some test messages to podman to verify that it receives
WWW-Authenticate header with correct parameters. I also captured the
traffic to registry and docker_auth with following command:
tcpdump -ni any port 5005 or port 5004 -s 0 -w /tmp/auth.dump
Capture shows only one HTTP exchange:
GET /v2/ HTTP/1.1
Host: regtest.intern.local
User-Agent: containers/5.28.0 (github.com/containers/image)
Authorization: Basic aXZhbnM6TGVuYSMyMDc0
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip
X-Forwarded-Proto: https
X-Forwarded-For: 10.255.225.67
X-Forwarded-Host: regtest.intern.local
X-Forwarded-Server: regtest.intern.local
Connection: Keep-Alive
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="http://regtest-auth.intern.local:5005/auth",service="regtest-auth.intern.local"
Date: Wed, 06 Dec 2023 20:13:36 GMT
Content-Length: 87
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
As far as I understand, podman should try to connect to the url specified by realm
in the WWW-Authenticate header and request a token from it. But as I see from the
capture, no such attempt is detected. I also verified this in the strace output;
the only tcp connect attempts reported are the following:
1 2624033 connect(7, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.194.99.71")}, 16 <unfinished ...>
2 2624033 connect(7, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("10.194.99.42")}, 16 <unfinished ...>
3 2624042 connect(7, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.194.99.71")}, 16 <unfinished ...>
4 2624042 connect(7, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("10.194.99.42")}, 16 <unfinished ...>
(port 53 is evidently name resolver)
I tried this with the podman from the os (4.7.2, debian unstable) and a podman
built from the v4.7 branch (version reported - 4.7.3-dev) with the same results.
Rgrds,
On 06.12.2023 17:31, Miloslav Trmac wrote:
> On Wed, Dec 6, 2023 at 15:08 Daniel Walsh <dwalsh(a)redhat.com> wrote:
>
> On 12/5/23 07:16, Михаил Иванов wrote:
>> I'm trying to set up a token authentication for docker registry and using
>> podman login to test it. As per description podman should receive
>> '401 Unauthorized' error and headers in the reply should contain
>> 'Www-Authenticate:' entry. As far as I understand, podman should then
>> automatically try to access url, specified in this entry to get the token.
>> But it just terminates with 401 error. I verified registry access with curl
>> and I see, that Www-Authenticate is present.
>>
>> Should podman actually request the token automatically or do I misunderstand this?
>
> I don’t know, please provide the full HTTP request/response dumps, and
> Podman’s --log-level=debug logs.
>
> At the very least, note that the initial /v2/ “API presence check”
> request must fail with a 401, not just individual accesses to specific
> data.
> Mirek
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 5 months
[Podman] podman build stops with NO SPACE left on device
by Matthias Apitz
Hello,
I'm creating a podman container on RedHat 8.1 which should run our
application server on SuSE SLES15 SP6. The build was fine, but a second
build to add some more components stops with the following details:
$ podman -v
podman version 4.9.4-rhel
$ podman build -t sles15-sp6 suse
suse/Dockerfile:
FROM registry.suse.com/bci/bci-base:15.6
LABEL maintainer="Matthias Apitz <guru(a)unixarea.de>"
...
#
# sisis-pap
#
RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install
...
Installation beendet.
Hinweise zum weiteren Vorgehen entnehmen Sie bitte
der Freigabemitteilung FGM-sisis-pap-V7.3.htm
Installation erfolgreich beendet
(the 4 German lines come from the end of the above './install' script,
i.e. the software from the tar archive was unpacked and
installed fine; the error occurs while committing the container
layer to disk after this step)
Error: committing container for step {Env:[PATH=/bin:/usr/bin:/usr/local/bin] Command:run Args:[cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install] Flags:[] Attrs:map[] Message:RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install Heredocs:[] Original:RUN cd /home/sisis/install ; tar xzf sisis-pap-V7.3-linux-pkg-tar.gz ; cd sisis-pap ; ./install}: copying layers and metadata for container "a11a6ce841891057fb53dfa276d667a938764a6a63e9374b61385f0012532aa0": writing blob: adding layer with blob "sha256:a0b630090f1fb5cae0e1ec48e5498021be8e609563859d8cebaf0ba75b89e21d": processing tar file(write /home/sisis/install/sisis-pap/usr/local/sisis-pap/pgsql-14.1/share/locale/fr/LC_MESSAGES/pg_test_fsync-14.mo: no space left on device): exit status 1
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 4ea3a0a7bd94 27 minutes ago 2.85 GB
localhost/sles15-sp6 latest 0874a5469069 About an hour ago 6.31 GB
registry.suse.com/bci/bci-base 15.6 0babc7595746 12 days ago 130 MB
$ ls -l .local/share/containers
lrwxrwxrwx 1 root root 24 Aug 18 2023 .local/share/containers -> /appdata/guru/containers
$ env | grep TMP
TMPDIR=/home/apitzm/.local/share/containers/tmp
apitzm@srrp02dxr1:~$ df -kh /appdata/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vga-appdata 98G 83G 11G 89% /appdata
The container would need around 6.31 GB again, maybe a bit more, but not 11G.
Why is it complaining?
matthias
--
Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
I am not at war with Russia. Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.
10 months, 1 week
[Podman] Re: scp'ing a podman image to another host
by Charlie Doern
You should also usually get some sort of:
Storing signatures
Loaded image(s):
after
Writing manifest to image destination
If this doesn't show up, then the image doesn't actually get stored. I
remember there being some compatibility issues over certain
types/sizes of images w/ scp. Can you throw a `-v` in there to see if
it tells you anything else?
Charlie
On Wed, Jan 10, 2024 at 9:33 AM Matthias Apitz <guru(a)unixarea.de> wrote:
>
> I have an image on RH 8.x which runs fine (containing a SuSE SLES and
> PostgreSQL server):
>
> $ podman images
> REPOSITORY TAG IMAGE ID CREATED
> SIZE
> localhost/suse latest c87c80c0911a 26 hours ago
> 6.31 GB
> registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago
> 123 MB
>
> I created a connection to another host as:
>
> $ podman system connection list
> Name URI
> Identity Default
> srap57 ssh://
> apitzm@srap57dxr1.dev.xxxxxx.org:22/run/user/200007/podman/podman.sock
> true
>
> To the other host I can SSH fine based on RSA public/private keys and
> podman is installed there to:
>
> $ ssh apitzm(a)srap57dxr1.dev.xxxxxx.org
> Last login: Wed Jan 10 14:05:12 2024 from 10.201.64.28
> apitzm@srap57dxr1:~> podman version
> Client: Podman Engine
> Version: 4.7.2
> API Version: 4.7.2
> Go Version: go1.21.4
> Built: Wed Nov 1 13:00:00 2023
>
> When I now copy over the image with:
>
> $ podman image scp c87c80c0911a srap57::
>
> it transfers the ~6 GByte (I can see them in /tmp as a big tar file of
> tar files) and at the end it says:
>
> ...
> Writing manifest to image destination
> $
>
> (i.e. the shell prompt is there again)
>
> But on srap57dxr1.dev.xxxxxx.org I can't see anything of the image at the
> end.
>
> What I've done wrong?
>
> Thanks
>
> matthias
>
> --
> Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> I am not at war with Russia. Я не воюю с Россией.
> Ich bin nicht im Krieg mit Russland.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 4 months
[Podman] Re: How does podman set rootfs ownership to root when using --userns keep-id ?
by Paul Holzinger
Hi Fabio,
My understanding is that the image is copied and chown-ed to the correct
uids when running rootless.
There is also the concept of idmapped mounts in the kernel but the kernel
only allows this as root at the moment.
Paul
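The double mapping Fabio describes below can be sanity-checked by composing the two uid_map tables according to the kernel's mapping rule (a toy sketch, not Podman code; the tables are copied from Fabio's mail):

```python
def map_id(uid_map, inner_id):
    """Translate an id through one /proc/<pid>/uid_map table.

    Each row is (inside_first, outside_first, count): ids in
    [inside_first, inside_first + count) map to the same offset
    in the outside range.
    """
    for inside, outside, count in uid_map:
        if inside <= inner_id < inside + count:
            return outside + (inner_id - inside)
    raise ValueError(f"id {inner_id} is unmapped")

# Outer (rootless) namespace: host user 1000 becomes root, subuids follow.
outer = [(0, 1000, 1), (1, 100000, 65536)]
# Inner (--userns keep-id) namespace, Fabio's second uid_map.
inner = [(0, 1, 1000), (1000, 0, 1), (1001, 1001, 64536)]

# keep-id user 1000 -> outer 0 -> host 1000 (uid preserved)
print(map_id(outer, map_id(inner, 1000)))  # 1000
# container root 0 -> outer 1 -> host 100000 (first subuid)
print(map_id(outer, map_id(inner, 0)))     # 100000
```

Chaining the tables shows why the rootfs has to be chown-ed as described above: inside the keep-id namespace, container root ultimately maps to an unprivileged host subuid, not to the real host user.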
On Thu, May 4, 2023 at 8:56 AM Fabio <fabio(a)redaril.me> wrote:
> Hi all,
>
> I'm trying to understand some of the internals of namespace-based Linux
> containers and I'm kindly asking you for help.
>
> When launching `podman run -it --rm -v ~/Downloads:/dwn
> docker.io/library/ubuntu /bin/bash`, the inside user is root. That is
> expected, and without any surprise the /proc/self/uid_map is:
> 0 1000 1
> 1 100000 65536
>
> When launching `podman run -it --rm -v ~/Downloads:/dwn --userns keep-id
> docker.io/library/ubuntu /bin/bash` instead, the /proc/self/uid_map is:
> 0 1 1000
> 1000 0 1
> 1001 1001 64536
>
> If I'm understanding it well, in the latter case there is a double
> mapping: to keep host UID and GID, podman fires two user namespaces,
> where the inner namespace maps its IDs to the outer namespace, which
> finally maps to the host (that is, 1000 -> 0 -> 1000 again).
>
> The mechanism I don't get is how podman manages to make the rootfs owned
> by root inside the inner namespace, while assigning volumes to the
> unprivileged inner user:
> dr-xr-xr-x. 1 root root 18 May 4 06:33 .
> dr-xr-xr-x. 1 root root 18 May 4 06:33 ..
> lrwxrwxrwx. 1 root root 7 Mar 8 02:05 bin -> usr/bin
> drwxr-xr-x. 1 root root 0 Apr 18 2022 boot
> [...]
> drwxr-xr-x. 1 myuser 1000 2.1K May 3 15:07 dwn
>
> What is the algorithm here? I have a feeling there is some clever
> combination of syscalls here I don't get. When I tried to reproduce this
> double namespace situation, the rootfs of the inner namespace was all
> owned by 1000, not 0.
>
> Thank you so so much for your time if you're willing to help me,
> Fabio.
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
2 years