[Podman] Re: Rootless podman mount capabilities
by Giuseppe Scrivano
Lewis Gaul <lewis.gaul(a)gmail.com> writes:
> Hi Giuseppe,
>
> Thanks, some useful points there. However, my question was more specifically around how "special" mounts get created in containers, given it's not
> possible for the container process itself to create them. A concrete example below using rootless podman...
>
>> podman run --rm -it --name ubuntu --privileged ubuntu:20.04
> root@b2069e97cd13:/# findmnt -R /sys/fs/cgroup/freezer
> TARGET SOURCE FSTYPE OPTIONS
> /sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,seclabel,freezer
> root@b2069e97cd13:/# umount /sys/fs/cgroup/freezer
> root@b2069e97cd13:/# mount -t cgroup cgroup /sys/fs/cgroup/freezer -o rw,nosuid,nodev,noexec,relatime,seclabel,freezer
> mount: /sys/fs/cgroup/freezer: permission denied.
>
> This shows that cgroup mounts are present in the container, and yet the container does not have permission to create the mount.
>
> However, I've realised these are perhaps just bind mounts from the host mount namespace? I can simulate this as follows:
Yes, rootless containers do not use cgroup v1 controllers. They are bind
mounts from the host.
>
>> podman run --rm -it --name ubuntu --privileged -v /sys/fs/cgroup:/tmp/host/cgroup:ro ubuntu:20.04
> root@495f11acdd5b:/# findmnt -R /tmp/host/cgroup/freezer/
> TARGET SOURCE FSTYPE OPTIONS
> /tmp/host/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,seclabel,freezer
> root@495f11acdd5b:/# umount /sys/fs/cgroup/freezer
> root@495f11acdd5b:/# mount --bind /tmp/host/cgroup/freezer /sys/fs/cgroup/freezer
> root@495f11acdd5b:/# findmnt -R /sys/fs/cgroup/freezer/
> TARGET SOURCE FSTYPE OPTIONS
> /sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,seclabel,freezer
>
> One further thing I'm unclear on is as follows. It seems when a new mount namespace is created that the mount list is copied from the parent
> process, but some of the container cgroup mounts are bind mounts at some point in the hierarchy rather than being the same as the host mounts.
> Perhaps the container runtime first unmounts /sys/fs/cgroup in the
> container mount namespace before creating these bind mounts?
The container runtime creates all the mounts for the container under the
container rootfs directory and then uses pivot_root() to change the
root of the current mount namespace. You can think of pivot_root() as
similar to chroot().
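A minimal sketch of that sequence with util-linux tools (the paths are
illustrative, and a real runtime also populates the new root and
bind-mounts /proc, /sys and friends before pivoting):

$ unshare -rm bash            ## new user+mount namespace, mapped to root
# mount -t tmpfs tmpfs /mnt   ## new_root must be a mount point
# ## ... populate /mnt with a usable rootfs ...
# mkdir -p /mnt/oldroot
# cd /mnt
# pivot_root . oldroot        ## this namespace's root is now the old /mnt
# umount -l /oldroot          ## detach the old root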
>
> root@495f11acdd5b:/# findmnt /sys/fs/cgroup/devices
> TARGET SOURCE FSTYPE OPTIONS
> /sys/fs/cgroup/devices cgroup[/user.slice] cgroup rw,nosuid,nodev,noexec,relatime,seclabel,devices
> root@495f11acdd5b:/# findmnt /tmp/host/cgroup/devices
> TARGET SOURCE FSTYPE OPTIONS
> /tmp/host/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,seclabel,devices
>
> Thanks,
> Lewis
>
> On Thu, 14 Sept 2023 at 12:46, Giuseppe Scrivano <gscrivan(a)redhat.com> wrote:
>
> Lewis Gaul <lewis.gaul(a)gmail.com> writes:
>
> > Hi,
> >
> > I'm trying to understand something about how capabilities in rootless podman work.
> >
> > How does rootless podman have the capability to set up container mounts (such as cgroup mounts) given a privileged container itself doesn't?
> Does
> > podman deliberately drop caps, or somehow get elevated privileges to do this?
> >
> > This is the process tree podman sets up (where bash is the container entrypoint here):
> > systemd(1)---conmon(1327421)---bash(1327432)
> >
> > I'm assuming it's conmon that sets up the container's mounts (via runc in this case), which is a process running as my user (rootless). How is it
> that
> > conmon has the capabilities required (SYS_ADMIN?) to create the container's cgroup and sysfs mounts but within the container itself this is not
> > possible?
> >
> > Thanks for any insight!
>
> A rootless container is able to perform "privileged" operations by using a
> user namespace, and in that user namespace it gains the capabilities
> required to perform mounts.
>
> Be aware that in a user namespace, the root user is still limited in
> what it can do, as the kernel differentiates between the root user on
> the host (in what is known as the initial user namespace) and root in
> any other user namespace.
>
> The user namespace is a special namespace that alters how other
> namespaces work, since each namespace is "owned" by a user namespace.
>
> So a user namespace alone is not enough to perform mounts; the user must
> also create a new mount namespace. The combination of user namespace +
> mount namespace is what "podman unshare" creates.
>
> For example:
>
> $ podman unshare
> $ id
> uid=0(root) gid=0(root) groups=0(root),65534(nobody) context=unconfined_u:unconfined_r:container_runtime_t:s0-s0:c0.c1023
> $ mkdir /tmp/test
> $ mount -t tmpfs tmpfs /tmp/test
> $ exit
>
> You can try manually:
>
> $ unshare -r bash ## creates a user namespace and maps your user to root
> $ mkdir /tmp/test; mount -t tmpfs tmpfs /tmp/test
> mkdir: cannot create directory ‘/tmp/test’: File exists
> mount: /tmp/test: permission denied.
> dmesg(1) may have more information after failed mount system call.
>
> The failure happens because the new user namespace does not own the mount
> namespace, which is still owned by the initial user namespace.
>
> So in order to perform a mount, you must create a mount namespace:
>
> $ unshare -m bash ## the new mount namespace is owned by the current
> ## user namespace
> $ mount -t tmpfs tmpfs /tmp/test
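>
> The two steps can also be combined; util-linux unshare accepts both
> flags at once:
>
> $ unshare -rm bash   ## user namespace + mount namespace in one go
> $ mount -t tmpfs tmpfs /tmp/test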
>
> In the rootless container case, the container mounts are performed by
> the OCI runtime that runs in the user+mount namespace created by
> Podman.
>
> Regards,
> Giuseppe
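One way to see the namespaces involved for a running rootless container
(the PID and output here are illustrative; lsns is from util-linux):

$ podman run -d --name demo alpine sleep infinity
$ podman inspect --format '{{.State.Pid}}' demo
12345
$ lsns -p 12345 -t user -t mnt   ## the user and mount namespaces the
                                 ## OCI runtime performed the mounts in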
[Podman] Some questions about networking
by Kamil Jońca
I recently started playing with podman and non-root containers, and I think I
missed something about networking.
Assume we have such network on host:
--8<---------------cut here---------------start------------->8---
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether b4:2e:99:f0:ae:57 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.200/24 brd 192.168.200.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd1b:69d9:9e1d:0:b62e:99ff:fef0:ae57/64 scope global dynamic mngtmpaddr
valid_lft forever preferred_lft forever
inet6 fe80::b62e:99ff:fef0:ae57/64 scope link
valid_lft forever preferred_lft forever
23: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether ce:89:80:01:c8:e6 brd ff:ff:ff:ff:ff:ff
inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
valid_lft forever preferred_lft forever
inet6 fe80::1c6d:52ff:fedb:9d9a/64 scope link
valid_lft forever preferred_lft forever
--8<---------------cut here---------------end--------------->8---
There are no NAT/MASQUERADE rules in nftables.
$podman network inspect podman
[
{
"name": "podman",
"id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
"driver": "bridge",
"network_interface": "podman0",
"created": "2023-03-29T19:50:55.753738104+02:00",
"subnets": [
{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}
],
"ipv6_enabled": false,
"internal": false,
"dns_enabled": false,
"ipam_options": {
"driver": "host-local"
}
}
]
I run container with command:
$podman run -ti --log-level debug --network podman --name test test
then in container:
--8<---------------cut here---------------start------------->8---
root@ddc54c227a9d:/# curl onet.pl
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
--8<---------------cut here---------------end--------------->8---
Why does the container have access outside the host?
I would have expected to need some rules to allow traffic between
podman0 and eth0.
Is it possible to configure this behavior?
Where can I find a description of the values in "podman network inspect"?
What is the difference (in this case) between macvlan and bridge (except
root/non-root)?
KJ
--
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
[Podman] Re: scp'ing a podman image to another host
by Matthias Apitz
On Wednesday, January 10, 2024 at 10:09:27 -0500, Charlie Doern wrote:
> You should also usually get some sort of:
>
> Storing signaturesLoaded image(s):
>
> after
>
> Writing manifest to image destination
>
>
> if this doesn't show up, then the image doesn't actually get stored. I
> remember there being some compatibility issues over certain
> types/sizes of images w/ scp. Can you throw a `-v` in there to see if
> it tells you anything else?
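> (the global flag is another way to get more output, e.g.
> `podman --log-level=debug image scp ...`, in case `image scp` itself
> does not accept `-v`)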
I did tests in two directions:
1)
On the source host I run:
$ podman run -it docker.io/library/busybox
which gave me an additional local image, and I transferred this to the
target host:
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/suse latest c87c80c0911a 46 hours ago 6.31 GB
registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago 123 MB
docker.io/library/busybox latest 9211bbaa0dbd 3 weeks ago 4.5 MB
$ podman image scp 9211bbaa0dbd srap57::
Copying blob 82ae998286b2 done
Copying config 9211bbaa0d done
Writing manifest to image destination
Loaded image: sha256:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f
i.e. this was transferred fine and shows up on the target host as:
srap57dxr1:~> podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> b677170ada05 3 minutes ago 1.89 GB
registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago 123 MB
<none> <none> 9211bbaa0dbd 3 weeks ago 4.49 MB
apitzm@srap57dxr1:~> podman run -t 9211bbaa0dbd
/ #
2)
I copied over the files to build the image to the target host:
apitzm@srrp02dxr1:~$ scp -rp suse srap57dxr1:.
Dockerfile 100% 5051 1.2MB/s 00:00
initSunRise.sh 100% 953 314.2KB/s 00:00
postgresql.conf 100% 29KB 5.0MB/s 00:00
testdb.dmp.gz 100% 388MB 110.0MB/s 00:03
keyFile 100% 893 63.2KB/s 00:00
and built the image there with:
apitzm@srap57dxr1:~> podman build -t suse suse
...
which worked also fine:
...
STEP 58/59: ENTRYPOINT /usr/local/bin/start.sh
--> 86dab7ac3e4d
STEP 59/59: STOPSIGNAL SIGQUIT
COMMIT suse
--> a1ffb1f71791
Successfully tagged localhost/suse:latest
a1ffb1f717911b4e11aaa89d94c4959562c625b0e203dd906797e60d019cde57
The big difference between the image 'docker.io/library/busybox' and
mine is the size (4.5 MB vs. 6.1 GB). When I scp my big image I see in
/tmp that the sftp-server writes a temporary file there:
ls -lh /tmp/tmp.RLHbJp9uzq
-rw------- 1 apitzm apitzm 5.8G Jan 11 10:58 /tmp/tmp.RLHbJp9uzq
and when this reached the size of 6 GB it was deleted
3)
I removed all container files on the target host:
srap57dxr1:/ # rm -rf /data/guru/containers/*
srap57dxr1:/ # du -sh /data/guru/containers/
1.0K /data/guru/containers/
and started a fresh scp:
$ podman image scp c87c80c0911a srap57::
...
Copying blob a5a080851ed7 done
Copying blob 6fc7ff0cb132 done
Copying config c87c80c091 done
Writing manifest to image destination
When the transfer has ended, on the target host one can see:
1. the big file in /tmp gets deleted
2. something was written below the containers area (which was
empty before):
srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
-rw------- 1 apitzm apitzm 4.3G Jan 11 11:35 /tmp/tmp.5uuhYWqqQT
srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
-rw------- 1 apitzm apitzm 5.9G Jan 11 11:37 /tmp/tmp.5uuhYWqqQT
srap57dxr1:/# ls -lh /tmp/tmp.5uuhYWqqQT
ls: cannot access '/tmp/tmp.5uuhYWqqQT': No such file or directory
srap57dxr1:/# du -sh /data/guru/containers/
1.1G /data/guru/containers/
How can I get more messages about the failing process?
matthias
> On Wed, Jan 10, 2024 at 9:33 AM Matthias Apitz <guru(a)unixarea.de> wrote:
>
> >
> > I have an image on RH 8.x which runs fine (containing a SuSE SLES and
> > PostgreSQL server):
> >
> > $ podman images
> > REPOSITORY TAG IMAGE ID CREATED
> > SIZE
> > localhost/suse latest c87c80c0911a 26 hours ago
> > 6.31 GB
> > registry.suse.com/bci/bci-base 15.4 5bd0e4152d92 2 weeks ago
> > 123 MB
> >
> > I created a connection to another host as:
> >
> > $ podman system connection list
> > Name URI
> > Identity Default
> > srap57 ssh://
> > apitzm@srap57dxr1.dev.xxxxxx.org:22/run/user/200007/podman/podman.sock
> > true
> >
> > To the other host I can SSH fine based on RSA public/private keys and
> > podman is installed there to:
> >
> > $ ssh apitzm(a)srap57dxr1.dev.xxxxxx.org
> > Last login: Wed Jan 10 14:05:12 2024 from 10.201.64.28
> > apitzm@srap57dxr1:~> podman version
> > Client: Podman Engine
> > Version: 4.7.2
> > API Version: 4.7.2
> > Go Version: go1.21.4
> > Built: Wed Nov 1 13:00:00 2023
> >
> > When I now copy over the image with:
> >
> > $ podman image scp c87c80c0911a srap57::
> >
> > it transfers the ~6 GByte (I can see them in /tmp as a big tar file of
> > tar files) and at the end it says:
> >
> > ...
> > Writing manifest to image destination
> > $
> >
> > (i.e. the shell prompt is there again)
> >
> > But on srap57dxr1.dev.xxxxxx.org I can't see anything of the image at the
> > end.
> >
> > What have I done wrong?
> >
> > Thanks
> >
> > matthias
> >
> > --
> > Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/
> > +49-176-38902045
> > Public GnuPG key: http://www.unixarea.de/key.pub
> >
> > I am not at war with Russia. Я не воюю с Россией.
> > Ich bin nicht im Krieg mit Russland.
> > _______________________________________________
> > Podman mailing list -- podman(a)lists.podman.io
> > To unsubscribe send an email to podman-leave(a)lists.podman.io
> >
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
--
Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
I am not at war with Russia. Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 29/01/2024 12:04, Daniel Walsh wrote:
> On 1/29/24 02:35, lejeczek via Podman wrote:
>>
>>
>> On 28/03/2023 21:00, Chris Evich wrote:
>>> On 3/28/23 09:06, lejeczek via Podman wrote:
>>>> I think it might have something to do with the fact
>>>> that I changed UID for the user
>>>
>>> The files under /run/user/$UID are typically managed by
>>> systemd-logind. I've noticed sometimes there's a delay
>>> between logging out and the files being cleaned up. Try
>>> logging out for a minute or three and see if that fixes it.
>>>
>>> Also, if you have lingering enabled for the user, it may
>>> take a restart of particular the user.slice.
>>>
>>> Lastly, I'm not certain, but you (as root) may be able
>>> to `systemctl reload systemd-logind`. That's a total
>>> guess though.
>>>
>>>
>> Those parts seem very clunky - at least in up-to-date
>> CentOS 9 Stream - I have removed a user and re-created
>> that user in IdM and..
>> even after a full & healthy OS reboot, containers/podman
>> insist:
>>
>> -> $ podman container ls -a
>> WARN[0000] RunRoot is pointing to a path
>> (/run/user/2001/containers) which is not writable. Most
>> likely podman will fail.
>> Error: default OCI runtime "crun" not found: invalid
>> argument
>>
>> -> $ id
>> uid=1107400004(podmania) gid=1107400004(podmania)
>> groups=1107400004(podmania)
>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>>
>>
>> Where does it persist that old,
>> non-existent UID - would anybody know?
>>
>> many thanks, L.
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> Do you have XDG_RUNTIME_DIR pointing at it?
>
Nope, I don't think so.
-> $ echo $XDG_RUNTIME_DIR
/run/user/1107400004
[Podman] Re: Speeding up podman by using cache
by Ganeshar, Puvi
Dan,
Thanks for coming back to me on this.
If I use an NFS store (with read & write) as Podman's storage, do you anticipate any race conditions when multiple podman processes are reading and writing at the same time? Do I need to implement any locking mechanism, like relational databases do?
Yum and DNF should not be a big issue as we don't build them every day, and we use distroless images for the Go microservices; the Java ones are built on a custom base image with all deps already included.
Thanks again.
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
From: Daniel Walsh <dwalsh(a)redhat.com>
Date: Wednesday, October 23, 2024 at 11:10 AM
To: podman(a)lists.podman.io <podman(a)lists.podman.io>
Subject: [Podman] Re: Speeding up podman by using cache
On 10/22/24 11:04, Ganeshar, Puvi wrote:
Hello Podman team,
I am about to explore this option, so I just wanted to check with you all first, as I might be wasting my time.
I am in the Platform Engineering team at DirecTV, and we run Go and Java pipelines on Jenkins using Amazon EKS as the workers. The process is that when a Jenkins build runs, it asks EKS for a worker (a Kubernetes pod), the cluster spawns one, and the new pod communicates back to the Jenkins controller. We use the Jenkins Kubernetes pod template to configure the communication. We are currently running the latest LTS of podman, v5.2.2, but still using cgroups-v1 for now; we plan to migrate in early 2025 by upgrading the cluster to Amazon Linux 2023, which uses cgroups-v2 by default. Here are the podman configuration details that we use:
host:
arch: arm64
buildahVersion: 1.37.2
cgroupControllers:
- cpuset
- cpu
- cpuacct
- blkio
- memory
- devices
- freezer
- net_cls
- perf_event
- net_prio
- hugetlb
- pids
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.1.12-1.el9.aarch64
path: /usr/bin/conmon
version: 'conmon version 2.1.12, commit: f174c390e4760883511ab6b5c146dcb244aeb647'
cpuUtilization:
idlePercent: 99.22
systemPercent: 0.37
userPercent: 0.41
cpus: 16
databaseBackend: sqlite
distribution:
distribution: centos
version: "9"
eventLogger: file
freeLocks: 2048
hostname: podmanv5-arm
idMappings:
gidmap: null
uidmap: null
kernel: 5.10.225-213.878.amzn2.aarch64
linkmode: dynamic
logDriver: k8s-file
memFree: 8531066880
memTotal: 33023348736
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.12.1-1.el9.aarch64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.12.1
package: netavark-1.12.2-1.el9.aarch64
path: /usr/libexec/podman/netavark
version: netavark 1.12.2
ociRuntime:
name: crun
package: crun-1.16.1-1.el9.aarch64
path: /usr/bin/crun
version: |-
crun version 1.16.1
commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20240806.gee36266-2.el9.aarch64
version: |
pasta 0^20240806.gee36266-2.el9.aarch64-pasta
Copyright Red Hat
GNU General Public License, version 2 or later
https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: false
path: /run/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.3.1-1.el9.aarch64
version: |-
slirp4netns version 1.3.1
commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 0
swapTotal: 0
uptime: 144h 6m 15.00s (Approximately 6.00 days)
variant: v8
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 107352141824
graphRootUsed: 23986397184
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Supports shifting: "true"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 1
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 5.2.2
Built: 1724331496
BuiltTime: Thu Aug 22 12:58:16 2024
GitCommit: ""
GoVersion: go1.22.5 (Red Hat 1.22.5-2.el9)
Os: linux
OsArch: linux/arm64
Version: 5.2.2
We migrated to podman when Kubernetes deprecated docker and have been using podman for the last two years or so. It's working well; however, since we run over 500 builds a day, I am trying to explore whether I can speed up the podman build process by using image caching. I wanted to see whether using an NFS file system (Amazon FSx) as the storage for podman (overlay-fs) would improve podman performance, with builds completing much faster thanks to the already-downloaded images on the NFS. Currently, podman in each pod on the EKS cluster downloads all the required images every time, so it is not taking advantage of cached images.
These are my concerns:
1. Any race conditions - podman processes colliding with each other during reads and writes.
2. Performance of I/O operations, as NFS communication will be over the network.
Have any of you tried this method before? If so, can you share any pitfalls that you’ve faced?
Any comments / advice would be beneficial as I need to weigh the pros and cons before spending time on this. Also, if it causes an outage due to storage failures it would block all our developers, so I will have to design this in a way that lets us recover quickly.
Thanks very much in advance and have a great day.
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
_______________________________________________
Podman mailing list -- podman(a)lists.podman.io
To unsubscribe send an email to podman-leave(a)lists.podman.io
You can set up an additional store, preloaded with images, on an NFS share; that should work fine.
Whether this improves performance or not is probably something you need to discover.
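A minimal sketch of what that could look like in /etc/containers/storage.conf
on each worker (the mount point below is only an example):

[storage]
driver = "overlay"

[storage.options]
# read-only additional store, e.g. an FSx/NFS mount preloaded with base images
additionalimagestores = [ "/mnt/fsx/containers/storage" ]

Reads can then be served from the additional store, while writes still go to
the local graphroot.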
If you are dealing with YUM and DNF, you might also want to play with sharing of the rpm database with the build system.
https://www.redhat.com/en/blog/speeding-container-buildah
https://www.youtube.com/watch?v=qsh7NL8H4GQ
[Podman] Re: Podman 4.7.2 can't run imported containers by a service user. Is it a bug?
by Paul Holzinger
Hi Hans,
yes, this looks like a bug, so please file an issue. I don't think we must
write this file; it should be safe for podman to ignore this error.
Did you try to use the fully qualified name instead of the ID? Also I
think you can set the XDG_CACHE_HOME env variable to a writable location as a workaround.
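For example, a minimal sketch of that workaround (the cache path is just an
illustration; it only needs to be writable by the service user):

$ export XDG_CACHE_HOME=/custom/path/foobar/cache
$ mkdir -p "$XDG_CACHE_HOME"
$ podman load < /tmp/image.tar.gz
$ podman run -d 9ff9136eaaab sleep infinity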
Thanks, Paul
On 03/12/2023 18:20, Hans F via Podman wrote:
> Hi folks,
>
> My storage config looks like:
>
> # /etc/containers/storage.conf
> [storage]
> driver = "overlay"
> graphroot = "/custom/path/root/data"
> rootless_storage_path = "/custom/path/$USER/data"
> runroot = "/run/containers/storage
>
> And I have "service" users (that are not to supposed to be used as
> normal users) with such config:
>
> # /etc/passwd
> foobar:x:5000:100::/var/empty:/usr/sbin/nologin
>
> I can run a container like this:
>
> su foobar
> podman run -d docker.io/library/debian:bookworm sleep infinity
>
> but I can't import a container and run it:
>
> podman load < /tmp/image.tar.gz
> podman image ls
> podman run -d 9ff9136eaaab sleep infinity
> Error: mkdir /var/empty/.cache: operation not permitted
>
> Testing this as a "normal" user (user with writable home directory) I
> noticed that Podman creates the following file:
>
> ls -lA .cache/containers/short-name-aliases.conf.lock
> -rw-r--r-- 1 me users 0 Dec 3 16:45
> .cache/containers/short-name-aliases.conf.lock
>
> Obviously that can't work with a "service" user since it doesn't have a
> writable home.
>
> Could you please advise: is this a bug? Should I create an issue on GitHub?
>
> Thank you.
>
> Hans
>
> _______________________________________________
> Podman mailing list --podman(a)lists.podman.io
> To unsubscribe send an email topodman-leave(a)lists.podman.io
--
Paul Holzinger
Software Engineer
Red Hat
pholzing(a)redhat.com
Red Hat GmbH, Registered seat: Werner-von-Siemens-Ring 12, D-85630 Grasbrunn, Germany
Commercial register: Amtsgericht München/Munich, HRB 153243,
Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross
[Podman] Re: --dns=ipaddr - no effect of it
by Daniel Walsh
On 5/30/23 09:14, lejeczek via Podman wrote:
>
>
> On 30/05/2023 14:00, Daniel Walsh wrote:
>> On 5/29/23 05:59, lejeczek via Podman wrote:
>>> Hi guys.
>>>
>>> --dns=none renders what's expected, but with an actual server, say:
>>> --dns=10.3.1.200
>>> resolv.conf seems to be the host's, as if --dns did not
>>> happen.
>>> Can anybody else confirm that is the case? Am I missing something?
>>> I'm on Centos 9 stream with all bits up-to-date.
>>> many thanks, L.
>>>
>>> _______________________________________________
>>> Podman mailing list --podman(a)lists.podman.io
>>> To unsubscribe send an email topodman-leave(a)lists.podman.io
>>
>> Here is what I am getting?
>>
>> ```
>> # podman run --dns=10.3.1.200 alpine cat /etc/resolv.conf
>> nameserver 10.3.1.200
>> # podman run --dns=none alpine cat /etc/resolv.conf
>> cat: can't open '/etc/resolv.conf': No such file or directory
>> ```
>>
>> Rootless
>>
>> ```
>> $ podman run --dns=10.3.1.200 alpine cat /etc/resolv.conf
>> nameserver 10.3.1.200
>> $ podman run --dns=none alpine cat /etc/resolv.conf
>> cat: can't open '/etc/resolv.conf': No such file or directory
>> ```
>>
> I'm trying this, for a production setup, on CentOS 9 (perhaps all
> officially available versions?)
>
> podman run -dt --network=off-host --ip=${_IP} --dns=10.3.1.200
> --hostname ${_H}.${_DOM} --name ${_NAME} localhost/centos9-mine
> ...
> [root@centos-whale /]# cat /etc/resolv.conf
> search mine.priv mszczonow.vectranet.pl
> nameserver 10.3.1.254
> nameserver 89.228.4.126
> nameserver 31.11.173.2
> nameserver 10.1.1.254
> options timeout:1
>
> that 'resolv.conf' is an exact copy of the host's; with this:
>
> podman run -dt --network=off-host --ip=${_IP} --dns=none --hostname
> ${_H}.${_DOM} --name ${_NAME} localhost/centos9-mine
>
> [root@centos-whale /]# cat /etc/resolv.conf
> # Generated by NetworkManager
> nameserver 192.168.122.1
>
> Perhaps the issue is with CentOS?
> centos9-mine is built off the 'quay.io/centos/centos' image with only a
> couple of added rpm packages.
>
>
>
> _______________________________________________
> Podman mailing list --podman(a)lists.podman.io
> To unsubscribe send an email topodman-leave(a)lists.podman.io
No, I doubt it; please open an issue.
[Podman] Re: Problems with routing in rootless podman
by Brent Baude
A couple of things ... this might be more related to Podman's network
stack, and the information provided does not say which stack is in use nor
which versions of its components. So my recommendations would be:
* make sure you are using the netavark stack (a quick check is sketched
after this list)
* update netavark and aardvark to the latest versions available (better
yet, latest upstream)
* update podman in the same way
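For example, to confirm the stack and client version in play (the output
lines here are illustrative):

$ podman info --format '{{.Host.NetworkBackend}}'
netavark
$ podman version --format '{{.Client.Version}}'
4.2.0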
If you still see an issue, file an issue upstream
https://github.com/containers/podman/issues
Another option would be to follow whatever problem reporting mechanism
Oracle uses as it looks like that is the distribution in question. My
apologies there as I do not know what their process is.
If you still observe the problem, I would suggest we take Podman out of the
mix by doing this in a bash script with namespaces and netavark directly.
This would also provide a reproducer.
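A rough outline of such a reproducer (the netavark invocation and the JSON
options file are assumptions on my part - check the netavark README for the
exact format):

# ip netns add navtest
# netavark setup /run/netns/navtest < network-options.json
# ... run the nc / tcpdump test against the namespace ...
# netavark teardown /run/netns/navtest < network-options.json
# ip netns del navtest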
Brent
On Fri, Feb 17, 2023 at 3:41 AM Henrik Jacobsson <falikorrva(a)gmail.com>
wrote:
> Hello.
>
>
>
> We are running our application in rootless podman.
>
> After some random time (a couple of hours - a couple of weeks), we lose
> the network connectivity into the container.
>
> Everything seems to work fine from inside the container to the rest of the
> world (yum/dnf, ping, curl), but it looks like the routing stops working
> when someone calls from the outside.
>
> I set up a netcat listener (nc -lv), and called it on localhost (worked
> fine) and on the tap-interface (long delays if the packet ever returned). I
> also set up a tcpdump in a third screen – output below.
>
>
>
> bash-4.4$ podman --version
>
> podman version 4.2.0
>
>
>
> bash-4.4$ uname -a
>
> Linux podman-container 5.4.17-2136.315.5.el8uek.x86_64 #2 SMP Wed Dec 21
> 19:38:18 PST 2022 x86_64 x86_64 x86_64 GNU/Linux
>
>
>
> bash-4.4$ cat /etc/os-release
>
> NAME="Oracle Linux Server"
>
> VERSION="8.7"
>
> ID="ol"
>
> ID_LIKE="fedora"
>
> VARIANT="Server"
>
> VARIANT_ID="server"
>
> VERSION_ID="8.7"
>
> PLATFORM_ID="platform:el8"
>
> PRETTY_NAME="Oracle Linux Server 8.7"
>
> ANSI_COLOR="0;31"
>
> CPE_NAME="cpe:/o:oracle:linux:8:7:server"
>
> HOME_URL="https://linux.oracle.com/"
>
> BUG_REPORT_URL="https://bugzilla.oracle.com/"
>
>
>
> ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
>
> ORACLE_BUGZILLA_PRODUCT_VERSION=8.7
>
> ORACLE_SUPPORT_PRODUCT="Oracle Linux"
>
> ORACLE_SUPPORT_PRODUCT_VERSION=8.7
>
>
>
>
>
>
>
> # Testing communication using 'localhost' inside the container - works as
> expected
>
>
>
> [root@NC-Test_podman-container /]# nc -lv 10370
>
> Listening on 0.0.0.0 10370
>
> Connection received on localhost 47218
>
> ping from server
>
> ping from client
>
>
>
>
>
> [root@NC-Test_podman-container /]# nc -v localhost 10370
>
> nc: connect to localhost (::1) port 10370 (tcp) failed: Connection refused
>
> Connection to localhost (127.0.0.1) 10370 port [tcp/*] succeeded!
>
> ping from server
>
> ping from client
>
>
>
>
>
>
>
> # Testing communication using hostname - "some" packets arrive, but only
> after a random delay of about 30-600 seconds
>
>
>
> [root@NC-Test_podman-container /]# nc -lv 10370
>
> Listening on 0.0.0.0 10370
>
> server
>
> Connection received on podman-container 59258
>
> client
>
>
>
> [root@NC-Test_podman-container /]# nc -v podman-container 10370
>
> Connection to podman-container (10.11.12.102) 10370 port [tcp/*] succeeded!
>
> client
>
> server
>
>
>
>
>
>
>
> [root@NC-Test_podman-container base_domain]# tcpdump -vv -X host
> podman-container and port 10370
>
> dropped privs to tcpdump
>
> tcpdump: listening on tap0, link-type EN10MB (Ethernet), capture size
> 262144 bytes
>
> 12:41:49.080602 IP (tos 0x0, ttl 64, id 61404, offset 0, flags [DF], proto
> TCP (6), length 47)
>
> podman-container.56372 > podman-container-oob.10370: Flags [P.], cksum
> 0xdb21 (correct), seq 2129174302:2129174309, ack 1071210498, win 65480,
> length 7
>
> 0x0000: 4500 002f efdc 4000 4006 7df1 0a00 0264 E../..@.@.}....d
>
> 0x0010: 0a31 b666 dc34 2882 7ee8 9f1e 3fd9 6002 .1.f.4(.~...?.`.
>
> 0x0020: 5018 ffc8 db21 0000 636c 6965 6e74 0a P....!..client.
>
> 12:41:49.080783 IP (tos 0x0, ttl 64, id 48821, offset 0, flags [none],
> proto TCP (6), length 40)
>
> podman-container-oob.10370 > podman-container.56372: Flags [.], cksum
> 0x2039 (correct), seq 1, ack 7, win 65535, length 0
>
> 0x0000: 4500 0028 beb5 0000 4006 ef1f 0a31 b666 E..(....@....1.f
>
> 0x0010: 0a00 0264 2882 dc34 3fd9 6002 7ee8 9f25 ...d(..4?.`.~..%
>
> 0x0020: 5010 ffff 2039 0000 P....9..
>
>
>
>
>
> 12:42:28.673431 IP (tos 0x0, ttl 64, id 49091, offset 0, flags [none],
> proto TCP (6), length 40)
>
> podman-container-oob.10370 > podman-container.51394: Flags [F.], cksum
> 0xf92e (correct), seq 946730519, ack 2284994989, win 65535, length 0
>
> 0x0000: 4500 0028 bfc3 0000 4006 ee11 0a31 b666 E..(....@....1.f
>
> 0x0010: 0a00 0264 2882 c8c2 386d f617 8832 41ad ...d(...8m...2A.
>
> 0x0020: 5011 ffff f92e 0000 P.......
>
> 12:42:28.673436 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP
> (6), length 40)
>
> podman-container.51394 > podman-container-oob.10370: Flags [R], cksum
> 0x27c1 (correct), seq 2284994989, win 0, length 0
>
> 0x0000: 4500 0028 0000 4000 4006 6dd5 0a00 0264 E..(..@.@.m....d
>
> 0x0010: 0a31 b666 c8c2 2882 8832 41ad 0000 0000 .1.f..(..2A.....
>
> 0x0020: 5004 0000 27c1 0000 P...'...
>
>
>
>
>
> 12:44:28.693154 IP (tos 0x0, ttl 64, id 49943, offset 0, flags [none],
> proto TCP (6), length 47)
>
> podman-container-oob.10370 > podman-container.56372: Flags [P.], cksum
> 0xcadb (correct), seq 1:8, ack 7, win 65535, length 7
>
> 0x0000: 4500 002f c317 0000 4006 eab6 0a31 b666 E../....@....1.f
>
> 0x0010: 0a00 0264 2882 dc34 3fd9 6002 7ee8 9f25 ...d(..4?.`.~..%
>
> 0x0020: 5018 ffff cadb 0000 7365 7276 6572 0a P.......server.
>
> 12:44:28.693174 IP (tos 0x0, ttl 64, id 61405, offset 0, flags [DF], proto
> TCP (6), length 40)
>
> podman-container.56372 > podman-container-oob.10370: Flags [.], cksum
> 0x2070 (correct), seq 7, ack 8, win 65473, length 0
>
> 0x0000: 4500 0028 efdd 4000 4006 7df7 0a00 0264 E..(..@.@.}....d
>
> 0x0010: 0a31 b666 dc34 2882 7ee8 9f25 3fd9 6009 .1.f.4(.~..%?.`.
>
> 0x0020: 5010 ffc1 2070 0000 P....p..
>
>
>
>
>
>
>
> Kind regards
>
> //Henrik
>
>
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
[Podman] Re: 'system reset' makes things weird - ?
by lejeczek
On 07/05/2023 08:34, Taro Yokoyama wrote:
> Hi Lejeczek,
> It seems that the behavior is similar to this
> ticket(https://github.com/containers/podman/issues/13396).
>
> My assumption is that ‘podman system reset’ resets the settings; the
> leftover CNI config then triggers the warning because
> containernetworking-plugins is not installed, and on the first ‘podman
> network ls’ after the reset the netavark backend is selected internally,
> and that choice is cached.
>
> Best Regards,
> Taro
>
> 2023年5月6日(土) 18:31 lejeczek via Podman
> <podman(a)lists.podman.io>:
>
> Hi guys.
>
> I'm seeing something strange and I hoped experts/devs
> would comment on:
>
> -> $ podman network ls
> NETWORK ID NAME DRIVER
> 2f259bab93aa podman bridge
>
> -> $ podman system reset --force
>
> -> $ podman network ls
> WARN[0000] Error validating CNI config file
> /etc/cni/net.d/87-podman-bridge.conflist: [failed to
> find plugin "bridge" in path [/usr/local/libexec/cni
> /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni
> /opt/cni/bin] failed to find plugin "portmap" in path
> [/usr/local/libexec/cni /usr/libexec/cni
> /usr/local/lib/cni /usr/lib/cni /opt/cni/bin] failed
> to find plugin "firewall" in path
> [/usr/local/libexec/cni /usr/libexec/cni
> /usr/local/lib/cni /usr/lib/cni /opt/cni/bin] failed
> to find plugin "tuning" in path
> [/usr/local/libexec/cni /usr/libexec/cni
> /usr/local/lib/cni /usr/lib/cni /opt/cni/bin]]
> NETWORK ID NAME DRIVER
> 2f259bab93aa podman bridge
>
> then I ssh-log out & in, things seem okey:
> -> $ podman network ls
> NETWORK ID NAME DRIVER
> 2f259bab93aa podman bridge
>
> and if I 'reset' again, around in circles it goes.
>
> What is actually happening here?
> Is the system setup/installation missing something?
>
> many thanks, L.
>
It seems that /etc/cni/net.d/87-podman.conflist is
a remnant of some previous installation or version.
On a few different systems where the same version of podman
is installed, that file does not exist and 'system reset'
does not seem to care about the file.
Also '/etc/udev/rules.d/etc/cni/net.d/87-podman.conflist'
existed - past tense, as I removed both, and now 'system
reset' and subsequent operations do not complain about that;
no log out & in necessary.
thanks, L.
[Podman] Re: podman image for ngninx
by Matthias Apitz
On Wednesday, December 06, 2023 at 08:41:28 AM -0800, Robin Lee Powell via Podman wrote:
> That's pretty weird. Just to double check,
> 'curl http://deb.debian.org/debian/dists/buster/InRelease' works on
> the machine you're running podman from, yeah?
With Robin's hint I'm a bit further down the road. I learned from our ID
department that I must use a Squid proxy to connect to the Internet. When I set
$ export https_proxy=http://squid-r1.shr.xxxxxxxxx.org:3128
$ export http_proxy=http://squid-r1.shr.xxxxxxxxx.org:3128
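As far as I understand, podman forwards these proxy variables into build
containers by default (its --http-proxy option defaults to true), so the
exports alone should be enough; one could also pass the flag explicitly:

$ podman build --http-proxy=true -t nginx https://git.io/Jf8ol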
The installation in the containers works in part; at least the fetch of
the software works:
$ nohup podman build -t nginx https://git.io/Jf8ol
$ grep Get nohup.out
...
Get:16 http://deb.debian.org/debian buster/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d.1-2 [60.5 kB]
Get:17 http://deb.debian.org/debian-security buster/updates/main amd64 libssh2-1 amd64 1.8.0-2.1+deb10u1 [141 kB]
Get:18 http://deb.debian.org/debian-security buster/updates/main amd64 libcurl3-gnutls amd64 7.64.0-4+deb10u8 [333 kB]
Get:19 http://deb.debian.org/debian buster/main amd64 libreadline7 amd64 7.0-5 [151 kB]
Get:20 http://deb.debian.org/debian buster/main amd64 gnupg1 amd64 1.4.23-1 [599 kB]
but later the fetches of the GPG keys fail in part; see the end of
this posting.
I watched with tcpdump what is fetched; these are the answers from Squid
for the contacted servers:
GET.http://ha.pool.sks-keysevers.net:11371/pks/lookup?....
HTTP/1.1.503
--
GET.http://keyserver.ubuntu.com:80/pks/lookup....
HTTP/1.1.200 OK
--
GET.http://p80.pool.sks-keyservers.net:80/pks/lookup...
HTTP/1.1.503 Service.Unavail...
--
GET.http://pgp.mit.edu:11371/pks/lookup...
no answer at all from Squid within 10 secs;
What can I do on this server, which sits behind a firewall to the Internet?
Can the fetch and usage of the keys somehow be disabled?
Thanks in advance
matthias
...
+ NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
+ found=
+ echo Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from ha.pool.sks-keyservers.net
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from ha.pool.sks-keyservers.net
+ apt-key adv --keyserver ha.pool.sks-keyservers.net --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.PtVEOmYXUa/gpg.1.sh --keyserver ha.pool.sks-keyservers.net --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: requesting key 7BD9BF62 from hkp server ha.pool.sks-keyservers.net
gpgkeys: key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
gpg: keyserver communications error: keyserver helper general error
gpg: keyserver communications error: unknown pubkey algorithm
gpg: keyserver receive failed: unknown pubkey algorithm
+ echo Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://keyserver.ubuntu.com:80
+ apt-key advFetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://keyserver.ubuntu.com:80
--keyserver hkp://keyserver.ubuntu.com:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.w2W5sZvXEj/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: requesting key 7BD9BF62 from hkp server keyserver.ubuntu.com
gpg: key 7BD9BF62: public key "nginx signing key <signing-key(a)nginx.com>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
gpg: key 350947F8: "Debian Archive Automatic Signing Key (12/bookworm) <ftpmaster(a)debian.org>" not changed
gpg: key 8783D481: no valid user IDs
gpg: this may be caused by a missing self-signature
gpg: Total number processed: 13
gpg: skipped new keys: 11
gpg: w/o user IDs: 1
gpg: unchanged: 1
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://p80.pool.sks-keyservers.net:80
+ echo Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from hkp://p80.pool.sks-keyservers.net:80
+ apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.AVA2hAGKvu/gpg.1.sh --keyserver hkp://p80.pool.sks-keyservers.net:80 --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: requesting key 7BD9BF62 from hkp server p80.pool.sks-keyservers.net
gpgkeys: key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
gpg: keyserver communications error: keyserver helper general error
gpg: keyserver communications error: unknown pubkey algorithm
gpg: keyserver receive failed: unknown pubkey algorithm
+ echo Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from pgp.mit.edu
Fetching GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from pgp.mit.edu
+ apt-key adv --keyserver pgp.mit.edu --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
Warning: apt-key output should not be parsed (stdout is not a terminal)
Executing: /tmp/apt-key-gpghome.rksF4VZorD/gpg.1.sh --keyserver pgp.mit.edu --keyserver-options timeout=10 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
gpg: requesting key 7BD9BF62 from hkp server pgp.mit.edu
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error
+ test -z
+ echo error: failed to fetch GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
error: failed to fetch GPG key 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
+ exit 1
Error: building at STEP "RUN set -x && addgroup --system --gid 101 nginx && adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos "nginx user" --shell /bin/false --uid 101 nginx && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates && NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; found=''; for server in ha.pool.sks-keyservers.net hkp://keyserver.ubuntu.com:80 hkp://p80.pool.sks-keyservers.net:80 pgp.mit.edu ; do echo "Fetching GPG key $NGINX_GPGKEY from $server"; apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; done; test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture)" && nginxPackages=" nginx=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-xslt=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-geoip=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-image-filter=${NGINX_VERSION}-${PKG_RELEASE} nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-${PKG_RELEASE} " && case "$dpkgArch" in amd64|i386) echo "deb https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && apt-get update ;; *) echo "deb-src https://nginx.org/packages/mainline/debian/ buster nginx" >> /etc/apt/sources.list.d/nginx.list && tempDir="$(mktemp -d)" && chmod 777 "$tempDir" && savedAptMark="$(apt-mark showmanual)" && apt-get update && apt-get build-dep -y $nginxPackages && ( cd "$tempDir" && DEB_BUILD_OPTIONS="nocheck parallel=$(nproc)" apt-get source --compile $nginxPackages ) && apt-mark showmanual | xargs apt-mark auto > /dev/null && { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } && ls -lAFh "$tempDir" && ( cd "$tempDir" && dpkg-scanpackages . > Packages ) && grep '^Package: ' "$tempDir/Packages" && echo "deb [ trusted=yes ] file://$tempDir ./" > /etc/apt/sources.list.d/temp.list && apt-get -o Acquire::GzipIndexes=false update ;; esac && apt-get install --no-install-recommends --no-install-suggests -y $nginxPackages gettext-base && apt-get remove --purge --auto-remove -y ca-certificates && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list && if [ -n "$tempDir" ]; then apt-get purge -y --auto-remove && rm -rf "$tempDir" /etc/apt/sources.list.d/temp.list; fi": while running runtime: exit status 1
--
Matthias Apitz, ✉ guru(a)unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
I am not at war with Russia. Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.