From eae at us.ibm.com Wed Oct 23 16:16:24 2019
From: eae at us.ibm.com
To: podman at lists.podman.io
Subject: [Podman] Sharing blob-info-cache-v1.boltdb across multiple machines
Date: Wed, 23 Oct 2019 16:16:17 +0000
Message-ID: <20191023161617.27175.75732@lists.podman.io>
We have a cluster of machines where /home is a remote gluster mount. Running
podman rootless nicely solves the problem of accessing the remote filesystem
with user credentials. Since remote filesystems do not currently support
namespaces, podman is run with --root, --runroot, and --tmpdir set to
/tmp/$USER.
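
Concretely, the invocation looks something like this (a sketch; the exact
subdirectory names under /tmp/$USER are illustrative):

$ podman --root /tmp/$USER/root --runroot /tmp/$USER/runroot \
      --tmpdir /tmp/$USER/tmp run --rm -it ubuntu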

All works well on the first client machine, but an image pulled successfully
on one machine will fail to pull on a second. For example, on the second
machine:
$ podman run --rm -it ubuntu
Trying to pull docker.io/library/ubuntu...Getting image source signatures
Copying blob c58094023a2e done
Copying blob 079b6d2a1e53 done
Copying blob 11048ebae908 done
Copying blob 22e816666fd6 done
Copying config cf0f3ca922 done
Writing manifest to image destination
Storing signatures
ERRO[0168] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
ERRO[0200] Error pulling image ref //ubuntu:latest: Error committing the finished image: error adding layer with blob "sha256:22e816666fd6516bccd19765947232debc14a5baf2418b2202fd67b3807b6b91": ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
Failed
Trying to pull registry.fedoraproject.org/ubuntu...ERRO[0200] Error pulling image ref //registry.fedoraproject.org/ubuntu:latest: Error initializing source docker://registry.fedoraproject.org/ubuntu:latest: Error reading manifest latest in registry.fedoraproject.org/ubuntu: manifest unknown: manifest unknown
Failed
Trying to pull quay.io/ubuntu...ERRO[0201] Error pulling image ref //quay.io/ubuntu:latest: Error initializing source docker://quay.io/ubuntu:latest: Error reading manifest latest in quay.io/ubuntu: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
Failed
Trying to pull registry.centos.org/ubuntu...ERRO[0201] Error pulling image ref //registry.centos.org/ubuntu:latest: Error initializing source docker://registry.centos.org/ubuntu:latest: Error reading manifest latest in registry.centos.org/ubuntu: manifest unknown: manifest unknown
Failed
Error: unable to pull ubuntu: 4 errors occurred:
* Error committing the finished image: error adding layer with blob "sha256:22e816666fd6516bccd19765947232debc14a5baf2418b2202fd67b3807b6b91": ApplyLayer exit status 1 stdout: stderr: lchown /etc/gshadow: operation not permitted
* Error initializing source docker://registry.fedoraproject.org/ubuntu:latest: Error reading manifest latest in registry.fedoraproject.org/ubuntu: manifest unknown: manifest unknown
* Error initializing source docker://quay.io/ubuntu:latest: Error reading manifest latest in quay.io/ubuntu: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
* Error initializing source docker://registry.centos.org/ubuntu:latest: Error reading manifest latest in registry.centos.org/ubuntu: manifest unknown: manifest unknown
Our guess is that this is happening because blob-info-cache-v1.boltdb is in
the shared /home filesystem.

Is there a suggested approach to running rootless podman on multiple machines
with a shared /home directory?
Thanks,
Eddie
From adrian at lisas.de Wed Oct 23 18:37:51 2019
From: Adrian Reber
To: podman at lists.podman.io
Subject: [Podman] Re: Sharing blob-info-cache-v1.boltdb across multiple
machines
Date: Wed, 23 Oct 2019 20:31:01 +0200
Message-ID: <20191023183101.GI28864@lisas.de>
In-Reply-To: 20191023161617.27175.75732@lists.podman.io
On Wed, Oct 23, 2019 at 04:16:17PM -0000, eae(a)us.ibm.com wrote:
> We have a cluster of machines where /home is a remote gluster mount. [...]
>
> Is there a suggested approach to running rootless podman on multiple
> machines with a shared /home directory?
To run Podman in an HPC-like environment with /home on NFS, I am doing
the following steps to set up Podman for each user:

$ podman info
$ sed -e "s,graphroot.*$,graphroot = \"/tmp/container\",g" -i .config/containers/storage.conf
$ rm -f ./.local/share/containers/storage/libpod/bolt_state.db ./.local/share/containers/cache/blob-info-cache-v1.boltdb

If a user now uses Podman, it just works. This is for a CentOS 7.7 based
system. Maybe that helps for your use case as well.
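
Wrapped up, the same steps as a per-user setup script (a sketch of exactly
the commands above):

#!/bin/sh
# one-time per-user Podman setup for a shared /home (sketch)
podman info > /dev/null 2>&1   # lets podman create ~/.config/containers/storage.conf
sed -e 's,graphroot.*$,graphroot = "/tmp/container",g' \
    -i "$HOME/.config/containers/storage.conf"
rm -f "$HOME/.local/share/containers/storage/libpod/bolt_state.db" \
      "$HOME/.local/share/containers/cache/blob-info-cache-v1.boltdb"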
Adrian
From dwalsh at redhat.com Wed Oct 23 20:31:16 2019
From: Daniel Walsh
To: podman at lists.podman.io
Subject: [Podman] Re: Sharing blob-info-cache-v1.boltdb across multiple
machines
Date: Wed, 23 Oct 2019 16:24:40 -0400
In-Reply-To: 20191023183101.GI28864@lisas.de
On 10/23/19 2:31 PM, Adrian Reber wrote:
> On Wed, Oct 23, 2019 at 04:16:17PM -0000, eae(a)us.ibm.com wrote:
>> We have a cluster of machines where /home is a remote gluster mount. [...]
>>
>> Is there a suggested approach to running rootless podman on multiple
>> machines with a shared /home directory?
> To run Podman in an HPC-like environment with /home on NFS, I am doing
> the following steps to set up Podman for each user:
>
> $ podman info
> $ sed -e "s,graphroot.*$,graphroot = \"/tmp/container\",g" -i .config/containers/storage.conf
> $ rm -f ./.local/share/containers/storage/libpod/bolt_state.db ./.local/share/containers/cache/blob-info-cache-v1.boltdb
>
> If a user now uses Podman, it just works. This is for a CentOS 7.7 based
> system. Maybe that helps for your use case as well.
>
> Adrian
I think a nice blog on how to run Podman with an NFS home directory would be
something people could use.
From eae at us.ibm.com Thu Oct 24 15:49:47 2019
From: eae at us.ibm.com
To: podman at lists.podman.io
Subject: [Podman] Re: Sharing blob-info-cache-v1.boltdb across multiple
machines
Date: Thu, 24 Oct 2019 15:49:40 +0000
Message-ID: <20191024154940.27175.74957@lists.podman.io>
In-Reply-To: 20191023183101.GI28864@lisas.de
Hi Adrian,
Thanks for the suggestion. This does allow my user to run rootless podman on
multiple machines with a shared /home directory, but other user IDs with the
same configuration are blocked:
$ podman info
Error: could not get runtime: mkdir /tmp/container/mounts: permission denied
Is this expected?
Thanks,
Eddie
From eae at us.ibm.com Thu Oct 24 19:25:25 2019
From: eae at us.ibm.com
To: podman at lists.podman.io
Subject: [Podman] Re: Sharing blob-info-cache-v1.boltdb across multiple
machines
Date: Thu, 24 Oct 2019 19:25:20 +0000
Message-ID: <20191024192520.27175.88771@lists.podman.io>
In-Reply-To: 20191024154940.27175.74957@lists.podman.io
Oops, silly me. Setting graphroot = /tmp/user/container solves that problem.
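
That is, each user's ~/.config/containers/storage.conf ends up with a
per-user literal path, something like this (the username is illustrative,
since each user needs their own path):

[storage]
  graphroot = "/tmp/eddie/container"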
Thanks again
From fromani at redhat.com Mon Oct 28 09:01:55 2019
From: Francesco Romani
To: podman at lists.podman.io
Subject: [Podman] [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 10:01:42 +0100
Hi all,
I'm using podman-remote and I'm trying to exec commands in containers.
It seems to fail, but I don't really understand why.

I'm on Fedora 30 (with the latest updates); I tried podman 1.6.1 (from RPMs)
but also 1.6.2 compiled from scratch:
$ podman version
Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.12.10
OS/Arch:            linux/amd64
Ultimately, I'd need to exec commands from a golang program, but for now
experimenting with the command line is fine; I can't make even this work :)

Here's what I tried:
$ varlink call -m unix:/run/podman/io.podman/io.podman.CreateContainer '{"create":{"args":["fedora:30", "/bin/sleep", "10h"]}}'
{
  "container": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
}
$ varlink call -m unix:/run/podman/io.podman/io.podman.StartContainer '{"name":"c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"}'
{
  "container": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
}
The container IS running now:
root      7836  0.0  0.0  77876  1756 ?  Ssl  09:11  0:00 /usr/bin/conmon --api-version 1 -s -c c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -u c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/c24e28054f89c0a
root      7848  0.0  0.0   2320   684 ?  Ss   09:11  0:00  \_ /bin/sleep 10h
So I do:
$ varlink call -m unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853", "tty": true, "privileged": true, "cmd": ["/bin/date"] } }'
Call failed with error: io.podman.ErrorOccurred
{
  "reason": "client must use upgraded connection to exec"
}
So I downloaded go-varlink-cmd (https://github.com/varlink/go-varlink-cmd)
and patched it to support the upgraded connection (on the client side)[1],
but it doesn't look much better:
$ ~/bin/go-varlink-cmd call -upgrade unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853", "tty": true, "privileged": true, "cmd": ["/bin/date"] } }'
recv -> 0  # return value
retval -> map[string]interface {}(nil)  # what I got as an answer
{}  # the answer translated to JSON
No luck with a minimal command line either:

$ ~/bin/go-varlink-cmd call -upgrade unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853", "cmd": ["/bin/date"] } }'
recv -> 0
retval -> map[string]interface {}(nil)
{}
Maybe, just wondering: do I need to set something when I create the
container? If so, the docs aren't crystal clear :\

I tried to look at the logs but can't make much sense of them. Here are the
logs for podman-remote on my system, with the log level increased to debug:
Oct 28 09:21:09 myhost.lan systemd[1]: Started Podman Remote API Service.
Oct 28 09:21:09 myhost.lan audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=io.podman comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using varlink socket: unix:/run/podman/io.podman"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="using conmon: \"/usr/bin/conmon\""
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using graph driver overlay"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using graph root /var/lib/containers/storage"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using run root /var/run/containers/storage"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using tmp dir /var/run/libpod"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Set libpod namespace to \"\""
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated that overlay is supported"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated that metacopy is being used"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="cached value indicated that native-diff is not being used"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=warning msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Initializing event backend journald"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="using runtime \"/usr/bin/runc\""
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=warning msg="Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Creating new exec session in container c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 with session id 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=error msg="ExecContainer failed to HANG-UP on c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853: write unix /run/podman/io.podman->@: write: broken pipe"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=error msg="Exec Container err: write unix /run/podman/io.podman->@: write: broken pipe"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -s -c c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 -u 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c -r /usr/bin/runc -b />
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="disabling SD notify"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=info msg="Running conmon under slice machine.slice and unitName libpod-conmon-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853.scope"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=warning msg="Failed to add conmon to systemd sandbox cgroup: Unit libpod-conmon-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853.scope already exists."
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Attaching to container c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 exec session 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="connecting to socket /var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.B2039Z}
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : attach sock path: /var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/attach}
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : ctl fifo path: /var/lib/containers/storage/overlay-containers/c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853/userdata/6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c/ctl
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : terminal_ctrl_fd: 18
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : sending attach message to parent
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : sent attach message to parent
Oct 28 09:21:09 myhost.lan conmon[8108]: conmon c24e28054f89c0a0ac9c : exec with attach is waiting for start message from parent
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Received: 0"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Accepted connection 20
Oct 28 09:21:09 myhost.lan conmon[8108]: conmon c24e28054f89c0a0ac9c : exec with attach got start message from parent
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : about to accept from console_socket_fd: 14
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : about to recvfd from connfd: 21
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : console = {.name = '/dev/ptmx8 09:21:09 conmon: conmon c24e28054f89c0a0ac9c : about to recvfd from connfd: 21 '; .fd = 14}
Oct 28 09:21:09 myhost.lan systemd[2088]: run-runc-c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853-runc.xJTNZd.mount: Succeeded.
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : couldn't find cb for pid 8121
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : container status and pid were found prior to callback being registered. calling manually
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : container PID: 8121
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Failed to open cgroups file: /proc/8121/cgroup
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Failed to get memory cgroup path. Container may have exited
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Received: 8121"
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : stdio_input read failed Input/output error
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Failed to write to socket
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Unable to send container stderr message to parent Bad file descriptor
Oct 28 09:21:09 myhost.lan podman[8087]: 2019-10-28 09:21:09.632987596 +0100 CET m=+0.247438867 container exec c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 (image=docker.io/library/fedora:30, name=sad_thompson)
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Successfully started exec session 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c in container c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=error msg="write unix /run/podman/io.podman->@: write: broken pipe"
I'm out of ideas and can't find anything in the docs; apologies if I missed
anything, please feel free to point me there.

Any help or comment would be appreciated.

Thanks and bests!

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani  github: @fromanirh
From bbaude at redhat.com Mon Oct 28 14:10:47 2019
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Re: [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 09:09:26 -0500
Message-ID: <730692ff492af96394c2338ef8ea814df3e5dd74.camel@redhat.com>
In-Reply-To: ee9e8d04-350f-6934-e3a9-5a37194f8cea@redhat.com
I'm glad to see that you are going to implement this in a golang
program. That is how you are going to have to do it. I know of no
other way.
On Mon, 2019-10-28 at 10:01 +0100, Francesco Romani wrote:
> Hi all,
>
> I'm using podman-remote and I'm trying to exec commands in containers.
> It seems to fail, but I don't really understand why.
> [...]
> $ varlink call -m unix:/run/podman/io.podman/io.podman.ExecContainer '{ "opts": { "name": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853", "tty": true, "privileged": true, "cmd": ["/bin/date"] } }'
> Call failed with error: io.podman.ErrorOccurred
> {
>   "reason": "client must use upgraded connection to exec"
> }
> [...]
> Any help or comment would be appreciated.
>
> Thanks and bests!
From fromani at redhat.com Mon Oct 28 14:20:10 2019
From: Francesco Romani
To: podman at lists.podman.io
Subject: [Podman] Re: [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 15:19:57 +0100
In-Reply-To: 730692ff492af96394c2338ef8ea814df3e5dd74.camel@redhat.com
On 10/28/19 3:09 PM, Brent Baude wrote:
> I'm glad to see that you are going to implement this in a golang
> program. That is how you are going to have to do it. I know of no
> other way.
Hey,
thanks for your reply. Does this mean it could be a podman bug?
bests,
-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
From bbaude at redhat.com Mon Oct 28 14:23:56 2019
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Re: [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 09:23:43 -0500
In-Reply-To: c98a8153-03a8-5b4d-7665-c5631bf0d92b@redhat.com
On Mon, 2019-10-28 at 15:19 +0100, Francesco Romani wrote:
> On 10/28/19 3:09 PM, Brent Baude wrote:
> > I'm glad to see that you are going to implement this in a golang
> > program. That is how you are going to have to do it. I know of no
> > other way.
>
> Hey,
>
> thanks for your reply. Does this mean it could be a podman bug?
>
> bests,
I don't believe so. I am unaware of how to use the generic varlink
tools to emulate an upgraded connection.
From fromani at redhat.com Mon Oct 28 14:27:27 2019
From: Francesco Romani
To: podman at lists.podman.io
Subject: [Podman] Re: [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 15:27:18 +0100
In-Reply-To: c1a62190139de969e853ee9b8e70628699ba8369.camel@redhat.com
On 10/28/19 3:23 PM, Brent Baude wrote:
> On Mon, 2019-10-28 at 15:19 +0100, Francesco Romani wrote:
>> On 10/28/19 3:09 PM, Brent Baude wrote:
>>> I'm glad to see that you are going to implement this in a golang
>>> program. That is how you are going to have to do it. I know of no
>>> other way.
>> Hey,
>>
>>
>> thanks for your reply. Does this mean it could be a podman bug?
>>
>>
>> bests,
>>
> I don't believe so. I am unaware of how to use the generic varlink
> tools to emulate an upgraded connection.
OK, thanks. Any other tips about how I can debug this further?
These entries in the podman syslog (actually the journal, but still) look
suspicious:
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : stdio_input read failed Input/output error
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Failed to write to socket
Oct 28 09:21:09 myhost.lan conmon[8106]: conmon c24e28054f89c0a0ac9c : Unable to send container stderr message to parent Bad file descriptor
Oct 28 09:21:09 myhost.lan podman[8087]: 2019-10-28 09:21:09.632987596 +0100 CET m=+0.247438867 container exec c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853 (image=docker.io/library/fedora:30, name=sad_thompson)
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=debug msg="Successfully started exec session 6a1ccdea916ee245420320fa3ce02ff68ea066b8b6b7ba1534c014c15054f67c in container c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853"
Oct 28 09:21:09 myhost.lan podman[8087]: time="2019-10-28T09:21:09+01:00" level=error msg="write unix /run/podman/io.podman->@: write: broken pipe"
My wild guess is that the client somehow exits before the podman on the
other side of the varlink socket can reply.
Thanks,
-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
From bbaude at redhat.com Mon Oct 28 14:31:47 2019
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Re: [varlink] how to do exec using podman-remote?
Date: Mon, 28 Oct 2019 09:30:30 -0500
Message-ID: <87dcc53bea817ef44efcb74ad05f62aa778c3113.camel@redhat.com>
In-Reply-To: f6c60456-7634-d2e7-62ab-7712378e52e3@redhat.com
On Mon, 2019-10-28 at 15:27 +0100, Francesco Romani wrote:
> On 10/28/19 3:23 PM, Brent Baude wrote:
> > On Mon, 2019-10-28 at 15:19 +0100, Francesco Romani wrote:
> > > On 10/28/19 3:09 PM, Brent Baude wrote:
> > > > I'm glad to see that you are going to implement this in a golang
> > > > program. That is how you are going to have to do it. I know of no
> > > > other way.
> > > Hey,
> > >
> > > thanks for your reply. Does this mean it could be a podman bug?
> > >
> > > bests,
> > I don't believe so. I am unaware of how to use the generic varlink
> > tools to emulate an upgraded connection.
>
> OK, thanks. Any other tips about how I can debug this further?
What I am saying here is that this won't work. You are going to have
to implement this using golang (or python?). Exec requires a special
connection referred to as "upgraded", which I don't believe can be done
with the generic varlink clients.
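
For the record, a minimal golang sketch of such an upgraded call could look
like the following. It is an illustration, not tested code: it speaks the
varlink wire protocol (a JSON object terminated by a NUL byte) directly over
the Podman socket instead of going through a client library, and the method
and parameters simply mirror the ExecContainer call from earlier in the
thread; everything else is an assumption.

package main

import (
        "bufio"
        "encoding/json"
        "fmt"
        "io"
        "net"
        "os"
)

func main() {
        conn, err := net.Dial("unix", "/run/podman/io.podman")
        if err != nil {
                panic(err)
        }
        defer conn.Close()

        // A varlink message is a JSON object terminated by a NUL byte.
        // Setting "upgrade" asks the server to hand the raw connection
        // over to the exec session after the first reply.
        call := map[string]interface{}{
                "method": "io.podman.ExecContainer",
                "parameters": map[string]interface{}{
                        "opts": map[string]interface{}{
                                "name": "c24e28054f89c0a0ac9c5e3690ab0c5ef5a8b859ed51d032ae38cc618a164853",
                                "cmd":  []string{"/bin/date"},
                        },
                },
                "upgrade": true,
        }
        msg, err := json.Marshal(call)
        if err != nil {
                panic(err)
        }
        if _, err := conn.Write(append(msg, 0)); err != nil {
                panic(err)
        }

        // Read the initial varlink reply up to its NUL terminator...
        r := bufio.NewReader(conn)
        reply, err := r.ReadBytes(0)
        if err != nil {
                panic(err)
        }
        fmt.Fprintf(os.Stderr, "reply: %s\n", reply[:len(reply)-1])

        // ...then the connection is upgraded: what follows is the raw
        // exec stream, so just copy it to stdout.
        if _, err := io.Copy(os.Stdout, r); err != nil {
                panic(err)
        }
}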

> These entries in the podman syslog (actually the journal, but still)
> look suspicious:
> [...]
> My wild guess is that the client somehow exits before the podman on
> the other side of the varlink socket can reply.
From eae at us.ibm.com Tue Oct 29 02:39:52 2019
From: eae at us.ibm.com
To: podman at lists.podman.io
Subject: [Podman] rootless podman group credentials limited to users primary
group?
Date: Tue, 29 Oct 2019 02:39:46 +0000
Message-ID: <20191029023946.30597.51642@lists.podman.io>
Scenario: a rootless user with primary and secondary group membership starts
a container with a mounted filesystem.

Expected behavior: the group credentials of the podman container would
respect the result of newgrp before starting the container.

Actual behavior: the group credentials for access are always the primary
group.
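
A minimal way to observe this (image name, group, and mount path are
illustrative):

$ id -G                                      # host: primary plus secondary groups
$ podman run --rm -v /mnt/shared:/data fedora:30 id -G
                                             # container: secondary groups are gone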
From gscrivan at redhat.com Tue Oct 29 08:45:16 2019
From: Giuseppe Scrivano
To: podman at lists.podman.io
Subject: [Podman] Re: rootless podman group credentials limited to users
primary group?
Date: Tue, 29 Oct 2019 09:38:39 +0100
Message-ID: <87lft4vtdc.fsf@redhat.com>
In-Reply-To: 20191029023946.30597.51642@lists.podman.io
eae(a)us.ibm.com writes:
> Scenario: a rootless user with primary and secondary group membership
> starts a container with a mounted filesystem.
>
> Expected behavior: the group credentials of the podman container would
> respect the result of newgrp before starting the container.
>
> Actual behavior: the group credentials for access are always the
> primary group.
With rootless we cannot set arbitrary additional groups, as we do with
root containers. What we could do is skip the setgroups(2) call in the
OCI runtime so that the original additional groups are maintained.

I've opened a PR for crun to enable that:
https://github.com/containers/crun/pull/148
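
Assuming that lands as proposed, the idea is that keeping the caller's
groups becomes opt-in per container, along these lines (the annotation name
follows the crun proposal; group name and paths are illustrative):

$ newgrp shared
$ podman run --rm --annotation run.oci.keep_original_groups=1 \
      -v /mnt/shared:/data fedora:30 id -G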
Giuseppe
From sincorchetes at gmail.com Tue Oct 29 12:17:39 2019
From: Álvaro Castillo <sincorchetes at gmail.com>
To: podman at lists.podman.io
Subject: [Podman] podman references
Date: Tue, 29 Oct 2019 12:17:32 +0000
Message-ID: <20191029121732.30597.3514@lists.podman.io>
Hello all,
I am a newbie here. I am interested in getting more information about how
Podman works with Kubernetes without using Docker: books, PDFs, articles,
howtos...
Greetings!
From sincorchetes at gmail.com Tue Oct 29 12:21:27 2019
From: Álvaro Castillo <sincorchetes at gmail.com>
To: podman at lists.podman.io
Subject: [Podman] port bindings are not yet supported by rootless containers
Date: Tue, 29 Oct 2019 12:21:22 +0000
Message-ID: <20191029122122.30597.89823@lists.podman.io>
Hello,
I am interested in running a container with port redirects. I was trying to
run an nginx container with port redirects like 80:1024, 80:1200, 80:81...
but it always gives me the same error:

port bindings are not yet supported by rootless containers

My OS is CentOS 8, but I've tried with Fedora 31 beta and the same thing
happens.
Can you help me?
Thanks.
From tsweeney at redhat.com Tue Oct 29 12:34:16 2019
From: Tom Sweeney
To: podman at lists.podman.io
Subject: [Podman] Re: podman references
Date: Tue, 29 Oct 2019 08:27:53 -0400
In-Reply-To: 20191029121732.30597.3514@lists.podman.io
On 10/29/2019 08:17 AM, Álvaro Castillo wrote:
> Hello all,
>
> I am a newbie here. I am interested in getting more information about
> how Podman works with Kubernetes without using Docker: books, PDFs,
> articles, howtos...
>
> Greetings!
Hey Álvaro,

    Welcome to the list! The first place to hit up would be
https://podman.io, that's probably the best starting point. For an
intro into Podman, this tutorial should get you going:
https://podman.io/getting-started/getting-started. Lots of blogs on
that site too.

    Hope that helps get you started,

    t
--===============5619140678678980348==--
From gscrivan at redhat.com Tue Oct 29 12:54:13 2019
Content-Type: multipart/mixed; boundary="===============4377869181479766896=="
MIME-Version: 1.0
From: Giuseppe Scrivano
To: podman at lists.podman.io
Subject: [Podman] Re: port bindings are not yet supported by rootless
containers
Date: Tue, 29 Oct 2019 13:54:03 +0100
Message-ID: <87h83rww44.fsf@redhat.com>
In-Reply-To: 20191029122122.30597.89823@lists.podman.io
--===============4377869181479766896==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Álvaro Castillo writes:
> Hello,
>
> I am interested in running a container with port redirects. I was trying to run an nginx container with port redirects like 80:1024, 80:1200, 80:81...
>
> But it always gives me the same error:
> port bindings are not yet supported by rootless containers
An unprivileged user cannot use port < 1024.
We document the differences between rootless and root containers here:
https://github.com/containers/libpod/blob/master/rootless.md
To solve the issue you have reported, you can either try to use a port
bigger than 1023; or, as root, tweak the value in
/proc/sys/net/ipv4/ip_unprivileged_port_start.
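For example, either of these should work (a rough sketch; the port numbers are arbitrary):
# sysctl net.ipv4.ip_unprivileged_port_start=80    (as root, then bind port 80 rootless)
$ podman run -d -p 80:80 nginx
or simply publish the service on a high port instead:
$ podman run -d -p 8080:80 nginx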
Giuseppe
--===============4377869181479766896==--
From dwalsh at redhat.com Tue Oct 29 13:21:27 2019
Content-Type: multipart/mixed; boundary="===============4962900771105680596=="
MIME-Version: 1.0
From: Daniel Walsh
To: podman at lists.podman.io
Subject: [Podman] Re: podman references
Date: Tue, 29 Oct 2019 09:20:10 -0400
Message-ID: <1d54e884-9006-e83d-c8b7-b78a6b9c45d9@redhat.com>
In-Reply-To: cca6b398-e682-09ad-da28-21f174556967@redhat.com
--===============4962900771105680596==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 10/29/19 8:27 AM, Tom Sweeney wrote:
> On 10/29/2019 08:17 AM, Álvaro Castillo wrote:
>> Hello all,
>>
>> I am a newbie here, and I am interested in getting more information about how Podman works with Kubernetes without using Docker. Books, PDFs, articles, howtos...
There are lots of articles referenced there. There are also some videos
out on YouTube that you might want to watch.
https://www.youtube.com/watch?v=YkBk52MGV0Y
https://www.youtube.com/watch?v=BeRr3aZbzqo
Realize that Kubernetes does not use Podman, it uses CRI-O, which is
based on the same underlying libraries.
There are tools in Podman to generate Kubernetes YAML from traditional
containers and pods:
`podman generate kube`
and Podman has the ability to create containers/pods based on Kubernetes
YAML files:
`podman play kube`
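A minimal round trip looks roughly like this (a sketch; mycontainer is a placeholder for one of your containers):
$ podman generate kube mycontainer > mycontainer.yaml
$ podman play kube mycontainer.yaml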
>> Greetings!
> Hey Álvaro,
>
>     Welcome to the list! First place to hit up would be
> https://podman.io, that's probably the best starting point. For an
> intro into Podman, this tutorial should get you going:
> https://podman.io/getting-started/getting-started. Lots of blogs on
> that site too.
>
>     Hope that helps get you started,
>
>     t
--===============4962900771105680596==--
From mail at gsforza.de Tue Oct 29 20:48:12 2019
Content-Type: multipart/mixed; boundary="===============3390850392212533161=="
MIME-Version: 1.0
From: Giuseppe Sforza
To: podman at lists.podman.io
Subject: [Podman] Re: port bindings are not yet supported by rootless
containers
Date: Tue, 29 Oct 2019 21:40:21 +0100
Message-ID: <20191029204021.82ED76440183@dd43326.kasserver.com>
In-Reply-To: 87h83rww44.fsf@redhat.com
--===============3390850392212533161==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Giuseppe Scrivano wrote on 29.10.2019 13:54 (GMT +01:00):
> An unprivileged user cannot use port < 1024.
>
> We document the differences between rootless and root containers here:
> https://github.com/containers/libpod/blob/master/rootless.md
>
> To solve the issue you have reported, you can either try to use a port
> bigger than 1023; or, as root, tweak the value in
> /proc/sys/net/ipv4/ip_unprivileged_port_start.
>
> Giuseppe
I can replicate this on CentOS 8. I guess in this specific case it has to do with the version of podman available for CentOS.
See:
$ podman run -d -p 8080:8080 nginx:latest
port bindings are not yet supported by rootless containers
In the case of Fedora it actually has to do with the privileged ports, I guess.
--
Giuseppe Sforza
--===============3390850392212533161==--
From mheon at redhat.com Wed Oct 30 13:52:19 2019
Content-Type: multipart/mixed; boundary="===============5098065615068220809=="
MIME-Version: 1.0
From: Matt Heon
To: podman at lists.podman.io
Subject: [Podman] Re: port bindings are not yet supported by rootless
containers
Date: Wed, 30 Oct 2019 09:52:08 -0400
Message-ID: <20191030135208.u6cqbnfcohku2qv3@Agincourt.redhat.com>
In-Reply-To: 20191029204021.82ED76440183@dd43326.kasserver.com
--===============5098065615068220809==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 2019-10-29 21:40, Giuseppe Sforza wrote:
>
>
>Giuseppe Scrivano wrote on 29.10.2019 13:54 (GMT +01:00):
>
>> An unprivileged user cannot use port < 1024.
>>
>> We document the differences between rootless and root containers here:
>> https://github.com/containers/libpod/blob/master/rootless.md
>>
>> To solve the issue you have reported, you can either try to use a port
>> bigger than 1023; or as root, tweak the value in
>> /proc/sys/net/ipv4/ip_unprivileged_port_start.
>>
>> Giuseppe
>
>
>I can replicate this on CentOS 8. I guess in this specific case it has to do with the version of podman available for CentOS.
>
>See:
>$ podman run -d -p 8080:8080 nginx:latest
>port bindings are not yet supported by rootless containers
>
>In the case of Fedora it actually has to do with the privileged ports, I guess.
>
>--
>Giuseppe Sforza
This will be resolved once RHEL/CentOS 8.1 are available. Podman 1.0,
as shipped in Cent/RHEL 8.0, is very old, and rootless support was
still in beta at that point. The 1.4.2 version shipping in 8.1 is
a lot more recent, and rootless is fully supported, including port
bindings.
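On 8.1 a rootless port binding to an unprivileged port should then just work, e.g. (a sketch):
$ podman run -d -p 8080:80 nginx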
Thanks,
Matt Heon
--===============5098065615068220809==--
From mail at gsforza.de Wed Oct 30 14:37:44 2019
Content-Type: multipart/mixed; boundary="===============3415432950937157259=="
MIME-Version: 1.0
From: Giuseppe Sforza
To: podman at lists.podman.io
Subject: [Podman] Re: port bindings are not yet supported by rootless
containers
Date: Wed, 30 Oct 2019 15:37:35 +0100
Message-ID: <20191030143735.722566444627@dd43326.kasserver.com>
In-Reply-To: 20191030135208.u6cqbnfcohku2qv3@Agincourt.redhat.com
--===============3415432950937157259==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Matt Heon wrote on 30.10.2019 14:52 (GMT +01:00):
> This will be resolved once RHEL/CentOS 8.1 are available. Podman 1.0,
> as shipped in Cent/RHEL 8.0, is very old, and rootless support was
> still in beta at that point. The 1.4.2 version shipping in 8.1 is
> a lot more recent, and rootless is fully supported, including port
> bindings.
>
> Thanks,
> Matt Heon
Any ETA on the release of said versions?
--
Giuseppe Sforza
--===============3415432950937157259==--
From dwalsh at redhat.com Wed Oct 30 14:53:10 2019
Content-Type: multipart/mixed; boundary="===============2139109436479084160=="
MIME-Version: 1.0
From: Daniel Walsh
To: podman at lists.podman.io
Subject: [Podman] Re: port bindings are not yet supported by rootless
containers
Date: Wed, 30 Oct 2019 10:53:01 -0400
Message-ID: <798eb0d8-c521-3595-8880-e8c17dd0ac52@redhat.com>
In-Reply-To: 20191030143735.722566444627@dd43326.kasserver.com
--===============2139109436479084160==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 10/30/19 10:37 AM, Giuseppe Sforza wrote:
>
> Matt Heon wrote on 30.10.2019 14:52 (GMT +01:00):
>
>> This will be resolved once RHEL/CentOS 8.1 are available. Podman 1.0,
>> as shipped in Cent/RHEL 8.0, is very old, and rootless support was
>> still in beta at that point. The 1.4.2 version shipping in 8.1 is
>> a lot more recent, and rootless is fully supported, including port
>> bindings.
>>
>> Thanks,
>> Matt Heon
> Any ETA on the release of said versions?
>
RHEL 8.1 is supposed to be released in November.
--===============2139109436479084160==--
From dwalsh at redhat.com Sat Nov 2 10:31:39 2019
Content-Type: multipart/mixed; boundary="===============7794076634851470846=="
MIME-Version: 1.0
From: Daniel Walsh
To: podman at lists.podman.io
Subject: [Podman] Trying to run podman within a locked down podman.
Date: Sat, 02 Nov 2019 06:31:25 -0400
Message-ID: <9d5a95ad-0636-4b36-5202-76526a8f6590@redhat.com>
--===============7794076634851470846==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
# cat ~/Dockerfile.podman
FROM podman/stable
RUN useradd podman
# podman run -ti --security-opt seccomp=/tmp/seccomp.json --user podman
--rm podman podman unshare cat /etc/subuid
ERRO[0000] unable to write system event: "write unixgram
@000ea->/run/systemd/journal/socket: sendmsg: no such file or directory"
podman:100000:65536
# podman run -ti --security-opt seccomp=unconfined --user podman --rm
podman podman unshare cat /proc/self/uid_map
ERRO[0000] unable to write system event: "write unixgram
@000df->/run/systemd/journal/socket: sendmsg: no such file or directory"
         0       1000          1
# podman run -ti --security-opt seccomp=/tmp/seccomp.json --user podman
--rm podman podman unshare cat /proc/self/uid_map
ERRO[0000] unable to write system event: "write unixgram
@000e6->/run/systemd/journal/socket: sendmsg: no such file or directory"
         0       1000          1
Running with Debug shows
DEBU[0000] error from newuidmap: newuidmap: write to uid_map failed:
Operation not permitted
WARN[0000] using rootless single mapping into the namespace. This might
break some images. Check /etc/subuid and /etc/subgid for adding subids
User namespaces do not seem to be working unless I add the "clone"
syscall, and the SETUID and SETGID capabilities.
# podman run -ti --cap-add SETUID,SETGID --security-opt
seccomp=/tmp/seccomp.json --user podman --rm podman podman unshare cat
/proc/self/uid_map
ERRO[0000] unable to write system event: "write unixgram
@00103->/run/systemd/journal/socket: sendmsg: no such file or directory"
         0       1000          1
         1     100000      65536
Need these SELinux rules:
```
allow container_t nsfs_t:file read;
allow container_t proc_t:filesystem mount;
allow container_t tmpfs_t:filesystem { mount unmount };
```
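One way to load rules like these, if they fit your setup, is a small local policy module (a sketch; the module name is arbitrary):
# cat > containerns.te <<'EOF'
module containerns 1.0;
require {
        type container_t;
        type nsfs_t;
        type proc_t;
        type tmpfs_t;
        class file read;
        class filesystem { mount unmount };
}
allow container_t nsfs_t:file read;
allow container_t proc_t:filesystem mount;
allow container_t tmpfs_t:filesystem { mount unmount };
EOF
# checkmodule -M -m -o containerns.mod containerns.te
# semodule_package -o containerns.pp -m containerns.mod
# semodule -i containerns.pp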
I am getting close with this:
 diff /usr/share/containers/seccomp.json /tmp/seccomp.json
367c367,370
<                                "unshare"
---
>                                "unshare",
>                                "clone",
>                                "keyctl",
>                                "pivot_root"
# podman run -ti --privileged --cap-add SETUID,SETGID --security-opt
seccomp=/tmp/seccomp.json --user podman --rm podman podman run
--net=host --cgroup-manager cgroupfs alpine echo hello
ERRO[0000] unable to write system event: "write unixgram
@0016a->/run/systemd/journal/socket: sendmsg: no such file or directory"
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob 89d9c30c1d48 done
Copying config 965ea09ff2 done
Writing manifest to image destination
Storing signatures
ERRO[0004] unable to write pod event: "write unixgram
@0016a->/run/systemd/journal/socket: sendmsg: no such file or directory"
Error: cannot configure rootless cgroup using the cgroupfs manager
executable file not found in $PATH: No such file or directory: OCI
runtime command not found error
--===============7794076634851470846==--
From patrakov at gmail.com Mon Nov 4 18:41:14 2019
Content-Type: multipart/mixed; boundary="===============7263943531436262873=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Locking issue?
Date: Mon, 04 Nov 2019 23:40:54 +0500
Message-ID:
--===============7263943531436262873==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Hello.
I have tried Podman in Fedora 31. Not a rootless setup.
Software versions:
podman-1.6.2-2.fc31.x86_64
containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
I have created two containers:
# podman container run -d --name nginx_1 -p 80:80 nginx
# podman container run -d --name nginx_2 -p 81:80 nginx
Then I wanted to make sure that they start on boot.
According to RHEL 7 documentation,
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
, I am supposed to create systemd units. OK, let's take the documented
form of the unit and turn it into a template:
[Unit]
Description=Container %i
[Service]
ExecStart=/usr/bin/podman start -a %i
ExecStop=/usr/bin/podman stop -t 2 %i
[Install]
WantedBy=multi-user.target
This doesn't work if there is more than one container. The error
is:
Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
level=error msg="Error adding network: failed to allocate for range 0:
10.88.0.19 has been allocated to
ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
duplicate allocation is not allowed"
Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
level=error msg="Error while adding pod to CNI network \"podman\":
failed to allocate for range 0: 10.88.0.19 has been allocated to
ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
duplicate allocation is not allowed"
Nov 04 21:35:57 podman[2268]: Error: unable to start container
ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
error configuring network namespace for container
ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
failed to allocate for range 0: 10.88.0.19 has been allocated to
ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
duplicate allocation is not allowed
(as you can see, the conflict is against the container itself)
Apparently different runs of podman need to be serialized against each
other. This works:
[Unit]
Description=Container %i
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
ExecStop=/usr/bin/podman stop -t 2 %i
[Install]
WantedBy=multi-user.target
Questions:
a) Why isn't some equivalent of this unit shipped with podman? Or, am
I missing some package that ships it?
b) Why isn't the necessary locking built into podman itself? Or, is it
a bug in containernetworking-plugins?
--
Alexander E. Patrakov
--===============7263943531436262873==--
From smccarty at redhat.com Mon Nov 4 18:55:40 2019
Content-Type: multipart/mixed; boundary="===============4753858730289580723=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Mon, 04 Nov 2019 13:54:56 -0500
Message-ID:
In-Reply-To: CAN_LGv1bwwodybx85QbTS54MbOvF5+W98-VR9BfStDcay+S61g@mail.gmail.com
--===============4753858730289580723==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Instead, try "podman generate systemd" and you will have your unit files
made specifically for those containers.
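For example (a sketch; nginx_1 is one of the container names from the original mail):
# podman generate systemd nginx_1 > /etc/systemd/system/container-nginx_1.service
# systemctl daemon-reload
# systemctl enable --now container-nginx_1.service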
On Mon, Nov 4, 2019 at 1:41 PM Alexander E. Patrakov wrote:
> Hello.
>
> I have tried Podman in Fedora 31. Not a rootless setup.
>
> Software versions:
>
> podman-1.6.2-2.fc31.x86_64
> containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>
> I have created two containers:
>
> # podman container run -d --name nginx_1 -p 80:80 nginx
> # podman container run -d --name nginx_2 -p 81:80 nginx
>
> Then I wanted to make sure that they start on boot.
>
> According to RHEL 7 documentation,
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
> , I am supposed to create systemd units. OK, let's take the documented
> form of the unit and turn it into a template:
>
> [Unit]
> Description=Container %i
>
> [Service]
> ExecStart=/usr/bin/podman start -a %i
> ExecStop=/usr/bin/podman stop -t 2 %i
>
> [Install]
> WantedBy=multi-user.target
>
> This doesn't work if there is more than one container. The error
> is:
>
> Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> level=error msg="Error adding network: failed to allocate for range 0:
> 10.88.0.19 has been allocated to
> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> duplicate allocation is not allowed"
> Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> level=error msg="Error while adding pod to CNI network \"podman\":
> failed to allocate for range 0: 10.88.0.19 has been allocated to
> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> duplicate allocation is not allowed"
> Nov 04 21:35:57 podman[2268]: Error: unable to start container
> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> error configuring network namespace for container
> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> failed to allocate for range 0: 10.88.0.19 has been allocated to
> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> duplicate allocation is not allowed
>
> (as you can see, the conflict is against the container itself)
>
> Apparently different runs of podman need to be serialized against each
> other. This works:
>
> [Unit]
> Description=Container %i
> Wants=network-online.target
> After=network-online.target
>
> [Service]
> Type=oneshot
> RemainAfterExit=yes
> ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
> ExecStop=/usr/bin/podman stop -t 2 %i
>
> [Install]
> WantedBy=multi-user.target
>
> Questions:
>
> a) Why isn't some equivalent of this unit shipped with podman? Or, am
> I missing some package that ships it?
> b) Why isn't the necessary locking built into podman itself? Or, is it
> a bug in containernetworking-plugins?
>
> --
> Alexander E. Patrakov
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============4753858730289580723==--
From patrakov at gmail.com Mon Nov 4 19:53:30 2019
Content-Type: multipart/mixed; boundary="===============4532917608492438095=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Tue, 05 Nov 2019 00:53:13 +0500
Message-ID:
In-Reply-To: CAL+7UBY0=fsgmQmwb=ZAiV_hf_oST8W2D7Acw190w97LWtLw-g@mail.gmail.com
--===============4532917608492438095==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Yes, it generates systemd files, but still hits the locking issue
(i.e. only one container works), just like the first unit in my
initial email.
On Mon, 4 Nov 2019 at 23:54, Scott McCarty wrote:
>
> Instead, try "podman generate systemd" and you will have your unit files made specifically for those containers.
>
> On Mon, Nov 4, 2019 at 1:41 PM Alexander E. Patrakov wrote:
>>
>> Hello.
>>
>> I have tried Podman in Fedora 31. Not a rootless setup.
>>
>> Software versions:
>>
>> podman-1.6.2-2.fc31.x86_64
>> containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>>
>> I have created two containers:
>>
>> # podman container run -d --name nginx_1 -p 80:80 nginx
>> # podman container run -d --name nginx_2 -p 81:80 nginx
>>
>> Then I wanted to make sure that they start on boot.
>>
>> According to RHEL 7 documentation,
>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
>> , I am supposed to create systemd units. OK, let's take the documented
>> form of the unit and turn it into a template:
>>
>> [Unit]
>> Description=Container %i
>>
>> [Service]
>> ExecStart=/usr/bin/podman start -a %i
>> ExecStop=/usr/bin/podman stop -t 2 %i
>>
>> [Install]
>> WantedBy=multi-user.target
>>
>> This doesn't work if there is more than one container. The error
>> is:
>>
>> Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>> level=error msg="Error adding network: failed to allocate for range 0:
>> 10.88.0.19 has been allocated to
>> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> duplicate allocation is not allowed"
>> Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>> level=error msg="Error while adding pod to CNI network \"podman\":
>> failed to allocate for range 0: 10.88.0.19 has been allocated to
>> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> duplicate allocation is not allowed"
>> Nov 04 21:35:57 podman[2268]: Error: unable to start container
>> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>> error configuring network namespace for container
>> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>> failed to allocate for range 0: 10.88.0.19 has been allocated to
>> ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> duplicate allocation is not allowed
>>
>> (as you can see, the conflict is against the container itself)
>>
>> Apparently different runs of podman need to be serialized against each
>> other. This works:
>>
>> [Unit]
>> Description=Container %i
>> Wants=network-online.target
>> After=network-online.target
>>
>> [Service]
>> Type=oneshot
>> RemainAfterExit=yes
>> ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
>> ExecStop=/usr/bin/podman stop -t 2 %i
>>
>> [Install]
>> WantedBy=multi-user.target
>>
>> Questions:
>>
>> a) Why isn't some equivalent of this unit shipped with podman? Or, am
>> I missing some package that ships it?
>> b) Why isn't the necessary locking built into podman itself? Or, is it
>> a bug in containernetworking-plugins?
>>
>> --
>> Alexander E. Patrakov
--
Alexander E. Patrakov
--===============4532917608492438095==--
From patrakov at gmail.com Mon Nov 4 20:01:08 2019
Content-Type: multipart/mixed; boundary="===============0511987459466130455=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Firewalling services provided by containers
Date: Tue, 05 Nov 2019 01:00:51 +0500
Message-ID:
--===============0511987459466130455==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Hello.
I have tried Podman in Fedora 31. Not a rootless setup.
Software versions:
podman-1.6.2-2.fc31.x86_64
containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
IP and netmask of the Fedora machine in my network: 192.168.5.130/24.
Podman creates, on the start of the first container, its default
cni-podman0 bridge with IP and netmask 10.88.0.1/16.
I wanted to play through a situation when we are migrating from a
service (let's say, 9999/tcp) formerly provided by some software
installed directly on the host to the same service provided by the
same software, but in a podman container. And this software needs to
be firewalled: there is a whitelist of IP addresses (let's say
192.168.5.30 and 192.168.5.44) that have the privilege to talk to
192.168.5.130:9999.
With the old, non-containerized setup, implementing this kind of
whitelist is trivial. Add a new firewalld zone, add the necessary ports
and whitelisted client IPs to it, set the target to REJECT or DROP,
done. However, once I switch to a containerized service, the firewall
becomes ineffective, because the packets hit the FORWARD chain, not
INPUT. I could not find a good solution that works in terms of the
exposed port (i.e. 9999, even if inside the container a different port
is used). I could either add iptables rules (yuck... firewalld exists
for a reason) to "raw" or "mangle" tables (but then I cannot reject),
or do something in the "filter" table with "-p tcp -m tcp -m conntrack
--ctorigdstport 9999" (that's better).
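For the record, the filter-table variant would look something like this (an untested sketch; ordering relative to the CNI-managed rules in FORWARD matters, and the addresses are the ones from the example above):
# iptables -I FORWARD 1 -p tcp -m conntrack --ctorigdstport 9999 -j REJECT
# iptables -I FORWARD 1 -p tcp -m conntrack --ctorigdstport 9999 -s 192.168.5.44 -j ACCEPT
# iptables -I FORWARD 1 -p tcp -m conntrack --ctorigdstport 9999 -s 192.168.5.30 -j ACCEPT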
I think that firewalld could see some improvement here. In order to
apply a whitelist of hosts that can connect, I should not need to care
whether the service is provided by something running on the host, or
by a container.
OK, another crazy idea: is it possible to use slirp4netns instead of
the default bridge for root-owned containers, just to avoid these
INPUT-vs-FORWARD firewall troubles?
--
Alexander E. Patrakov
--===============0511987459466130455==--
From patrakov at gmail.com Mon Nov 4 20:11:29 2019
Content-Type: multipart/mixed; boundary="===============0760749851177450803=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Firewalling services provided by containers
Date: Tue, 05 Nov 2019 01:11:11 +0500
Message-ID:
In-Reply-To: CAN_LGv2j_p=vxoC1kWGXJLeF6E4+Zey8G3+K7z93g2LG-ua8HA@mail.gmail.com
--===============0760749851177450803==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Forgot the question: what's the current best practice for firewalling
(as in: selectively, by source IP, allowing access to) services
provided by containers on the exposed ports (the "-p" option)?
On Tue, 5 Nov 2019 at 01:00, Alexander E. Patrakov wrote:
>
> Hello.
>
> I have tried Podman in Fedora 31. Not a rootless setup.
>
> Software versions:
>
> podman-1.6.2-2.fc31.x86_64
> containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>
> IP and netmask of the Fedora machine in my network: 192.168.5.130/24.
> Podman creates, on the start of the first container, its default
> cni-podman0 bridge with IP and netmask 10.88.0.1/16.
>
> I wanted to play through a situation when we are migrating from a
> service (let's say, 9999/tcp) formerly provided by some software
> installed directly on the host to the same service provided by the
> same software, but in a podman container. And this software needs to
> be firewalled: there is a whitelist of IP addresses (let's say
> 192.168.5.30 and 192.168.5.44) that have the privilege to talk to
> 192.168.5.130:9999.
>
> With the old, non-containerized setup, implementing this kind of
> whitelist is trivial. Add a new firewalld zone, add the necessary ports
> and whitelisted client IPs to it, set the target to REJECT or DROP,
> done. However, once I switch to a containerized service, the firewall
> becomes ineffective, because the packets hit the FORWARD chain, not
> INPUT. I could not find a good solution that works in terms of the
> exposed port (i.e. 9999, even if inside the container a different port
> is used). I could either add iptables rules (yuck... firewalld exists
> for a reason) to "raw" or "mangle" tables (but then I cannot reject),
> or do something in the "filter" table with "-p tcp -m tcp -m conntrack
> --ctorigdstport 9999" (that's better).
>
> I think that firewalld could see some improvement here. In order to
> apply a whitelist of hosts that can connect, I should not need to care
> whether the service is provided by something running on the host, or
> by a container.
>
> OK, another crazy idea: is it possible to use slirp4netns instead of
> the default bridge for root-owned containers, just to avoid these
> INPUT-vs-FORWARD firewall troubles?
>
> --
> Alexander E. Patrakov
--
Alexander E. Patrakov
--===============0760749851177450803==--
From mheon at redhat.com Mon Nov 4 20:14:09 2019
Content-Type: multipart/mixed; boundary="===============3041195059807107632=="
MIME-Version: 1.0
From: Matt Heon
To: podman at lists.podman.io
Subject: [Podman] Re: Firewalling services provided by containers
Date: Mon, 04 Nov 2019 15:13:58 -0500
Message-ID: <20191104201358.ivrgg3rbc7mu5if2@Agincourt.redhat.com>
In-Reply-To: CAN_LGv2j_p=vxoC1kWGXJLeF6E4+Zey8G3+K7z93g2LG-ua8HA@mail.gmail.com
--===============3041195059807107632==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 2019-11-05 01:00, Alexander E. Patrakov wrote:
>Hello.
>
>I have tried Podman in Fedora 31. Not a rootless setup.
>
>Software versions:
>
>podman-1.6.2-2.fc31.x86_64
>containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>
>IP and netmask of the Fedora machine in my network: 192.168.5.130/24.
>Podman creates, on the start of the first container, its default
>cni-podman0 bridge with IP and netmask 10.88.0.1/16.
>
>I wanted to play through a situation when we are migrating from a
>service (let's say, 9999/tcp) formerly provided by some software
>installed directly on the host to the same service provided by the
>same software, but in a podman container. And this software needs to
>be firewalled: there is a whitelist of IP addresses (let's say
>192.168.5.30 and 192.168.5.44) that have the privilege to talk to
>192.168.5.130:9999.
>
>With the old, non-containerized setup, implementing this kind of
>whitelist is trivial. Add a new firewalld zone, add the necessary ports
>and whitelisted client IPs to it, set the target to REJECT or DROP,
>done. However, once I switch to a containerized service, the firewall
>becomes ineffective, because the packets hit the FORWARD chain, not
>INPUT. I could not find a good solution that works in terms of the
>exposed port (i.e. 9999, even if inside the container a different port
>is used). I could either add iptables rules (yuck... firewalld exists
>for a reason) to "raw" or "mangle" tables (but then I cannot reject),
>or do something in the "filter" table with "-p tcp -m tcp -m conntrack
>--ctorigdstport 9999" (that's better).
>
There's an open feature request to add a chain for user-specified
IPTables rules that act on containers, such that they will be
preserved across container start/stop - and I think that without this
(which is not yet implemented) you can't reliably manually configure
IPTables rules for containers, because start/stop can mangle your
rules.
>I think that firewalld could see some improvement here. In order to
>apply a whitelist of hosts that can connect, I should not need to care
>whether the service is provided by something running on the host, or
>by a container.
>
>OK, another crazy idea: is it possible to use slirp4netns instead of
>the default bridge for root-owned containers, just to avoid these
>INPUT-vs-FORWARD firewall troubles?
Yes - this is possible. I believe `--net=slirp4netns` should do this.
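E.g. (an untested sketch; whether published ports behave identically under slirp4netns is worth verifying):
# podman run -d --net=slirp4netns -p 9999:9999 nginx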
Thanks,
Matt Heon
>
>--
>Alexander E. Patrakov
--===============3041195059807107632==--
From mheon at redhat.com Mon Nov 4 20:19:34 2019
Content-Type: multipart/mixed; boundary="===============5913297975790398053=="
MIME-Version: 1.0
From: Matt Heon
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Mon, 04 Nov 2019 15:19:23 -0500
Message-ID: <20191104201923.6623faxgjeoutvfe@Agincourt.redhat.com>
In-Reply-To: CAN_LGv1bwwodybx85QbTS54MbOvF5+W98-VR9BfStDcay+S61g@mail.gmail.com
--===============5913297975790398053==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 2019-11-04 23:40, Alexander E. Patrakov wrote:
>Hello.
>
>I have tried Podman in Fedora 31. Not a rootless setup.
>
>Software versions:
>
>podman-1.6.2-2.fc31.x86_64
>containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>
>I have created two containers:
>
># podman container run -d --name nginx_1 -p 80:80 nginx
># podman container run -d --name nginx_2 -p 81:80 nginx
>
>Then I wanted to make sure that they start on boot.
>
>According to RHEL 7 documentation,
>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
>, I am supposed to create systemd units. OK, let's take the documented
>form of the unit and turn it into a template:
>
>[Unit]
>Description=Container %i
>
>[Service]
>ExecStart=/usr/bin/podman start -a %i
>ExecStop=/usr/bin/podman stop -t 2 %i
>
>[Install]
>WantedBy=multi-user.target
>
>This doesn't work if there is more than one container. The error
>is:
>
>Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>level=error msg="Error adding network: failed to allocate for range 0:
>10.88.0.19 has been allocated to
>ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>duplicate allocation is not allowed"
>Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>level=error msg="Error while adding pod to CNI network \"podman\":
>failed to allocate for range 0: 10.88.0.19 has been allocated to
>ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>duplicate allocation is not allowed"
>Nov 04 21:35:57 podman[2268]: Error: unable to start container
>ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>error configuring network namespace for container
>ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>failed to allocate for range 0: 10.88.0.19 has been allocated to
>ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>duplicate allocation is not allowed
>
>(as you can see, the conflict is against the container itself)
>
>Apparently different runs of podman need to be serialized against each
>other. This works:
>
>[Unit]
>Description=Container %i
>Wants=network-online.target
>After=network-online.target
>
>[Service]
>Type=oneshot
>RemainAfterExit=yes
>ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
>ExecStop=/usr/bin/podman stop -t 2 %i
>
>[Install]
>WantedBy=multi-user.target
>
>Questions:
>
>a) Why isn't some equivalent of this unit shipped with podman? Or, am
>I missing some package that ships it?
>b) Why isn't the necessary locking built into podman itself? Or, is it
>a bug in containernetworking-plugins?
These containers aren't using static IPs, correct?
I can recall an issue where static IP allocations were leaving address
reservations around after reboot, causing issues... But that should be
fixed on the Podman we ship in F31.
Otherwise, this sounds suspiciously like a CNI bug. I would hope that
CNI has sufficient locking to prevent this from racing, but I could be
wrong.
Also, you should try using `podman generate systemd` for unit files.
Looking at your unit files, I don't think they operate as advertised
(`start --attach` can exit while the container is still running, so
tracking it is not a reliable way of tracking the container).
Thanks,
Matt Heon
>
>--
>Alexander E. Patrakov
--===============5913297975790398053==--
From bbaude at redhat.com Mon Nov 4 20:21:11 2019
Content-Type: multipart/mixed; boundary="===============4673883250620169272=="
MIME-Version: 1.0
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Re: Firewalling services provided by containers
Date: Mon, 04 Nov 2019 14:19:51 -0600
Message-ID:
In-Reply-To: CAN_LGv33b2c1TEewYkCKWLfhWmHQ94oc6WJdoKYZdAPXoBLdzA@mail.gmail.com
--===============4673883250620169272==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Alexander,
Would this help your case?
https://github.com/containernetworking/plugins/tree/master/plugins/meta/firewall
On Tue, 2019-11-05 at 01:11 +0500, Alexander E. Patrakov wrote:
> Forgot the question: what's the current best practice for firewalling
> (as in: selectively, by source IP, allowing access to) services
> provided by containers on the exposed ports (the "-p" option)?
>
> On Tue, 5 Nov 2019 at 01:00, Alexander E. Patrakov <patrakov(a)gmail.com> wrote:
> > Hello.
> >
> > I have tried Podman in Fedora 31. Not a rootless setup.
> >
> > Software versions:
> >
> > podman-1.6.2-2.fc31.x86_64
> > containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> >
> > IP and netmask of the Fedora machine in my network: 192.168.5.130/24.
> > Podman creates, on the start of the first container, its default
> > cni-podman0 bridge with IP and netmask 10.88.0.1/16.
> >
> > I wanted to play through a situation when we are migrating from a
> > service (let's say, 9999/tcp) formerly provided by some software
> > installed directly on the host to the same service provided by the
> > same software, but in a podman container. And this software needs to
> > be firewalled: there is a whitelist of IP addresses (let's say
> > 192.168.5.30 and 192.168.5.44) that have the privilege to talk to
> > 192.168.5.130:9999.
> >
> > With the old, non-containerized setup, implementing this kind of
> > whitelist is trivial. Add a new firewalld zone, add the necessary ports
> > and whitelisted client IPs to it, set the target to REJECT or DROP,
> > done. However, once I switch to a containerized service, the firewall
> > becomes ineffective, because the packets hit the FORWARD chain, not
> > INPUT. I could not find a good solution that works in terms of the
> > exposed port (i.e. 9999, even if inside the container a different port
> > is used). I could either add iptables rules (yuck... firewalld exists
> > for a reason) to "raw" or "mangle" tables (but then I cannot reject),
> > or do something in the "filter" table with "-p tcp -m tcp -m conntrack
> > --ctorigdstport 9999" (that's better).
> >
> > I think that firewalld could see some improvement here. In order to
> > apply a whitelist of hosts that can connect, I should not need to care
> > whether the service is provided by something running on the host, or
> > by a container.
> >
> > OK, another crazy idea: is it possible to use slirp4netns instead of
> > the default bridge for root-owned containers, just to avoid these
> > INPUT-vs-FORWARD firewall troubles?
> >
> > --
> > Alexander E. Patrakov
>
>
> --
> Alexander E. Patrakov
--===============4673883250620169272==--
From patrakov at gmail.com Mon Nov 4 20:30:48 2019
Content-Type: multipart/mixed; boundary="===============8851351808741310227=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Firewalling services provided by containers
Date: Tue, 05 Nov 2019 01:30:31 +0500
Message-ID:
In-Reply-To: 20191104201358.ivrgg3rbc7mu5if2@Agincourt.redhat.com
--===============8851351808741310227==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Tue, 5 Nov 2019 at 01:14, Matt Heon wrote:
>
> On 2019-11-05 01:00, Alexander E. Patrakov wrote:
> >Hello.
> >
> >I have tried Podman in Fedora 31. Not a rootless setup.
> >
> >Software versions:
> >
> >podman-1.6.2-2.fc31.x86_64
> >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> >
> >IP and netmask of the Fedora machine in my network: 192.168.5.130/24.
> >Podman creates, on the start of the first container, its default
> >cni-podman0 bridge with IP and netmask 10.88.0.1/16.
> >
> >I wanted to play through a situation when we are migrating from a
> >service (let's say, 9999/tcp) formerly provided by some software
> >installed directly on the host to the same service provided by the
> >same software, but in a podman container. And this software needs to
> >be firewalled: there is a whitelist of IP addresses (let's say
> >192.168.5.30 and 192.168.5.44) that have the privilege to talk to
> >192.168.5.130:9999.
> >
> >With the old, non-containerized setup, implementing this kind of
> >whitelist is trivial. Add a new firewalld zone, add the necessary ports
> >and whitelisted client IPs to it, set the target to REJECT or DROP,
> >done. However, once I switch to a containerized service, the firewall
> >becomes ineffective, because the packets hit the FORWARD chain, not
> >INPUT. I could not find a good solution that works in terms of the
> >exposed port (i.e. 9999, even if inside the container a different port
> >is used). I could either add iptables rules (yuck... firewalld exists
> >for a reason) to "raw" or "mangle" tables (but then I cannot reject),
> >or do something in the "filter" table with "-p tcp -m tcp -m conntrack
> >--ctorigdstport 9999" (that's better).
> >
>
> There's an open feature request to add a chain for user-specified
> IPTables rules that act on containers, such that they will be
> preserved across container start/stop - and I think that without this
> (which is not yet implemented) you can't reliably manually configure
> IPTables rules for containers, because start/stop can mangle your
> rules.
Thanks for confirmation.
> >I think that firewalld could see some improvement here. In order to
> >apply a whitelist of hosts that can connect, I should not need to care
> >whether the service is provided by something running on the host, or
> >by a container.
> >
> >OK, another crazy idea: is it possible to use slirp4netns instead of
> >the default bridge for root-owned containers, just to avoid these
> >INPUT-vs-FORWARD firewall troubles?
>
> Yes - this is possible. I believe `--net=slirp4netns` should do this.
This works, let me play a bit more with it in order to see if it is a
viable workaround.
Thanks for your help!
--
Alexander E. Patrakov
--===============8851351808741310227==--
From patrakov at gmail.com Mon Nov 4 20:34:58 2019
Content-Type: multipart/mixed; boundary="===============5390473288570715075=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Firewalling services provided by containers
Date: Tue, 05 Nov 2019 01:34:39 +0500
Message-ID:
In-Reply-To: c41a0263b7f84b35d2074d61abd06aa5920a5ba4.camel@redhat.com
--===============5390473288570715075==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Hello Brent,
No, the "firewall" plugin is for a different purpose. It inserts
iptables rules that allow the NATed traffic to containers, or adds the
IP addresses of containers to a configurable ("trusted" by default)
firewalld zone. It offers no way (or at least no obvious way) to say
that 192.168.5.30 can have access and 192.168.5.31 can't.
On Tue, 5 Nov 2019 at 01:19, Brent Baude wrote:
>
> Alexander,
>
> Would this help your case?
>
> https://github.com/containernetworking/plugins/tree/master/plugins/meta/firewall
>
> On Tue, 2019-11-05 at 01:11 +0500, Alexander E. Patrakov wrote:
> > Forgot the question: what's the current best practice for firewalling
> > (as in: selectively, by source IP, allowing access to) services
> > provided by containers on the exposed ports (the "-p" option)?
> >
> > =D0=B2=D1=82, 5 =D0=BD=D0=BE=D1=8F=D0=B1. 2019 =D0=B3. =D0=B2 01:00, Al=
exander E. Patrakov <
> > patrakov(a)gmail.com>:
> > > Hello.
> > >
> > > I have tried Podman in Fedora 31. Not a rootless setup.
> > >
> > > Software versions:
> > >
> > > podman-1.6.2-2.fc31.x86_64
> > > containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> > >
> > > IP and netmask of the Fedora machine in my network:
> > > 192.168.5.130/24.
> > > Podman creates, on the start of the first container, its default
> > > cni-podman0 bridge with IP and netmask 10.88.0.1/16.
> > >
> > > I wanted to play through a situation when we are migrating from a
> > > service (let's say, 9999/tcp) formerly provided by some software
> > > installed directly on the host to the same service provided by the
> > > same software, but in a podman container. And this software needs
> > > to
> > > be firewalled: there is a whitelist of IP addresses (let's say
> > > 192.168.5.30 and 192.168.5.44) that have the privilege to talk to
> > > 192.168.5.130:9999.
> > >
> > > With the old, non-containerized setup, implementing this kind of
> > > whitelist is trivial. Add a new firewalld zone, add the necessary
> > > ports
> > > and whitelisted client IPs to it, set the target to REJECT or DROP,
> > > done. However, once I switch to a containerized service, the
> > > firewall
> > > becomes ineffective, because the packets hit the FORWARD chain, not
> > > INPUT. I could not find a good solution that works in terms of the
> > > exposed port (i.e. 9999, even if inside the container a different
> > > port
> > > is used). I could either add iptables rules (yuck... firewalld
> > > exists
> > > for a reason) to "raw" or "mangle" tables (but then I cannot
> > > reject),
> > > or do something in the "filter" table with "-p tcp -m tcp -m
> > > conntrack
> > > --ctorigdstport 9999" (that's better).
> > >
> > > I think that firewalld could see some improvement here. In order to
> > > apply a whitelist of hosts that can connect, I should not need to
> > > care
> > > whether the service is provided by something running on the host,
> > > or
> > > by a container.
> > >
> > > OK, another crazy idea: is it possible to use slirp4netns instead
> > > of
> > > the default bridge for root-owned containers, just to avoid these
> > > INPUT-vs-FORWARD firewall troubles?
> > >
> > > --
> > > Alexander E. Patrakov
> >
> >
> > --
> > Alexander E. Patrakov
> > _______________________________________________
> > Podman mailing list -- podman(a)lists.podman.io
> > To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--
Alexander E. Patrakov
--===============5390473288570715075==--
From patrakov at gmail.com Mon Nov 4 20:40:44 2019
Content-Type: multipart/mixed; boundary="===============8335688159559055099=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Tue, 05 Nov 2019 01:40:27 +0500
Message-ID:
In-Reply-To: 20191104201923.6623faxgjeoutvfe@Agincourt.redhat.com
--===============8335688159559055099==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
"Matt,
no, I don't use static IPs. I let podman allocate them. I have already
tried `podman generate systemd` as per earlier suggestion.
The issue is definitely not with stale reservations persisting across
a reboot, otherwise adding "flock" would not have helped.
Regarding the "`start --attach` can exit while the container is still
running" comment: if it is true, please ask the appropriate person to
fix the systemd unit example in RHEL7 documentation.
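As an aside, for anyone following along, generating a unit from an
existing container is a one-liner - a sketch; the available flags vary
between podman versions:

# Emit a systemd unit for the container created earlier; --name puts
# the container's name, rather than its ID, into the unit.
podman generate systemd --name nginx_1 > container-nginx_1.service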
On Tue, 5 Nov 2019 at 01:19, Matt Heon wrote:
>
> On 2019-11-04 23:40, Alexander E. Patrakov wrote:
> >Hello.
> >
> >I have tried Podman in Fedora 31. Not a rootless setup.
> >
> >Software versions:
> >
> >podman-1.6.2-2.fc31.x86_64
> >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> >
> >I have created two containers:
> >
> ># podman container run -d --name nginx_1 -p 80:80 nginx
> ># podman container run -d --name nginx_2 -p 81:80 nginx
> >
> >Then I wanted to make sure that they start on boot.
> >
> >According to RHEL 7 documentation,
> >https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
> >, I am supposed to create systemd units. OK, let's take the documented
> >form of the unit and turn it into a template:
> >
> >[Unit]
> >Description=Container %i
> >
> >[Service]
> >ExecStart=/usr/bin/podman start -a %i
> >ExecStop=/usr/bin/podman stop -t 2 %i
> >
> >[Install]
> >WantedBy=multi-user.target
> >
> >This doesn't work if there is more than one container. The error
> >is:
> >
> >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> >level=error msg="Error adding network: failed to allocate for range 0:
> >10.88.0.19 has been allocated to
> >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >duplicate allocation is not allowed"
> >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> >level=error msg="Error while adding pod to CNI network \"podman\":
> >failed to allocate for range 0: 10.88.0.19 has been allocated to
> >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >duplicate allocation is not allowed"
> >Nov 04 21:35:57 podman[2268]: Error: unable to start container
> >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> >error configuring network namespace for container
> >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> >failed to allocate for range 0: 10.88.0.19 has been allocated to
> >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >duplicate allocation is not allowed
> >
> >(as you can see, the conflict is against the container itself)
> >
> >Apparently different runs of podman need to be serialized against each
> >other. This works:
> >
> >[Unit]
> >Description=Container %i
> >Wants=network-online.target
> >After=network-online.target
> >
> >[Service]
> >Type=oneshot
> >RemainAfterExit=yes
> >ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
> >ExecStop=/usr/bin/podman stop -t 2 %i
> >
> >[Install]
> >WantedBy=multi-user.target
> >
> >Questions:
> >
> >a) Why isn't some equivalent of this unit shipped with podman? Or, am
> >I missing some package that ships it?
> >b) Why isn't the necessary locking built into podman itself? Or, is it
> >a bug in containernetworking-plugins?
>
> These containers aren't using static IPs, correct?
>
> I can recall an issue where static IP allocations were leaving address
> reservations around after reboot, causing issues... But that should be
> fixed on the Podman we ship in F31.
>
> Otherwise, this sounds suspiciously like a CNI bug. I would hope that
> CNI has sufficient locking to prevent this from racing, but I could be
> wrong.
>
> Also, you should try using `podman generate systemd` for unit files.
> Looking at your unit files, I don't think they operate as advertised
> (`start --attach` can exit while the container is still running, so
> tracking it is not a reliable way of tracking the container).
>
> Thanks,
> Matt Heon
>
> >
> >--
> >Alexander E. Patrakov
> >_______________________________________________
> >Podman mailing list -- podman(a)lists.podman.io
> >To unsubscribe send an email to podman-leave(a)lists.podman.io
--
Alexander E. Patrakov
--===============8335688159559055099==--
From mh+podman at scrit.ch Mon Nov 4 21:25:22 2019
Content-Type: multipart/mixed; boundary="===============8544836681431864845=="
MIME-Version: 1.0
From: mh
To: podman at lists.podman.io
Subject: [Podman] Re: feasible to upgrade podman on CentOS 8 to current
version?
Date: Mon, 04 Nov 2019 21:29:33 +0100
Message-ID:
In-Reply-To: 20191009132508.GC51717@nagato.nanadai.me
--===============8544836681431864845==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
> On Wed, Oct 09, 2019 at 09:03:41AM -0400, Robert P. J. Day wrote:
>>
>> i just upgraded a CentOS box to CentOS 8, and i can see that the
>> version of podman is (unsurprisingly) a bit dated:
>>
>> $ podman --version
>> podman version 1.0.2-dev
>>
>> compared to my fedora 30 system:
>>
>> $ podman --version
>> podman version 1.6.1
>>
>> is it feasible to try to download and build from source to get the
>> latest version on my CentOS system, or would that just be more trouble
>> than it's worth?
>
> I guess building from Makefile should work just fine..
>
> If you'd like to try building from fedora 31 rpm spec file,
> see: https://src.fedoraproject.org/rpms/podman/blob/f31/f/podman.spec
> I try to keep it buildable on CentOS7 (haven't tried CentOS8 yet).
>
> CentOS Stream (once available) should hopefully address the availability issue.
>
> I'm not sure of ETA, but I'm thinking I'll enable epel8 COPR for this as a
> temporary solution in case CentOS Stream takes too long.
> (Let me get back on this..)
I am not able to build it on CentOS 8:
$ mock -r epel-8-x86_64 --rebuild
/home/mh/fedora/buildah/buildah-1.11.4-3.fc31.src.rpm
[...]
ERROR: Exception(/home/mh/fedora/buildah/buildah-1.11.4-3.fc31.src.rpm)
Config(epel-8-x86_64) 1 minutes 36 seconds
INFO: Results and/or logs in: /var/lib/mock/epel-8-x86_64/result
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
# /usr/bin/dnf builddep --installroot /var/lib/mock/epel-8-x86_64/root/
--releasever 8 --setopt=deltarpm=False --allowerasing
--disableplugin=local --disableplugin=spacewalk --disableplugin=local
--disableplugin=spacewalk
/var/lib/mock/epel-8-x86_64/root//builddir/build/SRPMS/buildah-1.11.4-3.el8.src.rpm
--setopt=tsflags=nocontexts
No matches found for the following disable plugin patterns: local, spacewalk
CentOS-8 - Base
10 kB/s | 3.9 kB 00:00
CentOS-8 - Base
15 MB/s | 7.9 MB 00:00
CentOS-8 - AppStream
17 kB/s | 4.3 kB 00:00
CentOS-8 - AppStream
8.4 MB/s | 6.3 MB 00:00
CentOS-8 - PowerTools
12 kB/s | 4.3 kB 00:00
CentOS-8 - PowerTools
599 kB/s | 1.8 MB 00:03
CentOS-8 - Extras
3.6 kB/s | 1.5 kB 00:00
No matching package to install: 'btrfs-progs-devel'
No matching package to install: 'go-md2man'
No matching package to install: 'libseccomp-static'
Package make-1:4.2.1-9.el8.x86_64 is already installed.
Not all dependencies satisfied
Error: Some packages could not be found.
Does anyone have a more recent version for EL8?
~mh
--===============8544836681431864845==--
From bbaude at redhat.com Mon Nov 4 22:06:36 2019
Content-Type: multipart/mixed; boundary="===============0466537926869464443=="
MIME-Version: 1.0
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Mon, 04 Nov 2019 16:06:22 -0600
Message-ID:
In-Reply-To: CAN_LGv1+LpDM-bhpT=qwBDEdCgdauF5=ELwf83GYv3W-Tcn6eQ@mail.gmail.com
--===============0466537926869464443==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
The appropriate forum for the doc correction would be bugzilla.
On Tue, 2019-11-05 at 01:40 +0500, Alexander E. Patrakov wrote:
> "Matt,
> =
> no, I don't use static IPs. I let podman allocate them. I have
> already
> tried `podman generate systemd` as per earlier suggestion.
> =
> The issue is definitely not with stale reservations persisting across
> a reboot, otherwise adding "flock" would not have helped.
> =
> Regarding the "`start --attach` can exit while the container is still
> running comment: if it is true, please ask the appropriate person to
> fix the systemd unit example in RHEL7 documentation.
> =
> =D0=B2=D1=82, 5 =D0=BD=D0=BE=D1=8F=D0=B1. 2019 =D0=B3. =D0=B2 01:19, Matt=
Heon :
> > On 2019-11-04 23:40, Alexander E. Patrakov wrote:
> > > Hello.
> > >
> > > I have tried Podman in Fedora 31. Not a rootless setup.
> > >
> > > Software versions:
> > >
> > > podman-1.6.2-2.fc31.x86_64
> > > containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> > >
> > > I have created two containers:
> > >
> > > # podman container run -d --name nginx_1 -p 80:80 nginx
> > > # podman container run -d --name nginx_2 -p 81:80 nginx
> > >
> > > Then I wanted to make sure that they start on boot.
> > >
> > > According to RHEL 7 documentation,
> > > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
> > > , I am supposed to create systemd units. OK, let's take the
> > > documented
> > > form of the unit and turn it into a template:
> > >
> > > [Unit]
> > > Description=Container %i
> > >
> > > [Service]
> > > ExecStart=/usr/bin/podman start -a %i
> > > ExecStop=/usr/bin/podman stop -t 2 %i
> > >
> > > [Install]
> > > WantedBy=multi-user.target
> > >
> > > This doesn't work if there is more than one container. The error
> > > is:
> > >
> > > Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> > > level=error msg="Error adding network: failed to allocate for
> > > range 0:
> > > 10.88.0.19 has been allocated to
> > > ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > > duplicate allocation is not allowed"
> > > Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> > > level=error msg="Error while adding pod to CNI network
> > > \"podman\":
> > > failed to allocate for range 0: 10.88.0.19 has been allocated to
> > > ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > > duplicate allocation is not allowed"
> > > Nov 04 21:35:57 podman[2268]: Error: unable to start container
> > > ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> > > error configuring network namespace for container
> > > ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> > > failed to allocate for range 0: 10.88.0.19 has been allocated to
> > > ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > > duplicate allocation is not allowed
> > >
> > > (as you can see, the conflict is against the container itself)
> > >
> > > Apparently different runs of podman need to be serialized against
> > > each
> > > other. This works:
> > >
> > > [Unit]
> > > Description=Container %i
> > > Wants=network-online.target
> > > After=network-online.target
> > >
> > > [Service]
> > > Type=oneshot
> > > RemainAfterExit=yes
> > > ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman
> > > start %i
> > > ExecStop=/usr/bin/podman stop -t 2 %i
> > >
> > > [Install]
> > > WantedBy=multi-user.target
> > >
> > > Questions:
> > >
> > > a) Why isn't some equivalent of this unit shipped with podman?
> > > Or, am
> > > I missing some package that ships it?
> > > b) Why isn't the necessary locking built into podman itself? Or,
> > > is it
> > > a bug in containernetworking-plugins?
> >
> > These containers aren't using static IPs, correct?
> >
> > I can recall an issue where static IP allocations were leaving
> > address
> > reservations around after reboot, causing issues... But that should
> > be
> > fixed on the Podman we ship in F31.
> >
> > Otherwise, this sounds suspiciously like a CNI bug. I would hope
> > that
> > CNI has sufficient locking to prevent this from racing, but I could
> > be
> > wrong.
> >
> > Also, you should try using `podman generate systemd` for unit
> > files.
> > Looking at your unit files, I don't think they operate as
> > advertised
> > (`start --attach` can exit while the container is still running, so
> > tracking it is not a reliable way of tracking the container).
> >
> > Thanks,
> > Matt Heon
> >
> > > --
> > > Alexander E. Patrakov
> > > _______________________________________________
> > > Podman mailing list -- podman(a)lists.podman.io
> > > To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> --
> Alexander E. Patrakov
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
--===============0466537926869464443==--
From smccarty at redhat.com Tue Nov 5 11:53:51 2019
Content-Type: multipart/mixed; boundary="===============1794120593391317596=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: feasible to upgrade podman on CentOS 8 to current
version?
Date: Tue, 05 Nov 2019 06:54:12 -0500
Message-ID:
In-Reply-To: b18d05a5-57f8-6663-21eb-99ded238acf5@scrit.ch
--===============1794120593391317596==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Long story, but we weren't able to get an update into the RHEL 8.0 - 12
week release (aka six weeks after RHEL 8.0 launched). Instead we have to
wait for RHEL 8.1. This, combined with the fact that Podman is moving very
quickly, has created the perception of a very old version on
RHEL/CentOS. Our next updates to RHEL will be in 8.1 (impending), 12 weeks
after that, and again at 8.2. In a nutshell, container-tools:rhel8 should be
updated every 12 weeks from now on.
CentOS Stream really plays no role in this. This is all set by the RHEL
clock. In the meantime, one of my hacks has been to ride off of the work
Lokesh does here:
https://cbs.centos.org/koji/packageinfo?packageID=6853
But, sadly, I don't think he has done anything for RHEL 8 yet. RHEL 8.1
with podman 1.4.4 should be out any day. Then we should have podman 1.6.X
in RHEL 8.1 about 12 weeks later (pretty good - hopefully "surprisingly"?
:-)
Best Regards
Scott M
On Mon, Nov 4, 2019 at 4:26 PM mh wrote:
> > On Wed, Oct 09, 2019 at 09:03:41AM -0400, Robert P. J. Day wrote:
> >>
> >> i just upgraded a CentOS box to CentOS 8, and i can see that the
> >> version of podman is (unsurprisingly) a bit dated:
> >>
> >> $ podman --version
> >> podman version 1.0.2-dev
> >>
> >> compared to my fedora 30 system:
> >>
> >> $ podman --version
> >> podman version 1.6.1
> >>
> >> is it feasible to try to download and build from source to get the
> >> latest version on my CentOS system, or would that just be more trouble
> >> than it's worth?
> >
> > I guess building from Makefile should work just fine..
> >
> > If you'd like to try building from fedora 31 rpm spec file,
> > see: https://src.fedoraproject.org/rpms/podman/blob/f31/f/podman.spec
> > I try to keep it buildable on CentOS7 (haven't tried CentOS8 yet).
> >
> > CentOS Stream (once available) should hopefully address the availability
> issue.
> >
> > I'm not sure of ETA, but I'm thinking I'll enable epel8 COPR for this as
> a
> > temporary solution in case CentOS Stream takes too long.
> > (Let me get back on this..)
>
> I am not able to build it on CentOS 8:
>
> $ mock -r epel-8-x86_64 --rebuild
> /home/mh/fedora/buildah/buildah-1.11.4-3.fc31.src.rpm
>
> [...]
> ERROR: Exception(/home/mh/fedora/buildah/buildah-1.11.4-3.fc31.src.rpm)
> Config(epel-8-x86_64) 1 minutes 36 seconds
> INFO: Results and/or logs in: /var/lib/mock/epel-8-x86_64/result
> INFO: Cleaning up build root ('cleanup_on_failure=True')
> Start: clean chroot
> Finish: clean chroot
> ERROR: Command failed:
> # /usr/bin/dnf builddep --installroot /var/lib/mock/epel-8-x86_64/root/
> --releasever 8 --setopt=deltarpm=False --allowerasing
> --disableplugin=local --disableplugin=spacewalk --disableplugin=local
> --disableplugin=spacewalk
>
> /var/lib/mock/epel-8-x86_64/root//builddir/build/SRPMS/buildah-1.11.4-3.el8.src.rpm
> --setopt=tsflags=nocontexts
> No matches found for the following disable plugin patterns: local,
> spacewalk
> CentOS-8 - Base
> 10 kB/s | 3.9 kB 00:00
> CentOS-8 - Base
> 15 MB/s | 7.9 MB 00:00
> CentOS-8 - AppStream
> 17 kB/s | 4.3 kB 00:00
> CentOS-8 - AppStream
> 8.4 MB/s | 6.3 MB 00:00
> CentOS-8 - PowerTools
> 12 kB/s | 4.3 kB 00:00
> CentOS-8 - PowerTools
> 599 kB/s | 1.8 MB 00:03
> CentOS-8 - Extras
> 3.6 kB/s | 1.5 kB 00:00
> No matching package to install: 'btrfs-progs-devel'
> No matching package to install: 'go-md2man'
> No matching package to install: 'libseccomp-static'
> Package make-1:4.2.1-9.el8.x86_64 is already installed.
> Not all dependencies satisfied
> Error: Some packages could not be found.
>
>
> Anyone has a more recent version for EL8?
>
> ~mh
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============1794120593391317596==--
From smccarty at redhat.com Tue Nov 5 11:57:09 2019
Content-Type: multipart/mixed; boundary="===============9166587576933754280=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Tue, 05 Nov 2019 06:57:30 -0500
Message-ID:
In-Reply-To: CAN_LGv1+LpDM-bhpT=qwBDEdCgdauF5=ELwf83GYv3W-Tcn6eQ@mail.gmail.com
--===============9166587576933754280==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Alexander,
I don't quite understand the docs bug. Could you please file the BZ
and send it to me. I am happy to drive our docs team to update to use the
"podman generate systemd" stuff instead of manually copy/pasting/modifying
the configs in a static doc.
Best Regards
Scott M
On Mon, Nov 4, 2019 at 3:41 PM Alexander E. Patrakov
wrote:
> "Matt,
>
> no, I don't use static IPs. I let podman allocate them. I have already
> tried `podman generate systemd` as per earlier suggestion.
>
> The issue is definitely not with stale reservations persisting across
> a reboot, otherwise adding "flock" would not have helped.
>
> Regarding the "`start --attach` can exit while the container is still
> running" comment: if it is true, please ask the appropriate person to
> fix the systemd unit example in RHEL7 documentation.
>
> On Tue, 5 Nov 2019 at 01:19, Matt Heon wrote:
> >
> > On 2019-11-04 23:40, Alexander E. Patrakov wrote:
> > >Hello.
> > >
> > >I have tried Podman in Fedora 31. Not a rootless setup.
> > >
> > >Software versions:
> > >
> > >podman-1.6.2-2.fc31.x86_64
> > >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> > >
> > >I have created two containers:
> > >
> > ># podman container run -d --name nginx_1 -p 80:80 nginx
> > ># podman container run -d --name nginx_2 -p 81:80 nginx
> > >
> > >Then I wanted to make sure that they start on boot.
> > >
> > >According to RHEL 7 documentation,
> > >
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
> > >, I am supposed to create systemd units. OK, let's take the documented
> > >form of the unit and turn it into a template:
> > >
> > >[Unit]
> > >Description=Container %i
> > >
> > >[Service]
> > >ExecStart=/usr/bin/podman start -a %i
> > >ExecStop=/usr/bin/podman stop -t 2 %i
> > >
> > >[Install]
> > >WantedBy=multi-user.target
> > >
> > >This doesn't work if there is more than one container. The error
> > >is:
> > >
> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> > >level=error msg="Error adding network: failed to allocate for range 0:
> > >10.88.0.19 has been allocated to
> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > >duplicate allocation is not allowed"
> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> > >level=error msg="Error while adding pod to CNI network \"podman\":
> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > >duplicate allocation is not allowed"
> > >Nov 04 21:35:57 podman[2268]: Error: unable to start container
> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> > >error configuring network namespace for container
> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> > >duplicate allocation is not allowed
> > >
> > >(as you can see, the conflict is against the container itself)
> > >
> > >Apparently different runs of podman need to be serialized against each
> > >other. This works:
> > >
> > >[Unit]
> > >Description=Container %i
> > >Wants=network-online.target
> > >After=network-online.target
> > >
> > >[Service]
> > >Type=oneshot
> > >RemainAfterExit=yes
> > >ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
> > >ExecStop=/usr/bin/podman stop -t 2 %i
> > >
> > >[Install]
> > >WantedBy=multi-user.target
> > >
> > >Questions:
> > >
> > >a) Why isn't some equivalent of this unit shipped with podman? Or, am
> > >I missing some package that ships it?
> > >b) Why isn't the necessary locking built into podman itself? Or, is it
> > >a bug in containernetworking-plugins?
> >
> > These containers aren't using static IPs, correct?
> >
> > I can recall an issue where static IP allocations were leaving address
> > reservations around after reboot, causing issues... But that should be
> > fixed on the Podman we ship in F31.
> >
> > Otherwise, this sounds suspiciously like a CNI bug. I would hope that
> > CNI has sufficient locking to prevent this from racing, but I could be
> > wrong.
> >
> > Also, you should try using `podman generate systemd` for unit files.
> > Looking at your unit files, I don't think they operate as advertised
> > (`start --attach` can exit while the container is still running, so
> > tracking it is not a reliable way of tracking the container).
> >
> > Thanks,
> > Matt Heon
> >
> > >
> > >--
> > >Alexander E. Patrakov
> > >_______________________________________________
> > >Podman mailing list -- podman(a)lists.podman.io
> > >To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
>
> --
> Alexander E. Patrakov
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============9166587576933754280==--
From patrakov at gmail.com Tue Nov 5 12:35:32 2019
Content-Type: multipart/mixed; boundary="===============2818757490462703162=="
MIME-Version: 1.0
From: Alexander E. Patrakov
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Tue, 05 Nov 2019 17:35:15 +0500
Message-ID:
In-Reply-To: CAL+7UBY1fsqeHuFUJBnmR7Z_8SzHFoydD6P4hKuiEnvmGVc8yQ@mail.gmail.com
--===============2818757490462703162==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
https://bugzilla.redhat.com/show_bug.cgi?id=1768866
On Tue, 5 Nov 2019 at 16:56, Scott McCarty wrote:
>
> Alexander,
> I don't quite understand the docs bug. Could you please file the BZ
> and send it to me. I am happy to drive our docs team to update to use
> the "podman generate systemd" stuff instead of manually
> copy/pasting/modifying the configs in a static doc.
>
> Best Regards
> Scott M
>
> On Mon, Nov 4, 2019 at 3:41 PM Alexander E. Patrakov wrote:
>>
>> "Matt,
>>
>> no, I don't use static IPs. I let podman allocate them. I have already
>> tried `podman generate systemd` as per earlier suggestion.
>>
>> The issue is definitely not with stale reservations persisting across
>> a reboot, otherwise adding "flock" would not have helped.
>>
>> Regarding the "`start --attach` can exit while the container is still
>> running" comment: if it is true, please ask the appropriate person to
>> fix the systemd unit example in RHEL7 documentation.
>>
>> On Tue, 5 Nov 2019 at 01:19, Matt Heon wrote:
>> >
>> > On 2019-11-04 23:40, Alexander E. Patrakov wrote:
>> > >Hello.
>> > >
>> > >I have tried Podman in Fedora 31. Not a rootless setup.
>> > >
>> > >Software versions:
>> > >
>> > >podman-1.6.2-2.fc31.x86_64
>> > >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
>> > >
>> > >I have created two containers:
>> > >
>> > ># podman container run -d --name nginx_1 -p 80:80 nginx
>> > ># podman container run -d --name nginx_2 -p 81:80 nginx
>> > >
>> > >Then I wanted to make sure that they start on boot.
>> > >
>> > >According to RHEL 7 documentation,
>> > >https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
>> > >, I am supposed to create systemd units. OK, let's take the documented
>> > >form of the unit and turn it into a template:
>> > >
>> > >[Unit]
>> > >Description=Container %i
>> > >
>> > >[Service]
>> > >ExecStart=/usr/bin/podman start -a %i
>> > >ExecStop=/usr/bin/podman stop -t 2 %i
>> > >
>> > >[Install]
>> > >WantedBy=multi-user.target
>> > >
>> > >This doesn't work if there is more than one container. The error
>> > >is:
>> > >
>> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>> > >level=error msg="Error adding network: failed to allocate for range 0:
>> > >10.88.0.19 has been allocated to
>> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> > >duplicate allocation is not allowed"
>> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
>> > >level=error msg="Error while adding pod to CNI network \"podman\":
>> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
>> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> > >duplicate allocation is not allowed"
>> > >Nov 04 21:35:57 podman[2268]: Error: unable to start container
>> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>> > >error configuring network namespace for container
>> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
>> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
>> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
>> > >duplicate allocation is not allowed
>> > >
>> > >(as you can see, the conflict is against the container itself)
>> > >
>> > >Apparently different runs of podman need to be serialized against each
>> > >other. This works:
>> > >
>> > >[Unit]
>> > >Description=Container %i
>> > >Wants=network-online.target
>> > >After=network-online.target
>> > >
>> > >[Service]
>> > >Type=oneshot
>> > >RemainAfterExit=yes
>> > >ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
>> > >ExecStop=/usr/bin/podman stop -t 2 %i
>> > >
>> > >[Install]
>> > >WantedBy=multi-user.target
>> > >
>> > >Questions:
>> > >
>> > >a) Why isn't some equivalent of this unit shipped with podman? Or, am
>> > >I missing some package that ships it?
>> > >b) Why isn't the necessary locking built into podman itself? Or, is it
>> > >a bug in containernetworking-plugins?
>> >
>> > These containers aren't using static IPs, correct?
>> >
>> > I can recall an issue where static IP allocations were leaving address
>> > reservations around after reboot, causing issues... But that should be
>> > fixed on the Podman we ship in F31.
>> >
>> > Otherwise, this sounds suspiciously like a CNI bug. I would hope that
>> > CNI has sufficient locking to prevent this from racing, but I could be
>> > wrong.
>> >
>> > Also, you should try using `podman generate systemd` for unit files.
>> > Looking at your unit files, I don't think they operate as advertised
>> > (`start --attach` can exit while the container is still running, so
>> > tracking it is not a reliable way of tracking the container).
>> >
>> > Thanks,
>> > Matt Heon
>> >
>> > >
>> > >--
>> > >Alexander E. Patrakov
>> > >_______________________________________________
>> > >Podman mailing list -- podman(a)lists.podman.io
>> > >To unsubscribe send an email to podman-leave(a)lists.podman.io
>>
>>
>>
>> --
>> Alexander E. Patrakov
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
>
> --
>
> --
>
> Scott McCarty, RHCA
> Product Management - Containers, Red Hat Enterprise Linux & OpenShift
> Email: smccarty(a)redhat.com
> Phone: 312-660-3535
> Cell: 330-807-1043
> Web: http://crunchtools.com
>
> Have you ever wondered what happens behind the scenes when you type
> www.redhat.com into a browser and hit enter?
> https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--
Alexander E. Patrakov
--===============2818757490462703162==--
From bryan.hepworth at gmail.com Tue Nov 5 14:24:12 2019
Content-Type: multipart/mixed; boundary="===============7865964974439365896=="
MIME-Version: 1.0
From: bryan.hepworth at gmail.com
To: podman at lists.podman.io
Subject: [Podman] ubi8 epel8
Date: Tue, 05 Nov 2019 14:24:07 +0000
Message-ID: <20191105142407.6000.81008@lists.podman.io>
--===============7865964974439365896==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Just a quick question..
Is there a best practice for getting EPEL into a ubi8 container?
I'm trying to do this with yum -y install and the EPEL URL. Do I need to
enable anything else?
--===============7865964974439365896==--
From dwalsh at redhat.com Tue Nov 5 14:38:43 2019
Content-Type: multipart/mixed; boundary="===============8811728784995114782=="
MIME-Version: 1.0
From: Daniel Walsh
To: podman at lists.podman.io
Subject: [Podman] Re: ubi8 epel8
Date: Tue, 05 Nov 2019 09:38:28 -0500
Message-ID: <5555f1e0-c297-628f-e826-0c743c626585@redhat.com>
In-Reply-To: 20191105142407.6000.81008@lists.podman.io
--===============8811728784995114782==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 11/5/19 9:24 AM, bryan.hepworth(a)gmail.com wrote:
> Just a quick question..
> Is there a best practice for getting EPEL into a ubi8 container?
>
> I'm trying to do this with yum -y install and the EPEL URL. Do I need
> to enable anything else?
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
Not really something we cover with Podman.
Scott, can you answer this one?
--===============8811728784995114782==--
From jwboyer at redhat.com Tue Nov 5 14:47:41 2019
Content-Type: multipart/mixed; boundary="===============1700063062631515228=="
MIME-Version: 1.0
From: Josh Boyer
To: podman at lists.podman.io
Subject: [Podman] Re: ubi8 epel8
Date: Tue, 05 Nov 2019 09:47:23 -0500
Message-ID:
In-Reply-To: 5555f1e0-c297-628f-e826-0c743c626585@redhat.com
--===============1700063062631515228==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Tue, Nov 5, 2019 at 9:39 AM Daniel Walsh wrote:
>
> On 11/5/19 9:24 AM, bryan.hepworth(a)gmail.com wrote:
> > Just a quick question..
> > Is there a best practice for getting EPEL into a ubi8 container?
> >
> > I'm trying to do this with yum -y install and the EPEL URL. Do I need
> > to enable anything else?
That works. There's not really a better way. You need to get the
repo file included somehow, and either that approach or copying a
local repo file in during an image build works fine.
If you want to then use that as a base image for other container
builds that need content from EPEL, you can build that image and use
it in the FROM line for any subsequent containers.
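Something like this, for example (a sketch, not an official recipe;
the base image tag and EPEL release URL below are the usual public
ones, but verify them for your environment):

# Containerfile: build a reusable ubi8 base image with EPEL enabled.
FROM registry.access.redhat.com/ubi8/ubi
RUN yum -y install \
    https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

# Subsequent builds can then simply start from the result:
#   FROM <your-registry>/ubi8-epel
#   RUN yum -y install <some-epel-package>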
josh
--===============1700063062631515228==--
From mh+podman at scrit.ch Tue Nov 5 20:57:39 2019
Content-Type: multipart/mixed; boundary="===============8112152505416167639=="
MIME-Version: 1.0
From: mh
To: podman at lists.podman.io
Subject: [Podman] Re: feasible to upgrade podman on CentOS 8 to current
version?
Date: Tue, 05 Nov 2019 21:57:28 +0100
Message-ID:
In-Reply-To: CAL+7UBah3bSfVMPwexwAWS3Dv74oaV120=HR8T3U8+vULp199w@mail.gmail.com
--===============8112152505416167639==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 05.11.19 12:54, Scott McCarty wrote:
> Long story, but we weren't able to get an update into the RHEL 8.0 - 12
> week release (aka six weeks after RHEL 8.0 launched). Instead we have to
> wait for RHEL 8.1. This, combined with the fact that Podman is moving
> very quickly, has created the perception of a very old version
> on RHEL/CentOS. Our next updates to RHEL will be in 8.1 (impending), 12
> weeks after that, and again at 8.2. In a nutshell, container-tools:rhel8
> should be updated every 12 weeks from now on.
8.1 is here :) and yay for faster updates!
~mh
--===============8112152505416167639==--
From mh+podman at scrit.ch Tue Nov 5 22:48:45 2019
Content-Type: multipart/mixed; boundary="===============0440169577693205382=="
MIME-Version: 1.0
From: mh
To: podman at lists.podman.io
Subject: [Podman] userns=keep-id and volumes requires all paths as user?
Date: Tue, 05 Nov 2019 23:48:36 +0100
Message-ID: <2373fa0f-2fb0-0eb6-3704-4376270f26ad@scrit.ch>
--===============0440169577693205382==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Hi All,
trying to do the following, but it won't work on either fedora or EL7
$ cat /etc/fedora-release
Fedora release 30 (Thirty)
$ podman version
Version: 1.6.2
RemoteAPI Version: 1
Go Version: go1.12.10
OS/Arch: linux/amd64
$ id -u
1000
$ id -g
1000
$ mkdir /tmp/foo/bar -p
$ chmod 0750 /tmp/foo /tmp/foo/bar
$ echo hello > /tmp/foo/bar/msg
$ podman run -it --userns=keep-id -v \
/tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
hello
-> this works
$ sudo chown root /tmp/foo
$ ls -anl /tmp/foo
total 0
drwxr-x---. 3 0 1000 60 5. Nov 23:29 .
drwxrwxrwt. 30 0 0 2420 5. Nov 23:34 ..
drwxr-x---. 2 1000 1000 60 5. Nov 23:30 bar
$ podman run -it --userns=keep-id -v \
/tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
Error: time="2019-11-05T23:35:13+01:00" level=warning msg="exit status 1"
time="2019-11-05T23:35:13+01:00" level=error
msg="container_linux.go:346: starting container process caused
\"process_linux.go:449: container init caused \\\"rootfs_linux.go:58:
mounting \\\\\\\"/tmp/foo/bar\\\\\\\" to rootfs
\\\\\\\"/home/mh/.local/share/containers/storage/overlay/d7b7bfe26e90a616a818c9210ad63da0d74c0c13c0b78c671034c7a6bb9e5cde/merged\\\\\\\"
at \\\\\\\"/data\\\\\\\" caused \\\\\\\"stat /tmp/foo/bar: permission
denied\\\\\\\"\\\"\""
container_linux.go:346: starting container process caused
"process_linux.go:449: container init caused \"rootfs_linux.go:58:
mounting \\\"/tmp/foo/bar\\\" to rootfs
\\\"/home/mh/.local/share/containers/storage/overlay/d7b7bfe26e90a616a818c9210ad63da0d74c0c13c0b78c671034c7a6bb9e5cde/merged\\\"
at \\\"/data\\\" caused \\\"stat /tmp/foo/bar: permission denied\\\"\"":
OCI runtime permission denied error
-> this fails somehow, although my user has rights in that path.
$ sudo chmod 0755 /tmp/foo
$ ls -anl /tmp/foo
total 0
drwxr-xr-x. 3 0 1000 60 5. Nov 23:29 .
drwxrwxrwt. 30 0 0 2420 5. Nov 23:35 ..
drwxr-x---. 2 1000 1000 60 5. Nov 23:30 bar
$ podman run -it --userns=keep-id -v \
/tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
hello
So whenever a directory in the path to the volume that should go into my
container is not browseable by my uid (although my group can browse it!),
I cannot mount it as a volume.
debug logs won't give any further info.
Why do I want to do that?
I have user directories that are purely used as chroots for SFTP through
sshd. Thus they *must* be root owned, but group readable/listable, so
the root of the chroot can't be overwritten. See sshd_config for more
details.
Now I'd like to run containers as the particular user, operating on some
directories within that chroot path.
By default these chroot directories are set up with 0750 and thus fail
in my case.
While 0755 might still be an option/workaround, I am wondering what the
reason for that requirement is.
It looks like a bug to me. Shall I open an issue, and if so, where?
~mh
--===============0440169577693205382==--
From rpjday at crashcourse.ca Wed Nov 6 10:24:23 2019
Content-Type: multipart/mixed; boundary="===============3704266949467437387=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] suggestions for container security vulnerability scanners?
Date: Wed, 06 Nov 2019 05:24:14 -0500
Message-ID:
--===============3704266949467437387==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
not really a podman-related question, but a colleague asked about
the options for open source container security scanners. i know about
commercial offerings like black duck; what are the choices of the
denizens of this list? thank you kindly.
rday
--
========================================================================
Robert P. J. Day                            Ottawa, Ontario, CANADA
http://crashcourse.ca
Twitter:  http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================
--===============3704266949467437387==--
From jruariveiro at gmail.com Wed Nov 6 11:39:13 2019
Content-Type: multipart/mixed; boundary="===============1318955816719203303=="
MIME-Version: 1.0
From: Jorge Rúa <jruariveiro at gmail.com>
To: podman at lists.podman.io
Subject: [Podman] Re: suggestions for container security vulnerability
scanners?
Date: Wed, 06 Nov 2019 11:38:53 +0000
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911060522580.7864@localhost.localdomain
--===============1318955816719203303==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
I'd recommend Clair [1].
[1] - https://github.com/coreos/clair
On Wed, 6 Nov 2019 at 10:24, Robert P. J. Day () wrote:
>
> not really a podman-related question, but a colleague asked about
> the options for open source container security scanners. i know about
> commercial offerings like black duck; what are the choices of the
> denizens of this list? thank you kindly.
>
> rday
>
> --
>
> ========================================================================
> Robert P. J. Day                            Ottawa, Ontario, CANADA
> http://crashcourse.ca
>
> Twitter:  http://twitter.com/rpjday
> LinkedIn: http://ca.linkedin.com/in/rpjday
> ========================================================================
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--===============1318955816719203303==--
From ashaikfe at gmail.com Wed Nov 6 12:34:53 2019
Content-Type: multipart/mixed; boundary="===============7184708940777599235=="
MIME-Version: 1.0
From: ashaikfe at gmail.com
To: podman at lists.podman.io
Subject: [Podman] Site & Process Automation Engineer
Date: Wed, 06 Nov 2019 12:34:48 +0000
Message-ID: <20191106123448.6000.39208@lists.podman.io>
--===============7184708940777599235==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
A Site and Process Automation Engineer is somebody who helps organizations
automate their production processes. The role of a Site and Process
Automation Engineer includes installing, testing, troubleshooting, and
maintaining automation systems.
Many firms in the IT and telecom sector need people with the skills to help
them automate repetitive tasks or tasks that require a high level of
precision. Site and process automation engineers focus on reducing the
labor required to perform operational activities in these industries,
helping firms to cut costs.
Read More: https://www.fieldengineer.com/skills/site-process-automation-engineer-ericsson
--===============7184708940777599235==--
From rozerfederatfieldengineer at gmail.com Wed Nov 6 12:51:35 2019
Content-Type: multipart/mixed; boundary="===============4500100492601052513=="
MIME-Version: 1.0
From: rozerfederatfieldengineer at gmail.com
To: podman at lists.podman.io
Subject: [Podman] Cloud Integration Engineer
Date: Wed, 06 Nov 2019 12:51:29 +0000
Message-ID: <20191106125129.6000.19694@lists.podman.io>
--===============4500100492601052513==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
A cloud integration engineer is somebody who helps migrate existing company
network and IT assets into the cloud. By integrating a firm's systems
with cloud services, cloud integration engineers help firms improve
accessibility, backup, and connectivity.
https://www.fieldengineer.com/skills/cloud-integration-engineer-ericsson
--===============4500100492601052513==--
From kaitlyn.kristy9494 at gmail.com Wed Nov 6 15:05:53 2019
Content-Type: multipart/mixed; boundary="===============3645635763929081855=="
MIME-Version: 1.0
From: kaitlyn.kristy9494 at gmail.com
To: podman at lists.podman.io
Subject: [Podman] A Global Marketplace connecting Engineers and Businesses
Date: Wed, 06 Nov 2019 15:05:47 +0000
Message-ID: <20191106150547.6000.73239@lists.podman.io>
--===============3645635763929081855==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
A Cisco Certified Network Professional executes and configures EIGRP-based
solutions. Certified professionals must develop multi-area OSPF networks
and configure OSPF routing as well. It is further the responsibility of
Certified Professionals to develop eBGP-based solutions and perform routing
configuration.
Professionals must know how to set up an IPv6-based solution, and they must
record all the results of their implementation. Certified Professionals are
responsible for IPv4 and IPv6 redistribution solutions as well. They
further must design and develop Layer 3 Path Control Solutions and
broadband connections. Certified professionals must also have a strong
understanding of what resources are required, and they must implement
VLAN-based solutions.
Read More: https://www.fieldengineer.com/skills/cisco-certified-network-professional
--===============3645635763929081855==--
From smccarty at redhat.com Wed Nov 6 15:08:18 2019
Content-Type: multipart/mixed; boundary="===============5899635238961464806=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: ubi8 epel8
Date: Wed, 06 Nov 2019 15:08:14 +0000
Message-ID:
In-Reply-To: CANyg3HhzuUgF4dAdSqtGzF2kU8WqxYFU4z7ufQz_-D+fF0MTrQ@mail.gmail.com
--===============5899635238961464806==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Bryan,
Short answer is mileage may vary. See question #21 here:
https://developers.redhat.com/articles/ubi-faq/#resources
Best Regards
Scott M
On Tue, Nov 5, 2019 at 9:47 AM Josh Boyer wrote:
> On Tue, Nov 5, 2019 at 9:39 AM Daniel Walsh wrote:
> >
> > On 11/5/19 9:24 AM, bryan.hepworth(a)gmail.com wrote:
> > > Just a quick question..
> > > Is there a best practice for getting epel in to a ubi8 container?
> > >
> > > I'm trying to do this with yum -y install and the epel url. Do I need
> > > to enable anything else?
>
> That works.  There's not really a better way.  You need to get the
> repo file included somehow, and either installing it from the URL or
> copying a local repo file in during an image build works fine.
>
> If you want to then use that as a base image for other container
> builds that need content from EPEL, you can build that image and use
> it in the FROM line for any subsequent containers.
>
> josh
>
--
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============5899635238961464806==--
From rpjday at crashcourse.ca Wed Nov 6 15:11:30 2019
Content-Type: multipart/mixed; boundary="===============2318373251124687288=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] Re: A Global Marketplace connecting Engineers and Businesses
Date: Wed, 06 Nov 2019 10:11:22 -0500
Message-ID:
In-Reply-To: 20191106150547.6000.73239@lists.podman.io
--===============2318373251124687288==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, 6 Nov 2019, kaitlyn.kristy9494(a)gmail.com wrote:
> A Cisco Certified Network Professional executes and configures
> EIGRP-based solutions. Certified professionals must develop multi-area
> OSPF networks and configure OSPF routing as well. It is further the
> responsibility of Certified Professionals to develop eBGP-based
> solutions and perform routing configuration.
>
> Professionals must know how to set up an IPv6-based solution, and they
> must record all the results of their implementation. Certified
> Professionals are responsible for IPv4 and IPv6 redistribution solutions
> as well. They further must design and develop Layer 3 Path Control
> Solutions and broadband connections. Certified professionals must also
> have a strong understanding of what resources are required, and they
> must implement VLAN-based solutions.
>
> Read More: https://www.fieldengineer.com/skills/cisco-certified-network-professional
marketing posts promoting that URL (fieldengineer.com) are currently
spamming numerous technical mailing lists. is there any way to simply
blacklist any post that contains that URL?
rday
--
========================================================================
Robert P. J. Day                            Ottawa, Ontario, CANADA
http://crashcourse.ca
Twitter:  http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================
--===============2318373251124687288==--
From tsweeney at redhat.com Wed Nov 6 15:15:18 2019
Content-Type: multipart/mixed; boundary="===============4844825315146607639=="
MIME-Version: 1.0
From: Tom Sweeney
To: podman at lists.podman.io
Subject: [Podman] Re: A Global Marketplace connecting Engineers and Businesses
Date: Wed, 06 Nov 2019 10:15:08 -0500
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911061010360.12836@localhost.localdomain
--===============4844825315146607639==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 11/06/2019 10:11 AM, Robert P. J. Day wrote:
> On Wed, 6 Nov 2019, kaitlyn.kristy9494(a)gmail.com wrote:
>
>> A Cisco Certified Network Professional executes and configures
>> EIGRP-based solutions. Certified professionals must develop multi-area
>> OSPF networks and configure OSPF routing as well. It is further the
>> responsibility of Certified Professionals to develop eBGP-based
>> solutions and perform routing configuration.
>>
>> Professionals must know how to set up an IPv6-based solution, and they
>> must record all the results of their implementation. Certified
>> Professionals are responsible for IPv4 and IPv6 redistribution
>> solutions as well. They further must design and develop Layer 3 Path
>> Control Solutions and broadband connections. Certified professionals
>> must also have a strong understanding of what resources are required,
>> and they must implement VLAN-based solutions.
>>
>> Read More: https://www.fieldengineer.com/skills/cisco-certified-network-professional
> marketing posts promoting that URL (fieldengineer.com) are currently
> spamming numerous technical mailing lists. is there any way to simply
> blacklist any post that contains that URL?
>
> rday
>
Looking into it; unfortunately, at the moment the list server seems to be
down. I think Postorius is doing something on their end to stop this.
t
--===============4844825315146607639==--
From tsweeney at redhat.com Wed Nov 6 15:25:45 2019
Content-Type: multipart/mixed; boundary="===============8875238034809774124=="
MIME-Version: 1.0
From: Tom Sweeney
To: podman at lists.podman.io
Subject: [Podman] Re: A Global Marketplace connecting Engineers and Businesses
Date: Wed, 06 Nov 2019 10:25:34 -0500
Message-ID: <65092cfb-0962-d352-2159-1ffe7009c228@redhat.com>
In-Reply-To: f398511a-4a0e-684b-251f-c7c2b080ad94@redhat.com
--===============8875238034809774124==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 11/06/2019 10:15 AM, Tom Sweeney wrote:
> On 11/06/2019 10:11 AM, Robert P. J. Day wrote:
>> On Wed, 6 Nov 2019, kaitlyn.kristy9494(a)gmail.com wrote:
>>
>>> A Cisco Certified Network Professional executes and configures
>>> EIGRP-based solutions. Certified professionals must develop multi-area
>>> OSPF networks and configure OSPF routing as well. It is further the
>>> responsibility of Certified Professionals to develop eBGP-based
>>> solutions and perform routing configuration.
>>>
>>> Professionals must know how to set up an IPv6-based solution, and they
>>> must record all the results of their implementation. Certified
>>> Professionals are responsible for IPv4 and IPv6 redistribution
>>> solutions as well. They further must design and develop Layer 3 Path
>>> Control Solutions and broadband connections. Certified professionals
>>> must also have a strong understanding of what resources are required,
>>> and they must implement VLAN-based solutions.
>>>
>>> Read More: https://www.fieldengineer.com/skills/cisco-certified-network-professional
>> marketing posts promoting that URL (fieldengineer.com) are currently
>> spamming numerous technical mailing lists. is there any way to simply
>> blacklist any post that contains that URL?
>>
>> rday
>>
> Looking into it; unfortunately, at the moment the list server seems to be
> down. I think Postorius is doing something on their end to stop this.
>
> t
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
I've banned the three bogus addresses that sent in emails earlier today.
I've also changed the subscription policy such that a moderator will have
to approve any confirmed subscription requests.  That should slow them
down a bit.
t
--===============8875238034809774124==--
From bbaude at redhat.com Wed Nov 6 15:39:27 2019
Content-Type: multipart/mixed; boundary="===============5162808997805854115=="
MIME-Version: 1.0
From: Brent Baude
To: podman at lists.podman.io
Subject: [Podman] Basic security principles for containers and container
runtimes
Date: Wed, 06 Nov 2019 09:39:16 -0600
Message-ID:
--===============5162808997805854115==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Folks,
We recently published an article on some basic security ideas around
Podman and container images. Socialize as you see fit.
https://www.redhat.com/sysadmin/users/brent-baude
--===============5162808997805854115==--
From bryan.hepworth at gmail.com Wed Nov 6 15:40:35 2019
Content-Type: multipart/mixed; boundary="===============2676443768624977666=="
MIME-Version: 1.0
From: bryan.hepworth at gmail.com
To: podman at lists.podman.io
Subject: [Podman] Re: ubi8 epel8
Date: Wed, 06 Nov 2019 15:40:31 +0000
Message-ID: <20191106154031.6000.70186@lists.podman.io>
In-Reply-To: CANyg3HhzuUgF4dAdSqtGzF2kU8WqxYFU4z7ufQz_-D+fF0MTrQ@mail.gmail.com
--===============2676443768624977666==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Thanks Josh.
I'm working through a few things and want to get everything to be based
on ubi8.
--===============2676443768624977666==--
From rpjday at crashcourse.ca Wed Nov 6 15:41:31 2019
Content-Type: multipart/mixed; boundary="===============0916199748636762134=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] Re: Basic security principles for containers and container
runtimes
Date: Wed, 06 Nov 2019 10:41:22 -0500
Message-ID:
In-Reply-To: d00df7f6a12a20ab81cb0773eb638ecda40f44fe.camel@redhat.com
--===============0916199748636762134==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, 6 Nov 2019, Brent Baude wrote:
> Folks,
>
> We recently published an article on some basic security ideas around
> Podman and container images. Socialize as you see fit.
>
> https://www.redhat.com/sysadmin/users/brent-baude
thank you kindly.
rday
--===============0916199748636762134==--
From bryan.hepworth at gmail.com Wed Nov 6 15:48:17 2019
Content-Type: multipart/mixed; boundary="===============8444333937420578383=="
MIME-Version: 1.0
From: bryan.hepworth at gmail.com
To: podman at lists.podman.io
Subject: [Podman] Re: ubi8 epel8
Date: Wed, 06 Nov 2019 15:48:13 +0000
Message-ID: <20191106154813.6000.48220@lists.podman.io>
In-Reply-To: CAL+7UBZgTzfeuO8r34ke03f=pjtN+J3ovc0bM=W0R9t75WSOig@mail.gmail.com
--===============8444333937420578383==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Thanks Scott
I'd spied the note about epel elsewhere, but am determined to work through
things. We use epel for R amongst other things and have done so reliably
for years.
The 8.1 release is making me change how we provide software to researchers
and do this via containers.
--===============8444333937420578383==--
From gscrivan at redhat.com Wed Nov 6 16:01:57 2019
Content-Type: multipart/mixed; boundary="===============6054555588009433647=="
MIME-Version: 1.0
From: Giuseppe Scrivano
To: podman at lists.podman.io
Subject: [Podman] Re: userns=keep-id and volumes requires all paths as user?
Date: Wed, 06 Nov 2019 17:01:43 +0100
Message-ID: <871rulf0yg.fsf@redhat.com>
In-Reply-To: 2373fa0f-2fb0-0eb6-3704-4376270f26ad@scrit.ch
--===============6054555588009433647==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
The issue here is that you are mapping your own user to the same id
inside of the user namespace.
That means the root user inside of the user namespace will be mapped to
another ID, which is the first ID specified in /etc/sub?id for your
user.  It is that user that will configure the mount namespace,
including the bind mount that fails in your test.
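For illustration, the keep-id mapping then looks roughly like this (a
sketch; the 100000/65536 values stand in for whatever your /etc/subuid
entry actually says, for a host uid of 1000):

    $ podman run --rm --userns=keep-id fedora:31 cat /proc/self/uid_map
             0     100000       1000
          1000       1000          1
          1001     101000      64536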
The OCI runtime, after changing uid/gid to the specified ones (with
--userns=keep-id they are the same $UID:$GID you have on the host), drops
any additional groups that the user had when launching the container.
I've added an option to crun 0.10.4 that makes it possible to not drop
additional groups for such cases; it can be enabled from podman with
"--annotation io.crun.keep_original_groups=1".  It might help you.
Giuseppe
mh writes:
> Hi All,
>
> trying to do the following, but it won't work on either fedora or EL7
>
> $ cat /etc/fedora-release
> Fedora release 30 (Thirty)
>
> $ podman version
> Version: 1.6.2
> RemoteAPI Version: 1
> Go Version: go1.12.10
> OS/Arch: linux/amd64
>
> $ id -u
> 1000
> $ id -g
> 1000
>
>
> $ mkdir /tmp/foo/bar -p
> $ chmod 0750 /tmp/foo /tmp/foo/bar
> $ echo hello > /tmp/foo/bar/msg
>
> $ podman run -it --userns=keep-id -v \
> /tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
> hello
>
> -> this works
>
> $ sudo chown root /tmp/foo
> $ ls -anl /tmp/foo
> total 0
> drwxr-x---. 3 0 1000 60 5. Nov 23:29 .
> drwxrwxrwt. 30 0 0 2420 5. Nov 23:34 ..
> drwxr-x---. 2 1000 1000 60 5. Nov 23:30 bar
>
> $ podman run -it --userns=keep-id -v \
> /tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
> Error: time="2019-11-05T23:35:13+01:00" level=warning msg="exit status 1"
> time="2019-11-05T23:35:13+01:00" level=error
> msg="container_linux.go:346: starting container process caused
> \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58:
> mounting \\\\\\\"/tmp/foo/bar\\\\\\\" to rootfs
> \\\\\\\"/home/mh/.local/share/containers/storage/overlay/d7b7bfe26e90a616a818c9210ad63da0d74c0c13c0b78c671034c7a6bb9e5cde/merged\\\\\\\"
> at \\\\\\\"/data\\\\\\\" caused \\\\\\\"stat /tmp/foo/bar: permission
> denied\\\\\\\"\\\"\""
> container_linux.go:346: starting container process caused
> "process_linux.go:449: container init caused \"rootfs_linux.go:58:
> mounting \\\"/tmp/foo/bar\\\" to rootfs
> \\\"/home/mh/.local/share/containers/storage/overlay/d7b7bfe26e90a616a818c9210ad63da0d74c0c13c0b78c671034c7a6bb9e5cde/merged\\\"
> at \\\"/data\\\" caused \\\"stat /tmp/foo/bar: permission denied\\\"\"":
> OCI runtime permission denied error
>
> -> this fails somehow, although my user has rights in that path.
>
> $ sudo chmod 0755 /tmp/foo
> $ ls -anl /tmp/foo
> total 0
> drwxr-xr-x. 3 0 1000 60 5. Nov 23:29 .
> drwxrwxrwt. 30 0 0 2420 5. Nov 23:35 ..
> drwxr-x---. 2 1000 1000 60 5. Nov 23:30 bar
>
> $ podman run -it --userns=keep-id -v \
> /tmp/foo/bar:/data:rw,Z fedora:31 cat /data/msg
> hello
>
> So whenever a directory in the path to the volume that should go into my
> container is not browseable by my uid (although my group can browse it!),
> I cannot mount it as a volume.
>
> debug logs won't give any further info.
>
> Why do I want to do that?
>
> I have user directories that are purely used as chroots for SFTP through
> sshd. Thus they *must* be root owned, but group readable/listable, so
> the root of the chroot can't be overwritten. See sshd_config for more
> details.
>
> Now I'd like to run containers as the particular user, operating on some
> directories within that chroot path.
>
> By default these chroot directories are set up with 0750 and thus fail
> in my case.
>
> While 0755 might still be an option/workaround, I am wondering what the
> reason for that requirement is.
>
> It looks like a bug to me. Shall I open an issue, and if so, where?
>
> ~mh
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
--===============6054555588009433647==--
From rpjday at crashcourse.ca Wed Nov 6 16:30:36 2019
Content-Type: multipart/mixed; boundary="===============7312579405271399782=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] can you define only a "RUN" runlabel for a container?
Date: Wed, 06 Nov 2019 11:30:26 -0500
Message-ID:
--===============7312579405271399782==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
reading brent's recent piece on security, and noticed the "podman
container runlabel" command which allows one to define a label for
convenience. however, every example i've seen of that uses precisely
the label of "RUN," as if that's the only possibility.
can you not define multiple runlabels for a single image? that seems
like the obvious thing to support, but if one looks at examples, it's
not clear.
what's the story here?
rday
--
========================================================================
Robert P. J. Day                            Ottawa, Ontario, CANADA
http://crashcourse.ca
Twitter:  http://twitter.com/rpjday
LinkedIn: http://ca.linkedin.com/in/rpjday
========================================================================
--===============7312579405271399782==--
From rpjday at crashcourse.ca Wed Nov 6 16:36:15 2019
Content-Type: multipart/mixed; boundary="===============5788707793368148634=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] terminology: why "podman container runlabel",
not "podman image runlabel"?
Date: Wed, 06 Nov 2019 11:36:04 -0500
Message-ID:
--===============5788707793368148634==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
more pedantic nitpickery, but i'm a stickler for terminology and
i've always defined "image" (or "container image") as, well, something
that *can* be run, and "container" as an image in the process of
execution.
so "podman container runlabel" seems awkward as it clearly(?) refers
to an image, not a container. am i overthinking this?
rday
--===============5788707793368148634==--
From rothberg at redhat.com Wed Nov 6 17:16:35 2019
Content-Type: multipart/mixed; boundary="===============0337389017810432610=="
MIME-Version: 1.0
From: Valentin Rothberg
To: podman at lists.podman.io
Subject: [Podman] Re: terminology: why "podman container runlabel",
not "podman image runlabel"?
Date: Wed, 06 Nov 2019 18:16:12 +0100
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911061134090.15690@localhost.localdomain
--===============0337389017810432610==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, Nov 6, 2019 at 5:36 PM Robert P. J. Day wrote:
>
> more pedantic nitpickery, but i'm a stickler for terminology and
> i've always defined "image" (or "container image") as, well, something
> that *can* be run, and "container" as an image in the process of
> execution.
>
> so "podman container runlabel" seems awkward as it clearly(?) refers
> to an image, not a container. am i overthinking this?
>
All (sub) commands to manage containers are placed under podman-container.
Same applies to podman-container-runlabel as it is meant to execute a
container (as specified in the image's label).
Kind regards,
Valentin
--===============0337389017810432610==--
From rpjday at crashcourse.ca Wed Nov 6 17:22:36 2019
Content-Type: multipart/mixed; boundary="===============6156406505458624730=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] Re: terminology: why "podman container runlabel",
not "podman image runlabel"?
Date: Wed, 06 Nov 2019 12:22:28 -0500
Message-ID:
In-Reply-To: CALxX1+dNsoeWtOFKSjf8gM2d_KvgTuQ0TC-kmx3GU_DUOBLY7Q@mail.gmail.com
--===============6156406505458624730==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, 6 Nov 2019, Valentin Rothberg wrote:
> On Wed, Nov 6, 2019 at 5:36 PM Robert P. J. Day wrote:
>
>   more pedantic nitpickery, but i'm a stickler for terminology and
>   i've always defined "image" (or "container image") as, well, something
>   that *can* be run, and "container" as an image in the process of
>   execution.
>
>   so "podman container runlabel" seems awkward as it clearly(?) refers
>   to an image, not a container. am i overthinking this?
>
>
> All (sub) commands to manage containers are placed under
> podman-container. Same applies to podman-container-runlabel as it is
> meant to execute a container (as specified in the image's label).
understood, thanks.
rday
--===============6156406505458624730==--
From rothberg at redhat.com Wed Nov 6 17:23:24 2019
Content-Type: multipart/mixed; boundary="===============1452642489631563246=="
MIME-Version: 1.0
From: Valentin Rothberg
To: podman at lists.podman.io
Subject: [Podman] Re: can you define only a "RUN" runlabel for a container?
Date: Wed, 06 Nov 2019 18:21:38 +0100
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911061125190.15429@localhost.localdomain
--===============1452642489631563246==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, Nov 6, 2019 at 5:31 PM Robert P. J. Day wrote:
>
> reading brent's recent piece on security, and noticed the "podman
> container runlabel" command which allows one to define a label for
> convenience. however, every example i've seen of that uses precisely
> the label of "RUN," as if that's the only possibility.
>
> can you not define multiple runlabels for a single image? that seems
> like the obvious thing to support, but if one looks at examples, it's
> not clear.
>
Yes, an image can have multiple "runlabels". The label to be used for
execution can be specified via the CLI and there is no requirement for it
to be named "RUN". It's described in the man page [1] but I understand the
question and think this example is a good addition to the man page which
should help to make it clearer.
[1] https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-container-runlabel.1.md
--===============1452642489631563246==--
From rpjday at crashcourse.ca Wed Nov 6 17:29:26 2019
Content-Type: multipart/mixed; boundary="===============6336022676501749787=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] Re: can you define only a "RUN" runlabel for a container?
Date: Wed, 06 Nov 2019 12:29:18 -0500
Message-ID:
In-Reply-To: CALxX1+ctVLp1a2+UfwEeuJo4EKzOKP-nVKr9R7UzZ7AOFxCqnQ@mail.gmail.com
--===============6336022676501749787==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, 6 Nov 2019, Valentin Rothberg wrote:
>
> On Wed, Nov 6, 2019 at 5:31 PM Robert P. J. Day wrote:
>
>   reading brent's recent piece on security, and noticed the "podman
>   container runlabel" command which allows one to define a label for
>   convenience. however, every example i've seen of that uses precisely
>   the label of "RUN," as if that's the only possibility.
>
>   can you not define multiple runlabels for a single image? that seems
>   like the obvious thing to support, but if one looks at examples, it's
>   not clear.
>
> Yes, an image can have multiple "runlabels". The label to be used
> for execution can be specified via the CLI and there is no
> requirement for it to be named "RUN". It's described in the man page
> [1] but I understand the question and think this example is a good
> addition to the man page which should help to make it clearer.
>
> [1] https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-container-runlabel.1.md
i would suggest two tweaks to make this absolutely clear. first, the
example should include at least two "LABEL" lines -- if all an example
ever supplies is a single LABEL line, it might still leave the
impression that only one is allowed.
also, use an example with a goofy name, to make it clear that the
label name is arbitrary, something like:
LABEL INSTALL ...
LABEL RUN ...
LABEL BUILDMYSTUFF ...
those changes would make it obvious what is supported, i think.
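(and, for completeness, a label other than RUN would then be invoked as,
say:

    $ podman container runlabel BUILDMYSTUFF myimage:latest

-- a sketch with a hypothetical label and image name, following the man
page's LABEL IMAGE argument order.)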
rday
--===============6336022676501749787==--
From rothberg at redhat.com Wed Nov 6 17:41:58 2019
Content-Type: multipart/mixed; boundary="===============8383757106802849599=="
MIME-Version: 1.0
From: Valentin Rothberg
To: podman at lists.podman.io
Subject: [Podman] Re: can you define only a "RUN" runlabel for a container?
Date: Wed, 06 Nov 2019 18:41:35 +0100
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911061227140.16681@localhost.localdomain
--===============8383757106802849599==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, Nov 6, 2019 at 6:29 PM Robert P. J. Day wrote:

> On Wed, 6 Nov 2019, Valentin Rothberg wrote:
>
> > On Wed, Nov 6, 2019 at 5:31 PM Robert P. J. Day wrote:
> >
> >   reading brent's recent piece on security, and noticed the "podman
> >   container runlabel" command which allows one to define a label for
> >   convenience. however, every example i've seen of that uses precisely
> >   the label of "RUN," as if that's the only possibility.
> >
> >   can you not define multiple runlabels for a single image? that seems
> >   like the obvious thing to support, but if one looks at examples, it's
> >   not clear.
> >
> > Yes, an image can have multiple "runlabels". The label to be used
> > for execution can be specified via the CLI and there is no
> > requirement for it to be named "RUN". It's described in the man page
> > [1] but I understand the question and think this example is a good
> > addition to the man page which should help to make it clearer.
> >
> > [1] https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-container-runlabel.1.md
>
> i would suggest two tweaks to make this absolutely clear. first, the
> example should include at least two "LABEL" lines -- if all an example
> ever supplies is a single LABEL line, it might still leave the
> impression that only one is allowed.
>
> also, use an example with a goofy name, to make it clear that the
> label name is arbitrary, something like:
>
> LABEL INSTALL ...
> LABEL RUN ...
> LABEL BUILDMYSTUFF ...
>
> those changes would make it obvious what is supported, i think.
That's great, thanks! Are you interested in opening a pull request upstream?
Kind regards,
Valentin
--===============8383757106802849599==--
From rpjday at crashcourse.ca Wed Nov 6 17:57:25 2019
Content-Type: multipart/mixed; boundary="===============5584149999865724987=="
MIME-Version: 1.0
From: Robert P. J. Day
To: podman at lists.podman.io
Subject: [Podman] Re: can you define only a "RUN" runlabel for a container?
Date: Wed, 06 Nov 2019 12:57:10 -0500
Message-ID:
In-Reply-To: CALxX1+e+c2NdNzYYFJfejj+WmZGq9YKdpVOHEsDGtb01aVJfew@mail.gmail.com
--===============5584149999865724987==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On Wed, 6 Nov 2019, Valentin Rothberg wrote:
> On Wed, Nov 6, 2019 at 6:29 PM Robert P. J. Day wrote:
> > On Wed, 6 Nov 2019, Valentin Rothberg wrote:
> >
> > > On Wed, Nov 6, 2019 at 5:31 PM Robert P. J. Day wrote:
> > >
> > >   reading brent's recent piece on security, and noticed the "podman
> > >   container runlabel" command which allows one to define a label for
> > >   convenience. however, every example i've seen of that uses
> > >   precisely the label of "RUN," as if that's the only possibility.
> > >
> > >   can you not define multiple runlabels for a single image? that
> > >   seems like the obvious thing to support, but if one looks at
> > >   examples, it's not clear.
> > >
> > > Yes, an image can have multiple "runlabels". The label to be used
> > > for execution can be specified via the CLI and there is no
> > > requirement for it to be named "RUN". It's described in the man page
> > > [1] but I understand the question and think this example is a good
> > > addition to the man page which should help to make it clearer.
> > >
> > > [1] https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-container-runlabel.1.md
> >
> >   i would suggest two tweaks to make this absolutely clear. first, the
> >   example should include at least two "LABEL" lines -- if all an
> >   example ever supplies is a single LABEL line, it might still leave
> >   the impression that only one is allowed.
> >
> >   also, use an example with a goofy name, to make it clear that the
> >   label name is arbitrary, something like:
> >
> >   LABEL INSTALL ...
> >   LABEL RUN ...
> >   LABEL BUILDMYSTUFF ...
> >
> > those changes would make it obvious what is supported, i think.
>
> That's great, thanks! Are you interested in opening a pull request
> upstream?
i'll take a look at that tonight if i can.
rday
--===============5584149999865724987==--
From mh+podman at scrit.ch Wed Nov 6 21:19:20 2019
Content-Type: multipart/mixed; boundary="===============4099521628838738748=="
MIME-Version: 1.0
From: mh
To: podman at lists.podman.io
Subject: [Podman] Re: userns=keep-id and volumes requires all paths as user?
Date: Wed, 06 Nov 2019 22:19:07 +0100
Message-ID:
In-Reply-To: 871rulf0yg.fsf@redhat.com
--===============4099521628838738748==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
On 06.11.19 17:01, Giuseppe Scrivano wrote:
> The issue here is that you are mapping your own user to the same id
> inside of the user namespace.
>
> That means the root user inside of the user namespace will be mapped to
> another ID, which is the first ID specified in /etc/sub?id for your
> user.  It is that user that will configure the mount namespace,
> including the bind mount that fails in your test.
Thank you! This explanation really helped, and it confirms my theory
about why it's not working: the mount is done as that "fake" root uid.
I was able to work around it by doing:
setfacl -m user:FIRST_SUBUID:rx /tmp/foo
This made the container start :)
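(FIRST_SUBUID here is the first subordinate uid assigned to my user; a
sketch of how to look it up, assuming the usual user:start:count format
of /etc/subuid, with made-up values:

    $ grep "^$USER:" /etc/subuid
    mh:100000:65536

where 100000 would be FIRST_SUBUID.)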
> The OCI runtime, after changing uid/gid to the specified ones (with
> --userns=keep-id they are the same $UID:$GID you have on the host),
> drops any additional groups that the user had when launching the
> container.
>
> I've added an option to crun 0.10.4 that makes it possible to not drop
> additional groups for such cases; it can be enabled from podman with
> "--annotation io.crun.keep_original_groups=1".  It might help you.
Good to know, though crun won't make it to EL7 I guess, so the approach
above is probably the way to go for me in that situation.
~mh
--===============4099521628838738748==--
From smccarty at redhat.com Wed Nov 6 22:20:26 2019
Content-Type: multipart/mixed; boundary="===============8293895631495245296=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: suggestions for container security vulnerability
scanners?
Date: Wed, 06 Nov 2019 17:20:50 -0500
Message-ID:
In-Reply-To: alpine.LFD.2.21.1911060522580.7864@localhost.localdomain
--===============8293895631495245296==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Robert,
Scanners like Clair are open source, but use data from the Linux
distros. To be honest, there's really nothing that great for content
layered on top of a Linux distro (pypi, Ruby Gems, home grown code, etc).
This stuff is expensive to scan, analyze and tag for vulnerabilities.
Scanners will try to use Mitre as a database, but honestly, you kinda get
what you pay for in this space. For me, I just rely on the errata [1] in
RHEL (and UBI) for "most" of my trust.
My 2c.
[1]: https://access.redhat.com/articles/2130961
Best Regards
Scott M
On Wed, Nov 6, 2019 at 5:25 AM Robert P. J. Day
wrote:
>
> not really a podman-related question, but a colleague asked about
> the options for open source container security scanners. i know about
> commercial offerings like black duck; what are the choices of the
> denizens of this list? thank you kindly.
>
> rday
>
> --
>
> ===========================================================================
> Robert P. J. Day Ottawa, Ontario, CANADA
> http://crashcourse.ca
>
> Twitter: http://twitter.com/rpjday
> LinkedIn: http://ca.linkedin.com/in/rpjday
> ===========================================================================
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============8293895631495245296==--
From smccarty at redhat.com Thu Nov 7 15:34:29 2019
Content-Type: multipart/mixed; boundary="===============1378555493266511359=="
MIME-Version: 1.0
From: Scott McCarty
To: podman at lists.podman.io
Subject: [Podman] Re: Locking issue?
Date: Thu, 07 Nov 2019 10:34:56 -0500
Message-ID:
In-Reply-To: CAN_LGv2WdgAgUMjaQbEy1K2jQdiXAFt40hpV-Pms5N7BcMbMSQ@mail.gmail.com
--===============1378555493266511359==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Alexander,
Thanks for filing this! I'll get it on our list to update!
Best Regards
Scott M
On Tue, Nov 5, 2019 at 7:35 AM Alexander E. Patrakov
wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1768866
>
> On Tue, 5 Nov 2019 at 16:56, Scott McCarty wrote:
> >
> > Alexander,
> > I don't quite understand the docs bug. Could you please file the BZ
> > and send it to me. I am happy to drive our docs team to update to use the
> > "podman generate systemd" stuff instead of manually copy/pasting/modifying
> > the configs in a static doc.
> >
> > Best Regards
> > Scott M
> >
> > On Mon, Nov 4, 2019 at 3:41 PM Alexander E. Patrakov wrote:
> >>
> >> "Matt,
> >>
> >> no, I don't use static IPs. I let podman allocate them. I have already
> >> tried `podman generate systemd` as per earlier suggestion.
> >>
> >> The issue is definitely not with stale reservations persisting across
> >> a reboot, otherwise adding "flock" would not have helped.
> >>
> >> Regarding the "`start --attach` can exit while the container is still
> >> running comment: if it is true, please ask the appropriate person to
> >> fix the systemd unit example in RHEL7 documentation.
> >>
> >> On Tue, 5 Nov 2019 at 01:19, Matt Heon wrote:
> >> >
> >> > On 2019-11-04 23:40, Alexander E. Patrakov wrote:
> >> > >Hello.
> >> > >
> >> > >I have tried Podman in Fedora 31. Not a rootless setup.
> >> > >
> >> > >Software versions:
> >> > >
> >> > >podman-1.6.2-2.fc31.x86_64
> >> > >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
> >> > >
> >> > >I have created two containers:
> >> > >
> >> > ># podman container run -d --name nginx_1 -p 80:80 nginx
> >> > ># podman container run -d --name nginx_2 -p 81:80 nginx
> >> > >
> >> > >Then I wanted to make sure that they start on boot.
> >> > >
> >> > >According to RHEL 7 documentation,
> >> > >
> >> > >https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman
> >> > >, I am supposed to create systemd units. OK, let's take the documented
> >> > >form of the unit and turn it into a template:
> >> > >
> >> > >[Unit]
> >> > >Description=Container %i
> >> > >
> >> > >[Service]
> >> > >ExecStart=/usr/bin/podman start -a %i
> >> > >ExecStop=/usr/bin/podman stop -t 2 %i
> >> > >
> >> > >[Install]
> >> > >WantedBy=multi-user.target
> >> > >
> >> > >This doesn't work if there is more than one container. The error
> >> > >is:
> >> > >
> >> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> >> > >level=error msg="Error adding network: failed to allocate for range 0:
> >> > >10.88.0.19 has been allocated to
> >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >> > >duplicate allocation is not allowed"
> >> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
> >> > >level=error msg="Error while adding pod to CNI network \"podman\":
> >> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
> >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >> > >duplicate allocation is not allowed"
> >> > >Nov 04 21:35:57 podman[2268]: Error: unable to start container
> >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> >> > >error configuring network namespace for container
> >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
> >> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
> >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
> >> > >duplicate allocation is not allowed
> >> > >
> >> > >(as you can see, the conflict is against the container itself)
> >> > >
> >> > >Apparently different runs of podman need to be serialized against each
> >> > >other. This works:
> >> > >
> >> > >[Unit]
> >> > >Description=Container %i
> >> > >Wants=network-online.target
> >> > >After=network-online.target
> >> > >
> >> > >[Service]
> >> > >Type=oneshot
> >> > >RemainAfterExit=yes
> >> > >ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
> >> > >ExecStop=/usr/bin/podman stop -t 2 %i
> >> > >
> >> > >[Install]
> >> > >WantedBy=multi-user.target
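> >> > >
> >> > >(assuming the file is installed as /etc/systemd/system/container@.service,
> >> > >each container is then enabled by instantiating the template, e.g.:
> >> > >
> >> > ># systemctl enable --now container@nginx_1.service
> >> > >)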
> >> > >
> >> > >Questions:
> >> > >
> >> > >a) Why isn't some equivalent of this unit shipped with podman? Or, am
> >> > >I missing some package that ships it?
> >> > >b) Why isn't the necessary locking built into podman itself? Or, is it
> >> > >a bug in containernetworking-plugins?
> >> >
> >> > These containers aren't using static IPs, correct?
> >> >
> >> > I can recall an issue where static IP allocations were leaving address
> >> > reservations around after reboot, causing issues... But that should be
> >> > fixed on the Podman we ship in F31.
> >> >
> >> > Otherwise, this sounds suspiciously like a CNI bug. I would hope that
> >> > CNI has sufficient locking to prevent this from racing, but I could be
> >> > wrong.
> >> >
> >> > Also, you should try using `podman generate systemd` for unit files.
> >> > Looking at your unit files, I don't think they operate as advertised
> >> > (`start --attach` can exit while the container is still running, so
> >> > tracking it is not a reliable way of tracking the container).
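> >> >
> >> > (e.g., roughly: podman generate systemd --name nginx_1 >
> >> > /etc/systemd/system/container-nginx_1.service -- the output file
> >> > name here is just illustrative)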
> >> >
> >> > Thanks,
> >> > Matt Heon
> >> >
> >> > >
> >> > >--
> >> > >Alexander E. Patrakov
> >> > >_______________________________________________
> >> > >Podman mailing list -- podman(a)lists.podman.io
> >> > >To unsubscribe send an email to podman-leave(a)lists.podman.io
> >>
> >>
> >>
> >> --
> >> Alexander E. Patrakov
> >> _______________________________________________
> >> Podman mailing list -- podman(a)lists.podman.io
> >> To unsubscribe send an email to podman-leave(a)lists.podman.io
> >
> >
> >
> > --
> >
> > Scott McCarty, RHCA
> > Product Management - Containers, Red Hat Enterprise Linux & OpenShift
> > Email: smccarty(a)redhat.com
> > Phone: 312-660-3535
> > Cell: 330-807-1043
> > Web: http://crunchtools.com
> >
> > Have you ever wondered what happens behind the scenes when you type
> www.redhat.com into a browser and hit enter?
> https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
>
>
>
> --
> Alexander E. Patrakov
>
--
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com
Have you ever wondered what happens behind the scenes when you type
www.redhat.com into a browser and hit enter?
https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
--===============1378555493266511359==--
From alsadi at gmail.com Tue Dec 10 15:13:09 2019
Content-Type: multipart/mixed; boundary="===============2306282477948179520=="
MIME-Version: 1.0
From: Muayyad AlSadi
To: podman at lists.podman.io
Subject: [Podman] init-path and should it be statically linked?
Date: Thu, 21 Nov 2019 15:39:36 +0000
Message-ID:
--===============2306282477948179520==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Hi,
on fedora 30 as root
# dnf install dumb-init
let's try podman as normal user
the following command does not work (busybox image)
$ podman run --rm -ti --init --init-path=/bin/dumb-init busybox /bin/sh
standard_init_linux.go:211: exec user process caused "no such file or
directory"
but when using fedora image it works
$ podman run --rm -ti --init --init-path=/bin/dumb-init \
    registry.fedoraproject.org/fedora-minimal:30 /bin/sh
but when using a statically linked dumb-init, as a normal user:
$ curl -sSL -o ~/.local/bin/dumb-init \
    https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_amd64
$ chmod +x ~/.local/bin/dumb-init
$ ldd ~/.local/bin/dumb-init
        not a dynamic executable
$ podman run --rm -ti --init --init-path=~/.local/bin/dumb-init \
    busybox /bin/sh
it works fine
so should fedora ship statically linked dumb-init?
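(The likely explanation, as far as I understand it: podman bind-mounts
the --init-path binary into the container, so a dynamically linked
dumb-init needs the *container's* shared libraries at runtime -- which
the fedora image provides and busybox does not. A quick check on the
host, output illustrative:

$ ldd /bin/dumb-init
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
)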
--===============2306282477948179520==--
From James.Ault at unnpp.gov Tue Dec 10 15:13:38 2019
Content-Type: multipart/mixed; boundary="===============8492464328634021181=="
MIME-Version: 1.0
From: Ault, James R (Contractor)
To: podman at lists.podman.io
Subject: [Podman] subscribe
Date: Tue, 19 Nov 2019 20:57:27 +0000
Message-ID: <>
--===============8492464328634021181==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
subscribe
-Jim Ault, HPC Future Studies, Naval Nuclear Laboratory
--===============8492464328634021181==--