Alexander,
     Thanks for filing this! I'll get it on our list to update!
Best Regards
Scott M
On Tue, Nov 5, 2019 at 7:35 AM Alexander E. Patrakov <patrakov(a)gmail.com>
wrote:
 https://bugzilla.redhat.com/show_bug.cgi?id=1768866
 Tue, Nov 5, 2019 at 16:56, Scott McCarty <smccarty(a)redhat.com>:
 >
 > Alexander,
 >      I don't quite understand the docs bug. Could you please file the BZ
 > and send it to me? I am happy to drive our docs team to update to use the
 > "podman generate systemd" approach instead of manually copying, pasting,
 > and modifying the configs in a static doc.
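 >      For reference, the updated doc would boil down to something roughly
 > like the following (a sketch only; the generated unit file name here is an
 > assumption, and `--name` just makes the unit use the container name rather
 > than its ID):
 >
 > # podman generate systemd --name nginx_1 > /etc/systemd/system/container-nginx_1.service
 > # systemctl daemon-reload
 > # systemctl enable --now container-nginx_1.service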
 >
 > Best Regards
 > Scott M
 >
 > On Mon, Nov 4, 2019 at 3:41 PM Alexander E. Patrakov <patrakov(a)gmail.com>
 wrote:
 >>
 >> "Matt,
 >>
 >> no, I don't use static IPs. I let podman allocate them. I have already
 >> tried `podman generate systemd` as per earlier suggestion.
 >>
 >> The issue is definitely not with stale reservations persisting across
 >> a reboot, otherwise adding "flock" would not have helped.
 >>
 >> Regarding the "`start --attach` can exit while the container is still
 >> running" comment: if it is true, please ask the appropriate person to
 >> fix the systemd unit example in the RHEL 7 documentation.
 >>
 >> Tue, Nov 5, 2019 at 01:19, Matt Heon <mheon(a)redhat.com>:
 >> >
 >> > On 2019-11-04 23:40, Alexander E. Patrakov wrote:
 >> > >Hello.
 >> > >
 >> > >I have tried Podman in Fedora 31. Not a rootless setup.
 >> > >
 >> > >Software versions:
 >> > >
 >> > >podman-1.6.2-2.fc31.x86_64
 >> > >containernetworking-plugins-0.8.2-2.1.dev.git485be65.fc31.x86_64
 >> > >
 >> > >I have created two containers:
 >> > >
 >> > ># podman container run -d --name nginx_1 -p 80:80 nginx
 >> > ># podman container run -d --name nginx_2 -p 81:80 nginx
 >> > >
 >> > >Then I wanted to make sure that they start on boot.
 >> > >
 >> > >According to RHEL 7 documentation,
 >> > >
 >> > >https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_at...
 >> > >, I am supposed to create systemd units. OK, let's take the documented
 >> > >form of the unit and turn it into a template:
 >> > >
 >> > >[Unit]
 >> > >Description=Container %i
 >> > >
 >> > >[Service]
 >> > >ExecStart=/usr/bin/podman start -a %i
 >> > >ExecStop=/usr/bin/podman stop -t 2 %i
 >> > >
 >> > >[Install]
 >> > >WantedBy=multi-user.target
 >> > >
 >> > >This doesn't work if there is more than one container. The error
 >> > >is:
 >> > >
 >> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
 >> > >level=error msg="Error adding network: failed to allocate for range 0:
 >> > >10.88.0.19 has been allocated to
 >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
 >> > >duplicate allocation is not allowed"
 >> > >Nov 04 21:35:57 podman[2268]: time="2019-11-04T21:35:57+05:00"
 >> > >level=error msg="Error while adding pod to CNI network \"podman\":
 >> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
 >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
 >> > >duplicate allocation is not allowed"
 >> > >Nov 04 21:35:57 podman[2268]: Error: unable to start container
 >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
 >> > >error configuring network namespace for container
 >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019:
 >> > >failed to allocate for range 0: 10.88.0.19 has been allocated to
 >> > >ace2de4405205a9a7674a2524cd67c1f0e395a9234b0456c55881a1a4add6019,
 >> > >duplicate allocation is not allowed
 >> > >
 >> > >(as you can see, the conflict is against the container itself)
 >> > >
 >> > >Apparently different runs of podman need to be serialized against each
 >> > >other. This works:
 >> > >
 >> > >[Unit]
 >> > >Description=Container %i
 >> > >Wants=network-online.target
 >> > >After=network-online.target
 >> > >
 >> > >[Service]
 >> > >Type=oneshot
 >> > >RemainAfterExit=yes
 >> > >ExecStart=flock /run/lock/subsys/container.lck /usr/bin/podman start %i
 >> > >ExecStop=/usr/bin/podman stop -t 2 %i
 >> > >
 >> > >[Install]
 >> > >WantedBy=multi-user.target
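 >> > >
 >> > >To use the template, enable one instance per container (assuming the
 >> > >file above is saved as, say, /etc/systemd/system/container@.service):
 >> > >
 >> > ># systemctl daemon-reload
 >> > ># systemctl enable --now container@nginx_1.service container@nginx_2.service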
 >> > >
 >> > >Questions:
 >> > >
 >> > >a) Why isn't some equivalent of this unit shipped with podman? Or am
 >> > >I missing some package that ships it?
 >> > >b) Why isn't the necessary locking built into podman itself? Or is it
 >> > >a bug in containernetworking-plugins?
 >> >
 >> > These containers aren't using static IPs, correct?
 >> >
 >> > I can recall an issue where static IP allocations were leaving address
 >> > reservations around after reboot, causing issues... But that should be
 >> > fixed in the Podman we ship in F31.
 >> >
 >> > Otherwise, this sounds suspiciously like a CNI bug. I would hope that
 >> > CNI has sufficient locking to prevent this from racing, but I could be
 >> > wrong.
 >> >
 >> > Also, you should try using `podman generate systemd` for unit files.
 >> > Looking at your unit files, I don't think they operate as advertised
 >> > (`start --attach` can exit while the container is still running, so
 >> > tracking that process is not a reliable way of tracking the container).
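 >> >
 >> > For comparison, the generated unit for nginx_1 looks roughly like this
 >> > (a rough sketch; the exact stop timeout and PIDFile path vary by Podman
 >> > version):
 >> >
 >> > [Unit]
 >> > Description=Podman container nginx_1
 >> >
 >> > [Service]
 >> > Restart=on-failure
 >> > ExecStart=/usr/bin/podman start nginx_1
 >> > ExecStop=/usr/bin/podman stop -t 10 nginx_1
 >> > Type=forking
 >> > PIDFile=/run/containers/storage/overlay-containers/<container-id>/userdata/conmon.pid
 >> >
 >> > [Install]
 >> > WantedBy=multi-user.target
 >> >
 >> > It tracks the conmon PID via PIDFile instead of the attached
 >> > `podman start -a` process, which is why it keeps tracking the container
 >> > even if the attach process exits.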
 >> >
 >> > Thanks,
 >> > Matt Heon
 >> >
 >> > >
 >> > >--
 >> > >Alexander E. Patrakov
 >>
 >>
 >>
 >> --
 >> Alexander E. Patrakov
 >
 > --
 >
 > Scott McCarty, RHCA
 > Product Management - Containers, Red Hat Enterprise Linux & OpenShift
 > Email: smccarty(a)redhat.com
 > Phone: 312-660-3535
 > Cell: 330-807-1043
 > Web: http://crunchtools.com
 >
 > Have you ever wondered what happens behind the scenes when you type
 > www.redhat.com into a browser and hit enter?
 > https://www.redhat.com/en/blog/what-happens-when-you-hit-enter
 --
 Alexander E. Patrakov
 
 
-- 
Scott McCarty, RHCA
Product Management - Containers, Red Hat Enterprise Linux & OpenShift
Email: smccarty(a)redhat.com
Phone: 312-660-3535
Cell: 330-807-1043
Web: http://crunchtools.com