shouldn't the current directory be the default context for "podman build"?
by Robert P. J. Day
"man podman-build" suggests that the context argument is optional:
SYNOPSIS
podman build [options] [context]
podman image build [options] [context]
...
If no context directory is specified, then Podman will assume
the current working directory as the build context, which
should contain the Containerfile.
But if I have a directory with nothing but a Containerfile, I get:
$ podman build
Error: no context directory specified, and no containerfile specified
$
OTOH, specifying the current directory as the context works:
$ podman build .
STEP 1: FROM alpine:latest
... etc etc ...
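Presumably naming the Containerfile explicitly (via the documented -f/--file
option) would also work here:
$ podman build -f Containerfile .
but that still leaves the question of why the bare invocation isn't treated
the same as "podman build .".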
thoughts?
rday
2 weeks
RunRoot & mistaken IDs
by lejeczek
Hi guys.
I experience this:
-> $ podman images
WARN[0000] RunRoot is pointing to a path
(/run/user/1007/containers) which is not writable. Most
likely podman will fail.
Error: creating events dirs: mkdir /run/user/1007:
permission denied
-> $ id
uid=2001(podmania) gid=2001(podmania) groups=2001(podmania)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
I think it might have something to do with the fact that I
changed the UID for the user, but why would this be?
How do I troubleshoot & fix it, ideally without a system reboot?
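(A sketch of a first check, on the assumption that the runtime dir is simply
left over from the old UID:
$ echo $XDG_RUNTIME_DIR        # still /run/user/1007?
$ id -u                        # now 2001
$ ls -ld /run/user/1007 /run/user/2001
$ grep -r runroot ~/.config/containers/ 2>/dev/null
Then, from a root shell, `loginctl terminate-user podmania` and log back in so
logind recreates /run/user/<new uid>; a hard-coded runroot in storage.conf
would also need updating by hand.)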
many thanks, L.
11 months
mqueue msg_max in rootless container
by Michael Ivanov
Hello!
I'm trying to run my application in a rootless podman container and I stumble
on the following problem: my program needs /proc/sys/fs/mqueue/msg_max to be at
least 256, but in the running container this value is just 10. When I try to
specify this parameter while running the image (--sysctl 'fs.mqueue.msg_max=256')
I get the following error:
Error: open /proc/sys/fs/mqueue/msg_max: Permission denied: OCI permission denied
and the container is not created.
My host, where the container is run, has this parameter set to 256. How can I
expose the current host setting for msg_max to my container?
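(One workaround that may be acceptable, assuming the application does not need
an isolated IPC namespace, is sharing the host's, so the container sees the
host's mqueue limits:
$ podman run --ipc=host ... <image>
at the cost of losing IPC isolation between the container and the host.)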
Best regards,
--
\ / | |
(OvO) | Михаил Иванов |
(^^^) | |
\^/ | E-mail: ivans(a)isle.spb.ru |
^ ^ | |
1 year, 1 month
Ansible `template` tasks and rootless podman volume content management
by Chris Evich
Hey podman community,
While exploring Ansible management of rootless podman on a remote host,
I ran into a stinky volume-contents idempotency issue. I have an
idea[0] on how to solve this, but thought I'd reach out and see if/how
others have dealt with this situation.
---
Here's the setup:
1. I'm running an Ansible playbook against a host for which I ONLY have
access to a non-root (user) account.
2. The playbook configures `quadlet` for `systemd` management of a
configuration (podman) volume and a pod with several containers in it
running services.
3. The contents of the podman volume are 10-30 configuration files,
owned by several different UIDs/GIDs within the allocated
user-namespace. For example, some files are owned by $UID:$GID, others
may be 100123:100123, and others could be 100321:100321 (depending on
the exact user-namespace allocation details).
4. Ansible uses the 'template' module to manage 10-30 configuration
files and directories destined for the rootless podman volume. Ref:
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/templ...
5. When configuration files "change", Ansible uses a handler to restart
the pod. Ref:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers...
---
The problem:
The 'template' module knows nothing about user-namespaces. Because it's
running as a regular user, it can't `chown` the files into the
user-namespace range (permission denied). So the template module is
CONSTANTLY (and needlessly) triggering the handler to restart the pod
(due to file ownership differences). Also as you'd expect, when
`template` sets the file's UID/GID wrong, the containerized services
fail on restart.
---
Idea[0]: (untested) For the `template` task, set
`ansible_python_interpreter` to a wrapper script that execs `podman
unshare /usr/bin/python3 "$@"`.
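A minimal sketch of that wrapper (untested; the path and name are made up for
illustration):
#!/bin/sh
# /usr/local/bin/podman-unshare-python3 (hypothetical location)
# Run the Ansible module's Python inside the rootless user namespace,
# where chown into the subuid/subgid range is permitted.
exec podman unshare /usr/bin/python3 "$@"
with `ansible_python_interpreter: /usr/local/bin/podman-unshare-python3` set
only on the tasks that write into the volume.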
--
Chris Evich (he/him), RHCA III
Senior Quality Assurance Engineer
If it ain't broke, your hammer isn't wide 'nough.
1 year, 4 months
podman shows logs slowly on Windows
by Александр Илюшкин
Hey guys, I've switched from docker to podman and I noticed that the command
podman logs <container name> works extremely slowly.
What should be done to fix this?
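(A hedged first thing to check, in case journald log collection inside the
podman machine VM is the bottleneck: see which log driver is in use and try
k8s-file for comparison:
$ podman info | grep -i logdriver
$ podman run --log-driver k8s-file ... <image>
This is only a guess without more details about the setup.)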
--
Best regards,
А.И.
1 year, 4 months
Should I run podman-based systemd services as root?
by Mark Raynsford
Hello!
I'm aware of the age-old advice of not running services as root; I've
been administering UNIX-like systems for decades now.
If you follow the advice given in, for example, this page:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_at...
... What you'll get is a redis container running as root (unless the
redis image drops privileges itself - I don't know, I've never run it).
I've set up a few production systems running services that are inside
podman containers. I'm lucky enough to be using 98% software that can
run inside completely unprivileged containers. For all of these
containers, I've run each container under its own user ID. The systemd
unit for each, for example, does something along these lines:
[Service]
Type=exec
User=_cardant
Group=_cardant
ExecStart=/usr/bin/podman run ...
However, doing things this way is a little messy. For example, if for
some reason I want to do something like `podman exec` in a container, I
have to `sudo -u _cardant podman exec ...`. `podman ps` will obviously
only show me the containers running for the current user. Additionally,
any images downloaded from the registry for each service will
effectively end up in the home directory of each service user,
complicating storage accounting somewhat. The UIDs/GIDs are yet another
thing I have to manage, even though they don't have any useful meaning
(they don't identify people, they're solely there because the
containers have to run as _something_). Containers also leak internal
UID/GID values (from the /etc/subuid ranges) into the filesystem, which
can complicate things.
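(For what it's worth, the mapping behind those leaked values is easy to
inspect from the host; illustrative output, assuming a default /etc/subuid
allocation:
$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
i.e. container UID 0 is my own UID and everything else lands in the subuid
range.)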
Additionally, there are some containers that stubbornly make it awkward
to run as a non-root user despite not actually needing privileges. The
PostgreSQL image is a good example; you can run it as a non-root user
and it'll switch to another UID inside the container, and then that
UID/GID will end up on the database files that are inevitably mounted
inside the container. You'll also have to match these unpredictable
weird UID/GIDs if you want to supply the container with TLS keys/certs,
because postgres will refuse to open them unless the UID/GID matches.
You can't get around this by telling postgres to run as UID 0; it'll
refuse, even though UID 0 inside the container isn't UID 0 outside of
it when running unprivileged.
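(The usual workaround for the certs, as far as I can tell, is to chown the
mounted files from inside the user namespace; a sketch, assuming the image's
postgres user is UID/GID 999 and with hypothetical paths:
$ podman unshare chown 999:999 /opt/db/tls/server.key /opt/db/tls/server.crt
On the host those files then show up under the corresponding subuid, which of
course just reintroduces the unpredictable UID/GIDs I'm complaining about.)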
I'm running all of these services on systems that have SELinux in
enforcing mode. My understanding is that containers will all have the
container_t domain and therefore even if they all ran as root, a
compromised container would not be able to do any meaningful harm to
the system.
I'm therefore not certain if the usual "don't run as root" advice
applies as containers don't have the same security properties
(especially when combined with SELinux).
I feel like it'd simplify things if I could safely run all of
the containers as root. At the very least, I'd be able to predict
UID/GID values inside the containers from outside!
I can't get any clear advice on how people are expected to run podman
containers in production. All of the various bits of documentation in
Linux distributions that talk about running under systemd never
bother to talk about UIDs or GIDs. Any documentation on running podman
rootless seems to only talk about it in the context of developers
running unprivileged containers on their local machines for
experimentation/development. If you set up containers via Fedora
Server's cockpit UI, you'll get containers running as root everywhere.
What is the best practice here?
--
Mark Raynsford | https://www.io7m.com
1 year, 4 months
How does podman "initialize" after a reboot?
by Pratham Patel
Hello everyone,
**Disclaimer: This is a long e-mail.**
I am on NixOS (23.05), using the podman binary provided by the
distribution package. There are several issues that I am facing but
the issue that I want resolved is that _I want rootless Podman
containers started at boot_.
I won't get much into NixOS other than what is needed (i.e. no
advocacy for NixOS). NixOS, being a distribution with reproducible
builds, has a different method of storing binaries. Instead of
binaries living in `/usr/bin`, binaries actually live in
`/nix/store/<hash>-pkg-ver/bin`. Thereafter, the binaries are linked
into `/run/current-system/sw/bin`. My `PATH` (from a login shell)
looks like the following:
```
[pratham@sentinel] $ echo $PATH
/home/pratham/.local/bin:/home/pratham/bin:/run/wrappers/bin:/home/pratham/.local/share/flatpak/exports/bin:/var/lib/flatpak/exports/bin:/home/pratham/.nix-profile/bin:/etc/profiles/per-user/pratham/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
```
Because NixOS is an OS that you build from configuration files (i.e.
almost zero bash code to install, except for formatting and mounting),
there is a way to declare your Podman containers like you would in a
compose.yaml, and those containers will automatically be created as
systemd services [0]. This is great! But those service files are placed
in `/etc/systemd/user`. This has an issue: the Podman container now
runs as root. I checked this by **logging in as root** and checking
the output of `podman ps` (not just `sudo podman ps`). If I wanted
rootful containers, I wouldn't be using Podman...
So, for the time being, I have resorted to writing a systemd unit file
by hand (which is stored in `$HOME/.config/systemd/user`). But the
path `/run/current-system/sw/bin` is missing from the unit's PATH. No
biggie, I can just add it using the following line under the
`[Service]` section:
```
Environment="PATH=/run/current-system/sw/bin:$PATH"
```
(This is a temporary hack and is strongly advised against, but I did
this as a troubleshooting measure, not as a solution.)
But the service fails with the following log entries in journalctl:
```
Jul 11 10:46:47 sentinel podman[36673]:
time="2023-07-11T10:46:47+05:30" level=error msg="running
`/run/current-system/sw/bin/newuidmap 36686 0 1000 1 1 10000 65536`:
newuidmap: write to uid_map failed: Operation not permitted\n"
Jul 11 10:46:47 sentinel podman[36673]: Error: cannot set up namespace
using "/run/current-system/sw/bin/newuidmap": should have setuid or
have filecaps setuid: exit status 1
Jul 11 10:46:47 sentinel systemd[1317]: testing-env.service: Main
process exited, code=exited, status=125/n/a
```
I never encountered this error on Fedora or RHEL. While experimenting,
I noticed one thing: **If I run _any_ Podman command (even `podman
ps`) from my _login shell_ and then restart the Podman container's
systemd service, the service runs cleanly.**
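For what it's worth, the "should have setuid or have filecaps setuid" error
usually comes down to which newuidmap binary the process resolves; a quick
check (assuming NixOS's setuid wrapper directory, which is already in my
login shell's PATH):
```
$ ls -l /run/wrappers/bin/newuidmap /run/current-system/sw/bin/newuidmap
$ command -v newuidmap   # compare from the login shell vs. inside the unit
```
If only the wrapper copy carries the setuid bit, the unit's PATH would need
/run/wrappers/bin ahead of /run/current-system/sw/bin.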
From the _Why can't I use sudo with rootless Podman_ article [1]:
> One of the core reasons Podman requires a temporary files directory is for detecting if the system has rebooted. After a reboot, all containers are no longer running, all container filesystems are unmounted, and all network interfaces need to be recreated (among many other things). Podman needs to update its database to reflect this and perform some per-boot setup to ensure it is ready to launch containers. This is called "refreshing the state."
>
> This is necessary because Podman is not a daemon. Each Podman command is run as a new process and doesn't initially know what state containers are in. You can look in the database for an accurate picture of all your current containers and their states. Refreshing the state after a reboot is essential to making sure this picture continues to be accurate.
>
> To perform the refresh, you need a reliable way of detecting a system reboot, and early in development, the Podman team settled on using a sentinel file on a tmpfs filesystem. A tmpfs is an in-memory filesystem that is not saved after a reboot—every time the system starts, a tmpfs mount will be empty. By checking for the existence of a file on such a filesystem and creating it if it does not exist, Podman can know if it's the first time it has run since the system rebooted.
>
> The problem becomes in determining where you should put your temporary files directory. The obvious answer is /tmp, but this is not guaranteed to be a tmpfs filesystem (though it often is). Instead, by default, Podman will use /run, which is guaranteed to be a tmpfs. Unfortunately, /run is only writable by root, so rootless Podman must look elsewhere. Our team settled on the /run/user/$UID directories, a per-user temporary files directory.
This means that Podman needs some sort of "initialization" when the
system has rebooted. Apparently, due to NixOS' nature, this
"initialization" doesn't occur when Podman is invoked from a systemd
service (something is missing but I can't figure out _what_). So I
rebooted and set up an `inotifywait` job (logged in as `root`--not with
the `sudo` prefix--with the command `inotifywait /run/user/1000/
--recursive --monitor`; `XDG_RUNTIME_DIR` for user `pratham` is
`/run/user/1000`) and ran `podman ps` when I was logged in as user
`pratham`. It generated the following output:
```
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ CREATE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ OPEN pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MODIFY pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_FROM pause.pid.NjPiqQ
/run/user/1000/libpod/tmp/ MOVED_TO pause.pid
/run/user/1000/ ATTRIB,ISDIR libpod
/run/user/1000/libpod/ ATTRIB,ISDIR
/run/user/1000/containers/ CREATE,ISDIR overlay
/run/user/1000/containers/ OPEN,ISDIR overlay
/run/user/1000/containers/ ACCESS,ISDIR overlay
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay
/run/user/1000/containers/overlay/ CREATE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ OPEN overlay-true
/run/user/1000/containers/overlay/ CLOSE_NOWRITE,CLOSE overlay-true
/run/user/1000/containers/overlay/ CREATE metacopy()-false
/run/user/1000/containers/overlay/ OPEN metacopy()-false
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE metacopy()-false
/run/user/1000/containers/overlay/ CREATE native-diff()-true
/run/user/1000/containers/overlay/ OPEN native-diff()-true
/run/user/1000/containers/overlay/ CLOSE_WRITE,CLOSE native-diff()-true
/run/user/1000/containers/ CREATE,ISDIR overlay-containers
/run/user/1000/containers/ OPEN,ISDIR overlay-containers
/run/user/1000/containers/ ACCESS,ISDIR overlay-containers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-containers
/run/user/1000/containers/ CREATE,ISDIR overlay-locks
/run/user/1000/containers/ OPEN,ISDIR overlay-locks
/run/user/1000/containers/ ACCESS,ISDIR overlay-locks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-locks
/run/user/1000/containers/ CREATE,ISDIR networks
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/libpod/tmp/ OPEN alive.lck
/run/user/1000/containers/ OPEN,ISDIR networks
/run/user/1000/containers/ ACCESS,ISDIR networks
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR networks
/run/user/1000/libpod/tmp/ CREATE alive
/run/user/1000/libpod/tmp/ OPEN alive
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE alive
/run/user/1000/libpod/tmp/ CLOSE_WRITE,CLOSE alive.lck
/run/user/1000/systemd/units/ CREATE .#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:dbus.serviced739c18053185984
/run/user/1000/systemd/units/ MOVED_TO invocation:dbus.service
/run/user/1000/ CREATE,ISDIR dbus-1
/run/user/1000/ OPEN,ISDIR dbus-1
/run/user/1000/ ACCESS,ISDIR dbus-1
/run/user/1000/ CLOSE_NOWRITE,CLOSE,ISDIR dbus-1
/run/user/1000/dbus-1/ OPEN,ISDIR services
/run/user/1000/dbus-1/services/ OPEN,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ ACCESS,ISDIR services
/run/user/1000/dbus-1/services/ ACCESS,ISDIR
/run/user/1000/dbus-1/ CLOSE_NOWRITE,CLOSE,ISDIR services
/run/user/1000/dbus-1/services/ CLOSE_NOWRITE,CLOSE,ISDIR
/run/user/1000/systemd/ CREATE,ISDIR transient
/run/user/1000/systemd/ OPEN,ISDIR transient
/run/user/1000/systemd/ ACCESS,ISDIR transient
/run/user/1000/systemd/ CLOSE_NOWRITE,CLOSE,ISDIR transient
/run/user/1000/systemd/transient/ CREATE podman-2894.scope
/run/user/1000/systemd/transient/ OPEN podman-2894.scope
/run/user/1000/systemd/transient/ MODIFY podman-2894.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-2894.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-2894.scopeb6be723b1ec13b95
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-2894.scope
/run/user/1000/containers/ CREATE,ISDIR overlay-layers
/run/user/1000/containers/ OPEN,ISDIR overlay-layers
/run/user/1000/containers/ ACCESS,ISDIR overlay-layers
/run/user/1000/containers/ CLOSE_NOWRITE,CLOSE,ISDIR overlay-layers
/run/user/1000/containers/overlay-layers/ CREATE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/containers/overlay-layers/ OPEN mountpoints.lock
/run/user/1000/containers/overlay-layers/ CLOSE_WRITE,CLOSE mountpoints.lock
/run/user/1000/systemd/units/ DELETE invocation:podman-2894.scope
/run/user/1000/systemd/transient/ DELETE podman-2894.scope
/run/user/1000/libpod/tmp/ OPEN pause.pid
/run/user/1000/libpod/tmp/ ACCESS pause.pid
/run/user/1000/libpod/tmp/ CLOSE_NOWRITE,CLOSE pause.pid
/run/user/1000/systemd/transient/ CREATE podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ OPEN podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ MODIFY podman-pause-f50834a6.scope
/run/user/1000/systemd/transient/ CLOSE_WRITE,CLOSE podman-pause-f50834a6.scope
/run/user/1000/systemd/units/ CREATE
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_FROM
.#invocation:podman-pause-f50834a6.scope03db5d0ea8888975
/run/user/1000/systemd/units/ MOVED_TO invocation:podman-pause-f50834a6.scope
```
Following is the output of `podman info` on my computer:
```
[pratham@sentinel] $ podman info
host:
arch: arm64
buildahVersion: 1.30.0
cgroupControllers:
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: Unknown
path: /run/current-system/sw/bin/conmon
version: 'conmon version 2.1.7, commit: '
cpuUtilization:
idlePercent: 81.03
systemPercent: 3.02
userPercent: 15.94
cpus: 4
databaseBackend: boltdb
distribution:
codename: stoat
distribution: nixos
version: "23.05"
eventLogger: journald
hostname: sentinel
idMappings:
gidmap:
- container_id: 0
host_id: 994
size: 1
- container_id: 1
host_id: 10000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 10000
size: 65536
kernel: 6.1.38
linkmode: dynamic
logDriver: journald
memFree: 3040059392
memTotal: 3944181760
networkBackend: netavark
ociRuntime:
name: crun
package: Unknown
path: /run/current-system/sw/bin/crun
version: |-
crun version 1.8.4
commit: 1.8.4
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities:
CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: ""
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable:
/nix/store/n8lbxja2hd766pnz89qki90na2b3g815-slirp4netns-1.2.0/bin/slirp4netns
package: Unknown
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.4
swapFree: 2957766656
swapTotal: 2957766656
uptime: 0h 5m 34.00s
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- docker.io
- quay.io
store:
configFile: /home/pratham/.config/containers/storage.conf
containerStore:
number: 2
paused: 0
running: 0
stopped: 2
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/pratham/.local/share/containers/storage
graphRootAllocated: 13539516416
graphRootUsed: 7770832896
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 9
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/pratham/.local/share/containers/storage/volumes
version:
APIVersion: 4.5.0
Built: 315532800
BuiltTime: Tue Jan 1 05:30:00 1980
GitCommit: ""
GoVersion: go1.20.5
Os: linux
OsArch: linux/arm64
Version: 4.5.0
```
So my current question is: how do I do this initial setup manually? I
don't want to have to log into `pratham`'s login shell every time I
reboot my machine just to get the Podman containers to start.
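One thing that looks relevant here, assuming systemd-logind manages
/run/user/1000 on NixOS the same way it does elsewhere, is user lingering,
which starts the user manager (and the user's units) at boot without an
interactive login:
```
# as root: start pratham's user manager at every boot
loginctl enable-linger pratham
# verify
loginctl show-user pratham --property=Linger
```
With lingering enabled, /run/user/1000 should exist at boot and the units
under ~/.config/systemd/user should be started by the user manager.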
[0]: https://nixos.wiki/wiki/Podman#Run_Podman_containers_as_systemd_services
[1]: https://www.redhat.com/sysadmin/sudo-rootless-podman
- Pratham Patel
1 year, 4 months
Need some help for a rather strange usecase
by Boris Behrens
Hi,
sorry if this question is bad. You are allowed to flame me for this :)
I would like to create a container which is basically connected to two
separate VLANs and does some sort of bridging between them.
I also would like to be able to assign IP addresses from inside the
container, because I would like to assign IP addresses via keepalived.
The reason why I would like to do it this way is to reduce cross-traffic
between hosts.
I have three hosts that are attached to a public VLAN. All three hosts have
a public IP address, which needs to be assigned to another host in case
something goes wrong.
HAProxy picks up the request on the public VLAN and forwards it to the
underlying backend, which is in the same container. This backend talks to a
storage cluster via the private VLAN.
The container host is a ubuntu 20.04 with Podman 3.4.2
What I did until now:
- create two additional networks [1]
- create a container [2]
But now I have the problem that I am not allowed to add an IP address from
inside the container [3]
I also don't know if I have a conceptual error in the whole thing, because
it is a strange thing to use containers as a VM replacement.
But currently I just don't know better.
Hope someone can help me.
---
A more in depth description of what I try to solve:
I have a ceph cluster that serves s3 traffic via radosgw.
radosgw talks to all the physical disks in the cluster directly, so it does
the distribution of objects.
To do TLS termination, some basic HTTP header manipulation, and other HTTP
related stuff a HAProxy is sitting in front of the radosgw.
I don't want to have a public IP address directly on a storage host,
because misconfiguration happen and this is something I want to avoid.
So I thought I could spin up a container on some storage server, map the
public VLAN and the private storage VLAN into the container and combine
HAProxy and radosgw into one unit.
Now there is the problem of public availability. I want to use DNS load
balancing for the HAProxy instances, so every HAProxy gets its own public IP
address. But when one or more HAProxy instances fail (there are so many
things that can go south) I would like to add the IP address to another
container.
Here comes keepalived, which does VRRP from inside the containers; when
some container stops announcing that it is available, another host spins up
the IP address and starts to serve it.
And because I am struggling with even these simple tasks, I don't want to
even try k8s/k3s. Also, I think k8s/k3s have a lot of cross-traffic between
the instances, which might really hurt performance.
---
[1]
$ podman network create --disable-dns --driver=macvlan -o parent=bond0.50 \
    --subnet 10.64.1.0/24 public
$ podman network create --disable-dns --driver=macvlan -o parent=bond0.43 \
    --subnet 10.64.2.0/24 management
[2]
$ podman run --detach --hostname=frontend-`hostname` --name frontend-`hostname -s` \
    --mount=type=bind,source=/opt/frontend/etc/haproxy,destination=/etc/haproxy,ro \
    --mount=type=bind,source=/opt/frontend/etc/ssl/frontend,destination=/etc/ssl/frontend,ro \
    --network=podman,public,management \
    -it ubuntu:20.04 /bin/bash
[3]
root@frontend-0cc47a6df14e:/# ip addr add 192.168.0.1/24 dev eth2
RTNETLINK answers: Operation not permitted
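Presumably this is the missing CAP_NET_ADMIN; a sketch of the extra flag the
run command in [2] would need so keepalived can manage addresses from inside
the container:
$ podman run --cap-add=NET_ADMIN ... (rest of the options from [2])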
Best wishes
Boris
1 year, 4 months
CANCELED! Thursday, August 1, 2023, 11:00 am EDT (UTC-4) Podman Community Meeting
by Tom Sweeney
Hi All,
We were down to only one topic for the meeting tomorrow, and the
presenter isn't able to make it now due to a scheduling conflict. Given
that, and the number of people that are enjoying some holiday time, we
have decided to cancel the Podman Community Meeting tomorrow, Thursday,
August 1, 2023, 11:00 am EDT. We are still planning to meet on Thursday
August 17, 2023 at 11:00 am EDT for the Cabal meeting, and then on
Tuesday, October 3, 2023, at 11:00 am EDT, we will hold the next Podman
Community meeting.
If you have topics for either meeting, please send them along to
me, or add them to the agenda listed below my sig.
Thanks All!
t
Community Meeting: https://hackmd.io/fc1zraYdS0-klJ2KJcfC7w
Cabal Meeting: https://hackmd.io/gQCfskDuRLm7iOsWgH2yrg?both
1 year, 4 months
4.6 update & cgroup issues
by lejeczek
Hi guys.
I've just got updates and now I get:
-> $ podman run --cpus=4 --memory=8g ...
Error: OCI runtime error: crun: the requested cgroup
controller `cpu` is not available
I wonder if anybody else on CentOS 9 sees the
same or similar?
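(In case it is the usual rootless delegation issue, a sketch of the check and
the commonly suggested drop-in, assuming stock CentOS 9 paths:
$ cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
# if "cpu" is missing, delegate it (as root) and log in again:
$ sudo mkdir -p /etc/systemd/system/user@.service.d
$ printf '[Service]\nDelegate=memory pids cpu cpuset io\n' | \
    sudo tee /etc/systemd/system/user@.service.d/delegate.conf
$ sudo systemctl daemon-reload
)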
many thanks, L.
1 year, 5 months