Problem restarting a mysql container
by Simon Colston
I'm running podman version 1.8.0 on Fedora 31
If I run with this command
podman run --userns=keep-id \
--name=mysql \
--env="MYSQL_ROOT_PASSWORD=mysql" --publish 3306:3306 \
--volume=/home/simon/servers/mysql/var/lib/mysql:/var/lib/mysql:Z \
mysql:8.0
I can connect with the mysql client with no errors.
mysql -u root -p -h 127.0.0.1 -P 3306
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
etc.
If I then do:
podman stop mysql
podman start mysql
then I can no longer connect with the mysql client.
mysql -u root -p -h 127.0.0.1 -P 3306
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
I then reset everything:
podman stop mysql
podman rm mysql
and delete all the database files.
If I remove the --userns=keep-id argument I can connect after a 'stop' and a 'start'. (Except that the database files
are then owned by an odd user ID and I can only delete them as root.)
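I assume that odd owner is one of my subordinate UIDs mapped back onto the host; if that's right, I think something like
podman unshare rm -rf /home/simon/servers/mysql/var/lib/mysql
(which runs rm inside the user namespace where those UIDs are mapped) would let me remove the files without needing root.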
So --userns=keep-id is the only difference, and it is causing some sort of problem after the restart.
I had this working with podman version 1.7, and I think that (and maybe the kernel) are the only things that have changed.
Am I doing something wrong? Have I found a bug and would you like me to submit it?
Simon
Re: Please give feedback on: Add support to auto-update containers running in systemd units as generated with podman generate systemd --new
by Karl Quinsland
I am of two minds on this.
I am happy to see the functionality come to podman, but am concerned that
there's no way to make this feature robust enough for all but the simplest
of use cases without sinking a *ton* of time into it.
TL;DR: "reload this service when there's a new version" is a lot more
complicated than it appears, unless the service in question is low-stakes or
otherwise purposefully designed to be highly stateless and all consumers of
the service are equally well equipped to deal with a service that may
suddenly speak a slightly updated version of the protocol... etc. If this
feature is in demand, then please do keep building it!
As implemented now, I can think of a few common scenarios where it will be
immediately useful, but beyond them, I see quite a few things that'll need
to be added to make it useful in more sophisticated/legacy environments. I
would use this auto-update functionality on a few containers that I deploy
around the house, because those containers all run on systemd hosts and the
workloads those containers run are not sensitive to (slightly) out-of-date
images. Nor is a manual rollback of any container the end of the
world. I can't use this at work, though, because various workloads have
elaborate gates around their rollout or otherwise need to be rolled out as
soon as a new release is available... not (up to) 24h later.
---
I've implemented something similar internally that does not suffer from
the same drawbacks. It is quite a bit more flexible, but at the
cost of some additional overhead/infrastructure. Chiefly, it:
- Would work with any init system that supports some form of "additional
configuration" facility. In my case, though, we're primarily - but not
exclusively - a systemd shop.
- Is not limited to daily checks for updates. Within seconds of the "switch
being flipped" - so to speak - the new version of the container can be
running.
- Supports rollbacks and other release gates
Internally, we use the *excellent* Consul key/value storage system to manage
which workloads use which versions of a container, but any key/value
storage system that allows a daemon to monitor or 'learn' about a change to
a value for a given key will work. That is, I use Consul to pull this off,
but you could absolutely make etcd or ZooKeeper work here, too.
Through a process that's not relevant here, a key/value path is updated, e.g.:
path: /service/in-field-hardware-controller/version
value: 1.28
where in-field-hardware-controller is an illustrative example, as is the
value stored at that key.
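(For illustration, flipping that switch by hand would be as simple as
consul kv put service/in-field-hardware-controller/version 1.28
but in practice the value is set by that not-relevant-here process, not by hand.)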
On every container host, there's a daemon that watches the
/service/in-field-hardware-controller/version path in consul. Depending on
the workload, we use the simple but powerful consul-template program or a
more sophisticated internal daemon. Consul-Template is a small golang-based
binary that can be run as a daemon to watch a specific consul key, but the
consul API is open and there are a variety of daemons out there that
support monitoring a given path. The critical bit here is that the daemon
has the ability to execute system commands when a change is observed: When
the monitoring daemon notices a change to the value at the key, it renders
out a file that is then read by systemd and "exposed" to the ExecStart=
directive as an environment variable. The file that is rendered out would
be placed in:
/etc/systemd/system/in-field-hardware-controller.service.d/10-version.conf
and would look like this:
[Service]
Environment=WORKLOAD_IMAGE_VERSION=1.28
The daemon that writes out the file then consults some internal logic to
see when to *apply* this change. In simple cases, the daemon
(consul-template) will immediately run
systemctl daemon-reload; systemctl restart in-field-hardware-controller.service
which applies the change right away. In other cases, the daemon (not
consul-template) will run additional scripts to sanity-check other
dependencies and provide additional 'gates' on the rollout. These scripts
check upstream and downstream dependencies, database/stateful data versions and
- in some cases - require an engineer to be the "second man" (see 'two man
rule' on wikipedia) in a version rollout. If the updated container does
not start to publish an expected payload to a pre-defined endpoint, we
consider the container to be unhealthy and consult additional internal
logic about whether to revert or back off exponentially on the restart
attempts.
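To make the simple case concrete, a consul-template configuration along these lines would do the watch-render-restart loop; the paths, key, and unit name below are the illustrative ones from above, not our real config:

# consul-template config fragment (illustrative names/paths)
template {
  # template that reads the version from Consul
  source      = "/etc/consul-template.d/10-version.conf.ctmpl"
  # rendered systemd drop-in that the unit reads
  destination = "/etc/systemd/system/in-field-hardware-controller.service.d/10-version.conf"
  # re-read unit files and restart the service whenever the value changes
  command     = "systemctl daemon-reload && systemctl restart in-field-hardware-controller.service"
}

and the template itself is just:

# 10-version.conf.ctmpl (illustrative)
[Service]
Environment=WORKLOAD_IMAGE_VERSION={{ key "service/in-field-hardware-controller/version" }}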
The portion of the in-field-hardware-controller.service file that plugs the env var
into the run command looks like this:
ExecStart=/usr/bin/podman run --name=hardware-controller <...snip...> \
    some-registry/hw-controller:${WORKLOAD_IMAGE_VERSION}
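A quick way to confirm that systemd has picked up the rendered drop-in (again using the illustrative unit name) is
systemctl show in-field-hardware-controller.service -p Environment
which should print Environment=WORKLOAD_IMAGE_VERSION=1.28 once the daemon-reload has run.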
I will be the first to acknowledge that our solution has many knobs and
sliders that increase the complexity of our "dynamic" version configuration
setup. Some of these knobs are
necessary to support features that are absolutely critical for our needs:
rollouts within seconds unless additional gates apply, and relatively painless
rollbacks (where possible). For my
personal/at-home workloads, those needs are not critical and so the many
knobs/sliders are not needed.
Happy to clarify anything!
-K
Please give feedback on: Add support to auto-update containers running in systemd units as generated with podman generate systemd --new
by Daniel Walsh
We are working on a new feature of Podman that allows you to run a
service within a container and then have the container automatically
updated when a new image is pushed to a registry.
The full description is given below. We would love to have community
feedback on the ideas, and have people play with it.
```
Add support to auto-update containers running in systemd units as
generated with |podman generate systemd --new|.
|podman restart --auto-update| looks up containers with a specified
"io.containers.autoupdate" label (i.e., the auto-update policy).
If the label is present and set to "image", Podman reaches out to the
corresponding registry to check if the image has been updated. We
consider an image to be updated if the digest in local storage differs
from that of the remote image. If an image must be
updated, Podman pulls it down and restarts the container. Note that the
restarting sequence relies on systemd.
At container-creation time, Podman looks up the "PODMAN_SYSTEMD_UNIT"
environment variable and stores it verbatim as a label on the container.
This variable is now set by all systemd units generated by
|podman-generate-systemd| and is set to |%n| (i.e., the name of the systemd
unit starting the container). This data is then used in the
auto-update sequence to instruct systemd (via DBUS) to restart the unit
and hence restart the container.
Note that this implementation of auto-updates relies on systemd and
requires a fully-qualified image reference to be used to create the
container. This enforcement is necessary to know which image to
actually check and pull. If we used an image ID, we would not know
which image to check/pull anymore.
Fixes: #3575 <https://github.com/containers/libpod/issues/3575>
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
```
https://github.com/containers/libpod/pull/5480
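For anyone who wants to play with it, a rough sketch of the intended workflow looks something like the following; the container name, image, and unit path are placeholders, not part of the PR:
```
# Create the container with the auto-update policy label and a
# fully-qualified image reference (names here are placeholders):
podman create --name my-service \
    --label "io.containers.autoupdate=image" \
    registry.example.com/my-image:latest

# Generate a systemd unit; with --new the unit creates and removes the
# container itself, and it sets PODMAN_SYSTEMD_UNIT on the container:
podman generate systemd --new --name my-service > /etc/systemd/system/my-service.service

# Remove the manually created container so the unit can recreate it,
# then start the service:
podman rm my-service
systemctl daemon-reload
systemctl enable --now my-service.service

# Later, check the registry for new digests and restart units as needed:
podman restart --auto-update
```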