On Wednesday, February 23rd, 2022 at 08:52, Giuseppe Scrivano
<gscrivan(a)redhat.com> wrote:
> Rudolf Vesely via Podman podman(a)lists.podman.io writes:
>
> > On Wednesday, February 23rd, 2022 at 08:17, Giuseppe Scrivano
> > <gscrivan(a)redhat.com> wrote:
> >
> > > Rudolf Vesely via Podman podman(a)lists.podman.io writes:
> > >
> > > > Hi Everybody,
> > > >
> > > > I tried to mount a filesystem inside an unprivileged container using
> > > > fuse3 and it's working. The only thing I had to do was to mount
> > > > /dev/fuse using "--device" and add the "SYS_ADMIN" capability.
> > > >
> > > > Example:
> > > >
> > > > podman run \
> > > >   -d \
> > > >   --device=/dev/fuse \
> > > >   --cap-add SYS_ADMIN \
> > > >   localhost/myimage
> > > >
> > > > After that I can mount fuse inside.
> > > >
> > > > Now I'd like to access the mounted filesystem from another
> > > > container in a pod, or from the container host. In order to do
> > > > that I used "bind-propagation=rshared" like this:
> > > >
> > > > podman run \
> > > >   --mount=type=bind,source=/from,destination=/to,bind-propagation=rshared \
> > > >   -d \
> > > >   --device=/dev/fuse \
> > > >   --cap-add SYS_ADMIN \
> > > >   localhost/myimage
> > > >
> > > > When I mount fuse inside the container into "/to" or "/to/subfolder",
> > > > I can again see / access the filesystem from inside of the container,
> > > > but I don't see it from the host or from other containers in the pod
> > > > that mount "/from".
> > > >
> > > > Could you please tell me, am I missing something?
> > >
> > > Mount points created from a rootless environment won't be propagated
> > > to the host, even if you specify rshared.
> > >
> > > They will be propagated in the rootless mount namespace, which you can
> > > access with "podman unshare".
> > >
> > > You first need to set up a mount point in the "podman unshare"
> > > environment, e.g.:
> > >
> > > $ podman unshare mount --make-shared --bind /from /from
> > > $ podman run -v /from:/to:rshared ....
> > >
> > > Is the mount accessible from other containers now?
> >
> > Hi Giuseppe,
> >
> > That was my initial trial since I don't need to access it from the
> > host. I wanted to run two containers - the first mounting FUSE and
> > the second running app that accesses the mounted data.
> >
> > I tried to run the two containers in a pod and outside of the pod.
> >
> > I tried to run the second with
> >
> > podman run --mount=type=bind,source=/from,destination=/to,bind-propagation=rshared
> >
> > and without
> >
> > podman run --mount=type=bind,source=/from,destination=/to
> >
> > and even with
> >
> > podman run --mount=type=bind,source=/from,destination=/to,bind-propagation=rshared --device=/dev/fuse --cap-add SYS_ADMIN
> >
> > But the second container does not see the mounted data.
> >
> > And if I mount the fuse on the first to "/from/mount" and I also
> > "touch /from/abc", then the second container will see the directory
> > "/to/mount" and the file "/to/abc", but the "/to/mount" directory
> > will be empty.
>
> Have you used `podman unshare mount --make-shared --bind /from /from`
> before creating the first container?
>
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
The fuse filesystem is mounted using rclone:
https://rclone.org/commands/rclone_mount/
with the option: --allow-other

When I run rclone mount inside of the first container, the mount looks for example like this:

name_of_s3_mount: on /from type fuse.rclone (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

or, when mounting as a normal user inside of the container:

name_of_s3_mount: on /from type fuse.rclone (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)

And I can confirm that in both cases, all users inside of the first container can see the data.
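(For reference, an rclone S3 remote behind a mount like the ones above would be configured roughly as follows. This is only an illustrative sketch: the remote name matches the thread, but the provider, region, and credential keys are placeholders, not values from this discussion.)

```ini
; ~/.config/rclone/rclone.conf -- hypothetical S3 remote; every value
; below is a placeholder, not taken from this thread
[name_of_s3_mount]
type = s3
provider = AWS
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
region = us-east-1
```

It would then be mounted inside the container with something like `rclone mount name_of_s3_mount: /from --allow-other`.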
I've just tried, inside of the first container after rclone mount:

mount --make-shared --rbind /from /from

and that has no effect.
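(A debugging aid, not from the thread: one way to see whether a mount point actually carries a shared propagation flag is to read /proc/self/mountinfo. The helper below is a sketch; run it via `podman unshare` or inside the container to inspect that particular mount namespace, and pass it the mount point you care about, e.g. `/from`.)

```shell
#!/bin/sh
# Print the shared-subtree propagation tags ("shared:N", "master:N") of
# a given mount point by parsing /proc/self/mountinfo. The optional
# fields between the mount options and the "-" separator carry the
# propagation state; a mount with no "shared:" tag there is private and
# will not propagate new mounts to other peer groups.
show_propagation() {
    awk -v mp="$1" '$5 == mp {
        out = ""
        for (i = 7; $i != "-"; i++) out = out $i " "
        print (out == "" ? "private" : out)
    }' /proc/self/mountinfo
}

# Example: inspect the root mount of the current namespace.
show_propagation /
```

A mount that only shows "master:N" (or nothing) will receive propagation but not forward it, which looks exactly like the "directory is visible but empty" symptom described above.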
Forgot to mention that "/etc/fuse.conf" has the "user_allow_other" option set
on both the host and inside of the first container.
You need to run `podman unshare mount --make-shared --bind /from /from`
from the host before creating the container, not from inside the container.

Please be careful: the `/from` directory must be the same one you use
as the source of the bind mount when you create the container:

$ mkdir /tmp/from
$ podman unshare mount --make-shared --bind /tmp/from /tmp/from
$ podman run -d --privileged --rm -v /tmp/from:/to:rshared alpine \
    sh -c 'mount -t tmpfs tmpfs /to; touch /to/new-file; sleep 100'
$ podman unshare ls /tmp/from
new-file
$ ls /tmp/from
$ podman run --rm -v /tmp/from:/to alpine sh -c 'ls /to'
new-file
The new-file is visible from the rootless mount namespace, which can be
shared among different containers, but it is not visible from the host.
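(Putting the recipe above together with the rclone case earlier in the thread, a full rootless two-container setup would look roughly like this. This is an untested sketch: the paths are illustrative, and `localhost/myimage` and `name_of_s3_mount` are the names already used in this thread.)

```
$ mkdir -p /tmp/from
$ podman unshare mount --make-shared --bind /tmp/from /tmp/from
$ podman run -d --device=/dev/fuse --cap-add SYS_ADMIN \
    -v /tmp/from:/from:rshared localhost/myimage \
    rclone mount --allow-other name_of_s3_mount: /from
$ podman run --rm -v /tmp/from:/to localhost/myimage ls /to
```

If the shared bind is prepared first, the FUSE mount made by the first container should propagate into the rootless mount namespace and therefore be visible to the second container, just as the tmpfs was in the alpine example.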