[Podman] Re: Follow-up: Rootless storage usage
by Daniel Walsh
Are there any config files in ~/.config/containers?
podman system reset
should remove everything, and from then on Podman should use rootless
overlay.
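To check what a fresh storage picked, something like this should do (the
GraphStatus keys vary a bit across versions):

podman info --format '{{.Store.GraphDriverName}}'
podman info --format '{{index .Store.GraphStatus "Native Overlay Diff"}}'

The first should print "overlay"; the second prints "true" when native
rootless overlay is in use rather than fuse-overlayfs.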
On 1/25/23 09:52, Михаил Иванов wrote:
> Is native overlay available in rootless mode?
> When I run podman as root there's no problem, overlayfs is picked up
> as default in debian. VFS is selected as default only in rootless mode.
> Rgrds,
> On 25.01.2023 14:03, Giuseppe Scrivano wrote:
>> Reinhard Tartler <siretart(a)gmail.com> writes:
>>
>>> On Tue, Jan 24, 2023 at 2:08 PM Daniel Walsh <dwalsh(a)redhat.com> wrote:
>>>
>>> On 1/24/23 03:47, Reinhard Tartler wrote:
>>>
>>> Dan,
>>>
>>> In Debian, I've chosen to just go with the upstream defaults:
>>> https://github.com/containers/storage/blob/8428fad6d0d3c4cded8fd7702af36a...
>>>
>>> This file is installed verbatim to /usr/share/containers/storage.conf.
>>>
>>> Is there a better choice? Does Fedora/Redhat provide a default storage.conf from somewhere else?
>>>
>>> Thanks,
>>> -rt
>>>
>>> That should be fine. Fedora goes with that default as well. Does Debian support rootless overlay by default?
>>>
>>> If not, then it would fall back to VFS if fuse-overlayfs is not installed.
>>>
>>> I'm a bit confused about what you mean by that.
>>>
>>> In Debian releases that ship podman 4.x we have at least Linux kernel 6.0. The fuse-overlayfs package is installed by default, but users may opt to not
>>> install it by configuring apt to not install "Recommends" by default.
>>>
>>> What else is required for rootless overlay?
>>>
>>> Also, if I follow this conversation, it appears that the default storage.conf requires modification at line 118 (to uncomment the mount_program
>>> option) in order to actually use fuse-overlayfs. I would have expected podman to use fuse-overlayfs if it happens to be installed, and fall back to a direct
>>> mount if not. I read in Michail's email thread that this appears not to be the case, and that he had to spend a lot of effort figuring out how to install an
>>> appropriate configuration file. Maybe I'm missing something, but I wonder what we can do to improve the user experience?
>> What issue do you see if you use native overlay?
>>
>> Podman prefers native overlay if it is available, since it is faster.
>> If not, it tries fuse-overlayfs, and if that is not available, it falls back
>> to vfs.
>>
>> Could you try from a fresh storage though? If fuse-overlayfs was
>> already used, then Podman will continue using it even if native overlay
>> is available, since the storage metadata is slightly different.
>>
>> Thanks,
>> Giuseppe
>> _______________________________________________
>> Podman mailing list -- podman(a)lists.podman.io
>> To unsubscribe send an email to podman-leave(a)lists.podman.io
2 years, 4 months
[Podman] Re: Why use podman machine on Mac?
by Jarkko Laiho
Macs are BSD-derived Unix machines, not Linux; they do not run the Linux kernel, and therefore cannot run Podman (or Docker) containers natively.
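That is what podman machine papers over: it manages a small Linux VM and
points the podman client at it. Roughly:

podman machine init     # one-time: create the Linux VM
podman machine start
podman run ...          # actually executes inside the VM's Linux kernel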
- JK
> On 7. Sep 2023, at 19.19, Mehdi Haghgoo via Podman <podman(a)lists.podman.io> wrote:
>
> The container experience with podman machine on Windows and macOS is not optimal because the containers are slow.
> Mac is a Linux-based OS. So, why can't we create native containers on it as we do on Linux?
>
> That applies to WSL too. It's kind of Linux. Why can't we create native Linux containers on it without resorting to Podman machine and podman clients?
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
1 year, 8 months
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 28/03/2023 21:00, Chris Evich wrote:
> On 3/28/23 09:06, lejeczek via Podman wrote:
>> I think it might have something to do with the fact that
>> I changed UID for the user
>
> The files under /run/user/$UID are typically managed by
> systemd-logind. I've noticed sometimes there's a delay
> between logging out and the files being cleaned up. Try
> logging out for a minute or three and see if that fixes it.
>
> Also, if you have lingering enabled for the user, it may
> take a restart of the user's user.slice in particular.
>
> Lastly, I'm not certain, but you (as root) may be able to
> `systemctl reload systemd-logind`. That's a total guess
> though.
>
> ---
thanks, that was that delay, yes - a bit annoying if 'usermod'
were in mass/frequent use.
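(for anyone else hitting this: rather than waiting out the delay, forcing
the cleanup should also work - untested here:

sudo loginctl terminate-user <user>   # end any lingering sessions
ls -ld /run/user/<uid>                # should be gone once logind cleans up
)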
2 years, 1 month
[Podman] Re: fs.mqueue.msg_max rootless problem
by Lewis Gaul
Hi,
I think this is the same thing I raised in
https://github.com/containers/podman/discussions/19737?
This seems to be a kernel limitation - I'm not sure where the mqueue limits
come from when creating a new IPC namespace, but it doesn't inherit the
limits from the parent namespace and the root user within the user
namespace does not have permissions to modify the limits. This was
supposedly fixed in a recent kernel version although I haven't tested it.
The workaround I'm currently using (requiring sudo permissions) is along
the lines of:
podman create --ipc private --name ctr_foo ...
podman init ctr_foo   # set up the container's namespaces without starting the entrypoint
ctr_pid=$(podman inspect -f '{{.State.Pid}}' ctr_foo)
# raise the limit as real root inside the container's user and IPC namespaces
sudo nsenter --target $ctr_pid --user --ipc sysctl fs.mqueue.msg_max=64
podman start ctr_foo
Obviously this isn't ideal, and I'd be open to alternatives...
Regards,
Lewis
On Mon, 27 Nov 2023 at 12:23, Daniel Walsh <dwalsh(a)redhat.com> wrote:
> On 11/27/23 02:04, Михаил Иванов wrote:
>
> Hallo,
>
> For me rootful works:
>
> island:container [master]> cat /proc/sys/fs/mqueue/msg_max
> 256
>
> $ podman run alpine ls -ld /proc/sys/fs/mqueue/msg_max
> -rw-r--r-- 1 nobody nobody 0 Nov 27 12:10
> /proc/sys/fs/mqueue/msg_max
>
> Since it is owned by real root, a rootless user cannot write to it. I
> guess we could argue this is a bug in the kernel: mqueue/msg_max should be
> owned by root of the user namespace as opposed to real root.
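>
> One way to see this from the rootless side (a sketch; -n shows the raw
> owner, which appears as 65534/nobody inside the user namespace):
>
> podman unshare ls -ln /proc/sys/fs/mqueue/msg_max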
>
> ## Rootful:
> island:container [master]> sudo podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
> 64
>
> ## Rootless:
> island:container [master]> podman run --sysctl=fs.mqueue.msg_max=64 --rm centos cat /proc/sys/fs/mqueue/msg_max
> Error: crun: open `/proc/sys/fs/mqueue/msg_max`: Permission denied: OCI permission denied
>
> ## What rootless gets by default (changed as compared to host setting!):
> island:container [master]> podman run --rm centos cat /proc/sys/fs/mqueue/msg_max
> 10
>
> Rgrds,
>
> On 25.11.2023 20:17, Daniel Walsh wrote:
>
> On 11/25/23 10:44, Михаил Иванов wrote:
>
> Hallo,
>
> Is it possible to get podman to propagate current host fs.mqueue.msg_max
> value to rootless container? I can do that if I specify --ipc host when
> running the container, but this also exposes other ipc stuff from host
> to container, including shared memory, which I do not want.
>
> If I specify --sysctl fs.mqueue.msg_size=64 to podman it gives me
> "OCI permission denied" error, even when my host setting (256) is greater
> than requested value.
>
> Thanks,
> --
> Michael Ivanov
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
> The way you attempted is correct. Might not be allowed for rootless
> containers.
>
> I attempted this in a rootful container and it blows up for me.
>
>
> podman run --sysctl fs.mqueue.msg_size=64 alpine echo hi
> Error: crun: open `/proc/sys/fs/mqueue/msg_size`: No such file or
> directory: OCI runtime attempted to invoke a command that was not found
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
>
> _______________________________________________
> Podman mailing list -- podman(a)lists.podman.io
> To unsubscribe send an email to podman-leave(a)lists.podman.io
>
1 year, 6 months
[Podman] Reliable service starts
by Mark Raynsford
Hello!
I'm using podman on Fedora CoreOS. The standard setup for a
podman-based service tends to look like this (according to the
documentation):
---
[Unit]
Description=looseleaf
After=network-online.target
Wants=network-online.target
[Service]
Type=exec
TimeoutStartSec=60
User=_looseleaf
Group=_looseleaf
Restart=on-failure
RestartSec=10s
Environment="_JAVA_OPTIONS=-XX:+UseSerialGC -Xmx64m -Xms64m"
ExecStartPre=-/bin/podman kill looseleaf
ExecStartPre=-/bin/podman rm looseleaf
ExecStartPre=/bin/podman pull docker.io/io7m/looseleaf:0.0.4
ExecStart=/bin/podman run \
--name looseleaf \
--volume /var/storage/looseleaf/etc:/looseleaf/etc:Z,ro \
--volume /var/storage/looseleaf/var:/looseleaf/var:Z,rw \
--publish 20000:20000/tcp \
--memory=128m \
--memory-reservation=80m \
docker.io/io7m/looseleaf:{{looseleaf_version}} \
/looseleaf/bin/looseleaf server --file /looseleaf/etc/config.json
ExecStop=/bin/podman stop looseleaf
[Install]
WantedBy=multi-user.target
---
The important line is this one:
/bin/podman pull docker.io/io7m/looseleaf:0.0.4
Unfortunately, this line can fail. That in itself isn't a problem: the
service will be restarted and it'll run again. The real problem is that
it can fail in ways that break all subsequent executions.
On new Fedora CoreOS deployments, there's often a lot of network
traffic happening on first boot as the rest of the system updates
itself, and it's not unusual for `podman pull` to fail and leave the
services permanently broken (unless someone goes in and fixes them).
This is what will typically happen:
Feb 02 20:31:05 control1.io7m.com podman[1934]: Trying to pull docker.io/io7m/looseleaf:0.0.4...
Feb 02 20:31:48 control1.io7m.com podman[1934]: time="2023-02-02T20:31:48Z" level=warning msg="Failed, retrying in 1s ... (1/3). Error: initializing source docker://io7m/looseleaf:0.0.4: pinging container registry registry-1.docker.io: Get \"https://regist>
Feb 02 20:31:50 control1.io7m.com podman[1934]: Getting image source signatures
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:9794579c486abc6811cea048073584c869db02a4d9b615eeaa1d29e9c75738b9
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:846e3b32ee5a149e3ccb99051cdb52e96e11488293cdf72ee88168c88dd335c7
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:7f516ed68e97f9655d26ae3312c2aeede3dfda2dd3d19d2f9c9c118027543e87
Feb 02 20:31:50 control1.io7m.com podman[1934]: Copying blob sha256:e88daf71a034bed777eda8657762faad07639a9e27c7afb719b9a117946d1b8a
Feb 02 20:32:03 control1.io7m.com systemd[1]: looseleaf.service: start-pre operation timed out. Terminating.
It'll usually happen again on the next service restart. Then, this will
tend to happen:
Feb 02 20:34:13 control1.io7m.com podman[2745]: time="2023-02-02T20:34:13Z" level=error msg="Image docker.io/io7m/looseleaf:0.0.4 exists in local storage but may be corrupted (remove the image to resolve the issue): size for layer \"13cfed814d5b083572142bc>
Feb 02 20:34:13 control1.io7m.com podman[2745]: Trying to pull docker.io/io7m/looseleaf:0.0.4...
Feb 02 20:34:14 control1.io7m.com podman[2745]: Getting image source signatures
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:9794579c486abc6811cea048073584c869db02a4d9b615eeaa1d29e9c75738b9
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:846e3b32ee5a149e3ccb99051cdb52e96e11488293cdf72ee88168c88dd335c7
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:7f516ed68e97f9655d26ae3312c2aeede3dfda2dd3d19d2f9c9c118027543e87
Feb 02 20:34:14 control1.io7m.com podman[2745]: Copying blob sha256:e88daf71a034bed777eda8657762faad07639a9e27c7afb719b9a117946d1b8a
Feb 02 20:34:18 control1.io7m.com podman[2745]: Copying config sha256:cce9701f3b6e34e3fc26332da58edcba85bbf4f625bdb5f508805d2fa5e62e3e
Feb 02 20:34:18 control1.io7m.com podman[2745]: Writing manifest to image destination
Feb 02 20:34:18 control1.io7m.com podman[2745]: Storing signatures
Feb 02 20:34:18 control1.io7m.com podman[2745]: Error: checking platform of image cce9701f3b6e34e3fc26332da58edcba85bbf4f625bdb5f508805d2fa5e62e3e: inspecting image: size for layer "13cfed814d5b083572142bc068ae7f890f323258135f0cffe87b04cb62c3742e" is unkno>
Feb 02 20:34:18 control1.io7m.com systemd[1]: looseleaf.service: Control process exited, code=exited, status=125/n/a
At this point, there's really nothing that can be done aside from
having a human log in and running something like "podman system reset".
These systems are supposed to be as immutable as possible, and
deployments are supposed to be automated. As it stands currently, I
can't actually deploy a machine without it immediately breaking and
requiring manual intervention.
Is there some better way to handle this?
--
Mark Raynsford | https://www.io7m.com
2 years, 3 months
[Podman] Re: fcontext for rootfull volumes ?
by lejeczek
On 14/06/2023 15:16, lejeczek via Podman wrote:
> Hi guys.
>
> I map /root very often - I'd imagine many do - and I do
> that with Z
> What I get is quite puzzling to me, say host has it:
>
> system_u:object_r:container_file_t:s0 bin
> system_u:object_r:container_file_t:s0:c526,c622 cacert.p12
> system_u:object_r:container_file_t:s0:c526,c622 kracert.p12
> system_u:object_r:container_file_t:s0:c74,c78 pki
>
> in container:
>
> -> $ ls -Z1 bin pki
> bin:
> system_u:object_r:container_file_t:s0 conf
> system_u:object_r:container_file_t:s0 container-config
> ls: cannot open directory 'pki': Permission denied
>
> 'root' existed prior to container creation and 'pki' was
> added later, & outside of container.
> fcontext is not enough? SELinux says:
>
> allow container_init_t container_file_t:dir read;
>
> label=disable seems to be the way to do it, but is that the
> right way?
ah, fcontext is good enough - another tool/daemon kept
changing labels.
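For the record, the cleaner fix should be pinning the context in policy so
that other tools' relabel runs restore it instead of clobbering it -
something along these lines (the path is only an example):

semanage fcontext -a -t container_file_t '/root/pki(/.*)?'
restorecon -RFv /root/pki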
1 year, 11 months
[Podman] Re: "floating" IP with podman
by lejeczek
On 12/06/2023 17:35, Chris Evich wrote:
>
> IIRC this is called an 'alias'. I don't have a direct
> answer to your question, but I can anticipate what the
> experts will want to know:
>
> Is this a root or rootless container?
>
> Chris Evich (he/him), RHCA III
> Senior Quality Assurance Engineer
> If it ain't broke, your hammer isn't wide 'nough.
>
> On 6/12/23 05:38, lejeczek via Podman wrote:
>> Hi guys.
>>
>> Is it possible to "attach" an IP to a container with (or
>> perhaps outside of) podman but not create a separate/new
>> iface for that?
>> As if you added a "subsequent" IP to already
>> ip-configured iface.
>>
>> many thanks, L.
>>
yes, rootful.
On this/similar topic - does 'macvlan' offer settable
metrics (it surely does not "inherit" the host iface's
metric, which I expected it would) or perhaps a "no-gateway" setup?
I'm on CentOS 8 with podman 4.4.1.
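(what I'm after, in host terms, is just a secondary address on the
existing iface plus ports published on it, e.g.:

ip addr add 192.0.2.10/24 dev eth0 label eth0:1
podman run -p 192.0.2.10:80:80 ...

- the open question being whether podman/netavark can manage that
address itself.)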
1 year, 11 months
[Podman] Re: Reliable service starts
by Mark Raynsford
On 2023-02-03T09:19:44 +0100
Valentin Rothberg <vrothberg(a)redhat.com> wrote:
> Hi Mark,
>
> Thanks for reaching out.
>
> I suggest using `podman generate systemd` to generate a systemd unit.
> There's also a new way of running Podman inside of systemd called Quadlet
> that ships with the just released Podman v4.4. A blog about that topic is
> in the pipeline.
>
> Given the complexity of running Podman in systemd, `podman generate
> systemd` and Quadlet are the only supported ways.
>
> In your case, I suggest removing `podman pull` from the service. In
> contrast to `podman pull`, `podman run` won't redundantly pull the image if
> it's already in the local storage. That will relax the network bottleneck.
Thanks, I'll look into this. The systemd unit shown in my example is
actually already generated from a template (which is then included as
part of the CoreOS ignition file). I assume I won't have to run
"podman generate systemd" on the target machine? Can I run that on my
local development machine and then upload the results to the machine
that will actually run the service?
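From a first read of the docs, my rough (untested) sketch of the Quadlet
equivalent would be something like this - key names as documented in
podman-systemd.unit(5), installed as
/etc/containers/systemd/looseleaf.container:

[Unit]
Description=looseleaf
After=network-online.target
Wants=network-online.target

[Container]
Image=docker.io/io7m/looseleaf:0.0.4
Volume=/var/storage/looseleaf/etc:/looseleaf/etc:Z,ro
Volume=/var/storage/looseleaf/var:/looseleaf/var:Z,rw
PublishPort=20000:20000/tcp
Environment=_JAVA_OPTIONS=-XX:+UseSerialGC -Xmx64m -Xms64m
PodmanArgs=--memory=128m --memory-reservation=80m
Exec=/looseleaf/bin/looseleaf server --file /looseleaf/etc/config.json

[Service]
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target

The kill/rm/pull preambles - the fragile part - would then go away
entirely, since Quadlet generates the ExecStart and podman run pulls the
image on first use.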
--
Mark Raynsford | https://www.io7m.com
2 years, 3 months
[Podman] Re: RunRoot & mistaken IDs
by lejeczek
On 28/03/2023 21:00, Chris Evich wrote:
> On 3/28/23 09:06, lejeczek via Podman wrote:
>> I think it might have something to do with the fact that
>> I changed UID for the user
>
> The files under /run/user/$UID are typically managed by
> systemd-logind. I've noticed sometimes there's a delay
> between logging out and the files being cleaned up. Try
> logging out for a minute or three and see if that fixes it.
>
> Also, if you have lingering enabled for the user, it may
> take a restart of the user's user.slice in particular.
>
> Lastly, I'm not certain, but you (as root) may be able to
> `systemctl reload systemd-logind`. That's a total guess
> though.
>
>
Those parts seem very clunky - at least on up-to-date CentOS
9 Stream - I removed a user and re-created that user in
IdM and...
even after a full & healthy OS reboot, containers/podman insists:
-> $ podman container ls -a
WARN[0000] RunRoot is pointing to a path
(/run/user/2001/containers) which is not writable. Most
likely podman will fail.
Error: default OCI runtime "crun" not found: invalid argument
-> $ id
uid=1107400004(podmania) gid=1107400004(podmania)
groups=1107400004(podmania)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Where/what does it persist that old, non-existent
UID - would anybody know?
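(my own guess so far: podman's per-user state under
~/.local/share/containers still records the old RunRoot; if so, something
like

podman system reset
# or, if podman refuses to run at all:
rm -rf ~/.local/share/containers ~/.config/containers

should clear it - destructive, it wipes that user's containers and images.)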
many thanks, L.
1 year, 4 months
[Podman] Speeding up podman by using cache
by Ganeshar, Puvi
Hello Podman team,
I am about to explore this option, so I just wanted to check with you all first as I might be wasting my time.
I am in the Platform Engineering team at DirecTV, and we run Go and Java pipelines on Jenkins using Amazon EKS as the workers. The process is that when a Jenkins build runs, it asks EKS for a worker (a Kubernetes pod); the cluster spawns one, and the new pod communicates back to the Jenkins controller. We use the Jenkins Kubernetes pod template to configure the communication. We are currently running the latest podman release, v5.2.2, however still using cgroups-v1 for now; we plan to migrate in early 2025 by upgrading the cluster to Amazon Linux 2023, which uses cgroups-v2 by default. Here are the podman configuration details that we use:
host:
arch: arm64
buildahVersion: 1.37.2
cgroupControllers:
- cpuset
- cpu
- cpuacct
- blkio
- memory
- devices
- freezer
- net_cls
- perf_event
- net_prio
- hugetlb
- pids
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.1.12-1.el9.aarch64
path: /usr/bin/conmon
version: 'conmon version 2.1.12, commit: f174c390e4760883511ab6b5c146dcb244aeb647'
cpuUtilization:
idlePercent: 99.22
systemPercent: 0.37
userPercent: 0.41
cpus: 16
databaseBackend: sqlite
distribution:
distribution: centos
version: "9"
eventLogger: file
freeLocks: 2048
hostname: podmanv5-arm
idMappings:
gidmap: null
uidmap: null
kernel: 5.10.225-213.878.amzn2.aarch64
linkmode: dynamic
logDriver: k8s-file
memFree: 8531066880
memTotal: 33023348736
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.12.1-1.el9.aarch64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.12.1
package: netavark-1.12.2-1.el9.aarch64
path: /usr/libexec/podman/netavark
version: netavark 1.12.2
ociRuntime:
name: crun
package: crun-1.16.1-1.el9.aarch64
path: /usr/bin/crun
version: |-
crun version 1.16.1
commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20240806.gee36266-2.el9.aarch64
version: |
pasta 0^20240806.gee36266-2.el9.aarch64-pasta
Copyright Red Hat
GNU General Public License, version 2 or later
https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: false
path: /run/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.3.1-1.el9.aarch64
version: |-
slirp4netns version 1.3.1
commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 0
swapTotal: 0
uptime: 144h 6m 15.00s (Approximately 6.00 days)
variant: v8
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 107352141824
graphRootUsed: 23986397184
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Supports shifting: "true"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 1
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 5.2.2
Built: 1724331496
BuiltTime: Thu Aug 22 12:58:16 2024
GitCommit: ""
GoVersion: go1.22.5 (Red Hat 1.22.5-2.el9)
Os: linux
OsArch: linux/arm64
Version: 5.2.2
We migrated to podman when Kubernetes deprecated docker and have been using podman for the last two years or so. It's working well; however, since we run over 500 builds a day, I am exploring whether I can speed up the podman build process by using image caching. I wanted to see whether using an NFS file system (Amazon FSx) as the storage for podman (overlay-fs) would improve performance, with builds completing much faster thanks to images already downloaded onto the NFS. Currently, podman in each pod on the EKS cluster downloads all the required images every time, so it takes no advantage of cached images.
These are my concerns:
1. Race conditions: podman processes colliding with each other during reads and writes.
2. I/O performance, as NFS communication goes over the network.
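The variant I am currently leaning towards (untested) is to keep graphroot
on local disk and point a read-only additional image store at the shared
mount - additionalimagestores is documented in containers-storage.conf(5);
the FSx path below is just an example:

[storage]
driver = "overlay"
# stays on local disk; overlay writable layers on NFS are known to be problematic
graphroot = "/var/lib/containers/storage"

[storage.options]
# read-only store that every worker pod can share; images must be pre-seeded there
additionalimagestores = [ "/mnt/fsx/containers/storage" ]

That would sidestep concern 1 (the shared store is read-only, so no write
races) but not concern 2.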
Have any of you tried this method before? If so, can you share any pitfalls that you’ve faced?
Any comments / advice would be beneficial, as I need to weigh up the pros and cons before spending time on this. Also, if it caused an outage due to storage failures it would block all our developers, so I will have to design this in a way we can recover from quickly.
Thanks very much in advance and have a great day.
Puvi Ganeshar | @pg925u
Principal, Platform Engineer
CICD - Pipeline Express | Toronto
7 months, 1 week