#lxc

fraggle<p>While Docker brought containers to the mainstream, Linux developers had already been building and using container technologies for years. Tools like chroot date back to the early 80s, and LXC, combined with kernel features like namespaces and cgroups, provided a solid framework for process isolation. Docker’s genius was packaging it all up with a developer-friendly interface, but the underlying magic was always part of Linux’s DNA. Understanding this gives you a deeper appreciation for how robust and flexible the Linux kernel truly is.</p><p><a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/linux" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Linux</span></a> <a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/containers" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Containers</span></a> <a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a> <a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/devops" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DevOps</span></a> <a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/sysadmin" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SysAdmin</span></a> <a href="https://1.6.0.0.8.0.0.b.e.d.0.a.2.ip6.arpa/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenSource</span></a></p>
Mika<p>I have finally caved in and dove into the rabbit hole of <a href="https://sakurajima.social/tags/Linux" rel="nofollow noopener noreferrer" target="_blank">#Linux</a> Container (<a href="https://sakurajima.social/tags/LXC" rel="nofollow noopener noreferrer" target="_blank">#LXC</a>) on <a href="https://sakurajima.social/tags/Proxmox" rel="nofollow noopener noreferrer" target="_blank">#Proxmox</a><span> during my exploration on how to split a GPU across multiple servers and... I totally understand now seeing people's Proxmox setups that are made up exclusively of LXCs rather than VMs lol - it's just so pleasant to setup and use, and superficially at least, very efficient.<br><br>I now have a </span><a href="https://sakurajima.social/tags/Jellyfin" rel="nofollow noopener noreferrer" target="_blank">#Jellyfin</a> and <a href="https://sakurajima.social/tags/ErsatzTV" rel="nofollow noopener noreferrer" target="_blank">#ErsatzTV</a> setup running on LXCs with working iGPU passthrough of my server's <a href="https://sakurajima.social/tags/AMD" rel="nofollow noopener noreferrer" target="_blank">#AMD</a> Ryzen 5600G APU. My <a href="https://sakurajima.social/tags/Intel" rel="nofollow noopener noreferrer" target="_blank">#Intel</a> <a href="https://sakurajima.social/tags/ArcA380" rel="nofollow noopener noreferrer" target="_blank">#ArcA380</a> GPU has also arrived, but I'm prolly gonna hold off on adding that until I decide on which node should I add it to and schedule the shutdown, etc. In the future, I might even consider exploring (re)building a <a href="https://sakurajima.social/tags/Kubernetes" rel="nofollow noopener noreferrer" target="_blank">#Kubernetes</a>, <a href="https://sakurajima.social/tags/RKE2" rel="nofollow noopener noreferrer" target="_blank">#RKE2</a><span> cluster on LXC nodes instead of VMs - and if that's viable or perhaps better.<br><br>Anyway, I've updated my </span><a href="https://sakurajima.social/tags/Homelab" rel="nofollow noopener noreferrer" target="_blank">#Homelab</a> Wiki with guides pertaining LXCs, including creating one, passing through a GPU to multiple unprivileged LXCs, and adding an <a href="https://sakurajima.social/tags/SMB" rel="nofollow noopener noreferrer" target="_blank">#SMB</a><span> share for the entire cluster and mounting them, also, on unprivileged LXC containers.<br><br></span>🔗 <a href="https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc" rel="nofollow noopener noreferrer" target="_blank">https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc</a></p>
Mika<p>I... actually managed to do this and it was somewhat messy to get through with it, but I did it. My 'stoppers' initially were simply needing to update some of the <a href="https://sakurajima.social/tags/Jellyfin" rel="nofollow noopener noreferrer" target="_blank">#Jellyfin</a>'s <code>xml</code> configs for any wrong/old paths/values, and lastly, the <a href="https://sakurajima.social/tags/SQLite" rel="nofollow noopener noreferrer" target="_blank">#SQLite</a> DBs themselves which had old paths as well - most of which were easy to fix as they're <code>text</code> values, but some were (JSON) <code>blob</code>s, using the same extension on <a href="https://sakurajima.social/tags/VSCode" rel="nofollow noopener noreferrer" target="_blank">#VSCode</a><span>, this wasn't that hard to do either by simply exporting the blob, editing the blob's JSON text value, and reimporting the blob to the column.<br><br>Now my Jellyfin </span><a href="https://sakurajima.social/tags/LinuxServer" rel="nofollow noopener noreferrer" target="_blank">#LinuxServer</a>.io container sitting in an unprivileged (<a href="https://sakurajima.social/tags/Debian" rel="nofollow noopener noreferrer" target="_blank">#Debian</a> <a href="https://sakurajima.social/tags/Linux" rel="nofollow noopener noreferrer" target="_blank">#Linux</a>) <a href="https://sakurajima.social/tags/LXC" rel="nofollow noopener noreferrer" target="_blank">#LXC</a> container on <a href="https://sakurajima.social/tags/Proxmox" rel="nofollow noopener noreferrer" target="_blank">#Proxmox</a> is set up with hardware transcoding using the <a href="https://sakurajima.social/tags/AMD" rel="nofollow noopener noreferrer" target="_blank">#AMD</a> Ryzen 5 5600G onboard iGPU (cos I'm getting impatient in waiting for my <a href="https://sakurajima.social/tags/Intel" rel="nofollow noopener noreferrer" target="_blank">#Intel</a> <a href="https://sakurajima.social/tags/ArcA380" rel="nofollow noopener noreferrer" target="_blank">#ArcA380</a> to arrive). I'll update my <a href="https://sakurajima.social/tags/ErsatzTV" rel="nofollow noopener noreferrer" target="_blank">#ErsatzTV</a> container to do the same. Everything's perfect now, 'cept, I still wouldn't recommend users to stream Jellyfin on the web or a web-based client using transcoding, cos while the transcoding itself is perfect, Jellyfin seems to have an issue (that I never got on <a href="https://sakurajima.social/tags/Plex" rel="nofollow noopener noreferrer" target="_blank">#Plex</a><span>) whereby the subtitle would desync pretty consistently if not direct playing - with external or embedded subs, regardless. Dk if that can ever be fixed though, considering the issue has been up since 2023 with no fix whatsoever.<br><br>There's also a separate issue I'm having where Jellyfin does not seem to support discovering/serving media files that are contained in a symlink directory (even though there were some people on their forums reporting in the past that it should) - I've reported it last week, but it's not going anywhere for now. Regardless though, I'm absolutely loving Jellyfin despite some of its rough edges, and my users are loving it too. 
I think I've considered myself 'migrated' from Plex to Jellyfin, but I'll still keep Plex around as backup for these 2 cases/issues I've mentioned, for now.<br><br></span>🔗 <a href="https://github.com/jellyfin/jellyfin-web/issues/4346" rel="nofollow noopener noreferrer" target="_blank">https://github.com/jellyfin/jellyfin-web/issues/4346</a><span><br><br></span>🔗 <a href="https://github.com/jellyfin/jellyfin/issues/13858" rel="nofollow noopener noreferrer" target="_blank">https://github.com/jellyfin/jellyfin/issues/13858</a><span><br><br>RE: </span><a href="https://sakurajima.social/notes/a6j9bhrbtq" rel="nofollow noopener noreferrer" target="_blank">https://sakurajima.social/notes/a6j9bhrbtq</a></p>
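The text-value fixes described here can be done straight from the sqlite3 CLI; a hedged sketch (the table and column names are my recollection of Jellyfin's library.db - verify with .schema first, and blobs still need the export/edit/reimport dance described above):

```
cp library.db library.db.bak                  # always back up first
sqlite3 library.db ".schema TypedBaseItems"   # verify names before touching
sqlite3 library.db \
  "UPDATE TypedBaseItems
      SET Path = replace(Path, '/media/old', '/media/new')
    WHERE Path LIKE '/media/old%';"
```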
Mika<p>Bruh, I might've wasted my time learning how to passthrough a GPU to an <a href="https://sakurajima.social/tags/LXC" rel="nofollow noopener noreferrer" target="_blank">#LXC</a> container on <a href="https://sakurajima.social/tags/Proxmox" rel="nofollow noopener noreferrer" target="_blank">#Proxmox</a> (as well as mount a SMB/CIFS share) and write up a guide (haven't been able to test yet, cept with the latter) - all by doing some seemingly <i>magic</i> <a href="https://sakurajima.social/tags/Linux" rel="nofollow noopener noreferrer" target="_blank">#Linux</a> <i>fu</i><span> with some user/group mappings and custom configs, if it turns out that you could actually achieve the same result just as easily graphically using a standard wizard on PVE.<br><br>It's 4am, I'll prolly try to find time later during the day, or rather evening (open house to attend at noon), and try using the wizard to 1) Add a device passthrough on an LXC container for my </span><a href="https://sakurajima.social/tags/AMD" rel="nofollow noopener noreferrer" target="_blank">#AMD</a> iGPU (until my <a href="https://sakurajima.social/tags/Intel" rel="nofollow noopener noreferrer" target="_blank">#Intel</a> <a href="https://sakurajima.social/tags/ArcA380" rel="nofollow noopener noreferrer" target="_blank">#ArcA380</a> GPU arrives) and see if the root user + service user on the container could access it/use it for transcoding on <a href="https://sakurajima.social/tags/Jellyfin" rel="nofollow noopener noreferrer" target="_blank">#Jellyfin</a>/<a href="https://sakurajima.social/tags/ErsatzTV" rel="nofollow noopener noreferrer" target="_blank">#ErsatzTV</a>, and 2) Add a SMB/CIFS storage on the Proxmox Datacenter, tho my <a href="https://sakurajima.social/tags/NAS" rel="nofollow noopener noreferrer" target="_blank">#NAS</a><span> is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?) and see if I could mount that storage to the LXC container that way.<br><br></span><a href="https://sakurajima.social/tags/Homelab" rel="nofollow noopener noreferrer" target="_blank">#Homelab</a> folks who have done this, feel free to give some tips or wtv if you've done this before!</p>
ClaudioM<p>Was <a href="https://bsd.network/tags/listening" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>listening</span></a> earlier to the latest <span class="h-card"><a href="https://hackaday.social/@FLOSS_Weekly" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>FLOSS_Weekly</span></a></span> talking about <a href="https://bsd.network/tags/Incus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Incus</span></a> and <a href="https://bsd.network/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a> for <a href="https://bsd.network/tags/linux" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linux</span></a> containers, the latter being something I was interested in doing on my Fedora VM running in Bhyve. I've already installed Incus and will follow this tutorial to get familiar with it for now.</p><p><a href="https://linuxcontainers.org/incus/docs/main/" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">linuxcontainers.org/incus/docs</span><span class="invisible">/main/</span></a></p>
Mika<p>I'm writing a guide on splitting a GPU passthrough across multiple <a href="https://sakurajima.social/tags/Proxmox" rel="nofollow noopener noreferrer" target="_blank">#Proxmox</a> <a href="https://sakurajima.social/tags/LXC" rel="nofollow noopener noreferrer" target="_blank">#LXC</a><span> containers based on a few resources, including the amazing Jim's Garage video.<br><br>Does anyone know the answer to this question of mine though, on why he might've chosen to map a seemingly arbitrary GID </span><code>107</code> on the LXC Container to the Proxmox host's <code>render</code> group GID of <code>104</code> - instead of mapping <code>104 -&gt; 104</code>, as he did with the <code>video</code> group, where he mapped <code>44 -&gt; 44</code><span> (which seems to make sense to me)?<br><br>I've watched his video seemingly a million times, and referred to his incredibly simplified guide on his GitHub that's mostly only meant for copy-pasting purposes, and I couldn't quite understand why yet - I'm not sure if it really is arbitrary and </span><code>107</code> on the LXC Container could be anything, including <code>104</code> if we wanted to... or if it (i.e. <code>107</code>) should've been the LXC Container's actual <code>render</code> group GID, in which case then it should've also been <code>104</code> instead of <code>107</code><span> on his Debian LXC Container as it is on mine.<br><br>Anyway, super excited to test this out once my </span><a href="https://sakurajima.social/tags/Intel" rel="nofollow noopener noreferrer" target="_blank">#Intel</a> <a href="https://sakurajima.social/tags/ArcA380" rel="nofollow noopener noreferrer" target="_blank">#ArcA380</a><span> arrives. I could probably already test it by passing through one of my node's Ryzen 5 5600G iGPU, but I worry if I'd screw something up, seeing that it's the only graphics onboard the node.<br><br></span>🔗 <a href="https://github.com/JamesTurland/JimsGarage/issues/141" rel="nofollow noopener noreferrer" target="_blank">https://github.com/JamesTurland/JimsGarage/issues/141</a></p>
Mika<p>I love <a href="https://sakurajima.social/tags/Podman" rel="nofollow noopener noreferrer" target="_blank">#Podman</a>, but gosh is it needlessly complicated (to setup, correctly) compared to <a href="https://sakurajima.social/tags/Docker" rel="nofollow noopener noreferrer" target="_blank">#Docker</a><span>. I'll continue using it over Docker on my systems, but recommending/advocating to people's sake (when it comes to containerisation), maybe I'll stick with Docker.<br><br>If you're just setting it up on your personal machine, it's easy - some aspects may even be simpler than Docker - but the moment you start getting into things like getting it to work on a </span><a href="https://sakurajima.social/tags/Proxmox" rel="nofollow noopener noreferrer" target="_blank">#Proxmox</a> <a href="https://sakurajima.social/tags/LXC" rel="nofollow noopener noreferrer" target="_blank">#LXC</a> container... it gets messy real fast.</p>
r1w1s1

Clear explanation of how FreeBSD jails compare to containers.

I have used #docker and #lxc in the past.
For my #slackbuilds I'm still using #chroot - works great.
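A rough sketch of what a chroot build environment like that involves (paths and script names illustrative, not r1w1s1's actual setup):

```
# Given a minimal Slackware root at /srv/sbchroot, bind the
# pseudo-filesystems and run the SlackBuild inside it:
mount --bind /proc /srv/sbchroot/proc
mount --bind /dev  /srv/sbchroot/dev
chroot /srv/sbchroot /bin/bash -c "cd /tmp/myapp && ./myapp.SlackBuild"
```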
Stéphane Graber<p>The latest round of LTS updates for <a href="https://hachyderm.io/tags/Incus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Incus</span></a>, <a href="https://hachyderm.io/tags/LXC" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a> and <a href="https://hachyderm.io/tags/LXCFS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXCFS</span></a> is now out! <br><a href="https://stgraber.org/2025/04/04/lxc-lxcfs-incus-6-0-4-lts-release/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">stgraber.org/2025/04/04/lxc-lx</span><span class="invisible">cfs-incus-6-0-4-lts-release/</span></a></p>
stfn :raspberrypi: :python:<p>Hey networking/LXC specialists.</p><p>I have NextCloudPi running as an LXC container.</p><p>To access it, I set up routing on my Mikrotik router (screenshot).</p><p>The problem is that accessing NCP this way is very slow, I need to wait 5-10 seconds for the page to load.</p><p>I have Tailscale installed in the container, and accessing NCP using the Tailscale host name is nearly instantaneous.</p><p><a href="https://fosstodon.org/tags/Nextcloud" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Nextcloud</span></a> <a href="https://fosstodon.org/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a> <a href="https://fosstodon.org/tags/networking" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>networking</span></a> <a href="https://fosstodon.org/tags/linux" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linux</span></a></p>
gyptazy<p>No April Fools' joke - the new <a href="https://mastodon.gyptazy.com/tags/ProxLB" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ProxLB</span></a> release is scheduled for the 1st of April! Stay tuned!</p><p>ProxLB is an advanced <a href="https://mastodon.gyptazy.com/tags/loadbalancer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>loadbalancer</span></a> for <a href="https://mastodon.gyptazy.com/tags/Proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Proxmox</span></a> clusters that brings in features like <a href="https://mastodon.gyptazy.com/tags/DRS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DRS</span></a> (known from <a href="https://mastodon.gyptazy.com/tags/VMware" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VMware</span></a>), <a href="https://mastodon.gyptazy.com/tags/maintenance" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>maintenance</span></a> mode and (anti-)#affinity groups.</p><p><a href="https://mastodon.gyptazy.com/tags/homelab" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>homelab</span></a> <a href="https://mastodon.gyptazy.com/tags/virtualization" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>virtualization</span></a> <a href="https://mastodon.gyptazy.com/tags/VM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VM</span></a> <a href="https://mastodon.gyptazy.com/tags/virtualmachine" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>virtualmachine</span></a> <a href="https://mastodon.gyptazy.com/tags/ProxmoxVE" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ProxmoxVE</span></a> <a href="https://mastodon.gyptazy.com/tags/Prox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Prox</span></a> <a href="https://mastodon.gyptazy.com/tags/xen" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>xen</span></a> <a href="https://mastodon.gyptazy.com/tags/alternatives" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>alternatives</span></a> <a href="https://mastodon.gyptazy.com/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a> <a href="https://mastodon.gyptazy.com/tags/coding" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>coding</span></a> <a href="https://mastodon.gyptazy.com/tags/projects" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>projects</span></a> <a href="https://mastodon.gyptazy.com/tags/KVM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KVM</span></a> <a href="https://mastodon.gyptazy.com/tags/qemu" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>qemu</span></a> <a href="https://mastodon.gyptazy.com/tags/guests" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>guests</span></a> <a href="https://mastodon.gyptazy.com/tags/workloads" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>workloads</span></a> <a href="https://mastodon.gyptazy.com/tags/LXC" class="mention hashtag" rel="nofollow noopener 
noreferrer" target="_blank">#<span>LXC</span></a> <a href="https://mastodon.gyptazy.com/tags/container" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>container</span></a></p>
r1w1s1 :slackware:<p><span class="h-card" translate="no"><a href="https://osna.social/@razze" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>razze</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.online/@vwbusguy" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>vwbusguy</span></a></span> nice! I really thinking to add selinux support to <a href="https://fosstodon.org/tags/slackware" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>slackware</span></a> :) for use only with <a href="https://fosstodon.org/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a></p>
Alan Jeskins-Powell<p><a href="https://mastodon.social/tags/selfhosting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>selfhosting</span></a> <br>Maybe of limited interest -<br>A ZFS ZVol is presented as a block device.<br>On ZFS storage, <a href="https://mastodon.social/tags/Incus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Incus</span></a> and <a href="https://mastodon.social/tags/Lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Lxc</span></a> use ZVol's for VM storage.<br>When creating a ZVol for a VM Incus/Lxc will typically use udev to determine the Zvol ID.<br>Alpine Linux does not include udev by default which will cause Incus/Lxc VM creation to fail, albeit with different error messages.<br>Solution is simple. When using ZFS on Alpine make sure to "apk add zfs-udev"</p>
jbz<p>📦 Incus 6.10 Container &amp; Virtual Machine Manager Released <br>—<span class="h-card" translate="no"><a href="https://mastodon.social/@linuxiac" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>linuxiac</span></a></span> </p><p><a href="https://linuxiac.com/incus-6-10-container-and-virtual-machine-manager-released/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">linuxiac.com/incus-6-10-contai</span><span class="invisible">ner-and-virtual-machine-manager-released/</span></a></p><p><a href="https://indieweb.social/tags/incus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>incus</span></a> <a href="https://indieweb.social/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a> <a href="https://indieweb.social/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a></p>
stfn :raspberrypi: :python:<p>So I have this idea to move at least some of my self-hosted stuff from Docker to LXC.</p><p>Correct my if I'm wrong dear Fedisians, but I feel that LXC is better than Docker for services that are long-lasting and keeping an internal state, like Jellyfin or Forgejo or Immich? Docker containers are ephemeral in nature, whereas LXC containers are, from what I understand, somewhere between Docker containers and VMs.</p><p><a href="https://fosstodon.org/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a> <a href="https://fosstodon.org/tags/lxd" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxd</span></a></p>
stfn :raspberrypi: :python:<p>OK, for many IT people this will be obvious, but it blew my mind just now.</p><p>I have NextCloud running in an LXC container in my NAS, and I was looking for an easy way to access it over the LAN.</p><p>I usually access it over Tailscale but it feels silly to access a local service over a VPN.</p><p>And thanks to this video I learnt that I can add a route in my Mikrotik router config to just route the LXC network through my NAS as a gateway. And it works!</p><p><a href="https://fosstodon.org/tags/mikrotik" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>mikrotik</span></a> <a href="https://fosstodon.org/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a> <a href="https://fosstodon.org/tags/nextcloud" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>nextcloud</span></a> </p><p><a href="https://www.youtube.com/watch?v=TmGvbXfwJEA" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">youtube.com/watch?v=TmGvbXfwJE</span><span class="invisible">A</span></a></p>
Worteks<p>🧐 Partir de <a href="https://mastodon.social/tags/VMWare" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VMWare</span></a>, pourquoi ce n'est pas si simple ?</p><p>Vous cherchez une alternative <a href="https://mastodon.social/tags/OpenSource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenSource</span></a> pour remplacer VMWare ? Thibaut Démaret, CTO de Worteks vous a préparé un guide.<br>De <a href="https://mastodon.social/tags/LXC" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a>/LXD à <a href="https://mastodon.social/tags/Kubernetes" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Kubernetes</span></a> en passant par <a href="https://mastodon.social/tags/Proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Proxmox</span></a> et <a href="https://mastodon.social/tags/OpenStack" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenStack</span></a>, il y dresse un comparatif pour vous mettre à jour sur l'état du marché.</p><p>⏩ Découvrez son analyse et ses conseils pour bien choisir la solution de virtualisation qui correspond à vos besoins : <a href="https://www.worteks.com/blog/2025-01-08-alternative-a-vmware/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">worteks.com/blog/2025-01-08-al</span><span class="invisible">ternative-a-vmware/</span></a></p><p><span class="h-card" translate="no"><a href="https://fosstodon.org/@ow2" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ow2</span></a></span> <span class="h-card" translate="no"><a href="https://social.openinfra.dev/@OpenInfra" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>OpenInfra</span></a></span> <br><span class="h-card" translate="no"><a href="https://mastodon.social/@fsfe" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>fsfe</span></a></span></p>
gyptazy<p>ProxLB 1.0.7 (an opensource DRS alike solution for <a href="https://mastodon.gyptazy.com/tags/Proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Proxmox</span></a> clusters) is just around the corner!</p><p>1.0.7 will be the last version before I'm going to publish the new refactored code base in a modern and object oriented way. Version 1.1.0 squashes some more bugs that were postponed on the current code base and makes the overall future handling much easier (including new features).</p><p>Website: <a href="https://proxlb.de" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">proxlb.de</span><span class="invisible"></span></a><br>GitHub: <a href="https://lnkd.in/eEZWEU7s" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/eEZWEU7s</span><span class="invisible"></span></a><br>Blog post: <a href="https://lnkd.in/e5_b6u-A" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/e5_b6u-A</span><span class="invisible"></span></a><br>Tags: <a href="https://mastodon.gyptazy.com/tags/ProxmoxVE" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ProxmoxVE</span></a> <a href="https://mastodon.gyptazy.com/tags/DRS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DRS</span></a> <a href="https://mastodon.gyptazy.com/tags/Loadbalancer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Loadbalancer</span></a> <a href="https://mastodon.gyptazy.com/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a> <a href="https://mastodon.gyptazy.com/tags/virtualization" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>virtualization</span></a> <a href="https://mastodon.gyptazy.com/tags/gyptazy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gyptazy</span></a> <a href="https://mastodon.gyptazy.com/tags/Proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Proxmox</span></a> <a href="https://mastodon.gyptazy.com/tags/ProxLB" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ProxLB</span></a> <a href="https://mastodon.gyptazy.com/tags/homelab" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>homelab</span></a> <a href="https://mastodon.gyptazy.com/tags/enterprise" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>enterprise</span></a> <a href="https://mastodon.gyptazy.com/tags/balancer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>balancer</span></a> <a href="https://mastodon.gyptazy.com/tags/balancing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>balancing</span></a> <a href="https://mastodon.gyptazy.com/tags/virtualmachines" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>virtualmachines</span></a> <a href="https://mastodon.gyptazy.com/tags/VM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VM</span></a> <a href="https://mastodon.gyptazy.com/tags/VMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VMs</span></a> <a 
href="https://mastodon.gyptazy.com/tags/VMware" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VMware</span></a> <a href="https://mastodon.gyptazy.com/tags/LXC" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a> <a href="https://mastodon.gyptazy.com/tags/container" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>container</span></a> <a href="https://mastodon.gyptazy.com/tags/cluster" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cluster</span></a></p>
DecaTec<p>Heute Update auf <a href="https://mastodon.social/tags/Pihole" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Pihole</span></a> 6 durchgeführt. Lief bis auf ein paar Kleinigkeiten recht problemlos:<br>- Auf einer Instanz wurde das Passwort zurückgesetzt<br>- Eine Instanz hat beim Update den Upstream-DNS "vergessen" (<a href="https://mastodon.social/tags/dnscryptproxy" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>dnscryptproxy</span></a>)<br>- Nach jedem Neustart die Meldung "Failed to adjust time during NTP sync...". Liegt aber daran, dass beide Instanzen auf einem unprivilegierten <a href="https://mastodon.social/tags/Proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Proxmox</span></a> <a href="https://mastodon.social/tags/LXC" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a> laufen. Container nun auf privilegiert umgestellt, Problem weg</p><p>Mal sehen, was die kommenden Tage noch so auffällt.</p>
Pope Bob the Unsane<p>After taking the nickle tour of <a href="https://kolektiva.social/tags/Qubes" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Qubes</span></a>, my hasty conclusion is that it is anti-<a href="https://kolektiva.social/tags/KISS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KISS</span></a>; there are seemingly many moving parts under the surface, and many scripts to grok to comprehend what is going on.</p><p>I plan to give it some more time, if only to unwrap how it launches programs in a VM and shares them with dom0's X server and audio and all that; perhaps it's easier than I think.</p><p>I also think <a href="https://kolektiva.social/tags/Xen" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Xen</span></a> is a bit overkill, as the claim is that it has a smaller kernel and therefore smaller attack surface than the seemingly superior alternative, <a href="https://kolektiva.social/tags/KVM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KVM</span></a>. Doing some rudimentary searching out of identified / known VM escapes, there seem to be many more that impact Xen than KVM, in the first place.</p><p>Sure, the <a href="https://kolektiva.social/tags/Linux" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Linux</span></a> kernel may be considerably larger than the Xen kernel, but it does not need to be (a lot can be trimmed from the Linux kernel if you want a more secure hypervisor), and the Linux kernel is arguably more heavily audited than the Xen kernel.</p><p>My primary concern is compartmentalization of 'the web', which is the single greatest threat to my system's security, and while <a href="https://kolektiva.social/tags/firejail" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>firejail</span></a> is a great soltion, I have run into issues maintaining my qutebrowser.local and firefox.local files tuned to work well, and it's not the simplest of solutions.</p><p>Qubes offers great solutions to the compartmentalization of data and so on, and for that, I really like it, but I think it's over-kill, even for people that desire and benefit from its potential security model, given what the threats are against modern workstations, regardless of threat actor -- most people (I HOPE) don't have numerous vulnerable services listening on random ports waiting to be compromised by a remote threat.</p><p>So I am working to refine my own security model, with the lessons I'm learning from Qubes.</p><p>Up to this point, my way of using a system is a bit different than most. I have 2 non-root users, neither has sudo access, so I do the criminal thing and use root directly in a virtual terminal.</p><p>One user is my admin user that has ssh keys to various other systems, and on those systems, that user has sudo access. My normal user has access to some hosts, but not all, and has no elevated privileges at all.</p><p>Both users occasionally need to use the web. When I first learned about javascript, years and years ago, it was a very benevolent tool. 
It could alter the web page a bit, and make popups and other "useful" things.</p><p>At some point, <a href="https://kolektiva.social/tags/javascript" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>javascript</span></a> became a beast, a monster, something that was capable of scooping up your password database, your ssh keys, and probe your local networks with port scans.</p><p>In the name of convenience.</p><p>As a result, we have to take browser security more seriously, if we want to avoid compromise.</p><p>The path I'm exploring at the moment is to run a VM or two as a normal user, using KVM, and then using SSH X forwarding to run firefox from the VM which I can more easily firewall, and ensures if someone escapes my browser or abuses JS in a new and unique way, that no credentials are accessible, unless they are also capable of breaking out of the VM.</p><p>What else might I want to consider? I 'like' the concept of dom0 having zero network access, but I don't really see the threat actor that is stopping. Sure, if someone breaks from my VM, they can then call out to the internet, get a reverse shell, download some payloads or build tools, etc.</p><p>But if someone breaks out of a Qubes VM, they can basically do the same thing, right? Because they theoretically 'own' the hypervisor, and can restore network access to dom0 trivially, or otherwise get data onto it. Or am I mistaken?</p><p>Also, what would the <a href="https://kolektiva.social/tags/LXC" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXC</span></a> / <a href="https://kolektiva.social/tags/LXD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LXD</span></a> approach look like for something like this? What's its security record like, and would it provide an equivalent challenge to someone breaking out of a web browser (or other program I might use but am not thinking of at the moment)?</p>
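A hedged sketch of the VM-plus-X-forwarding idea described above (hostnames and user names illustrative):

```
# As the unprivileged user, start a session-scoped KVM guest:
virsh --connect qemu:///session start web-vm

# Run the browser in the guest, displayed locally over X11 forwarding.
# Plain -X applies the X SECURITY extension restrictions; -Y would
# trust the client fully, which defeats some of the point here.
ssh -X browser@web-vm firefox
```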