#ArcA380

I've done this with the #AMD 5600G's iGPU; now I'm testing #Jellyfin hardware transcoding with my #Intel #ArcA380 GPU. One question though: is it normal for CPU usage to be rather high, at least in the early parts of streaming (while transcoding), even with hardware transcoding?

I'm just trying to figure out whether it is actually hardware transcoding. I'm assuming it is, because the admin dashboard shows it transcoding to AV1, and I'm sure my Ryzen 7 1700 couldn't handle that, especially considering I'm testing with 4 streams playing concurrently. Still, CPU usage is rather high for the first minute or two (or more) after a stream starts, then settles down afterwards. The video playback is perfectly fine, no stuttering or anything like that.

My method of passthrough is the same as with the 5600G: a simple passthrough to the #LXC container, then to the #Docker container running Jellyfin. I don't think I noticed this high CPU usage when testing the 5600G, and the only minor configuration difference between the two is that I used #VAAPI with the 5600G and disabled #AV1 encoding (since I don't think it supports it), while on the Arc A380 GPU I'm using Intel's #QSV with AV1 encoding enabled.

Am I correct to assume that hardware transcoding is indeed working? Again, I'm quite certain my Ryzen 7 1700 would definitely NOT be able to handle this lol, especially since I only give the LXC container 2 cores.
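One way to sanity-check it, rather than inferring from CPU load: watch the GPU's engine utilisation while a transcode is running. A minimal sketch, assuming the Arc is exposed as /dev/dri/card0 inside the container (or on the host):

```
# intel-gpu-tools ships intel_gpu_top; activity on the Video engine while
# a stream transcodes means decode/encode is actually on the GPU. If it
# sits at 0% while Jellyfin's ffmpeg pegs the CPU, it's software fallback.
apt install intel-gpu-tools
intel_gpu_top -d drm:/dev/dri/card0
```

(Some CPU use during hardware transcoding is still expected for demuxing, audio, and subtitle handling, so a brief spike at stream start wouldn't rule it out.)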

OK, I've got to go now, but the plan once I return ~2 hrs later is to dig through my pendrives: use one to back up my current BIOS settings, load the latest stable BIOS version onto another, take pics of my current BIOS settings, upgrade to the latest BIOS version, and then lastly re-configure the BIOS as close as possible to my current settings.

Fingers crossed, my #Proxmox node will still be fine and, hopefully, the ReBAR option will show up in my BIOS, because right now on BIOS v4.6 (2020) it doesn't (even with CSM disabled and Above 4G Decoding enabled). My server hardware: #AMD Ryzen 7 1700, #ASRock B450M Pro4 #motherboard, and #Intel #ArcA380 #GPU.
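Once the update is done, the result should also be checkable from the #Linux side. A sketch, assuming the usual lspci output format (the PCI address is an example; find yours with `lspci | grep -i vga`):

```
# With ReBAR active, the A380's memory BAR should be reported at (or near)
# the full VRAM size (e.g. [size=8G]) instead of a small 256M window, and
# a "Physical Resizable BAR" capability should be listed.
lspci -vv -s 03:00.0 | grep -iE "resizable|size="
```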

I've installed my #Intel #ArcA380 GPU on my #Proxmox node and... I'm having a hard time configuring the #ASRock #BIOS because, for whatever reason, it's not outputting the full screen to my test monitor, so I can't really see the full list of options and menus I'd like to configure to enable ReBAR and so on. I've never experienced this before, so... what gives?

---

Edit: Just switched to another, more 'modern' monitor and it works fine.

I have finally caved in and dived into the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers and... I totally understand now why some people's Proxmox setups are made up exclusively of LXCs rather than VMs lol. They're just so pleasant to set up and use and, superficially at least, very efficient.

I now have a #Jellyfin and #ErsatzTV setup running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm probably gonna hold off on adding it until I decide which node to add it to and can schedule the shutdown, etc. In the future, I might even consider exploring (re)building a #Kubernetes (#RKE2) cluster on LXC nodes instead of VMs, and whether that's viable or perhaps even better.

Anyway, I've updated my #Homelab wiki with guides pertaining to LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, likewise, on unprivileged LXC containers.

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc
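For a taste of what the GPU part boils down to, here's a minimal sketch of a PVE 8.x-style device passthrough entry (container ID, device paths, and GIDs are examples; the GIDs should match the video/render groups inside your own container):

```
# /etc/pve/lxc/101.conf (excerpt): hand the host's DRI nodes to an
# unprivileged container, owned by the container-side groups that the
# service user belongs to (Debian's video=44, render=104 in my case).
dev0: /dev/dri/card0,gid=44
dev1: /dev/dri/renderD128,gid=104
```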

I... actually managed to do this, and while it was somewhat messy to get through, I did it. My 'stoppers' initially were simply needing to update some of #Jellyfin's XML configs for any wrong/old paths/values, and lastly the #SQLite DBs themselves, which had old paths as well. Most of these were easy to fix as they're text values, but some were (JSON) blobs; using the same extension on #VSCode, this wasn't that hard to do either: simply export the blob, edit the blob's JSON text value, and reimport the blob into the column. Oh, I also had to update the meta.json files of all the plugins I've installed to point to the new path of their logos.
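For the plain-text values, the fix is roughly a one-liner per table. A rough sketch from the shell (back up first; the table/column names are my assumption of Jellyfin's library.db schema, so verify against your own DB):

```
# Rewrite old path prefixes stored as plain text in the SQLite DB.
cp library.db library.db.bak
sqlite3 library.db "
  UPDATE TypedBaseItems
  SET Path = REPLACE(Path, '/old/media/root', '/new/media/root')
  WHERE Path LIKE '/old/media/root%';
"
# The (JSON) blob columns can't be edited this way, hence the
# export/edit/reimport dance in VSCode described above.
```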

Now my Jellyfin #LinuxServer.io container, sitting in an unprivileged (#Debian #Linux) #LXC container on #Proxmox, is set up with hardware transcoding using the #AMD Ryzen 5 5600G onboard iGPU (because I'm getting impatient waiting for my #Intel #ArcA380 to arrive). I'll update my #ErsatzTV container to do the same. Everything's perfect now, except I still wouldn't recommend users stream Jellyfin on the web or a web-based client using transcoding: while the transcoding itself is perfect, Jellyfin seems to have an issue (that I never got on #Plex) whereby subtitles desync pretty consistently when not direct playing, with external or embedded subs, regardless. Don't know if that can ever be fixed though, considering the issue has been open since 2023 with no fix whatsoever.
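The Docker half of that chain is simple once the LXC sees the iGPU. A minimal sketch (image tag and host paths are examples):

```
# LinuxServer.io Jellyfin with VAAPI/QSV: just hand the render node that
# the LXC received on to the Docker container.
docker run -d --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /opt/jellyfin/config:/config -v /mnt/media:/media \
  -p 8096:8096 \
  lscr.io/linuxserver/jellyfin:latest
```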

There's also a separate issue I'm having where Jellyfin does not seem to support discovering/serving media files that are contained in a symlinked directory (even though some people on their forums have reported in the past that it should). I reported it last week, but it's not going anywhere for now. Regardless, I'm absolutely loving Jellyfin despite some of its rough edges, and my users are loving it too. I think I can consider myself 'migrated' from Plex to Jellyfin, but I'll still keep Plex around as a backup for these 2 issues I've mentioned, for now.

🔗 https://github.com/jellyfin/jellyfin-web/issues/4346

🔗 https://github.com/jellyfin/jellyfin/issues/13858

RE: https://sakurajima.social/notes/a6j9bhrbtq


Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (which I haven't fully been able to test yet, except for the latter), all by doing some seemingly magic #Linux fu with user/group mappings and custom configs, if it turns out that you can actually achieve the same result just as easily, graphically, using a standard wizard on PVE.

It's 4am; I'll probably try to find time later during the day, or rather the evening (open house to attend at noon), to try the wizard and 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see if the root user + service user on the container can access it/use it for transcoding on #Jellyfin/#ErsatzTV, and 2) add SMB/CIFS storage on the Proxmox Datacenter, though my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?), and see if I can mount that storage to the LXC container that way.
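If the wizard route flops, my understanding is the CLI equivalent of 2) looks roughly like this (a sketch; storage name, credentials, IDs, and paths are all examples):

```
# Add the SMB/CIFS share as Datacenter-level storage, then bind-mount the
# resulting host path into the container.
pvesm add cifs nas-media --server 192.168.1.10 --share media \
  --username jellyfin --password 'changeme'
pct set 101 -mp0 /mnt/pve/nas-media,mp=/mnt/media
```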

#Homelab folks, feel free to give some tips or whatever if you've done this before!

I'm writing a guide on splitting a GPU passthrough across multiple #Proxmox #LXC containers, based on a few resources, including the amazing Jim's Garage video.

Does anyone know the answer to this question of mine, though: why might he have chosen to map a seemingly arbitrary GID of 107 on the LXC container to the Proxmox host's render group GID of 104, instead of mapping 104 -> 104 as he did with the video group, where he mapped 44 -> 44 (which seems to make sense to me)?

I've watched his video seemingly a million times, and referred to his incredibly simplified guide on his GitHub that's mostly only meant for copy-pasting purposes, and I couldn't quite understand why yet. I'm not sure if it really is arbitrary and 107 on the LXC container could be anything, including 104 if we wanted... or if it (i.e. 107) should've been the LXC container's actual render group GID, in which case it should've also been 104 instead of 107 on his Debian LXC container, as it is on mine.
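For anyone else puzzling over the same lines, the documented lxc.idmap syntax is `g <GID in container> <GID on host> <count>`, so the first number is whatever the group is inside the container. A sketch of the straight-through mapping I'd expect for a Debian container whose video/render groups are 44/104 (the GIDs are my container's values; check yours with `getent group`):

```
# Shift everything into the unprivileged range except video (44) and
# render (104), which map 1:1 onto the host's groups.
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431
# Also needs matching "root:44:1" and "root:104:1" entries in /etc/subgid.
```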

Anyway, super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my nodes' Ryzen 5 5600G iGPU, but I worry I'd screw something up, seeing that it's the only graphics onboard the node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141