social.coop is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Fediverse instance for people interested in cooperative and collective projects. If you are interested in joining our community, please apply at https://join.social.coop/registration-form.html.


#k3s


@arichtman Yeah, using overlay networking at scale does work; it just requires a lot of planning and design, ideally with some network engineers involved. My last experience was with Contrail, years ago, and it was not great.

I’ve used flannel and cilium for one-off personal things without issue. In fact, #metallb worked really well for me with #k3s in the past as well.
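For anyone wanting to try the MetalLB-on-k3s combination, a minimal sketch looks like the following. The release tag, pool name, and address range are illustrative assumptions, not values from the post; the range must be unused addresses on your own LAN, and k3s needs its built-in ServiceLB disabled so MetalLB can own LoadBalancer services:

```shell
# Install k3s with the bundled ServiceLB (Klipper) disabled so that
# MetalLB can handle Services of type LoadBalancer instead.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb" sh -

# Install MetalLB; pin to the current release tag (v0.14.8 here is
# illustrative, check the project for the latest).
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

# Advertise a pool of LAN addresses in L2 mode. The pool name and the
# address range are hypothetical; use free addresses on your network.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
EOF
```

After that, any Service of type LoadBalancer gets an address from the pool and is reachable on the LAN without any cloud provider integration.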

What I did in the last few weeks (part 2):

All of my k3s clusters (and I have a few of them for $REASONS) are now running either #fluxCD or #argoCD. So everything #gitops now.

In general, I like how lightweight fluxCD is, not having to run Redis and whatnot. But having a GUI is sometimes nice, even though the flux CLI is really easy to use and very intuitive.
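For reference, getting Flux onto a cluster like this is a couple of CLI calls. This is a hedged sketch, not the poster's setup: the repo URL, branch, and path below are placeholders for your own Git remote:

```shell
# Bootstrap Flux into the current kubectl context and commit its own
# manifests back to the repo (URL, branch, and path are placeholders).
flux bootstrap git \
  --url=ssh://git@example.com/homelab/fleet.git \
  --branch=main \
  --path=clusters/prod

# Afterwards, check what Flux is reconciling.
flux get kustomizations
```

Everything committed under that path in the repo is then continuously applied to the cluster, which is what makes the Renovate merge requests mentioned below so convenient.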

And of course, #renovatebot is watching all repositories and sending merge requests to update things in the clusters. Nice!

OK, I think I need to do the whole #k3s shared storage thing again. As of last night I have both NFS and SMB going, but neither works for #sqlite DBs, of course. I get corruption on NFS and immediate errors on SMB (can’t get exclusive locks). Thinking of trying #longhorn for the shared storage, with off-cluster backup.
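If it helps anyone weighing the same option, a minimal Longhorn sketch looks like this. The release tag, PVC name, and size are illustrative assumptions; the point is that Longhorn hands the pod a replicated block volume with a real local filesystem on it, so SQLite's POSIX file locking works, unlike on NFS/SMB shares:

```shell
# Install Longhorn; pin to the current release (v1.7.2 here is
# illustrative, check the project for the latest tag).
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn.yaml

# Claim a volume from the default "longhorn" StorageClass for an app
# that needs a proper filesystem. Names and size are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homeassistant-config
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
EOF
```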

My primary use case is apps like #homeassistant though, so I’m not sure if I’m overthinking this. I suppose I could configure HA to use a DB like MariaDB, but I have some apps that don’t support this at all to my knowledge, so I’d need something with proper file locking support.

Has anyone gone down this road? Am I framing this right: Longhorn for most app storage needs, SMB/NFS for larger stores like photos and media?
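For the Home Assistant half of that trade-off, pointing the recorder at MariaDB is a one-stanza change in `configuration.yaml`. The service hostname, credentials, and database name below are placeholders, and depending on the install method the MySQL Python driver may need to be available:

```yaml
# configuration.yaml (hypothetical in-cluster DNS name and credentials)
recorder:
  db_url: mysql://hass:CHANGEME@mariadb.default.svc.cluster.local/homeassistant?charset=utf8mb4
```

That only moves the recorder history off SQLite, though; apps with no database option at all still need a volume with working file locking, which is where Longhorn comes in.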

Crisis averted: Don’t upgrade your #k3s/#k8s cluster remotely when your only tunnel exists on said cluster 😅

Decided I needed to upgrade to the latest release (1.26 to 1.32) remotely while connected over #tailscale. Aside from the dropped connection, the upgrade completed without issue once I got home, and it’s all up and running happily. Was wondering if I was about to experience the “why you don’t use Kubernetes at home” lesson, but it seems k3s was the right decision after all.
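For anyone else doing this, an in-place k3s upgrade is just the install script re-run with a pinned version or channel. The version string below is illustrative, and upstream Kubernetes skew policy recommends stepping one minor version at a time rather than jumping six, even though the direct jump worked out here:

```shell
# Re-running the installer over an existing install upgrades it in
# place; pin a specific version (illustrative value shown) ...
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.32.4+k3s1" sh -

# ... or track the stable channel instead of a fixed version.
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -

# Confirm the node came back on the new version.
kubectl get nodes -o wide
```

And the lesson from the post stands: if your only remote tunnel runs as a workload on the cluster being upgraded, do this from the same LAN.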