Let’s tinker around and accidentally break something.
and debug it until you have to reinstall your entire stack from scratch
When’s the last time you checked if your backup solution works?
Yesterday! Switched my media server from freebsd to alpine and got the arr stack all set up using the backup zip files
logging is probably down
You do, of course, have a dedicated rsyslogd server? An isolated system to which logs are sent, so that if someone compromises your other systems, they can’t wipe traces of that compromise from those systems?
Oh. You don’t. Well, that’s okay. Not every lab can be complete. That Raspberry Pi over there in the corner isn’t actually doing anything, but it’s probably happy where it is. You know, being off, not doing anything.
All of your systems are set up, but are they capable of being redeployed using a configuration management software package? Ansible or something like that?
Oh. They’re not. Well, that’s probably okay. I mean, you could probably go manually reproduce configurations, more or less.
You have an intrusion detection system set up, right? A server watching your network’s traffic, looking for signs that systems on your network have been compromised, and to warn you?
Oh. You don’t. Well, that’s probably okay. I mean, probably nothing on your network has been compromised. And probably nothing in the future will be.
Barring any hardware issues or external factors, will it run for 10,000 years? Any logs not properly rotated? Any other outputs accumulating and eventually filling up a filesystem?
Buy a UPS and set up a NUT server on the spare Raspberry Pi you have lying around.
All of those systems in your homelab…they aren’t all pulling down their updates multiple times over your network link, right? You’re making use of a network-wide cache? For Debian-family systems, something like Apt-Cacher NG?
Oh. You’re not. Well, that’s probably okay. I mean, not everyone can have their environment optimized to minimize network traffic.
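For what it’s worth, the client side of Apt-Cacher NG is a one-file change; a sketch (the hostname is a placeholder, 3142 is Apt-Cacher NG’s default port):

```
# /etc/apt/apt.conf.d/01proxy on each Debian-family client
Acquire::http::Proxy "http://apt-cache.lan:3142";
```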
You have squid or some other forward http proxy set up to share a cache among all the devices on your network set up to access the Web, to minimize duplicate traffic?
And you have a shared caching DNS server set up locally, something like BIND?
Oh. You don’t. Well, that’s probably okay. I mean, it probably doesn’t matter that your devices are pulling duplicate copies of data down. Not everyone can have a network that minimizes latency and avoids inefficiency across devices.
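If anyone does want the caching DNS piece, a minimal caching forwarder in BIND is only a few lines of named.conf.options (the LAN range and the upstream resolver addresses are placeholders, swap in your own):

```
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { 192.168.0.0/16; localhost; };  // your LAN only
    forwarders { 9.9.9.9; 1.1.1.1; };            // upstream resolvers
};
```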
Then it turns out your monitoring system failed and FUCK IT’S BEEN A MONTH SINCE THE LAST PROPER BACKUP
Do your backups work?
You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
Ah. The approach that squirrel@piefed.zip suggested. ;)
Thanks for the tutorial though.
You should use Arch, then you can update every 15 minutes 🤭
Does a $12 Shelly plug count?
If you do have the smart PSU and power management server you probably also went down the rabbit hole of scripting the power cycling, right? Maybe made that server hardened against power loss disk corruption so it can be run until UPS battery exhaustion.
What if there is a power outage and NUT shuts everything down? Would be nice to have everything brought back up in an orderly way when power returns. Without manual intervention. But keeping you informed via logging and push notifications.
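For the “keeping you informed” part, upsmon can run a NOTIFYCMD handler on power events. A minimal sketch, with the ntfy topic being a made-up placeholder and the push line left commented out:

```shell
#!/bin/sh
# Hypothetical NOTIFYCMD handler for NUT's upsmon; upsmon passes the
# event text as $1. The ntfy topic name "homelab-power" is an assumption.
notify_event() {
    msg="UPS event: ${1:-unknown} at $(date -u +%FT%TZ)"
    logger -t upsmon-notify "$msg" 2>/dev/null || true   # local log trail
    # curl -s -d "$msg" https://ntfy.sh/homelab-power    # push notification
    echo "$msg"
}
notify_event "$@"
```

Point `NOTIFYCMD` at it in upsmon.conf and set the relevant `NOTIFYFLAG` lines to `EXEC` so events actually invoke it.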
Backup? Psh… That’s what the lab is for.
I haven’t messed much with my servers in 2 years. I think that means I’ll hit my ROI in another 5 :)
Have you tried introducing unnecessary complexity?
GET OUT OF MY HOUSE!
You have all your devices attached to a console server with a serial port console set up on the serial port, and if they support accessing the BIOS via a serial console, that enabled so that you can access that remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB to multiport serial adapter or something like that, right? So that if networking fails on one of those other devices, you can fire up minicom or similar on the serial console server and get into the device and fix whatever’s broken?
Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
if you can cycle your home assistant with the shelly plug whilst your home assistant is down, yes. from experience it’s really quite annoying to have a smart plug switch off HA…
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| HA | Home Assistant automation software |
| ~ | High Availability |
| PSU | Power Supply Unit |
[Thread #161 for this comm, first seen 13th Mar 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]
But if my backups actually work, then I miss out on the joy of rebuilding everything from scratch and explaining to my wife why none of the lights in the house work anymore.
I set this up years ago, but then decided it was better to just install different distros on each of my computers. Problem solved?
I had an automatic reboot of all VMs and the hypervisor because of a kernel update at night. Nextcloud decided to start in maintenance mode and Jellyfin refused to start because the cache folder didn’t have enough space left. Authentik also complained about outdated provider configuration…
Need to investigate the Nextcloud and Authentik issues over the weekend 🤗
I haven’t messed with my raspberry pi in maybe a month… And I think one of my backups got corrupted, because I’ve been receiving an email every night saying that it failed, along with tons of errors. Hmm, maybe I should get to that soon…
You have a spinning fish display in front of your homelab server, right? We all know the spinning fish improves performance and security; it is an indispensable part of homelabbing
Going into spring/summer that’s ideal, I wanna go places do things. Mid winter, I’m feature creeping till something breaks.
Are you implying it’s possible to debug without having to reinstall from scratch? Preposterous! 😂
HA is on the same proxmox host as the router. So yeah I can end up locked out. Hasn’t happened yet tho! The relay is on my test machine, it’s always nvidia that crashes there.
If you know how your setup works, then that’s a great time for another project that breaks everything.
Honestly, that would be living the dream… I have too many other things I want to do!
The Shelly can be configured to automatically turn back on after a certain amount of time. It has local scripting capabilities.
If they did that… I don’t know.
J O E L
Gotta be honest, my home lab chugs along quite happily.
Atomic Fedora makes it hard to break, and then all the services are containerized and managed by configuration and justfiles only.
When there’s an update to a service: just pull service. Firewall needs configuring: just firewall-reset && just firewall-enable.
The only flaky thing is a VPN that I run through gluetun, and I’m thinking of dumping that provider.
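Out of curiosity about those `just` recipes, here’s a sketch of what such a justfile might look like. The recipe bodies here are my guesses, not the poster’s actual setup:

```just
# Hypothetical justfile; recipe bodies are assumptions
pull service:
    podman pull {{service}}
    systemctl --user restart {{service}}.service

firewall-reset:
    sudo firewall-cmd --reload

firewall-enable:
    sudo systemctl enable --now firewalld
```

Then `just pull jellyfin` or `just firewall-reset && just firewall-enable` reads almost like the comment describes.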
Man I always get sad when I see this meme format because the story behind it is so fucking tragic… :(
You can run Forgejo with its container registry enabled; I don’t know if there’s a way to use that as a proxy for downloading containers though.
If it’s stable, it’s not a lab.
That’s infrastructure.
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
*furiously adds a new item to the TODO list*
Yeah, my home server was being a little too stable and I wasn’t really learning anything. So I switched from fedora to proxmox, now I’ve got a nixos vm I’m going to try to get all my services running in.
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
You need monitoring
Infrastructure diagram? No! In this homelab we refer to the infrastructure hyperdodecahedron.
That won’t work in most cases; HTTPS traffic isn’t cached unless you MITM it, which is a bad idea and not worth it.
Only caching updates is worth it, and most package managers have a caching server option.
If logging is down and there’s no one around to log it, is it really down?
What story?
I did not know there was a story, assumed it was from that TV series about the cartel guy
Kubernetes?
Haha too right mate
I’m remembering a very not fun discussion my team had about “the monitoring system not sending any alerts doesn’t inherently mean everything is ok” after an outage that was missed by our monitoring system.
You need to make sure you’re monitoring connectivity as well as specific problem states. No data is a problem state often overlooked, and it’s not always considered for every resource type in these systems out of the box.
And you probably want a heartbeat notification. Yes, it’s noise, but if you don’t see anything from monitoring you need to question if monitoring is the thing that broke. It sending out a notification every so often going “yes I am online” is useful.
Heartbeat notifications, man. A “Yes I am online” email once a day or so. Yeah, it’s more emails to delete, but it can be a lifesaver.
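One cheap variant of that is a dead-man’s switch: cron pings a URL every so often and the far end alerts when the pings stop, so you don’t have to read the daily email at all. A crontab sketch (the URL is a placeholder for whatever healthcheck endpoint you use):

```
# ping a healthcheck endpoint every 15 minutes; the receiving side
# raises an alert if no ping arrives for, say, 30 minutes
*/15 * * * * curl -fsS --max-time 10 https://hc.example/ping/homelab >/dev/null
```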
I wish it was stable
I had a drive die yesterday
Have you already tried implementing an identity provider like Authentik, so you can add OIDC and ldap for all your services, while you are the only one that’s using them? 🤔
Behind a traefik reverse proxy with lets encrypt for ssl even though the services aren’t exposed to the internet?
No try migrating all your docker containers to podman.
One alert daily reporting that there are no alerts is probably good for a home lab….
Never run:
docker compose pull
docker compose down && docker compose up -d
Right before the end of your day. Ask me how I know 😂
That’s not a homelab, that’s a home server.
I test in my Homeproduction
Time to switch distros!
Don’t encourage me.
“Damn, I’ve got this Debian server shit down. I wonder how an opensuse server would work out” *installs tumbleweed*
True story
What’s a backup solution..? (I’m only being half sarcastic, I really need to set one up, but it’s not as “fun” as the rest of my homelab, send suggestions)
To be fair a lot of apps don’t handle custom CAs like they should. Looking at you Home Assistant! 😠
And then try turning on SELinux!
Just did that last weekend. Nothing to do anymore. 😢
Hey my wife uses some of them too!
Saturday morning: “Incus and podman seem interesting. I bet I could swap everything over while the family is out this afternoon”
Sunday evening: “Dad, when will the lights work again?”
Did you do Quadlets?
Tal just got the chaotic evil tag today.
Have you tested your backups recently? Having them complete is one thing, having the data you need for recovery is another. Have you backed up your vm configurations and build scripts?
Go test your latest backup!
“Dad, when will the lights work again?
As soon as selinux decides I have permission.
Yes of course. Had to spend a couple of hours fixing permission related issues.
Hmmm. My pi{VPN,hole,dhcp,HA} has a little bit of overhead left…
You can always configure your vim further
Who will log the loggers?
compose up will automatically recreate containers with newer images if new ones were pulled, so there is no need for compose down btw
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, and when I do I usually can find an adequate alternative. I think right now the only thing that runs as root is one of the talk or collabora containers in my nextcloud stack. No selinux issues either.
I’ve moved my homelab twice because it became stable, I really liked the services it was running, and I didn’t want to disturb the last lab**cough**prod server.
My current homelab will be moar containers. I’m sure I’ll push it to prod instead of changing the IP address and swapping name tags this time.
Don’t forget about Anubis and crowdsec to make it even safer inside your LAN
No mercy for you, then. ;)
Wazuh ftw
I at least have external backups for important family pics and docs! But yea the homelab itself is severely lacking. If it dies, I get to start from scratch. Been gambling for years that “I’ll get around to a backup solution” before it dies. I wouldn’t bet on me :|
Probably a good idea to switch over to WPA-Enterprise using Authentik’s RADIUS server support and let all of the users of your wireless access point log in with their own network credentials, while you’re at it.
But did you run them as rootful or the intended rootless way.
It seems like a good time to learn graphviz’s dot format for the network layout diagrams, with automated layout.
https://blog.ipspace.net/kb/NetAutJourney/40-Network-Diagrams/
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux and it works fine as long as I use :z/:Z with bind mounts.
A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux so I just turned it off. I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
Time to start documenting it!
NEVER1!!!11!!
Guess this is a good time to test my infrastructure automation.
are you sure it really bricked it? when turning it on, on next boot it needs to go over all the files and retag them or something like that, and it can take a significant amount of time
but you probably won’t notice that some of the regular emails are not sent anymore
I set my homelab up on Bazzite immutable with podman and SELinux. It took a while to work everything out and have it boot up into a valid state hahaha
Don’t look too closely you can jinx it.
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but I didn’t feel the time spent learning it was beneficial; I’ll probably revisit it while learning kubernetes), so I went back to podman with docker compose.
Nothing to install? Not with that attitude!
Start a 10” rack.
Restore is future me’s problem. Fuck that guy :D
Couple it to your smart watch, backup every 10 seconds, and make it vibrate when successful
Ah, that frisson of excitement when you come to restore! Will it work? Does it contain that very important file? Is it up to date? How much will future you hate past you if it isn’t there?
Can’t believe nobody here has mentioned NixOS so far. How about moving all of your configs into a flake and managing all of your systems with it?
Started running unmanic on my plex library to save hard drive space since apparently the powers that be don’t want us to even own hard drives anymore. So far it’s going great, it’ll probably take weeks since I don’t have a gpu hooked up to it
heck i really wish we could all throw a party together. part swap, stories swap. show off cool shit for everyone to copy.
help each other fill in the missing pieces
y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
you are just teaching yourself to ignore that your smartwatch vibrates. It’s a bit like breathing and blinking: you are so used to it that you can completely forget it’s happening. If your smartwatch, or phone, or whatever, starts vibrating all the time, you will get used to it and won’t notice when it stops, and it will also hide any actually meaningful notification.
I just installed Debian on a decommissioned Chromebox for exactly this purpose + 4x usb-to-serial adapters.
I made a git repo and started putting all of my dot files in a Stow and then I forgot why I was doing it in the first place.
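In case it helps anyone pick it back up: all Stow does is symlink a package directory’s contents into the parent directory. The effect can be demonstrated by hand in a throwaway directory (the paths below are just the conventional layout):

```shell
# What `stow vim` (run from ~/dotfiles) effectively does, shown with ln -s
# in a temporary fake home directory; stow automates this symlinking.
home=$(mktemp -d)
mkdir -p "$home/dotfiles/vim"
echo 'set number' > "$home/dotfiles/vim/.vimrc"
ln -s "$home/dotfiles/vim/.vimrc" "$home/.vimrc"   # stow vim would create this
readlink "$home/.vimrc"                            # shows the symlink target
```

Once the files live under ~/dotfiles, the git repo versions them and a fresh machine only needs a clone plus one `stow` invocation per package.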
or learn emacs
You just described a convention.
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to tag since it was an Almalinux system that started with SELinux enabled, and all we added was a container app or two.
The old lighting wasn’t that great anyway. If I were to just put lighting on a DMX512-controlled network, then all of it could be synchronized to whole-house audio…
This is just as true in my non-computer hobbies that involve physical systems instead of code and configs!
If I had to just barely meet the requirements using as little budget as possible while making it easy for other people to work on, that would be called “work.” My brain needs to indulge in some over-engineering and “I need to see it for myself” kind of design decisions.
Any reason you chose Bazzite for your homelab distro? First I’ve heard of someone doing that!
Then configure vim using emacs
Wouldn’t an immutable OS be overall a pretty good idea for a stable server?
time to test the backups!
Always a white knuckle event for me
Time to expand.
Rootless. The docker containers were rootful, hence the permission struggles.
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
I think it’s kinda better using quadlets, because I wrote some custom scripts, and quadlets made the process better. But podman compose is probably fine too.
Actually, one thing I want to do is switch from services being on a subdomain to services being on a path.
immich.myserver.com -> myserver.com/immich
jellyfin.myserver.com -> myserver.com/jellyfin
I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
unnecessary complexity?
I can help with that. It’s a skill I have. LOL
Don’t forget to integrate it into Home Assistant so you can alert the ISS when the mail man is on the porch.
Right before the end of your day
Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!
OP, totally understand, but this is a level of success with your homelab. Nothing needs fiddling with. Now, there is a whole Awesome Self-Hosted list you could deploy on a non-production server and put through its paces.
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. But the pain is tolerable so far (I have maybe 30 subdomains?), I haven’t done anything yet
In Nginx you can do rewrites so services think they are at the root.
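A sketch of that, using Jellyfin’s default port as the assumed upstream; whether the app’s front-end tolerates being served under a path is the real gamble:

```nginx
# Serve Jellyfin under /jellyfin/; the trailing slash on proxy_pass
# strips the /jellyfin prefix before the request reaches the app
location /jellyfin/ {
    proxy_pass http://127.0.0.1:8096/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```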
Wildcard CNAME pointing to your reverse proxy who then figures out where to route the request to? That’s what I’ve been doing - this way there’s no need to ever update DNS at all :)
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.
git commit --message 'So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.'
Scarched arth
Oh but I have them !
Every day an email is sent out with the backup status.
Every day I got my email in the morning with the back up logs.
For years.
I associated email received with backup successful, until a month or so ago when my VPN broke and the emails were just “could not connect”, but it took me a while to bother actually opening the message body as it had always been the same for years.
So I’ll manage it differently and have the email subject be more explicit about success or failure, among other things.
Always learning :^)
Alternatively if you’re tired of manual DNS configuration:
FreeIPA
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got proxmox, jellyfin, zoneminder, mediawiki, and forgejo authing against freeipa on top of my samba shares.
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again.
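In zone-file terms that’s a single record (the name and address are placeholders):

```
; one wildcard covering every current and future service
*.myserver.com.  300  IN  A  192.0.2.10  ; the reverse proxy
```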
Who cares if it’s exposed to the internet?
Encrypting your local traffic is still valuable to protect your systems from any bad actors on your local network (neighbor kid cracks your wifi password, some device on your network decides to start snooping on your local traffic, etc)
Many services require HTTPS with a valid cert to function correctly, eg: Bitwarden. Having a real cert for a real domain is much simpler and easier to maintain than setting up your own CA
I already have Ansible to manage my system, and I like having the same base between my pc and my server to build muscle memory.
If I was managing a pc fleet I would consider NixOS, but I don’t see the appeal right now.
The rare moment when everything actually works. 😄
Do you put what you write down on the internet?
TIL. Thank you!
I honestly don’t know a ton about immutable distros other than that they let you front-load some difficulty in getting things set up in exchange for making it harder to break. I was just surprised that the distro of choice was Bazzite, since its target audience seems to be gamers.
https://wiki.hackerspaces.org/List_of_Hacker_Spaces
Also check out meetup.com for linux user groups and other events.
Carry around a candle in one of those old timey holders like Scrooge McDuck
Living the good life
As in a blog or wiki? I do not because I am not authoritative. What I know came from reading, doing, screwing it up, ad nauseam. When something finally clicks for me, I write it down because 9 times out of 10, I will need that info later. But my writing would be so full of inaccuracies that it would be embarrassing and possibly lead someone astray.
Not OP but a lot of people probably use pi-hole which doesn’t support wildcards for some inane reason
Sure. What that guy is using is actually not the most-interesting diagram style, IMHO, for automatic layout of network maps, if you want large-scale stuff, which is where the automatic layout gets more interesting. I have some scripts floating around somewhere that will generate very large network maps — run a bunch of traceroutes, geolocate IPs, dump the results into an sqlite database, and then generate an automatically laid-out Internet network map. I don’t want to go to the trouble of anonymizing the addresses and locations right now, but if you have a graphviz graph and want to try playing with it, I used:
*goes looking*
Ugh, it’s Python 2, a decade-and-a-half old, and never got ported. Lemme gin up an example for the non-hierarchical graphviz stuff:
graph.dot:
graph foo {
a--b
a--d
b--c
d--e
c--e
e--f
b--d
}
Processed with:
$ sfdp -Goverlap=prism -Gsep=+5 -Gesep=+4 -Gremincross -Gpack -Gsplines=true -Tpdf -o graph.pdf graph.dot
Generates something like this:
https://lemmy.today/pictrs/image/c7fb0167-fbda-47f5-914f-a0daa3066c67.png
That’ll take a ton of graphviz edges and nicely lay them out, albeit not in the kind of hierarchy shown there. You can create massive network maps like this. Note that I was last looking at graphviz’s automated layout stuff about 15 years ago, so it’s possible that they have better algorithms now, but this can deal with enormous numbers of nodes and will do reasonable things with them.
I just grabbed his example because it was the first graphviz network map example that came up.
And then migrate all your podman containers to proxmox
Having a very similar infrastructure, I would love to know if you ever find anything that works for this. I’ve been maintaining a SnipeIT instance manually, but that’s a real PITA. Tried the same with ITSM-NG, but haven’t even looked into it for months.
Quick! Break something!
Maybe try this…
That’s my case. I send every new subdomain to my nginx IP on pi-hole and then use nginx as a reverse proxy
How is the kubernetes (k3s/rke2) migration coming along?
I switched to Technitium and I’ve been pretty happy. Seems very robust, and as a bonus was easy to use it to stop DNS leaks (each upstream has a static route through a different Mullvad VPN, and since they’re queried in parallel, a VPN connection can go down without losing any DNS…maybe this is how pihole would have handled it too though).
And of course, wildcards supported no problem.
That was my exact setup as well, until I switched to a different router which supported both custom DNS entries and blocklists, thereby making the pi-hole redundant
One word: chaos engineering
My man person!
I should do some breaking network changes… While tunneled in.
You’re right. I got in the habit of doing that because I’m endlessly tweaking my .env files and I don’t think those reload unless you shut down first
Okay, but why not create more work for yourself by rebuilding everything from scratch?
Backups. You’re forgetting them.
Don’t worry, you’re one Docker pull away from having to look up how to manually migrate Postgres databases within running containers!
(Looks at my PaperlessNGX container still down. Still irritated.)
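For anyone staring down the same upgrade, the manual dance inside containers is roughly dump-old, start-new, restore. A sketch with made-up container, volume, and version names:

```shell
# Logical dump from the old major version, restore into the new one
docker exec old_pg pg_dumpall -U postgres > dump.sql
docker stop old_pg
docker run -d --name new_pg -v new_pgdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD=changeme postgres:16
docker exec -i new_pg psql -U postgres < dump.sql
```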
I built an 8 outlet version of those with relays and wall outlets for.. a lot less.
An 8 switch relay, old Pi, and 8 hardware store outlets can be had for not much more. I did that and let PiKVM control my outlets directly.
I run opnsense, so I need to dump pi-hole. But I don’t have the energy right now to do that.
Pi-Hole was pretty straightforward at the time and I did not look back since then. Annoying, but easy.
At the start I just wanted a desktop machine that runs Steam through sunshine/moonlight, so hardware support and gaming stuff was very important.
My homelab used to run on my laptop when it could all fit within a couple hundred GB and I was the only user, but moving it was tricky. Since I’m a programmer I’m not afraid of this stuff, so I just spent the hours to figure out one problem at a time.
I ended up figuring out how to whitelist the HDD in SELinux, make it accessible in podman, manually edit fstab because the tools didn’t work, write a systemd service for startup, and set up automatic login, by which point I’d already forgotten everything; I would not have had to do any of this on a bog-standard Ubuntu server.
Good for stability, bad for flexibility for when the homelab grows more complex.
Respect! I too often take it for granted that it’s a privilege for my gaming rig and my homelab server to be separate boxes.
No upstream bugs to fix?
that started with SELinux enabled
that does not matter; it needs to go over all of them. I don’t know how long it takes with an SSD, but with an HDD it can take half an hour or more, with a mostly base system. And the kernel starts doing this very early, when not even systemd or other processes are running, so no ssh, but the web console should have been working to see what it’s doing
Good to know! I do hope to eventually re-enable SELinux on my flagship server, so I’ll keep this in mind. As for my friend’s server, I think he migrated to Alpine a while back.
Wreck it Ralph!!
I came to the same conclusion; Nobara would have been best.
“Yes, while connected to my wireguard server through port 123 here from my Chinese office, I should probably try to upgrade the wireguard server. That’s a great idea!”
Ask me how I know.
It does support it, you just have to add it to dnsmasq.
I have it set up under misc.dnsmasq_lines like so:
address=/proxy.example.com/192.0.0.100
local=/proxy.example.com/
Then I have my proxied service reachable under service.proxy.example.com
Off topic, warning: this comment section is making me want to learn things
It’s been 2 days off reddit and my brain has opinions other than “aaaargh” or “meh”.
Proceed with caution
I use a MikroTik Router and while I do love the amount of power it gives me, I very quickly realized that I jumped in at the deep end. Deeper than I can deal with unfortunately.
I did get everything running after a week or so but I absolutely had to fight the router to do so.
Sometimes less is more I guess
Yes that does seem to describe modern computing, indeed, consumer electronics in general.
It’s no longer about solving actual problems, it IS the problem.
I feel your pain. Had to fix my immich, NC and Joplin postgresdb. Turned out, DB via NFS is a risky life. ;D
I stopped the tailscale service…
… while ssh’d through the tailscale interface.
Luckily, it was my home server and I had to drive there anyway.
Pro tip: If you’re using openwrt or other managed network components don’t forget to automatically back those up too. I almost had to reset my openwrt router and having to reconfigure that from scratch sucks.
It makes me start looking for the next thing. Got my jellyfin, got my pi-hole, my retro console, and just recently home assistant set up. (Just a few more bits to add to that.) Next I think I am going to look into self-hosting a cloud storage solution, like google drive/photos etc. Would be nice to make my own backups and have them offline
It’s how cults start!
I’ve started to take a lot more notes at work. I guess there will be a time where I take notes of what month it is!
I guess there will be a time where I take notes of what month it is!
You may jest, but there are times when I can’t remember what I had for breakfast. They say that you never truly forget anything, but that our recall mechanism fades over time. For a myriad of reasons, including age, my recall mechanism is shit.
https://github.com/pgautoupgrade/docker-pgautoupgrade
Or if you are on k8s, you can use cloudnativepg.
Oof, depends what you had and your version of health. I am hopeful that technology helps when I am that age, only a few years away, but AI agents seem to be a start. Just need to let go of those big data fears.
I used to make nginx changes while vpn’d into my network and utilizing guacamole (served via said nginx). I’m not a smart man.
I’m just using Docker on Proxmox, buuuut… I’m gonna look into this project. It looks like a LIFESAVER. Thank you for sharing this. You’re awesome! :D
Because I’m an idiot. 🤦 Thanks!