Probably a silly question but the .uk domain is really cheap. If I’m not in the UK can I still use that domain for my server without issue?
It's like 50 bucks for a ten-year lease.
It’s going to be retired though, unless something changes. All .io domains will disappear.
It’s generally safer to stick to domains of three or more letters, since those aren’t tied to countries.
Depends on the country. .tv and .io don’t, though I know .io is shifting to disallow it
I recently made a huge mistake. My self-hosted setup is more of a production environment than something I do for fun. The old Dell PowerEdge in my basement stores and serves tons of important data; or at least data that is important to me and my family. Documents, photos, movies, etc. It’s all there in that big black box.
A few weeks ago, I decided to migrate from Hyper-V to Proxmox VE (PVE). Hyper-V Server 2019 is out of mainstream support and I’m trying to aggressively reduce my dependence on Microsoft. The migration was a little time consuming but overall went off without a hitch.
I had been using Veeam for backups but Veeam’s Proxmox support is kind of “meh” and it made sense to move to Proxmox Backup Server (PBS) since I was already using their virtualization system. My server uses hardware RAID and has two virtual disk arrays: one for VM virtual disk storage and one for backup storage. Previously, Veeam was dumping backups to the backup storage array and copying them to S3 storage offsite. I should note that storing backups on the same host being backed up is not advisable. However, sometimes you have to make compromises, especially if you want to keep costs down, and I figured that as long as I stayed on top of the offsite replications, I would be fine in the event of a major hardware failure.
With the migration to Proxmox, the plan was to offload the backups to a PBS physical server on-site which would then replicate those to another PBS host in the cloud. There were some problems with the new on-site PBS server which left me looking for a stop-gap solution.
Here’s where the problems started. Proxmox VE can back up to storage without the need for PBS. I started doing that just so I had some sort of backups. I quickly learned that PBS can replicate storage from other PBS servers. It cannot, however, replicate storage from Proxmox VE. I thought, “Ok. I’ll just spin up a PBS VM and dump backups to the backup disk array like I was doing with Veeam.”
Hyper-V has a very straightforward process for giving VMs direct access to physical disks. It’s doable in Proxmox VE (which is built on Debian) but less straightforward. I spun up my PBS VM, unmounted the backup disk array from the PVE host, and assigned it as mapped storage to the new PBS VM. …or at least I thought that’s what I did.
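For anyone curious what that process looks like on PVE, a rough sketch is below. The VM id and disk id are hypothetical placeholders; the safer habit (and the one that would have caught the mistake in this story) is to reference the disk by its stable `/dev/disk/by-id` path and print the command for review before running it on the host:

```shell
# Hypothetical IDs: substitute your own VM id and the array's /dev/disk/by-id entry.
# List candidates first with:  ls -l /dev/disk/by-id/
VMID=101
DISK=/dev/disk/by-id/wwn-0x5000c500example

# Build the passthrough command and print it for review before running it on the PVE host:
CMD="qm set $VMID --scsi1 $DISK"
echo "$CMD"

# After running it for real, verify the mapping with:  qm config $VMID | grep scsi1
```

Double-checking that the id printed here matches the intended array, and only that array, is exactly the verification step the post is about.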
I got everything configured and started running local backups which ran like complete and utter shit. I thought, “Huh. That’s strange. Oh well, it’s temporary anyways.” and went on with my day. About two days later, I go to access Paperless-ngx and it won’t come up. I check the VM console. VM is frozen. I hard reset it aaaannnnddd now it won’t boot. I start digging into it and find that the virtual HDD is corrupt. fsck is unable to repair it and I’m scratching my head trying to figure out what is going on.
I continued investigating until I noticed something. The physical disk id that’s mapped to the PBS VM is the same as the id of the host VM storage disk. At that point, I realize just how fucked I actually am. The host server and the PBS VM have been trying to write to the same disk array for the better part of two days. There’s a solid chance that the entire disk is corrupt and unrecoverable. VM data, backups, all of it. I’m sweating bullets because there are tons of important documents, pictures of my kids, and other stuff in there that I can’t afford to lose.
Half a day working the physical disk over with various data recovery tools confirmed my worst fears: Everything on it is gone. Completely corrupted and unreadable.
Then I caught a break. After I initially unmounted the [correct] backup array from PVE, it had just been sitting there untouched. Every once in a great while, my incompetence works out to my advantage, I guess. All the backups that were created directly from PVE, without PBS, were still intact. A few days old at this point, but still way better than nothing. As I write this, I’m waiting on the last restore to finish. I managed to successfully restore all the other VMs.
What’s really bad about this is I’m a veteran. I’ve been in IT in some form for almost 20 years. I know better. Making mistakes is OK and is just part of learning. You have to plan for the fact that you WILL make mistakes and systems WILL fail. If you don’t, you might find yourself up shit creek without a paddle.
So what did I do wrong in this situation?

- First, I failed to adequately plan ahead. I knew there were risks involved but I failed to appreciate the seriousness of those risks, much less mitigate them. What I should have done was go buy a high-capacity external drive and use it to make absolutely sure I had a known-good backup of everything, stored separately from my server. My inner cheapskate talked me out of it. That was a mistake.
- Second, I failed to verify, verify, verify, and verify again that I was using the correct disk id. I already said this once but I’ll repeat it: storing backups on the host being backed up is ill-advised. In an enterprise environment, it would be completely unacceptable. With self-hosting, it’s understandable, especially given that redundancy is expensive. If you are storing backups on the server being backed up, even if it’s on removable storage, you need to make sure you have a redundant offsite backup and that it is fully functional.
Luck is not a disaster recovery plan. That was a close call for me. Way too close for my comfort.
Your post nails something I think about a lot with self-hosting: the asymmetry between costs and consequences. Enterprise teams can buy redundancy at scale. Solo operators can’t. So we do the calculation differently, and sometimes we get it wrong.
What struck me most is the verification part. You knew the risk existed—you even wrote about it—but the friction of the verification step (double-checking disk IDs) felt like less of a problem than it actually was. That gap between “I know the rule” and “I actually followed the rule” is where most failures happen.
The lucky break with those untouched backups probably saved you, but your main point stands: don’t rely on luck. Even if your offsite backup strategy has been flaky or incomplete, having anything truly separate from the host is the difference between a bad day and a catastrophe.
Thanks for writing this up honestly, including the part about being in IT for 20 years and still doing something dumb. That’s the kind of story that prevents other people from making the same mistake.
Those second and third backups can really come in handy if something on the host blows up.
I’ve got my main PBS backups on an internal array on the host, secondary backups to an external HDD plugged into the host, then a third separate node (ThinkCentre mini-pc) backing up to a pair of HDDs I trade out monthly for an offsite copy.
I think it’s important to consider not backing up media and other content that you can relatively easily re-obtain. It makes the storage requirements for redundant backups a fair bit more palatable.
Hi everyone,
I wanted to share an open-source project that I think the self-hosted community will appreciate, especially given the recent shift toward corporate-controlled data.
QST (Quiz/Survey/Test) is a complete, non-commercial assessment platform licensed under GPLv2. Unlike "free" cloud tools that harvest user data, QST is designed to be hosted on your own hardware (Windows, Linux, or Mac).
Why it’s relevant in 2026:
Zero Data Leakage: Since it's self-hosted, student/taker data never touches a third-party server or an AI training set.
Scalability: It uses a multithreaded architecture that can handle thousands of users on modest hardware.
Interoperability: Supports QTI, Moodle XML, and Word XML imports—no vendor lock-in for your question banks.
No "Enshittification": It’s a standalone tool, not a "lite" version of a paid service.
It’s great for anyone who needs to run exams, certifications, or private surveys without the overhead of a massive LMS.
Source & Info: https://sourceforge.net/projects/qstonline/
Documentation: https://qstonline.org/
I'm happy to answer any questions about the setup or the Perl-based architecture!
Great idea, and lots of schools need something like this, but I don’t understand anything. The website renders completely broken, I can’t find the source code anywhere, and there seems to be no Git repository or anything like that?
You claim to be open source, but where is the source code? The files on SourceForge look like compiled binaries.
I'm looking for a community management tool I could self-host. #selfhosting
Our club is trying to move from email to a messenger as its communication tool.
Our user base uses WhatsApp, #Threema, #Signal.
Usage isn't regulated, and not all board members use all of the tools or want to.
Could someone recommend self-hostable tools for this? I would host it on my #Docker setup.
At the moment I only have #Zammad on my list, but it can't use Threema and isn't exactly community management software.
Mov.im? A web frontend for XMPP. Though it’s not hostable via Docker (there is an image, but not a production-ready one).
I’m not sure it will 100% suit your use case but definitely a solid choice.
XMPP with slidge is what you are looking for.
I’ve been thinking about finally getting myself a proper domain for my server, but a friend told me that to get one I either need a VPS with a public IP (which just takes all the fun out of self-hosting) or have to purchase a static IP, which is beyond what I’m willing to spend on a hobby. Do I have any good options or should I just let it go?
For sure, you need a public (dynamic) IP for this.
NAT sucks, been there, done that… ugh! And, yeah, nothing can be done short of some sort of proxying that adds latency and unreliability.
This is my solution as well. If you’ve got an OpenWrt router, you can have the router itself update the IP.
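For setups without a router that can do this, the same idea works from cron: most dynamic DNS providers expose a simple HTTP update endpoint you hit periodically with your current IP. The endpoint, domain, and token below are hypothetical placeholders, not any real provider's API, so check your provider's docs for the actual URL shape:

```shell
# Compose a DDNS update URL (provider endpoint, domain, and token are hypothetical).
build_update_url() {
  local domain="$1" token="$2" ip="$3"
  printf 'https://ddns.example.net/update?domain=%s&token=%s&ip=%s\n' "$domain" "$token" "$ip"
}

# In a real cron job you would fetch your current public IP and curl the resulting URL:
#   */5 * * * *  curl -fsS "$(build_update_url myhome.example.net "$TOKEN" "$(curl -fsS https://ifconfig.me)")"
build_update_url myhome.example.net s3cret 203.0.113.7
```

With the DNS record kept current this way, a dynamic residential IP is usually good enough for hobby hosting.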
So, I have a VPS running some stuff, local proxmox-setup running something and then the ‘normal’ computers (laptop mostly) which I’d like to get a bit better backup solution for.
Proxmox VMs are taken care of by Proxmox Backup Server plus a Hetzner Storage Box and a NAS in the detached garage, so they are decently protected against hardware failures. Workstations keep important files synced to Nextcloud, and the VPS has its own nightly snapshots at the provider, so there’s some redundancy in the system already. However, as we all know, with backups two is one and one is none, so I’d like to add a separate backup server to the mix.
As there are devices which are not online all the time, I’m leaning towards an agent-based solution where devices push their data to the backup server instead of the server pulling the data in. Also, as I have some spare capacity, I’d like the option to offer backup storage for friends as well, where an agent-based solution is practically a requirement.
But now the difficult thing is deciding on software for it. Veeam offers something for hobbyists, but I’d rather have a more open solution. Bacula seems promising, but based on a quick read it doesn’t seem that simple to set up. Amanda looked good too, but that seems to be a more or less abandoned project. Borg Backup would fill my own needs, but as friends tend to have either Windows or OSX, that’s not quite what I’m after.
Any suggestions on what route I should take?
??? What’s wrong with a simple cron job / systemd timer with a bash script?
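For context, a minimal version of that approach might look like the sketch below. It uses temp dirs so it runs self-contained; in practice you would point SRC and DEST at your real data and backup paths:

```shell
# Minimal scheduled-backup sketch: tar the source into dated archives, prune old ones.
SRC=$(mktemp -d)   # stand-in for your data directory
echo "important" > "$SRC/note.txt"
DEST=$(mktemp -d)  # stand-in for your backup target

# Create a dated compressed archive of the source directory:
tar -czf "$DEST/backup-$(date +%F).tar.gz" -C "$SRC" .

# Keep only the 7 newest archives:
ls -1t "$DEST"/backup-*.tar.gz | tail -n +8 | xargs -r rm --

ls "$DEST"
# Scheduling is then one cron line, e.g.:  30 2 * * * /usr/local/bin/backup.sh
# (or an equivalent systemd timer unit).
```

Which, as the reply below this notes, gets you scheduling and retention but none of the deduplication, encryption, or cross-platform agents the dedicated tools provide.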
No deduplication, no encryption, and no support for non-Linux operating systems, for a start.
Give it a few days, I got a popup saying there was an issue, they’re working on it.
Invidious is certainly worth it, even though you lose live feeds at the moment. You can even remove comments and trending. If you want to subscribe to a channel, you can do it without YouTube.
Hi community,
I’m one of the maintainers of Portabase, and this is my first time sharing about it on Lemmy.
Portabase is an open-source platform for database backup and restore.
It’s designed to be simple, reliable, and lightweight, without exposing your databases to public networks. It works via a central server and edge agents (like Portainer), making it perfect for self-hosted or edge environments.
It currently supports 7 databases:
PostgreSQL, MariaDB, MySQL, SQLite, MongoDB, Redis and Valkey
Repository: https://github.com/Portabase/portabase
(we hit 500 stars recently!)
Key features:
What’s coming next:
I’d love to hear from you: which database would you like to see supported next in Portabase?
Thanks for the clarification. By “persist across restarts,” I’m referring to the fact that if I just install the agent in my container, it won’t persist if I restart the container, unless I install it on a volume which seems clunky. Running the agent alongside in a separate container with network access is the solution I was looking for.
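For anyone else landing here, a sidecar layout along those lines might look like the Compose sketch below. The image name, service names, and environment variables are hypothetical placeholders, not Portabase's actual documented setup; check the repo docs for the real ones:

```yaml
# Hypothetical Compose sketch: a backup agent running alongside the database,
# on the same Compose network, so the agent reaches the DB without the DB
# being exposed publicly and without installing anything inside the DB container.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data

  backup-agent:
    image: portabase/agent:latest        # placeholder image name; see project docs
    environment:
      SERVER_URL: https://portabase.example.net   # hypothetical central server
      AGENT_TOKEN: change-me
    depends_on:
      - postgres

volumes:
  pgdata:
```

This also sidesteps the persistence problem: restarting or recreating the database container never touches the agent.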
On the Redis and Valkey restores, that makes sense. Disaster recovery is my use case anyway. Do you document the manual restore process for those? I didn’t notice it in a brief review of the docs, but I may have overlooked it.
> Do you document the manual restore process for those?
No, we haven’t documented it yet, but that’s a good idea. I’ll add it to the backlog, and we’ll work on it soon!
I know everyone here loves FOSS, and for good reason, but let’s not pretend it doesn’t have its own issues. UX and accessibility are two I whine about regularly, but another big one is project abandonment.
I can’t tell you how many old forum/reddit posts I’ve run across of a lone developer hyping up their latest project, only for me to go to the github page and notice the last commit was 7 years ago.
If you’re not familiar with the Gemini protocol, it’s an updated alternative to Gopher, which in turn was an early competitor to the WWW back in the 90s. Gemini itself I can’t speak to, but if you go down the list of gemini servers and clients on geminiprotocol.net, you’ll see 404s, broken links, and expired certs galore. There was a flood of developer interest 5 or 6 years ago when the protocol was new, but everyone wandered away once the shiny wore off.
My recent foray into wiki software has turned up a few corpses as well. Wiki.js development seems to have stalled, and Pepperminty wiki has been abandoned for three years now.
And yes, I know this is because FOSS devs are often doing this on their own time for little to no money, so passion is the only thing driving them, but passion can only get you so far.
Besides loss of developer interest, community schisms can cause a project to sink. Remember what happened to Audacity? I think it ended up surviving but there was a real concern for a while that the forks wouldn’t be as well supported.
All the FOSS offerings I can think of that are “too big to fail” have big corporate support, like the Linux kernel.
I’m guessing most of us are self-hosting as a hobby, and we can afford to risk a loss of support when a project is abandoned, but businesses don’t have that luxury. That’s why they use proprietary software.
It can be frustrating looking for the current best alternative. I’ve found alternativeto.net to be really good at finding all the alternatives and telling you which are worth a look.
With my own projects, maybe 1 in 10 make it to that kind of “ready to publish” beta quality, and most get abandoned thereafter.
It’s not really because I’m short on money as such. It’s because I’m just playing around with whatever project, and I quickly lose interest once it reaches that “almost kinda done” state.
I don’t really see this as a shortcoming of the FOSS ecosystem.
If you have enough access to do so, don’t wait for permission. Build it, start using it, figure out how to connect to the backend or write to the same format, then start telling other BUS DRIVERS.
Disclaimer: I am only a user, not the developer, and I am not in their team :)
I found this backup solution a few days ago and I already love it! Time to share!
Vykar is a new backup solution, at an early stage, that is inspired by Borg and Restic. It offers fast, encrypted backups with an easy YAML-formatted configuration. It can back up to a local repository, S3, its own backup server, or all of them. Deduplication and snapshots are integrated, and in daemon mode it has a built-in scheduler too.
Run it via the provided binaries or the GUI, or use it in Docker. Recovery can easily be done via the CLI, by starting a local web server to browse the files, or by accessing them via WebDAV.
Example YAML configuration (for showing its simplicity), full docs here
```yaml
# vykar configuration file
# Minimal required configuration.
# Full reference: https://vykar.borgbase.com/configuration
repositories:
  - label: "Backupserver"
    url: "https://backup.myserver.com/"
    access_token: "secure-token-here"

sources:
  - label: "immich-homeserver"
    path: "/docker/immich/data"
    exclude:
      - "backup"
      - "thumbs"
      - "encoded-video"
  - label: "media-homeserver"
    paths:
      - "/backup/media/books"
      - "/backup/media/music"
      - "/backup/scripts"
      - "/backup/media/video"
    exclude:
      - "cache"
      - "tmp"

# --- Common optional settings (uncomment as needed) ---
encryption:
  # mode: "auto"  # Default — benchmark at init and persist chosen mode
  mode: "aes256gcm"

retention:
  keep_last: 3  # keeps the last 3 snapshots
  # keep_daily: 7
  # keep_weekly: 4

# https://vykar.borgbase.com/configuration#compression
compression:
  algorithm: zstd
  zstd_level: 5

# https://vykar.borgbase.com/configuration#exclude-patterns
exclude_patterns:
  - "*.tmp"
  - "*.bak"
  - "*.log"
  - ".pnpm-store"
  - "node_modules"
  - "postgres"
  - ".Trash-1000"
  - "$Recycle.Bin"
  - "System Volume Information"
  - ".DS_Store"

# schedule:
#   enabled: true
#   every: "24h"
```
Got any screenshots? /s
I feel that the bar could be lower for less hardcore people ;)