With this kind of mentality, nobody will ever migrate, and everyone will have to keep dealing with Discord’s horrible terms and conditions.
The only alternative is to willingly leave over a hundred communities, some of which I have strong ties to, and never interact with the majority again.
That’s a big ask.
I run Home Assistant in a virtual machine on my home server. Sometimes I need to restart it and I’m not always in a position to SSH or VNC in. Is there anything out there that would allow me to do this quickly?
Not looking for a workaround but thanks
OK now we’re talking! Thanks.
yep… I write all my papers in Google Docs because I can access the files anywhere, and nothing beats PaperPile for referencing yet.
I recall spreadsheets being particularly painful on mobile: when I’d try to select multiple rows, it would select way more at a time. I’d need to double-check that, or find a screen recording if I made one at the time.
The main issue is there was a bug where, if there is an open session for a document in Collabora (including dead sessions, say from mobile) and that Collabora server is shut down in the wrong order, then all changes are lost, even if you clicked “Save”. A bug was opened for this and closed by making sure the servers shut down in the correct order, but I don’t know if that fixes cases where the server gets a hard shutdown.
From time to time I like to review my network to see where I can tighten up. Review logs, check out the landscape, and make sure there are no gaps. Today I have some downtime, so I figured it’d be a good time for it. Since I am not a certified IT professional, this is what I have cobbled together from reading and seeing what others have done. I’d like to bounce this off you guys who are more experienced than I am and get your impressions. If you have any recommendations, I’m always down to be schooled.
So if you’d like to participate in my audit, I have a home network as follows:
How secure would you say this network is? I’d welcome any recommendations to further harden it, beyond keeping up with updates and monitoring/auditing logs.
Thanks
You’re ahead of an alarming number of my colleagues just by trying until you get it working and then documenting things.
I have to document. At 71, with a TBI, my brain is not what it used to be. Sometimes I don’t even remember what I had for breakfast. LOL
Ever since Readarr was officially discontinued, many forks and replacements have popped up. I’m currently running pennydreadful/bookshelf, which seems to be chugging along. Faustvii/Readarr is also around, but it doesn’t seem to be actively maintained?
There’s also Chaptarr, which looks promising, but I’ve heard concerns about it being vibe-coded and such (see rreading-glasses: “I do not endorse the vibe-coded Chaptarr project.”). Does anybody know to what extent this is true, and what the code quality is like?
Calibre-Web isn’t two separate applications; it’s a Calibre-compatible database served via HTTP. There is no desktop Calibre involved.
There is integrated KOReader sync, though.
Yep! For a while I deployed Calibre-Web alongside Calibre in a ‘books’ compose.yaml stack using Docker, with volume mounts exposing my library to both containers. The main thing to be cautious of is not writing to the db from both C and CW at the same time (which could result in corruption). Some folks spin Calibre up and down as needed, but I had them both running and was just mindful. I personally ended up switching from C+CW to Calibre-Web Automated and fully removing Calibre; I’m able to do everything from CWA that I was doing in both previously. FWIW if you are managing devices (e.g., family, etc.), Kobo devices + Kobo sync via CW/CWA is wonderful for usability (books show up on devices ‘natively’).
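For anyone wanting to try this layout, a minimal sketch of that kind of ‘books’ stack, assuming the linuxserver.io images and placeholder paths (not the poster’s actual config):

```yaml
# Sketch: both containers see the same library via a shared bind mount.
# Image names and host paths are assumptions -- substitute your own.
services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    volumes:
      - /srv/books:/books        # same library exposed to both containers
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    volumes:
      - /srv/books:/books
    ports:
      - "8083:8083"              # Calibre-Web's default web UI port
```

The shared mount is what makes the “don’t write from both at once” caveat matter: nothing in this layout prevents concurrent writes to the database.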
The title says basically everything but let me elaborate.
Given the recent news that hard drives are sold out for the current year and possibly the next few (tomshardware article), I’m trying to buy the HDDs I want to use for the next few years earlier than planned.
I am on a really tight budget, so I really don’t want to overspend. I have an old tower PC lying around which I would like to turn into a DIY NAS, probably with TrueNAS Scale.
I don’t expect high loads; it will only be 1-2 users with moderate reading and writing.
In this article from howtogeek the author talks about the differences, and I get it, but a lot of the people commenting seem to be in a similar position to mine: not much read/write load, only a few users. Many argue regular desktop HDDs are fine for this use case.
Possibilities I’ve come up with so far:
1. Buy two pricey Seagate IronWolf or WD Red HDDs and put them in RAID1.
2. Buy three cheaper Seagate Barracuda or WD Blue drives, put two in RAID1, and keep one as a spare for if (or should I say when?) one of the used drives fails.
I am thankful for every comment or experience you might have with this topic!
It is a gamble, fuck the my butt bozos for speculating us into economic uncertainty
F in the chat for your savings; at least you’ve got the peak of home NASes. Pretty fuckin cool, and I hold out hope that when the drop comes in a… 6 months to 3 years…? I’ll be able to afford the full SSD NAS life. The power savings, the speed, no worries about shock or vibrations, the silence - jealous
On my LAN I have 192.168.1.111 hosting a bunch of services, not containerized. All connections come either from my internal LAN or from WireGuard going through 192.168.1.111, so there’s no external traffic bar WireGuard.
I’ve set the hostname for .111 in the hosts file inside the router and on .111, and it works for all devices except the ones connecting via WireGuard.
But I don’t want to have to use hostname+port for every service; I’d like each service to have its own name. I’d also like certs.
Can someone point me in the right direction for what I need to do? I’m thinking maybe this requires a local DNS server, which I’m hesitant to run because I’m happy using 8.8.8.8.
For certs, do I create a single cert on 192.168.1.111 and then point all the applications to it?
See the section “Personal dashboards” of this great resource page I often refer to: https://github.com/awesome-selfhosted/awesome-selfhosted
I don’t see anyone else recommending it here, but you can also use Traefik; that’s what I use. I’ve set it up so that any Docker-hosted app is added automatically based on its container labels, which makes it convenient to use.
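To illustrate the label-based approach, a minimal sketch (the service, domain, entrypoint, and certresolver names here are placeholders, not the poster’s actual setup):

```yaml
# Sketch: Traefik's Docker provider discovers this container via its labels,
# so no central config file needs editing when you add a service.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.lan.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=myresolver"
```

Each new container gets its own hostname this way, and Traefik can handle the certs through the configured resolver, which addresses both halves of the question above.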
The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It’s not. It’s a structural problem with how we evaluate trust in self-hosted software.
Here’s the actual issue:
Docker Hub tells you almost nothing useful about security.
The ‘Verified Publisher’ badge verifies that the namespace belongs to the organization. That’s it. It says nothing about what’s in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.
Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There’s no notification when a tag gets repointed. If you’re pulling by tag in production (or in your homelab), you’re trusting a promise that can be silently broken.
The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.
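The tag-to-digest pinning described above can be sketched like this. The repo name and digest are placeholders; in practice you’d read the real digest from `docker inspect --format '{{index .RepoDigests 0}}' <image>` after a pull.

```shell
# Build an immutable reference from a mutable tag reference plus the digest
# the registry reported for it. Both values below are placeholders.
tag_ref="huntarr/huntarr:latest"
digest="sha256:0000000000000000000000000000000000000000000000000000000000000000"

# Drop the tag and append @<digest> to get the pinned form
# that compose files and k8s manifests accept in the image field.
pinned="${tag_ref%:*}@${digest}"
echo "$pinned"
```

Referencing `image: huntarr/huntarr@sha256:…` in a compose file means a silently repointed tag can no longer change what you run.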
The Huntarr case specifically:
Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack’s API keys in cleartext. The container runs as root. There’s a Zip Slip. The maintainer’s response was to ban the reporter.
None of this would have been caught by Docker Hub’s trust signals, because Docker Hub’s trust signals don’t evaluate code. They evaluate namespace ownership.
What would actually help:
The uncomfortable truth: most of us are running images we’ve never audited, pulled from a registry whose trust signals we’ve never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.
One thing that sucks about that is you might miss an upgrade that needed to happen before a later large version jump. It’s pretty rare, but I believe I’ve seen a container break that way, and the upgrade was misery.
Fair! I wasn’t giving enough credit to the fact that some applications don’t really have any option other than running as root because of some dependencies.
Hey y’all, this actually isn’t self-hosting related, but who have you had good luck with for paid Matrix hosting?
Right now, I do enough tinkering with everything that I would be willing to just pay to host a matrix server for my friends.
Unless it really is easy enough to do on a Synology NAS for text/voice/screen share… but do I still need to pay for a domain?
We are (like everyone) on matrix.org now but realize we need to move eventually.
If you have your own VPS anyway, there is the Matrix Ansible Playbook which makes the setup with docker containers very easy. But I also get the sentiment that you don’t want to tinker around all the time and just want stuff to work.
Kudos to you for using Matrix in the first place; I hope you can get a lot of your friends and family to switch over to it. So far this has been the biggest hurdle on my journey 😅
If you’re Canadian, you can get free and cheap .ca domains https://www.cira.ca/en/why-choose-ca/
I have CasaOS and I installed this https://hub.docker.com/r/linuxserver/overseerr
Is there an easy way to simply upgrade it like a normal update and keep the settings?
Can’t you just do a new setup? I just installed the seerr container on my unraid server and it took just a couple of minutes. Or am I missing something?
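If the container keeps its settings in a mounted volume (the linuxserver images use /config), a normal tag-based update usually keeps everything; a compose sketch under those assumptions, with placeholder host paths:

```yaml
# Sketch: settings live in the bind-mounted /config, so pulling a newer
# image and recreating the container preserves them.
services:
  overseerr:
    image: lscr.io/linuxserver/overseerr:latest
    container_name: overseerr
    ports:
      - "5055:5055"                    # Overseerr's default port
    volumes:
      - /srv/overseerr/config:/config  # settings survive container recreation
    restart: unless-stopped
```

Then `docker compose pull && docker compose up -d` recreates the container on the new image while the volume keeps the settings.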
yeah, it really sucks to spring this on us like this… I had to change the UID/GID of a user too because of that, really annoying.