In reply to: https://mastodon.cloud/users/bitsandburnouts/statuses/116136572863087586
And I’m sure if your ISP kicked you, that would still work…
I don’t have any experience with Ghost, but just from glancing at it, it seems like it might be overkill for a simple blog. There are a lot of static site generators out there that would be on the safer side, I think. (I am not an expert at this though!)
I’ve been using Actual for over a year and I really like it a lot. Full disclosure though, I don’t use any of the linking features and manually input all transactions.
In addition to Calibre-Web there is also Calibre Web Automated.
Most of my ebooks are large because they’re comics. There are also EPUB 3 ebooks now, which can contain images and audio too (think a combined ebook + audiobook), so they can get pretty large.
Oooo, I didn’t know that. That’s fancy
I didn’t expose Portainer to the public internet… That’s staying strictly on my tailnet. I’m not going to expose an important piece of infrastructure like that to the public internet.
From time to time I like to review my network to see where I can tighten up: review logs, check out the landscape, and make sure there are no gaps. Today I have some downtime, so I figured it’d be a good day for it. Since I am not a certified IT professional, this is what I have cobbled together from reading and seeing what others have done. I’d like to bounce this off you guys who are more experienced than I am and get your impressions. If you have any recommendations, I’m always down to be schooled.
So if you’d like to participate in my audit, I have a home network as follows:
How secure would you say this network is? Any recommendations to further harden it, beyond keeping up with current updates and monitoring/auditing logs?
Thanks
You’re ahead of an alarming number of my colleagues just by trying until you get it working, then documenting things.
I have to document. At 71, with a TBI, my brain is not what it used to be. Sometimes I don’t even remember what I had for breakfast. LOL
And movies, TV shows, game servers, and whatnot. Kindly stop believing your connection needs to be 1000/1000; that’s just a sales gimmick from the operators.
There’s a Docker image that essentially sets up a web VNC for Calibre. I use this for file conversion, DRM removal (only on books I buy), etc.
Then I use Calibre-web for the OPDS server and nice web UI.
You can have calibre auto import from a folder. Though be careful because it deletes them from that folder (you might want to do single direction sync into that folder).
And if you share the db with calibre-web or have some other sync method that works, you should be good to go.
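A minimal Python sketch of that one-direction sync idea (the paths and the ledger file are hypothetical, not part of Calibre itself). It copies new books into the auto-import folder and remembers what it has already handed over, since Calibre deletes its copy after ingesting:

```python
import shutil
from pathlib import Path

def one_way_sync(source: Path, import_dir: Path, ledger: Path) -> list:
    """Copy not-yet-seen .epub files from `source` into Calibre's
    auto-import folder, tracking handed-over names in a ledger file
    so books aren't re-imported after Calibre deletes its copy."""
    seen = set(ledger.read_text().splitlines()) if ledger.exists() else set()
    import_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for book in sorted(source.glob("*.epub")):
        if book.name in seen:
            continue  # already given to Calibre once; don't copy again
        shutil.copy2(book, import_dir / book.name)
        seen.add(book.name)
        copied.append(book.name)
    ledger.write_text("\n".join(sorted(seen)))
    return copied
```

Run it from cron or a systemd timer; the originals stay put in your library folder.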
… comics and manga, which is another area where I’ve been noticing Calibre does not do such a great job.
Absolutely. Calibre is horrible with anything that is fixed format. I recently backed up my entire Kindle library of about 1k manga volumes, expecting to be able to convert from KFX to EPUB format as I have been doing for my regular books for 15+ years. Calibre failed awfully at this. The only thing it’s reasonably good at with comics is converting to ZIP format. So I had to write a Python script to take the KFX -> ZIP outputs from Calibre and convert them into working EPUB files.
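For anyone curious, here is a minimal sketch of that kind of ZIP-to-EPUB step (not the actual script, and the title/identifier metadata are placeholders): it wraps each page image from the ZIP in an XHTML page and packages everything as an EPUB 3 container.

```python
import posixpath
import zipfile

CONTAINER = """<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

PAGE = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>{title}</title></head>
<body><img src="{src}" alt="{title}"/></body>
</html>"""

def zip_to_epub(zip_path, epub_path, title="Untitled"):
    """Wrap the page images from a ZIP/CBZ into a minimal EPUB 3."""
    with zipfile.ZipFile(zip_path) as zin:
        images = sorted(n for n in zin.namelist()
                        if n.lower().endswith((".png", ".jpg", ".jpeg", ".gif")))
        data = {n: zin.read(n) for n in images}
    manifest = ['<item id="nav" href="nav.xhtml" '
                'media-type="application/xhtml+xml" properties="nav"/>']
    spine, pages = [], []
    for i, name in enumerate(images):
        base = posixpath.basename(name)
        ext = base.rsplit(".", 1)[1].lower()
        mt = "image/jpeg" if ext in ("jpg", "jpeg") else "image/" + ext
        manifest.append(f'<item id="img{i}" href="images/{base}" media-type="{mt}"/>')
        manifest.append(f'<item id="page{i}" href="page{i}.xhtml" '
                        f'media-type="application/xhtml+xml"/>')
        spine.append(f'<itemref idref="page{i}"/>')
        pages.append((f"page{i}.xhtml", PAGE.format(title=base, src=f"images/{base}")))
    opf = f"""<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="uid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="uid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
    <meta property="dcterms:modified">2024-01-01T00:00:00Z</meta>
  </metadata>
  <manifest>{''.join(manifest)}</manifest>
  <spine>{''.join(spine)}</spine>
</package>"""
    nav = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:epub="http://www.idpf.org/2007/ops">
<head><title>nav</title></head>
<body><nav epub:type="toc"><ol><li><a href="page0.xhtml">Start</a></li></ol></nav></body>
</html>"""
    with zipfile.ZipFile(epub_path, "w") as zout:
        # mimetype must be first and stored uncompressed per the EPUB OCF spec
        zout.writestr("mimetype", "application/epub+zip",
                      compress_type=zipfile.ZIP_STORED)
        zout.writestr("META-INF/container.xml", CONTAINER)
        zout.writestr("OEBPS/content.opf", opf)
        zout.writestr("OEBPS/nav.xhtml", nav)
        for fname, body in pages:
            zout.writestr("OEBPS/" + fname, body)
        for name in images:
            zout.writestr("OEBPS/images/" + posixpath.basename(name), data[name])
```

This is a bare-bones reading-order EPUB; a real fixed-layout comic EPUB would also want viewport metadata and per-page dimensions.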
On my LAN I have 192.168.1.111 hosting a bunch of various services, not containerized. All connections come either from my internal LAN or from WireGuard going through 192.168.1.111, so no external traffic bar WireGuard.
I’ve set the hostname for .111 in the router’s hosts file, and it works for all devices except the ones connecting via WireGuard.
But I don’t want to have to use hostname+port for every service; I’d like each service to have its own name. I’d also like certs.
Can someone point me in the right direction for what I need to do? I’m thinking maybe this requires a local DNS server, which I’m hesitant to run because I’m happy using 8.8.8.8.
For certs, do I create a single cert on 192.168.1.111 and then point all the applications to it?
See the section “Personal dashboards” of this great resource page I often refer to: https://github.com/awesome-selfhosted/awesome-selfhosted
I don’t see anyone else recommending it here, but you can also use Traefik; that’s what I use. I’ve set it up so that it automatically picks up any Docker-hosted apps based on their container labels, which makes it convenient to use.
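For reference, a minimal hypothetical docker-compose sketch of that label-based setup (the service names and domain are made up, and TLS is omitted for brevity):

```yaml
# Traefik's Docker provider watches the Docker socket and creates
# routes from container labels; no per-app config files needed.
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami   # demo app; swap in your own service
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.lan.example`)
      - traefik.http.routers.whoami.entrypoints=web
```

Any new container with `traefik.enable=true` and a router rule label shows up automatically.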
The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It’s not. It’s a structural problem with how we evaluate trust in self-hosted software.
Here’s the actual issue:
Docker Hub tells you almost nothing useful about security.
The ‘Verified Publisher’ badge verifies that the namespace belongs to the organization. That’s it. It says nothing about what’s in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.
Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There’s no notification when a tag gets repointed. If you’re pulling by tag in production (or in your homelab), you’re trusting a promise that can be silently broken.
The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.
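A sketch of what digest pinning looks like in a compose file; the image name and digest here are placeholders, not Huntarr's real values:

```yaml
# Pinning by digest instead of a mutable tag: this reference cannot be
# silently repointed the way `:latest` can. Get the real digest with
#   docker inspect --format '{{index .RepoDigests 0}}' <image>
services:
  app:
    image: ghcr.io/example/app@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

The trade-off is that you must update the digest yourself to get new releases, which is exactly the point: upgrades become a deliberate act.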
The Huntarr case specifically:
Someone did a basic code review (bandit, pip-audit, standard tools) and found 21 vulnerabilities, including unauthenticated endpoints that return your entire arr stack’s API keys in cleartext. The container runs as root. There’s a Zip Slip vulnerability. The maintainer’s response was to ban the reporter.
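As an aside, Zip Slip is a path-traversal bug in archive extraction: an entry named something like `../../etc/cron.d/evil` escapes the destination directory. A minimal Python sketch of the kind of guard that prevents it (the function name is mine, not from Huntarr's code):

```python
import os
import zipfile

def safe_extract(zf: zipfile.ZipFile, dest: str) -> None:
    """Extract a zip, rejecting entries whose resolved path
    escapes the destination directory (Zip Slip)."""
    dest = os.path.realpath(dest)
    for info in zf.infolist():
        target = os.path.realpath(os.path.join(dest, info.filename))
        # A safe entry resolves to somewhere inside dest
        if os.path.commonpath([dest, target]) != dest:
            raise ValueError(f"blocked path traversal entry: {info.filename}")
    zf.extractall(dest)
```

The key is validating the *resolved* path of every entry before writing anything, rather than trusting the names stored in the archive.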
None of this would have been caught by Docker Hub’s trust signals, because Docker Hub’s trust signals don’t evaluate code. They evaluate namespace ownership.
What would actually help: pinning images by digest instead of by tag, reading the Dockerfile and source before deploying, running containers as non-root, and more people doing the kind of basic code review the reporter did.
The uncomfortable truth: most of us are running images we’ve never audited, pulled from a registry whose trust signals we’ve never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.
One thing that sucks about that is you might miss an intermediate upgrade that needed to happen before a larger version jump later. It’s pretty rare, but I believe I’ve seen a container break like that, and the upgrade was misery.
Fair! I’m not giving enough credit to the fact that some applications don’t really have any option other than running as root for some dependencies.