In reply to: https://lemmy.today/post/47645010
Down again?
I’m not the author, just sharing.
The features are theoretical in the sense that there's no real guarantee they'd still be possible after BSky corp changes its behavior, and that they're in use only in the least significant way possible, by tiny and irrelevant numbers of users. But of course this is just restating the obvious again: for a network to truly be shielded against this sort of thing, it needs to be decentralized from the start.
See this for how constellation makes no difference: https://mastodon.social/@ricci@discuss.systems/116126736087551797
Sure, and he left Twitter too; him ostensibly leaving doesn't make it any better. The entire way it's set up is the failure; it's yet another shitstain on the internet.
Oh, I’m dumb - that makes more sense. Thank you!
Great to see a big outlet with a story like this.
They don’t have my preferred alternatives for the first couple: Kagi search (one great feature is being able to down-rank domains as a preference) and fastmail (I like their auth options more than proton).
But it’s also nice that they only mention a couple top picks, decision paralysis does make the switch a lot harder.
Hey all, I know the purists might sneer at me for this, but I just spun up a server on Hetzner so I could run Docker in the cloud. Since my ISP uses CGNAT, my only options for hosting services at home are Tailscale Funnel or Cloudflare Tunnel. If you remember my previous post about Yattee, that wasn't available on the public internet; it was meant for my private use only. The idea behind this new project is hosting things I intend for public access. I'm debating whether to use a domain I already own that ends in the .us TLD, or wait until I get paid in a couple of days and buy a new one from Porkbun without the .us TLD and all the potential baggage that carries. Three questions come to mind:
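For anyone else stuck behind CGNAT, the Cloudflare Tunnel route looks roughly like this. The tunnel name ("homelab"), hostname, and local port below are placeholders I made up for illustration; this is a sketch, not a full setup guide:

```shell
# Authenticate cloudflared against your Cloudflare account (opens a browser)
cloudflared tunnel login

# Create a named tunnel; "homelab" is a placeholder name
cloudflared tunnel create homelab

# Point a DNS record on your domain at the tunnel (placeholder hostname)
cloudflared tunnel route dns homelab app.example.us

# Run the tunnel, forwarding public traffic to a local Docker service
cloudflared tunnel run --url http://localhost:8080 homelab
```

Because the connection is outbound from your box to Cloudflare's edge, no inbound ports need to traverse the CGNAT at all.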
I personally have a K3S cluster I host at home and an auto-scaled cluster in Hetzner. I see different use cases and am happy to have both.
One thing to mention is that you can also run your own tunnel with something like Pangolin on a VPS (a CX23 is plenty). That way you could have a cheaper cloud bill if you wanted a hybrid setup.
Also, I highly recommend moving your node to a data center closer to home.
Unfortunately, the American data centers that Hetzner has don't have any capacity for the type of VPS I'm interested in, and what they do offer in those particular data centers is entirely too expensive for my budget. That's why I went with one of the European data centers: even with the upcoming price increase, I'll actually be able to afford my VPS. Plus, having it outside of American jurisdiction makes me much happier.
tl;dr: I'm going to set up RAIDZ2 with 4x8TB hard drives. I'll have photos, documents (text, PDF, etc.), movies/TV shows, and music on the pool. Are the commands below good enough? Anything extra you think I should add?
sudo zpool create -o ashift=12 mypool raidz2 /dev/disk/by-id/12345 ...
zfs set compression=lz4 mypool #maybe zstd?
zpool set autoexpand=on mypool
zpool set autoreplace=on mypool
zpool set listsnapshots=on mypool
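One thing I'd add on top of those pool properties: separate datasets per data type, so you can tune recordsize and snapshot policies independently instead of treating the pool root as one big bucket. The dataset names below are just examples:

```shell
# Large sequential media (movies/TV, music) benefits from a bigger recordsize
sudo zfs create -o recordsize=1M mypool/media

# Photos and documents can keep the 128K default
sudo zfs create mypool/photos
sudo zfs create mypool/documents

# atime off saves a metadata write on every read, pool-wide
sudo zfs set atime=off mypool
```

Datasets inherit compression and other properties from the pool root, so the earlier `zfs set compression=...` covers all of them.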
With AI raising hard drive prices, I overspent on 3x10TB drives so I can reorganize my current pool and keep 3 hard drives sitting on a shelf in the event of a failure. My current pool was built up over time; it currently consists of 4x8TB drives in a striped mirror, so 16TB usable. If I understand it correctly, I can lose 1 drive for sure without losing data, and maybe a second drive depending on which drive fails. Because of that, I want to move to RAIDZ2 to ensure I can lose any 2 drives without data loss. The plan: move data from the 4x8TB drives to the 3x10TB drives, reconfigure the 4x8TB as RAIDZ2, and move everything back. I run Immich, Plex/Jellyfin, and Navidrome off the pool. All other documents are basically there for just-in-case storage. What options should I use for RAIDZ2 when setting it up?
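For the actual shuffle, `zfs send`/`zfs receive` preserves snapshots and dataset properties, which plain `rsync` won't. A rough sketch of the round trip, where `oldpool`, `temppool`, and `newpool` are placeholder names for the 4x8TB pool, the temporary 3x10TB pool, and the rebuilt RAIDZ2 pool:

```shell
# Snapshot everything on the old pool recursively
sudo zfs snapshot -r oldpool@migrate

# Replicate the whole tree to the temporary pool on the 10TB drives
sudo zfs send -R oldpool@migrate | sudo zfs receive -F temppool/backup

# ...destroy oldpool, recreate the 4x8TB drives as RAIDZ2 (newpool), then:
sudo zfs snapshot -r temppool/backup@return
sudo zfs send -R temppool/backup@return | sudo zfs receive -F newpool
```

Worth verifying checksums (or at least `zfs list -o space` totals) before destroying the temporary copy.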
Make sure you scrub weekly. The probability of a second drive failure is higher than you think, since the stress of resilvering can trigger it. I would also make sure you have a spare on hand.
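A weekly scrub is a one-line cron entry (pool name is a placeholder; many distros also ship a `zfs-scrub` systemd timer you can enable instead):

```shell
# /etc/cron.d/zfs-scrub -- scrub every Sunday at 03:00
0 3 * * 0 root /usr/sbin/zpool scrub mypool

# Check the result (and any repaired/unrecoverable errors) afterwards with:
# zpool status -v mypool
```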
For context, I’ve also been using ZFS since Solaris.
I was wrong about compression on datasets vs pools, my apologies.
By “almost no impact” (for compression), I meant well under 1% penalty for zstd, and almost unmeasurable for lz4 fast, with compression efficiency being roughly the same for both lz4 and zstd. Here is some data on that.
LZ4 compression on modern (post-Haswell) CPUs is actually so fast that it can beat uncompressed writes in some workloads (see this). And that's from 2015.
Today, there is no reason to turn off compression.
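If you want to sanity-check this on your own data rather than trusting benchmarks, ZFS reports the achieved ratio per dataset. A quick comparison using two throwaway datasets (names are placeholders):

```shell
# Create one dataset per algorithm
sudo zfs create -o compression=lz4  mypool/test-lz4
sudo zfs create -o compression=zstd mypool/test-zstd

# ...copy a representative sample of your data into each, then compare:
zfs get compression,compressratio mypool/test-lz4 mypool/test-zstd
```

`compressratio` only counts data written after the property was set, which is why fresh datasets give a cleaner comparison than flipping the property on an existing one.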
I will definitely look into the NFS integrations for ZFS. I use NFS (exports and mounts) extensively, so I wonder what I've been missing.
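In case it helps: the integration being hinted at is the `sharenfs` dataset property, which lets ZFS manage the exports itself so you can skip editing /etc/exports. The dataset name and subnet below are placeholders:

```shell
# Export a dataset read/write to the local subnet; on Linux the option
# string is passed through to exportfs
sudo zfs set sharenfs="rw=@192.168.1.0/24" mypool/media

# Verify what ZFS is exporting
zfs get sharenfs mypool/media
showmount -e localhost
```

The export follows the dataset around: `zfs rename` and `zfs receive` keep the property, which is handy during pool migrations.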
Anyway, thanks for this.