Goofed

My 19-year-old son has a new passion project - a dating sim where you date... bugs.

$$7714
https://lemmy.world/u/ickplant posted on Mar 6, 2026 17:46

My question is… where did I go so right?

It includes characters like Daxter the Praying Mantis and Kiki the Lunar Moth, among others.

He is creating all the art (and he is damn good) and writing the dialogue; his friends will help with the technical stuff.

Sure, it may never see the light of day, but I’m just proud he is doing something this weird.

That’s my boy.

https://lemmy.world/post/43928425

$$8301
https://lemmy.zip/u/Eyro_Elloyn posted on Mar 7, 2026 17:35
In reply to: https://piefed.world/comment/4176665

Still my favorite scream of his.

https://lemmy.zip/comment/25099688
$$9880
https://lemmy.world/u/diabetic_porcupine posted on Mar 10, 2026 04:59
In reply to: https://lemmy.world/post/43928425

Idk man… a lunar moth? Sounds more like a boo-boo to me

https://lemmy.world/comment/22574759

Suggest some good budget friendly seedbox providers

$$7677
https://sh.itjust.works/u/alphacyberranger posted on Mar 6, 2026 15:04

I just want to seed torrents. I am not planning to run plex or anything like that. I would like a budget friendly one around $5 to $7.

https://sh.itjust.works/post/56372127

$$7752
https://lemmy.world/u/irmadlad posted on Mar 6, 2026 19:16
In reply to: https://sh.itjust.works/post/56372127

I’d do as @tal@lemmy.today advised.

https://lemmy.world/comment/22517220
$$7767
https://lemmy.world/u/Paragone posted on Mar 6, 2026 19:55
In reply to: https://sh.itjust.works/post/56372127

I’m suggesting something orthogonal: I’m suggesting specifically rTorrent hosting.

Apparently rTorrent provides the maximum GB served per unit of CPU used, & since seedbox hosting is on such pathetic virtual-machines, this can matter.

_ /\ _

https://lemmy.world/comment/22517888

It might be a good thing for the Internet to get intrinsic resistance to DDoS attacks

$$7593
https://lemmy.today/u/tal posted on Mar 6, 2026 07:53

Internet Protocol is the protocol underlying all Internet communications, what lets a packet of information get from one computer on the Internet to another.

Since the beginning of the Internet, Internet Protocol has permitted Computer A to send a packet of information to Computer B, regardless of whether Computer B wants that packet or not. Once Computer B receives the packet, it can decide to discard it or not.

The problem is that Computer B also only has so much bandwidth available to it, and if someone can acquire control over sufficient computers that can act as Computer A, then they can overwhelm Computer B’s bandwidth by having all of these computers send packets of data to Computer B; this is a distributed denial-of-service (DDoS) attack.

Any software running on a computer — a game, pretty much any sort of malware, whatever — normally has enough permission to send information to Computer B. In general, it hasn’t been terribly hard for people to acquire enough computers to perform such a DDoS attack.
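
To see why acquiring enough machines has not been hard, here is a back-of-envelope sketch (all numbers are hypothetical, chosen only for illustration):

```python
# Back-of-envelope arithmetic (hypothetical numbers) showing how little
# bandwidth each compromised machine needs to contribute to a DDoS.
victim_link_mbps = 1_000      # Computer B has a 1 Gbps connection
bots = 10_000                 # machines acting as "Computer A"
per_bot_mbps = victim_link_mbps / bots
print(per_bot_mbps)           # 0.1 Mbps per bot saturates the link
```

At that rate, each participating machine sends less traffic than a single music stream, which is why even modest botnets can take a site offline.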

There have been, in the past, various routes to try to mitigate this. If Computer B was on a home network or on a business’s local network, then they could ask their Internet service provider to stop sending traffic from a given address to them. This wasn’t ideal in that even some small Internet service providers could be overwhelmed, and trying to filter out good traffic from bad wasn’t necessarily a trivial task, especially for an ISP that didn’t really specialize in this sort of thing.

As far as I can tell, the current norm in 2026 for dealing with DDoSes is basically “use CloudFlare”.

CloudFlare is a large American Content Delivery Network (CDN) company — that is, it operates servers in locations around the world that keep identical copies of data. When a user requests, say, an image from a website using the CDN, instead of that image being returned from a single fixed server somewhere in the world, several tricks arrange for it to be served from a server the CDN controls near the user. This sort of thing has generally helped to keep load on international datalinks low (e.g. a user in Australia doesn’t need to touch the submarine cables out of Australia if an Australian CloudFlare server already has the image they want) and to make websites more responsive for users.

However, CDNs also have privacy implications. Because so much traffic is routed through them, large ones can monitor a great deal of Internet activity and see a single user’s traffic spanning many websites. The original idea behind the Internet was that it would work by having many small organizations talking to each other in a distributed fashion, rather than one large company monitoring and managing traffic Internet-wide.

A CDN is also in a position to cut off traffic from an abusive user relatively close to the source: a request is routed to a CDN server relatively near the flooding machine, and the CDN can choose simply not to forward it. CloudFlare has decided to specialize in this DDoS-resistance service and has become very popular. My understanding — I have not used CloudFlare myself — is that they also have a very low barrier to entry, seeing it as a way to bring small websites on board and later be the path of least resistance when those sites want commercial services.

Now, I have no technical issue with CloudFlare, and as far as I know, they’ve conducted themselves appropriately. They solve a real problem, which is not a trivial problem to solve, not as the Internet is structured in 2026.

But.

If DDoSes are a problem that pretty much everyone has to be concerned about and the answer simply becomes “use CloudFlare”, that’s routing an awful lot of Internet traffic through CloudFlare. That’s handing CloudFlare an awful lot of information about what’s happening on the Internet, and giving it a lot of leverage. Certainly the Internet’s creators did not envision the idea of there basically being an “Internet, Incorporated” that was responsible for dealing with these sort of administrative issues.

We could, theoretically, have an Internet that solves the DDoS problem without use of such centralized companies. It could be that a host on the Internet could have control over who sends it traffic to a much greater degree than it does today, have some mechanism to let Computer B say “I don’t want to get traffic from this Computer A for some period of time”, and have routers block this traffic as far back as possible.

This is not a trivial problem. For one, determining that a DDoS is underway and identifying which machines are problematic is something of a specialized task; software would have to be capable of doing that automatically.

For another, currently there is little security at the Internet Protocol layer, where this sort of thing would need to happen. A host would need to have a way to identify itself as authoritative, responsible for the IP address in question. One doesn’t want some unrelated Computer C to be able to blacklist traffic from Computer A to Computer B.

For another, many routers are relatively limited as computers. They are not equipped to maintain a terribly large table of (Computer A, Computer B) pairs to blacklist.
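
As a concrete sketch of the bookkeeping a router would have to do under such a scheme — this is not an existing protocol, just an illustration of the per-pair state with expiry that would be required:

```python
import time

class PairBlacklist:
    """Hypothetical sketch of the (Computer A, Computer B) blacklist
    a router would need to hold, with per-entry expiry. Not a real
    protocol; it only illustrates the state-keeping cost."""

    def __init__(self):
        self._entries = {}  # (src_ip, dst_ip) -> expiry timestamp

    def block(self, src_ip, dst_ip, ttl_seconds):
        # Computer B (dst) asks: drop traffic from src for ttl seconds.
        self._entries[(src_ip, dst_ip)] = time.monotonic() + ttl_seconds

    def should_drop(self, src_ip, dst_ip):
        # Consulted for every forwarded packet; expired entries are
        # purged lazily on lookup.
        expiry = self._entries.get((src_ip, dst_ip))
        if expiry is None:
            return False
        if time.monotonic() > expiry:
            del self._entries[(src_ip, dst_ip)]
            return False
        return True

bl = PairBlacklist()
bl.block("203.0.113.7", "198.51.100.1", ttl_seconds=600)
print(bl.should_drop("203.0.113.7", "198.51.100.1"))  # True
print(bl.should_drop("192.0.2.99", "198.51.100.1"))   # False
```

Even this toy version makes the problem visible: a dictionary lookup per packet is cheap on a PC, but a backbone router forwarding millions of packets per second, against a table with millions of entries, is a different matter.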

However, if something like this does not happen, then my expectation is that we will continue to gradually drift down the path to having a large company controlling much of the traffic on the Internet, simply because we don’t have another great way to deal with a technical limitation inherent to Internet Protocol.

This has become somewhat-more important recently, because various parties who would like to train AIs have been running badly-written Web spiders to aggressively scrape website content for their training corpus, often trying to hide that they are a single party to avoid being blocked. This has acted in many cases as a de facto distributed denial of service attack on many websites, so we’ve had software like Anubis, whose mascot you may have seen on an increasing number of websites, be deployed, in an attempt to try to identify and block these:

https://lemmy.today/api/v3/image_proxy?url=https%3A%2F%2Fraw.githubusercontent.com%2FTecharoHQ%2Fanubis%2Frefs%2Fheads%2Fmain%2Fweb%2Fstatic%2Fimg%2Fhappy.webp

We’ve had some instances on the Threadiverse get overwhelmed and become almost unusable under load in recent months from such aggressive Web spiders trying to scrape content. A number of Threadiverse instances disabled their previously-public access and now require users to have accounts to view content as a way of mitigating this. In many cases, blocking traffic at the instance is sufficient, because even though these web spiders are aggressive, they aren’t sufficiently so to flood a website’s Internet connection if it simply doesn’t respond to them; something like CloudFlare or Internet Protocol-level support for mitigating DDoS attacks isn’t necessarily required. But it does bring the DDoS issue, something that has always been a concern for the Internet, back into the spotlight in a new way.

It would also solve some other problems. CloudFlare is appropriate for websites, but not all Internet activity is over HTTPS. DoS attacks have happened for a long time — IRC users with disputes (IRC traditionally exposing user IP addresses) would flood each other, for example, and it’d be nice to have a general solution to the problem that isn’t limited to HTTPS.

It could also potentially mitigate DoS attacks more-effectively than do CDNs, since it’d permit pushing a blacklist request further up the network than a CDN datacenter, up to an ISP level.

Thoughts?

https://lemmy.today/post/48806177


$$7662
https://lemmy.world/u/non_burglar posted on Mar 6, 2026 14:27
In reply to: https://lemmy.today/post/48806177
  1. Akamai is by a huge margin the single biggest CDN in the world, they are the 800lb gorilla. Fastly and Cloudflare aren’t minor players by any means, but their volume is not in the same league.
  2. CDNs and DDoS don’t have much to do with each other. Cloudflare mitigates DDoS by scaling up network capacity and using pretty advanced pattern detection to simply soak up the traffic. Cloudflare is really, really good at scaling.

Now on that last point, there will indeed come a time when the engineering technique of simply “making things bigger” won’t work if the attacks become sophisticated enough, but at that point networks will have fully become geopolitical tools (more than they are now).

https://lemmy.world/comment/22511977
$$7749
https://programming.dev/u/clean_anion posted on Mar 6, 2026 19:05
In reply to: https://lemmy.today/post/48806177

A Layer-3 (network-layer) blacklist risks cutting off innocent CGNAT and cloud users. What you’re proposing is similar to mechanisms that already exist (e.g., access control lists at the ISP level work by asking computer B which requests it wants to reject and rejecting those that originate from computer A). However, implementing any large-scale blocking effort beyond the endpoint (i.e. telling an unrelated computer C to blackhole all requests from computer A to computer B) would be too computationally expensive for a use case as wide and as precise as “every computer on the Internet”.
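
A rough estimate (entirely hypothetical numbers) illustrates that computational-cost concern for a mid-path box holding per-flow blocking state:

```python
# Hypothetical estimate of filter state on a router blackholing
# (source, destination) pairs for many concurrent victims at once.
attack_sources = 1_000_000    # addresses in a large botnet
bytes_per_entry = 40          # (src, dst) pair plus expiry, roughly
victims = 1_000               # concurrent targets routed through this box
state_bytes = attack_sources * bytes_per_entry * victims
print(f"{state_bytes / 1e9:.0f} GB of filter state")  # 40 GB
```

That is far beyond what the forwarding hardware in a typical router can consult at line rate, which is why endpoint-side ACLs scale where Internet-wide pair blocking does not.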

Also, in your post you mentioned, “A host would need to have a way to identify itself as authoritative, responsible for the IP address in question.” This already happens in the form of BGP though it doesn’t provide cryptographic proof of ownership unless additional mechanisms are in use (RPKI/ROA).

https://programming.dev/comment/22569089


$$7609
https://lemmy.ml/u/Lysergid posted on Mar 6, 2026 09:46
In reply to: https://lemmy.today/comment/22698896

My networking knowledge is not good, so maybe it’s nonsense indeed. I just thought that if everyone in the network knows what is blocked, then DDoS protection could be distributed, because every “reputable” switch/router in the network could block the connection as early as possible, instead of the traffic hopping all the way toward the destination and creating unnecessary load.

https://lemmy.ml/comment/24365662


$$4651
https://lemmy.world/u/NekoKoneko posted on Feb 26, 2026 18:22
In reply to: https://lemmy.today/comment/22524168

That’s incredibly helpful and informative, a great read. Thanks so much!

https://lemmy.world/comment/22362365

$$4729
https://lemmy.world/u/zorflieg posted on Feb 26, 2026 20:58
In reply to: https://lemmy.world/comment/22362365

Abefinder/Neofinder is great for cataloging, but it costs money. If you do a limited backup, it’s good to know what you had. I use tape formatted to LTFS and run Neofinder on both the source and the finished tape.

https://lemmy.world/comment/22365159

Mini PC to replace fiber modem and wifi router. How to proceed?

$$2999
https://lemmy.umucat.day/u/xavier666 posted on Feb 23, 2026 11:22

My current internet setup is like this (which is common for most people).

fiber line from ISP <-> ISP fiber modem <-> Personal wifi router <-> switch

This is working fine with no issues. But I need to power two devices. I want to reduce this to a single device.

fiber line from ISP <-> Modem+Firewall PC <-> Switch <-> AP1,AP2...

From my initial research, what I need is an SFP module which can be attached to a PC which supports SFP. OPNsense should be able to handle most SFP modules.

What is the community’s take on this? Is this worth the effort? Can I find a mini-PC which supports SFP? Will it be cost effective?

https://lemmy.umucat.day/post/951922

$$3340
https://lemmy.world/u/FlexibleToast posted on Feb 23, 2026 22:23
In reply to: https://lemmy.umucat.day/comment/2403667

Older 10G SFP+ models were definitely power hungry. I think they’ve gotten better since then, but I haven’t really looked into how much better.

https://lemmy.world/comment/22309369
$$3430
https://fedia.io/u/DaGeek247 posted on Feb 24, 2026 01:11
In reply to: https://piefed.world/comment/4007627

Yeah. I ended up getting a couple ms of latency back when i pulled the isp router too.

https://fedia.io/m/selfhosted@lemmy.world/t/3493611/-/comment/14171711

OpenWrt & fail2ban

$$1785
https://lemmy.world/u/pogodem0n posted on Feb 20, 2026 18:36

Hi, c/selfhosted! This is my first post on Fediverse and I am glad to be making it here.

I recently got fed up with having to use Tailscale to access my server at home and decided to expose it publicly. A friend recommended segregating the server into a dedicated VLAN. My router’s stock firmware does not allow that, so I flashed OpenWrt on it (I am amazed how simple and easy the process was).

Getting the router to actually assign an IP address to the server was quite a headache (with no prior experience using OpenWrt), but I managed to do it in the end with help from a tutorial video on YouTube.

Now, everything is working perfectly fine and as I’d expect, except that all requests’ IP addresses are set to the router’s IP address (192.168.3.1), so I am unable to use proper rate limiting and especially fail2ban.

I was hoping someone here would have experience with this situation and could help me.

https://lemmy.world/post/43381650

$$1904
https://lemmy.dbzer0.com/u/mic_check_one_two posted on Feb 20, 2026 22:05
In reply to: https://lemmy.world/comment/22255233

Yeah, Tailscale’s “zero-config” idea is great as long as things actually work correctly… But you immediately run into issues when you need to configure things, because Tailscale locks you out of lots of important settings that would otherwise be accessible.

For instance, the WiFi at my job blocks all outbound WireGuard connections, meaning I can’t connect to my tailnet when I’m at work unless I tether to my personal cell phone (which has a monthly data cap). Tailscale is built on WireGuard, and WireGuard only. If I could swap it to use OpenVPN or IKEv2 instead, I could bypass the problem entirely. But instead, I’m forced to run an OpenVPN server at home and connect with that rather than with Tailscale.

https://lemmy.dbzer0.com/comment/24537584
$$1987
https://lemmy.world/u/non_burglar posted on Feb 21, 2026 13:30
In reply to: https://lemmy.today/comment/22402829

Wow, there’s a lot going on in there.

https://lemmy.world/comment/22267007