In reply to: https://ioc.exchange/users/nw/statuses/116330253414940067
Yet another critical vulnerability in systemd, this time involving snapd. Ubuntu folk are affected.
“A serious security issue has been discovered in Ubuntu, and it is gaining attention in the cybersecurity community. The vulnerability is identified as CVE-2026-3888 and mainly affects Ubuntu Desktop systems from version 24.04 onwards. This flaw is dangerous because it allows an attacker with limited access to gain full root privileges. Root access means complete control over the entire system.”
Ah, well, yet another mark against using snap then. My bad. Thanks for letting me know. :)
And that’s why you use at least very basic owner/group and mode permission validation on internal files.
Right now, I have it set up so that the connection initiator calls peerConnection.restartIce() when peerConnection.oniceconnectionstatechange fires and peerConnection.iceConnectionState is failed or disconnected. This triggers a peerConnection.onnegotiationneeded event, where I handle sending a new offer to start renegotiation, which then follows the same steps as the initial negotiation.
The problem is that after all of this, the remote streams for both peers are frozen where they were when the WebRTC failure happened. Does anyone know how to handle this so that the remote streams continue playing after recovery? If possible, I would like to do this without setting up new peer connections, as that would be simpler.
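For context, the flow described above looks roughly like this (a minimal sketch; `pc`, `signaling`, and `isInitiator` are stand-ins for your own RTCPeerConnection and signaling objects, stubbed here so the shape is self-contained):

```javascript
// Hypothetical sketch of the ICE-restart flow described above.
// In a real app, `pc` is your RTCPeerConnection and `signaling`
// is your signaling channel; plain objects stand in for them here.
const pc = { iceConnectionState: "new" };
const signaling = { send: (msg) => {} };
const isInitiator = true;

// Only the initiator restarts ICE, and only on these two states.
function needsIceRestart(state) {
  return state === "failed" || state === "disconnected";
}

pc.oniceconnectionstatechange = () => {
  if (isInitiator && needsIceRestart(pc.iceConnectionState)) {
    pc.restartIce(); // fires onnegotiationneeded with an ICE restart pending
  }
};

pc.onnegotiationneeded = async () => {
  // Same steps as the initial negotiation: create and send a fresh offer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: "offer", sdp: pc.localDescription });
};
```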
@geniodiabolico I haven't looked at Starling in detail yet, but as I understand it, it's intended as a standalone server. Connecting it to your existing blog, regardless of what it runs on, would most likely take some nontrivial programming work. 🫤
Almost two weeks ago, someone on GNOME’s Discourse forum asked whether the missing Google Drive support in GNOME 50 was a bug or a deliberate decision.
GNOME developer Emmanuele Bassi replied, confirming that Drive was no longer supported.
He went on to say that libgdata, the library that coordinates communication between GNOME apps and Google’s APIs, has gone without a maintainer for nearly four years. Furthermore, GVFS dropped its libgdata dependency about ten months ago, and GNOME Online Accounts now checks for that before offering the Files toggle under its Google provider settings at all.
I appreciate that the author suggests that Google take on maintenance of the Drive integration. I’d rather volunteers work on supporting open platforms and protocols, which seems to be what they’re doing.
Which reminds me: I need to get a Syncthing server set up.
Oh damn. I was once at a shitty startup that used Macs/Windows with Google everything plus additional external services. Mail worked through Thunderbird, and Drive also worked through KDE’s KIO.
While I wouldn’t want to maintain these things (similar to Exchange support in Thunderbird or DRM support in Firefox), they are extremely important for harm reduction.
Excited for the official KDE distro. I am a big fan of Fedora Kinoite but will probably give KDE Linux a shot once it is out of beta.
Kinoite kind of is KDE Linux, but better. OSTree relies on GRUB, which is a bit annoying (NixOS uses systemd-boot, though Limine would be nice too), and Kinoite supports more things, like kernel modules.
Oh, but Kinoite will switch to bootc some day, which is currently worse and looks like it will stay that way. Container people vs. git people. Flatpak still uses OSTree just fine.
Here’s an interesting thought experiment.
Way back in the 1980s and 90s, Usenet was a sorta-federated discussion forum (using the NNTP protocol) that was very popular. It still exists and is distributing 400 million messages each day (mostly spam and trash as far as I can tell). Hard numbers are difficult to come by but it seems like Usenet is capable of significantly higher throughput. Why is that?
The big thing holding ActivityPub back is the fan-out. You know the story - someone with 50,000 followers causes their instance to send up to 50,000 HTTP POSTs every time they click the little spinny star or reply to something.
It’s basically a hub-and-spoke network topology. Except everyone takes turns being the hub, ideally, but not much in practice. And in this topology, the hubs are where the strain and bottlenecks are.
Back in the 1980s they had computers literally 1000 times slower than ours, and network links to match. So how did they do this? With a peer-to-peer network topology! When a new post is made, they don’t send it to everyone; they just send it to a handful of other servers. Those servers in turn forward the post on to a handful of other peers, and so on, until the whole network receives the post. No individual server is a single point of failure, and none has to bear the full brunt of orchestrating it all.
Let’s do a picture. A creates a post and sends it to B and D.
A ─ B ─ C
 \     /
  ─ D ─
B sends it on to C.
Meanwhile D sends it on to C also, but C already has it so does nothing more. IRL this would be a much larger mesh. Who peers with whom can be a mixture of manual selection and random spiciness.
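The flood in the picture can be simulated in a few lines (a hypothetical sketch; each server forwards a post to all of its peers and silently drops any copy it has already seen):

```javascript
// Flood simulation of the A/B/C/D example above (hypothetical sketch).
const peers = {
  A: ["B", "D"],
  B: ["A", "C"],
  C: ["B", "D"],
  D: ["A", "C"],
};

const seen = {};       // server -> Set of post ids it has received
const deliveries = []; // log of every hop, for illustration

function receive(server, postId) {
  seen[server] = seen[server] || new Set();
  if (seen[server].has(postId)) return; // duplicate copy: do nothing more
  seen[server].add(postId);
  // Forward to all peers; duplicates are discarded on the receiving end.
  for (const peer of peers[server]) {
    deliveries.push(`${server} -> ${peer}`);
    receive(peer, postId);
  }
}

receive("A", "post-1");
console.log(Object.keys(seen).sort()); // every server ends up with the post
```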
Posts can arrive out of order so each server would need to wait until the dependencies between posts are resolved before making them available to clients. That’s a bit tricky.
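One way to handle that (a hypothetical sketch, with made-up field names: each post carries a `parent` id, and a post is held back until its parent has arrived):

```javascript
// Hypothetical sketch: hold posts until their dependencies have arrived.
const available = new Set(); // post ids visible to clients
const pending = new Map();   // parentId -> posts waiting on that parent

function onPostArrived(post) {
  if (post.parent && !available.has(post.parent)) {
    // Parent hasn't arrived yet: park this post.
    const waiting = pending.get(post.parent) || [];
    waiting.push(post);
    pending.set(post.parent, waiting);
    return;
  }
  release(post);
}

function release(post) {
  available.add(post.id);
  // Releasing a post may unblock replies that were waiting on it.
  for (const child of pending.get(post.id) || []) release(child);
  pending.delete(post.id);
}

onPostArrived({ id: "reply", parent: "root" }); // arrives first: held back
onPostArrived({ id: "root", parent: null });    // releases root, then reply
console.log(available.has("reply")); // true
```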
In the ActivityPub-over-NNTP idea, each NNTP post would be a thin wrapper around a data structure containing the HTTP headers (with signature and digest) and JSON that a normal HTTP POSTed Activity would have. Servers would use NNTP to distribute the activities and upon receiving one they’d POST it to their own /inbox to run the usual ActivityPub processing that their AP instance does.
{
  "headers": {
    "Signature": "...",
    "Digest": "...",
    "Date": "..."
  },
  "activity": { ... normal ActivityPub JSON ... }
}
In this way there is no need to rewrite ActivityPub semantics as only the transport layer changes. Our existing inbox logic remains intact.
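The wrap/unwrap step might look something like this (hypothetical helper names; the delivery to /inbox is stubbed out):

```javascript
// Hypothetical sketch: wrap an outgoing activity for NNTP transport,
// then unwrap it on the receiving side and hand it to the local inbox.
function wrapForNntp(headers, activity) {
  return JSON.stringify({ headers, activity });
}

function unwrapAndDeliver(nntpBody, postToInbox) {
  const { headers, activity } = JSON.parse(nntpBody);
  // Re-submit as an ordinary ActivityPub delivery; signature checking
  // and the rest of inbox processing stay exactly as they are today.
  return postToInbox("/inbox", headers, activity);
}

// Round trip with a stubbed inbox:
const wire = wrapForNntp(
  { Signature: "...", Digest: "...", Date: "..." },
  { type: "Create", object: { type: "Note", content: "hi" } }
);
unwrapAndDeliver(wire, (path, headers, activity) => {
  console.log(path, activity.type); // "/inbox Create"
});
```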
NNTP comes with a lot of historical baggage so we’d probably need to evolve the protocol a bit. Maybe use HTTP requests (even http2 streams?) instead of the original line-oriented text protocol using raw TCP sockets. But you get the idea.
Thoughts?
Yes, I think that’s part of NNTP already. Each post carries a list of the servers it has traveled through, so when considering where to forward the post on to, a server can check whether it’s already been there. That would help somewhat, but there would still be quite a few times when a server discards posts.
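That check can be sketched like so (hypothetical names; the post's traveled-through list is modeled as a `path` array, in the spirit of NNTP's Path header):

```javascript
// Hypothetical sketch of the path check described above: stamp
// ourselves onto the post's path, then skip peers already in it.
function relay(post, myName, myPeers) {
  const stamped = { ...post, path: [...post.path, myName] };
  const targets = myPeers.filter((peer) => !stamped.path.includes(peer));
  return { stamped, targets };
}

// A post that originated at A arrives at B:
const incoming = { id: "msg-1", path: ["A"] };
const { stamped, targets } = relay(incoming, "B", ["A", "C", "D"]);
console.log(targets); // A is skipped; forward only to C and D
```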
I haven’t gotten deep enough into this yet but I’m sure there have been protocol improvements since NNTP that address this. Gossip protocols have been experimented with since the early 2000s. For example, rather than servers saying to others “I have this post, do you want it?” they might say “the most recent post I have in the fediverse@lemmy.world community is #5” and another server which only has posts #1 and #2 would respond “cool, give me posts #3, #4 and #5”.
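That exchange reduces to a tiny bit of arithmetic (a hypothetical sketch, assuming posts in a group carry consecutive sequence numbers):

```javascript
// Hypothetical sketch of the gossip exchange described above.
// One server advertises its highest post number for a group; the
// other replies with the numbers it is missing.
function missingRange(localHighest, remoteHighest) {
  if (remoteHighest <= localHighest) return []; // nothing new for us
  const wanted = [];
  for (let n = localHighest + 1; n <= remoteHighest; n++) wanted.push(n);
  return wanted;
}

// "fediverse@lemmy.world: my latest is #5" -> a server holding #1-#2 asks:
console.log(missingRange(2, 5)); // [ 3, 4, 5 ]
```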
Good point.
It took the whole article to get to what he’s actually proposing to do with AI: effects. He wants to eliminate effects made by humans.
For years, many Ubuntu users have felt that traditional .deb packages were being gradually sidelined in favor of the Snap ecosystem.
It started quietly. Double-clicking a downloaded .deb file would open it in Archive Manager instead of the installer. Then came controversial changes. Apps like Chromium, Thunderbird and Firefox began defaulting to Snap packages, even when users tried installing them via the apt command in the terminal.
It continued further as Ubuntu introduced its new Snap Store. In Ubuntu 24.04, it ignored .deb packages completely. Double-clicking a .deb file would open the App Center, which wouldn’t actually install the package and would just hang there. That behavior was later reverted after I highlighted it through It’s FOSS.
Agreed, but how many support requests do you see from people trying to understand mesh networking well enough to set a server up in Docker? Those are the people who use it and need it, and, surprise, they use Docker, not Flatpak or Snap.
I am talking about non-powerusers, non-IT personnel who install Steam or GIMP via Snap or Flatpak and then flood support threads with their problems. In those cases sandboxing is fucking stupid.
It’s also really easy to avoid the company that made the slow, proprietary, corporate-owned Snaps the default as a slap in the face to the open-source community they depend on.
Maybe they should implement something beyond toy security.