cross-posted from: https://lemmy.world/post/44770041
Rare SC W.
Corporate interests 🤝 Pirate interests
Interesting SCOTUS ruling. Unanimous decision for Cox Communications, which is unusual.
What stands out to me: the Court drew a line between intentional facilitation of infringement and merely providing infrastructure. That matters a lot for decentralized platforms like the fediverse.
If your instance actively indexes, promotes, or makes it easy to find infringing content, you might be on shaky ground. But if you’re just a pipe that federates ActivityPub streams from other servers? That’s different.
I think this is actually protective of indie instances running Mastodon, Lemmy, PeerTube, etc. You don’t know what every user uploaded. The “intent” requirement is a real shield.
That said, I’d be curious to see how this plays out. Will instances start being sued for “providing the service”? That’s where the line gets blurry.
A very jovial greeting to all,
About 20 minutes ago, I started the build for Hubzilla 11.2; as usual, it will be available for everyone to enjoy and update their own instances in about an hour, so please don’t update until then.
If you’re curious about the code, you are most welcome to check out the Hubzilla code at: https://framagit.org/hubzilla/core/-/releases
and, of course, the docker image code at: https://github.com/dhitchenor/hubzilla
Questions, issues and PRs are all welcome; I’m looking forward to speaking with you.
Our Fediverse & Social Web track has been accepted for @COSCUP@floss.social 2026 (Taipei, Aug 8–9)! We’re planning a full-day, six-hour track on the #fediverse, #ActivityPub, and the open social web.
The CFP for speakers hasn’t opened yet, but we’ll announce it as soon as it’s published. Stay tuned!
#Fediverse #SocialWeb #COSCUP #ActivityPub #fedidev
Our Fediverse & Social Web track has been accepted for @COSCUP@floss.social 2026 (Taipei, Aug 8–9)! We’ll have a full day—six hours—to fill with talks on the #fediverse, #ActivityPub, and the open social web.
The CFP for speakers isn’t open yet, but we’ll announce it here when it is. Stay tuned!
#SocialWeb #COSCUP #fedidev
@hongminhee @COSCUP STAYING ABSOLUTELY TUNED
cross-posted from: https://lemmy.world/post/44736295
A consortium of multiple interested parties, including Murena (the company behind /e/OS), iodéOS, and Volla, is working on an open-source alternative to the Google Play Integrity API, to be offered on smartphones that are not running a Google-certified stock ROM.
For those who do not know, the Google Play Integrity API is Google’s official security and anti-abuse framework that lets Android apps verify that they are running on a genuine (i.e. unmodified) device, were installed from Google Play, and are not being tampered with.
Sadly, this framework tends to discriminate against custom ROMs, i.e. operating systems that do not run Google’s apps and services, regardless of the device’s actual security state.
Full Google Play Integrity is tied to the ROM being certified by Google and running Google apps and services; many banking and government apps make use of it right now.
The consortium around UnifedAttestation wants the new framework to rest on three foundations:
- it will be part of the operating system; apps can add support for it with a few lines of code
- operation of the validation service will be decentralized
- there will be an open test suite for checking and certifying operating systems on specific devices
The whole thing will be open source, developed under the Apache 2.0 license.
Developers of Scandinavian government apps have already indicated interest, considering the project a first mover for Europe.
Personal comment: I think it’s good that there is now a validation service for government and banking apps that is not tied to Google’s infrastructure and, more crucially, does not require Google’s apps and the Play Services to be installed.
Isn’t checking the bootloader enough?
Not really. If I’m running as root or with custom firmware, I can easily fake that my phone’s bootloader is locked when in fact it isn’t.
Attestation creates a “chain of trust”, starting at the hardware level. So, an external website can verify that the hardware -> operating system -> application software are all “intact”.
“intact” is a very subjective term (which is why many technical people are against it), but that definition of “intact” will be defined by Google, Apple, Microsoft, or (possibly) whatever this EU Governing Body is.
However, it will not be defined by you, the device owner.
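As a toy illustration of that chain of trust (nothing here is a real attestation API; the keys, measurements, and two-link chain are invented for the example), a verifier that trusts only the hardware public key can check each signed link in turn:

```typescript
// Toy attestation chain: the hardware key vouches for the OS measurement,
// and the OS key vouches for the app measurement. A remote verifier only
// needs to trust the hardware public key to check the whole chain.
import { generateKeyPairSync, sign, verify, createHash } from "node:crypto";

const measure = (s: string) => createHash("sha256").update(s).digest();

// In reality the hardware key would live in a secure element, not here.
const hw = generateKeyPairSync("ed25519");
const osKeys = generateKeyPairSync("ed25519");

// Each link: the level above signs (next level's public key + measurement).
const osPubDer = osKeys.publicKey.export({ type: "spki", format: "der" });
const osMeas = measure("os-image-v1");
const osLink = sign(null, Buffer.concat([osPubDer, osMeas]), hw.privateKey);

const appMeas = measure("banking-app-v3");
const appLink = sign(null, appMeas, osKeys.privateKey);

// The remote verifier walks the chain downward from the trusted root.
function verifyChain(): boolean {
  const osOk = verify(null, Buffer.concat([osPubDer, osMeas]), hw.publicKey, osLink);
  const appOk = verify(null, appMeas, osKeys.publicKey, appLink);
  return osOk && appOk;
}

console.log(verifyChain()); // true: every link intact
```

If any measurement changes (say, a modified OS image), the corresponding signature check fails and the whole chain is rejected; that is the mechanism, and the fight is over who defines which measurements count as “intact”.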
I have a friend who’s thinking about checking out the fediverse, and is looking for more aerospace (& maybe mechanical?) engineering-focused spaces. Do you guys have a suggested Lemmy, Piefed, or Mastodon instance/community that focuses on those topics? Maybe even a STEM community could be good enough. I know most of us are tech nerds on here, but we’re more computer focused tech nerds 😅
My only requirement is that the instance can be viewed without sign-in, as I don’t want to force them to make an account to check it out.
!aviation@lemmy.zip !aviation@lemmy.world #aviation tag on mastodon
Good list. What I love about these specialized spaces is they’re built around shared interests rather than algorithmic engagement.
I think that’s why projects like Zeitgeist resonate with fediverse folks - we’re trying to measure genuine opinion, not engagement bait. If you can see people who care about aerospace, or science, or privacy talking directly without an algorithm reshuffling the conversation, that’s the internet as it was meant to be.
onUnverifiedActivity() only runs when signature verification fails: missing signature, bad signature, or a key lookup failure. It gives you a chance to handle those cases yourself instead of Fedify immediately returning 401 Unauthorized. If the signature verifies, this hook is not involved.
If you want extra validation for verified activities, do that in your normal .on() handlers. Those run after signature verification, so that’s where app-specific checks belong, like rejecting certain actors or applying your own rate limits.
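A minimal model of that flow (this is not Fedify’s actual API surface; the class and handler shapes here are invented purely to mirror the description above):

```typescript
// Toy model of the dispatch flow described above: signature verification
// gates everything; the unverified hook is a fallback, and app-specific
// checks live in the normal per-type handlers.
type Activity = { type: string; actor: string; signatureOk: boolean };
type Handler = (a: Activity) => string;

class Inbox {
  private handlers = new Map<string, Handler>();
  constructor(private onUnverified: Handler) {}

  on(type: string, h: Handler) {
    this.handlers.set(type, h);
    return this;
  }

  receive(a: Activity): string {
    // Step 1: failed verification goes to the fallback hook only.
    if (!a.signatureOk) return this.onUnverified(a);
    // Step 2: verified activities reach the .on() handlers, where
    // app-level checks (blocklists, rate limits) belong.
    const h = this.handlers.get(a.type);
    return h ? h(a) : "202 ignored";
  }
}

const inbox = new Inbox(() => "401 Unauthorized (custom handling)")
  .on("Create", (a) =>
    a.actor.endsWith("@spam.example") ? "403 blocked" : "202 accepted");

console.log(inbox.receive({ type: "Create", actor: "alice@social.example", signatureOk: true }));  // 202 accepted
console.log(inbox.receive({ type: "Create", actor: "bob@spam.example", signatureOk: true }));      // 403 blocked
console.log(inbox.receive({ type: "Create", actor: "mallory@x.example", signatureOk: false }));    // 401 Unauthorized (custom handling)
```

The key point the model makes: the actor blocklist check never sees unverified traffic, and the fallback hook never sees verified traffic, so the two concerns stay cleanly separated.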
Ah, that makes sense. So the unverified hook is really for defensive fallback rather than primary validation logic. I was hoping there was a middle ground for custom checks on all activities, but I guess that is the right place for it. Really appreciate the clarification.
It's been a fun weekend building a new working fediverse application.
The tech stack so far:
Backend: Go — fast, simple, great concurrency. No magic, just code.
Frontend: SvelteKit — feels like writing HTML that actually works. SSR out of the box.
Database: PostgreSQL — boring in the best possible way.
Queue: Asynq + Redis — async ActivityPub delivery with retry logic. Workers run separately from the API.
Federation: ActivityPub — HTTP signatures, shared inbox, fan-out delivery for groups and followers.
Infra: Docker Compose — one file per instance, easy to spin up new nodes.
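The queue’s fan-out-with-retry behavior can be sketched like this (a toy TypeScript model for illustration only; the actual stack above uses Go with Asynq and Redis, and a real worker would back off between retries):

```typescript
// Toy fan-out delivery queue with bounded retries: each follower inbox
// gets its own job; failed jobs are re-queued until maxAttempts, then
// moved to a dead-letter list.
type Delivery = { inbox: string; attempts: number };

// Stand-in for a signed HTTP POST to a remote inbox; `failing` simulates
// unreachable servers.
async function deliver(inbox: string, failing: Set<string>): Promise<boolean> {
  return !failing.has(inbox);
}

async function fanOut(
  activityId: string,
  inboxes: string[],
  failing: Set<string>,
  maxAttempts = 3,
): Promise<{ delivered: string[]; dead: string[] }> {
  const queue: Delivery[] = inboxes.map((inbox) => ({ inbox, attempts: 0 }));
  const delivered: string[] = [];
  const dead: string[] = [];
  while (queue.length > 0) {
    const job = queue.shift()!;
    job.attempts++;
    if (await deliver(job.inbox, failing)) {
      delivered.push(job.inbox);
    } else if (job.attempts < maxAttempts) {
      queue.push(job); // retry later (a real worker would back off)
    } else {
      dead.push(job.inbox); // give up after maxAttempts
    }
  }
  return { delivered, dead };
}

fanOut(
  "https://example.social/activities/1",
  ["https://a.example/inbox", "https://b.example/inbox"],
  new Set(["https://b.example/inbox"]),
).then((r) => console.log(r));
```

Per-job retry state is what lets one dead server not block delivery to everyone else, which is the main reason to run the workers separately from the API.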
Everything self-hostable. No cloud dependencies. No vendor lock-in.
Still early days — but the foundation feels solid.
And yes, a lot of help from Claude Code. I decided to go all in and use big tech to fight big tech.
#Fedibook #Fediverse #ActivityPub #Go #Golang #SvelteKit #OpenSource #IndieWeb
@sindum Go is the fucking bomb. ❤️
You know, it’s entirely possible to use ATProto without touching anything owned by Bluesky proper
Pretending that Bluesky is the whole of ATProto is like pretending the whole of the Fediverse is Mastodon
Stop using your ignorance as vindication in your choice of home platform, it’s not ATProto vs ActivityPub
It’s the Social Web vs centralized social media
I doubt it, honestly. It’d likely catch a lot of misinfo, yes, but it would likely also classify any new findings that run counter to previous assumptions as misinfo. LLMs can’t keep up to date. And they still have the same issue that whoever trains them gets to decide what is and isn’t misinfo, which becomes a problem when it’s a ubiquitous social media site.
“Correcting” incorrect information with more incorrect information doesn’t improve the situation.
AI tools are inherently unreliable because of the randomness of their text generation.
And worse, Europe doesn’t build its own AIs. LLM fact checking would have to be done by Grok or Claude or some other product from big American tech. And there’s an obvious problem with a social media network trying to avoid American censorship and political bias but “fact checking” with a tool that has American censorship and political bias built into it.