In reply to: https://social.coop/users/smallcircles/statuses/116289320453692795
@smallcircles OK, great. Let's try to keep the conversations there instead of here, so everyone can participate.
@smallcircles I'm not sure. It's pretty important for me to track user stories as issues -- that's been the best way to get things done in task forces so far.
@julian @Profpatsch oh, yeah, definitely. It's really our only way to authenticate requests right now.
@hongminhee oh, I'm so happy. I've seen too many implementations that assume the keyId is a fragment, and just load that as the actor.
And I saw one that loaded the actor of the received activity and verified the signature against the actor's key, ignoring the keyId entirely!
I knew you would do it right! Thanks for the reassurance
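The correct flow the thread describes can be sketched in a few lines: dereference the signature's keyId (with its fragment removed), then select the key whose full id matches the keyId and whose owner is that actor, rather than trusting the activity's actor field. This is a minimal illustration, not production code; the `publicKey` shape follows the common ActivityPub convention, and `select_signing_key` is a hypothetical helper name.

```python
def select_signing_key(key_id: str, actor_doc: dict) -> dict:
    """Pick the public key that the HTTP signature's keyId actually names.

    `actor_doc` is the JSON document fetched by dereferencing the keyId
    with its fragment stripped. Matching is done on the full keyId, never
    on the fragment alone, and the key must be owned by this actor.
    """
    # publicKey may be a single object or a list; normalize to a list.
    keys = actor_doc.get("publicKey", [])
    if isinstance(keys, dict):
        keys = [keys]
    for key in keys:
        if key.get("id") == key_id and key.get("owner") == actor_doc.get("id"):
            return key
    raise ValueError(f"no key matching {key_id!r} owned by this actor")
```

The actual cryptographic verification then uses the returned key's PEM material; the point here is only the lookup discipline.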
Great, thanks.
@smallcircles @julian I think we might have different ideas about what the ActivityPub API task force is for.
To me, it's about making it possible for clients to use different servers, and different implementations of the API. That's going to include the social API defined in the ActivityPub standard, but it will also encompass things like rate limits, authentication, caching, CORS, and so on.
How that all gets documented will probably be in one or more community group reports.
The extent to which the default profile becomes a 'straitjacket' impacts scope, applicability, and usability. I guess it's all right as long as there's sufficient flexibility and extensibility taken into account. Guess the "sufficient" does the heavy lifting here.
@smallcircles @julian I think that's always a tension in standards! How do you make it explicit enough that developers can build interoperable software, but extensible enough that they can try new things?
I think one pattern that works well is some base-level standards assumed, and easy ways for extensions to be discoverable and negotiable. If your preferred extension isn't available from the software on the other side of the line, you fall back to the base-level standard.
So, if the rate limit is 300 requests every 5 minutes, and you've already used 143 requests, you might see headers like this:
X-RateLimit-Remaining: 157
X-RateLimit-Reset: 2026-03-22T22:10:00Z
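A client-side sketch of the fallback pattern above, assuming the de facto X-RateLimit-* header names and Mastodon's ISO 8601 reset format (other servers vary, so a real client would need to handle the other variants too):

```python
from datetime import datetime

def seconds_until_reset(headers: dict, now: datetime) -> float:
    """Return 0 if requests remain, else seconds until the window resets.

    Assumes X-RateLimit-Remaining is an integer and X-RateLimit-Reset is
    an ISO 8601 datetime, as Mastodon sends them.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0
    # Normalize the trailing "Z" so fromisoformat accepts it on older Pythons.
    reset = datetime.fromisoformat(headers["X-RateLimit-Reset"].replace("Z", "+00:00"))
    return max(0.0, (reset - now).total_seconds())
```

With the example headers above (157 requests remaining), the client would not wait at all; only a depleted quota triggers a sleep until the reset time.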
@julian Unfortunately, there are a ton of conflicting variations on this pattern. Some APIs use a Unix timestamp for the reset datetime (!), others use HTTP-date values. Mastodon uses an ISO 8601 datetime.
The X-RateLimit-* headers also don't work well if there are multiple quota policies. That can happen if there are particular types of requests that are under a stricter quota than others. There are some variants that APIs use, but they're specific to the platform.
@julian The big advance is the new rate limit headers RFC draft:
https://datatracker.ietf.org/doc/html/draft-ietf-httpapi-ratelimit-headers
It supports having multiple policies. It's very clean and elegant. Unfortunately, it's still in draft stage. It's probably good to be ready for future changes if you're going to implement this.
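The draft expresses multiple quota policies as a structured-field list. The exact field names and syntax have shifted between draft revisions, so this is only a sketch assuming one recent shape, e.g. `RateLimit-Policy: "burst";q=100;w=60, "daily";q=1000;w=86400` (quota `q` per window of `w` seconds):

```python
def parse_ratelimit_policy(value: str) -> dict:
    """Parse a RateLimit-Policy-style header into {policy_name: {param: int}}.

    Hypothetical simplified parser for illustration; a real implementation
    should use a structured-fields parser and track the draft's revisions.
    """
    policies = {}
    for member in value.split(","):
        name, *params = [p.strip() for p in member.split(";")]
        policies[name.strip('"')] = {}
        for param in params:
            key, _, raw = param.partition("=")
            policies[name.strip('"')][key] = int(raw)
    return policies
```

Naming each policy is what makes multiple concurrent quotas workable: a client can see that it has exhausted the "burst" window while the "daily" window still has headroom.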
> Rate limits are a common part of APIs.
Yes, of API *implementations*, and they may become part of the public interface of those implementations. Whether they should be part of an open standard protocol specification is a different matter imho. Perhaps in a separate implementation guide, suggesting recommended practices.
Or perhaps there may be some way to formulate a generic mechanism in the protocol specification that makes rate limits an extension point, without pinning to a particular method, esp. if it is only a de facto standard.
(Other example. The fediverse is still pinned to an expired draft of HTTP signatures.)
OTOH if the goal of the task force is to mostly just provide implementation guidance, and maybe a reference impl, then I guess examples of rate limiting may be provided.
@smallcircles @julian the point of the API task force is to make using the API across servers possible. That's why we're doing the OAuth work. I think rate limiting is part of the basic profile; it's one of the things you need to support to use the API across different servers.
@julian There are 3 main clusters.
They're linked here for the ActivityPub API task force, but they also apply for the federation protocol:
https://github.com/swicg/activitypub-api/issues/4#issuecomment-4083573914
Anyway, here's my thought: make collection pages real, stable objects, with fixed contents and real modification dates. Return only references, not embedded objects. Do filtering, though. And make pages big -- 100 items or more.
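Sketched as a function, the page design described above might look like this; the builder name and URLs are illustrative, but the key properties follow the ActivityStreams vocabulary. Because the page embeds only IRIs, its bytes stay fixed once written, so a real modification date (and hence caching) becomes honest:

```python
def make_collection_page(page_url: str, item_ids: list, next_url: str = None) -> dict:
    """Build a stable OrderedCollectionPage whose items are references only.

    Embedding IRIs instead of full objects means the page never changes
    when an item is edited; clients fetch and cache items separately.
    """
    page = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": page_url,
        "type": "OrderedCollectionPage",
        "orderedItems": list(item_ids),
    }
    if next_url:
        page["next"] = next_url
    return page
```

With pages of 100+ items and fixed contents, a server can serve them with long-lived cache headers, and a client can page through an outbox without re-fetching unchanged pages.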
@evan Mastodon’s approach is interesting in this regard: it returns embedded objects for local items, and references for remote objects. Best of both worlds
@django it's good in some ways, but they still don't return Last-Modified headers.
@smallcircles @julian I disagree!
Rate limits are a common part of APIs. For apps to work across servers, the servers need to provide roughly the same interface.
Using standard rate-limiting headers lets client apps detect what rate limits they will be held to. It reduces the uncertainty.
Fortunately, there is a well-known de facto standard and an even better IETF standard on the way. We should point them out.