How Decentralized Is Bluesky Really? https://dustycloud.org/blog/how-decentralized-is-bluesky/
A technical deep-dive, since people have been asking me for my thoughts. I'll expand a bit on some of the key points here in a thread.
First of all, before I say anything else, my goal here is NOT to be mean to Bluesky's devs. I know there's a lot of fediverse-Bluesky rivalry, but I have enormous respect for Jay Graber and her team and I know they believe in their vision!
This started because I got some very kind encouragement by @bnewbold to write something. I'm trying to be technical in my analysis, not unkind. I hope that can be recognized, really and truly.
That said, let's get to the summary: Bluesky / ATProto are not decentralized or federated, according to my analysis.
However, the "credible exit" goal is worth pursuing, and does use decentralization techniques! But it is not decentralization/federation without moving the goalposts on those terms.
Furthermore, I think Bluesky is providing something valuable: a lot of people are trying to leave X-Twitter *right now* because it has become a completely toxic place.
The fact that Bluesky's team has managed to scale to receive such users is incredible, nearly miraculous.
On the fediverse we also see a lot of accusations of Bluesky being owned by Jack Dorsey, and this isn't true. My understanding is that Jay performed an impressive amount of negotiation to allow Bluesky to receive funding independently.
These days Jack Dorsey is instead focusing on Nostr, which I can only describe as "a sequel to Secure Scuttlebutt with extremely bad vibes where bitcoin people talk about bitcoin"
I participated a bit in the process back when Bluesky was Jack Dorsey and Parag Agrawal's personal project. I also believe Jack and Parag were sincere about Bluesky as a decentralized social network protocol that Twitter would adopt; that was the directive Bluesky was given as an organization.
When Jay Graber was awarded the position to lead Bluesky, I was not surprised. To me, Jay was the obvious choice to deliver on that directive, and I do think Jay is an excellent leader.
There is also something which Bluesky gets right which the fediverse does not. I mentioned that Bluesky uses decentralization *techniques*, and the most important of those is content-addressing. This allows content to exist even when a server goes down.
This is a great decision and I have advocated that the fediverse do so as well. In fact several years ago I wrote a demo in @spritely's early days showing off how one could build a content-addressed ActivityPub in a spec-compatible way.
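To make "content-addressing" concrete for folks following along: the idea is that a piece of content's identifier is derived from the bytes themselves, so any server holding the same bytes can serve it under the same address. A minimal sketch in Python (the post format here is made up for illustration):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a stable identifier from the bytes themselves.

    Because the address is a hash of the content, any server
    holding the same bytes can serve them under the same address,
    so the content can outlive its original host.
    """
    return "sha256-" + hashlib.sha256(data).hexdigest()

post = b'{"type": "Note", "content": "Hello, fediverse!"}'
addr = content_address(post)

# A mirror can verify a copy it fetched from *anywhere*:
fetched_copy = post  # pretend this came from some other server
assert content_address(fetched_copy) == addr
```

Contrast this with location-addressing (a plain URL), where the identifier is "whatever that one server says", and the content dies with the server.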
So I have opened here with the things that Bluesky does well. As you may guess, we are about to move into critiques territory, and it's a lot of critiques from a *decentralization*/*federation* perspective. It doesn't erase the "credible exit" goals, which I think are good still.
Let's dive in...
A frequent way of describing Bluesky's decentralization, including by Bluesky's team, is "it's like a bunch of blogs (Personal Data Stores), and then the relay/appview/etc pieces are like search engines"
This is a reasonable starting point for thinking about things, so let's run with it.
In fact ATProto's own tutorial even says "Think of our app like a Google": https://atproto.com/guides/applications
And indeed this is a good way to think about things. But it doesn't seem so bad, because we have Personal Data Stores like blogs, so probably things are fine, right?
While most people would argue that blogs and websites are open, few would argue that *Google* is open. So this is a curious place to begin thinking, and yet structurally, it is actually quite apt.
PDS'es are like blogs, the rest is like Google. But relays/appviews/etc do a lot *more* than Google.
Relays, AppViews, etc don't just index information. Blogs and their interactions are generally slow-moving, but social media is direct and responsive. Notifications and fast interactions are key. So search engines, yes, but we should also think of these components as doing much more.
But let's stay on this blog/search engine analogy for a while before we unpack what it means on a *technical* level, which is interesting. Let's analyze for the moment from a power dynamics level.
Building a web search engine is actually pretty easy these days; you can do so with off-the-shelf tools. And yet there are really only a couple of search engines: Google and Bing (DDG mostly uses Bing). The information is right there. *Anyone* could run their own engine. Why don't they?
Furthermore there is an interesting connection between blogs and social media: the death of blogs + feed aggregation directly aligns with the rise of social media.
How many of you were around for the birth and awkward death of blog engine feeds? Because I was! Oh, remember Google Reader?
Feed readers are also simple, and in fact they were even easy to self host, even on the desktop! But Google Reader came in and was such a good design that everyone used it.
When it went away, blogs were still *there*. But blogging as a *syndication medium* died. One big player left, and it's gone.
This was sad for me especially; my favorite medium on the internet ever was webcomics. Webcomics still exist, sort of, but the loss of independent publishing and aggregation meant that they had to change to survive.
Webcomics started to be reshaped to fit Twitter's image box.
This may seem like an enormous aside, but it isn't. The big sell currently is that "you don't need to run a relay because you can run your own PDS!" but as I have illustrated here, the distribution and syndication power dynamics matter a lot.
So. It isn't enough to self-host your own PDS. Whether or not people can run their own relays/appviews/etc actually matters *a lot* if we want this stuff to survive.
So, can we? How hard is it to run your own AppView/Relay/etc?
Today, only one organization runs a Relay that really matters, or an AppView that people use for anything beyond fun aggregation of statistics. Nothing resembles meaningful decentralization of the network. It's all run by one company: Bluesky.
But could we change that?
People are trying; most notably alice has done some great work recently: https://alice.bsky.sh/post/3laega7icmi2q
So now someone *can* run their own Relay (not the AppView yet, but maybe soon), and we're getting a sense of the cost and scale. This is good news; we didn't know before.
In fact we also have an idea of the rate of growth. Approximately 4 months prior, @bnewbold.net posted an article detailing how to run a Bluesky relay: https://whtwnd.com/bnewbold.net/entries/Notes%20on%20Running%20a%20Full-Network%20atproto%20Relay%20(July%202024)
This is great. We need more people trying to do so to get a sense of how decentralized things can be.
Just focusing on storage: in July, @bnewbold.net estimated the storage needed to run a Bluesky relay at approximately 1 terabyte. Just 4 months later, at the start of this month (November), alice estimates nearly 5 terabytes.
This is a fast growth rate and this is *before* the big post-election influx.
I tried estimating how much this would cost. As a lazy approximation, I priced a 5 terabyte machine on Linode to see what self-hosting would cost, and it came to approximately $55k a year: https://bsky.app/profile/dustyweb.bsky.social/post/3lah5n3kld42q
That's a lazy estimate, but that's also what many people in the US make in a year.
However @bnewbold pointed out, correctly, that there were cheaper options available. Even using Linode's block storage for the storage component would be cheaper (though still expensive), and this is true: https://bsky.app/profile/dustyweb.bsky.social/post/3lah5n3kld42q
In fact, by choosing a dedicated server plan, @bnewbold and alice's estimate got the cost down to close to $200/month, much cheaper than my figure.
But there's a problem: that's cheap because you've got a server with a dedicated disk...
Even if we take the dedicated hosting provider that @bnewbold suggested in July and scale the cost to the pre-election storage requirements, we are adding a massive amount of cost, over $400/month more.
But worse, we have reached the limits of what a dedicated server can do. From this point forward we *have to* move to abstracted storage, because we're hitting the limits of what's offered for cheap dedicated storage on a single machine. And this number will only grow, and as noted previously, at an enormous rate.
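To put a rough number on that growth rate, here's the back-of-the-envelope math from the figures in this thread (~1 TB in July, ~5 TB four months later in November), treating the growth as compounding monthly. This is a projection from two data points, not a forecast:

```python
# Two data points from the thread: ~1 TB (July), ~5 TB (November).
start_tb, end_tb, months = 1.0, 5.0, 4

# Implied compound monthly growth factor: 5^(1/4)
monthly_factor = (end_tb / start_tb) ** (1 / months)
print(f"~{(monthly_factor - 1) * 100:.0f}% storage growth per month")

# If (big if!) that rate held for another year past November:
projected_tb = end_tb * monthly_factor ** 12  # 5 * 5^3 = 625
print(f"~{projected_tb:.0f} TB after another 12 months")
```

Roughly 50% growth per month. Even if the real curve flattens, it's clear why a single cheap dedicated disk stops being an option quickly.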
@cwebber here to say I do indeed remember Google Reader. I 100% switched from Google Reader to Twitter. It was basically a 1:1 replacement (sad).
@lopezsanchez @cwebber me too. But they are definitely at the margin. Many blogs died in that transition.
@cwebber That's rather bad. Ideally a social media node would have a constant storage need, not one which needs to scale with the amount of users, the media, etc.
Maybe ephemeral services like IRC were not such a bad idea to begin with. ¯\_(ツ)_/¯
The conclusion that we need big billionaire companies, just to fill our "need" to post cat videos, is a little bit silly. BUT when considering climate change, that fits the picture.
We live in absurd times.
@cwebber what is in the 5TB? The whole history? Is media included?
@cwebber I question the assumption that a relay needs to store the entire network to be useful. Just gathering every reply/like/follow/etc that mentions one of my posts (and perhaps the posts of people I follow), and throwing everything else to /dev/null, would already be hugely useful, and if things are storage-bound (as sounds the case) might be dramatically cheaper.
(To be clear I broadly agree with you here, but seeing as all my friends seem to be ending up on bluesky I've done a lot of thinking about how I can be On There as independently as possible)
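A crude sketch of that partial-relay idea: consume the firehose but only keep events that touch you or your follows, dropping everything else. (The event fields and DIDs here are invented for illustration; real atproto firehose records look different.)

```python
# Hypothetical "partial relay" filter: keep only firehose events
# that involve me or people I follow; drop the rest.
MY_DID = "did:plc:me"
FOLLOWING = {"did:plc:alice", "did:plc:bob"}

def keep(event: dict) -> bool:
    """True if this event is worth storing for my own use."""
    interesting = FOLLOWING | {MY_DID}
    return (
        event.get("author") in interesting
        or event.get("mentions") in interesting
        or event.get("reply_to") in interesting
    )

firehose = [
    {"author": "did:plc:alice", "mentions": None, "reply_to": None},
    {"author": "did:plc:stranger", "mentions": "did:plc:me", "reply_to": None},
    {"author": "did:plc:stranger", "mentions": None, "reply_to": None},
]
kept = [e for e in firehose if keep(e)]  # everything else -> /dev/null
```

If storage really is the binding cost, this trades the "global view" a full relay offers for something an individual could afford to run.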
@damon is nostr any good? I like some of the (very limited) technical stuff I've heard about it but get the impression the people are largely blockchain/free-speech-absolutist types
@cwebber I use RSS to subscribe to my webcomics, but there's a surprising number where I can't
(or a sequence of Instagram squares)
@cwebber I remember this sad ending of the RSS era. Still use it, but it’s not the same as before
@cwebber I experienced that so differently! For me google reader wasn't a feed reader, but one of the first social media focussed on news. And still the best to date imo. When I tried reading my feeds later through other apps it felt kinda "lonely"?
@cwebber EEE keeps being a very powerful method ... even when the extinguisher gets extinguished itself
@cwebber Yep! I often cite that as an example of what I call "de-invention"
@cwebber
Speaking of RSS, any chance your blog site will have an RSS or Atom feed? Long threads here are certainly a choice and I (heard) it's possible to subscribe to Fediverse accounts as RSS, but a whole-thread-in-one blog post sounds better to me.
@aaravchen it does have one but browsers no longer tell you it does
it's in the html headers tho https://dustycloud.org/blog/index.xml
@aaravchen @cwebber I usually _View Source_ to find the RSS feed of sites (search for `feed`).
There are browser extensions to help find them (e.g. https://addons.mozilla.org/en-US/firefox/addon/get-rss-feed-url/)
Turn any Mastodon account into an RSS feed by appending .rss
@cwebber they've invented the google of email of twitter......
@cwebber
one_does_not_simply_a_google.jpeg
(oblig.)
@cwebber certainly not accidentally all the
@cwebber sounds to me like they just offload network and storage costs onto users while retaining power
What exactly do you mean about content continuing to exist? Do you just mean splitting into a content server and a regular server, where each would have to be taken down separately?
@amici @cwebber Not sure if this is what she means, but I take it to mean that the copies of a post that appear on our followers' servers and on boosts and so on would still be possible to interact with -- boost them further, reply, and so on -- if our original server goes offline rather than existing in the sort of ghost state they do now, where some people might still be able to see it (until that server drops the cached copy), but with the original gone, nothing can act on it anymore.
This can be done by having servers store their copies with a consistent id (IPFS uses a hash value of the content, IIRC) and asking the servers it knows about if they have a copy of the item instead of only looking in the original location.
@amici a link that's https://foo.example/cat.jpg can go down but a link that's a magnet link of the hash plus a suggested place to get it can be retrieved even if the suggested place goes down
@cwebber so it'll work like a magnet link? I use magnet links a lot personally, although I have no idea how they work under the surface
will look it up though