#SocialCoop inline poll: should SocialCoop be one of the signatories of the [[Fedipact]] effort to *preemptively defederate* with Threads.net?
https://www.loomio.com/d/AZcJK6y2 is an ongoing Loomio discussion about this but I wanted to see some in-instance discussion ideally.
@flancian I think the "limit" option is the best choice for us, and I think that prevents us from being a signatory.
@ntnsndr thanks for your input -- I think I agree!
@3wordchant @ntnsndr thank you so much for raising this point!
I am unaware of the fraction involved, and you're right that I should be made aware. I am also unaware of the details of [[threads]]'s position w.r.t. blocking well-defined subsets of users en masse, which is the direction I think we should go in for very large instances that cater to large, diverse populations while maintaining a reasonable approximation of a rational, pro-social ethical stance in cases of conflict.
@3wordchant @ntnsndr in general I just want to try to think first, as a community, of the large number of *people* who are on [[threads]] because that's where their friends are, for example -- and how to help them onboard to the #Fediverse as well as we can!
I would rather their first contact be with friendly, open people and groups like those at #socialcoop.
@3wordchant @ntnsndr of course no tolerance for fascists goes without saying?
@flancian @3wordchant Unfortunately I think there is a need to treat Threads a bit differently than other instances, given that it is so large and varied. Despite its failures of enforcement and policy, there is at least a bare-bones policy against hate speech, which distinguishes it from platforms that actively encourage such things. https://help.instagram.com/477434105621119?ref=igtos&helpref=faq_content
The problems it poses should be weighed against the benefits, esp. enabling our members to reach a larger network of people.
@flancian @3wordchant I don't think we gain much leverage over Threads by refusing to federate our few hundred members. In contrast, being visible on Threads could help more people there see the option of doing social media cooperatively.
Unlike a space like Gab, most people are joining Threads simply by default, and are not directly associating with the accounts you mention.
I think limiting is an appropriate compromise.
@ntnsndr I don't see how the implication that we'd federate with Gab if it had a few million more non-bigoted users is in line with s.c.'s Federation Abuse Policy.
Folks who want to do outreach to Threads (or Gab) users are completely able to sign up for accounts on those platforms if they like; going back to "balance", it seems obvious that s.c. users' safety is more important than making life slightly more convenient for the subset of users who want to evangelise in that way (1/2)
@3wordchant What do we gain in safety by defederating that we wouldn’t gain by limiting?
If we limit, then we will not see any Threads posts in the federated TL. If SC users decide to follow individual Threads accounts and boost any toxic bigotry into local, they’ll be in violation of our own internal codes of behavior and will be dealt with.
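For reference, this is roughly what a server-level "limit" looks like operationally: a sketch, assuming Mastodon's admin API (which calls this severity "silence"); the instance URL and token variable below are placeholders, not anything real from social.coop:

```shell
# Sketch only: apply a domain "limit" via Mastodon's admin API.
# "https://social.example" and $ADMIN_TOKEN are placeholders.
# severity=silence hides the domain's posts from public/federated
# timelines while still allowing members to follow accounts there.
curl -X POST "https://social.example/api/v1/admin/domain_blocks" \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -d "domain=threads.net" \
  -d "severity=silence"
```

This matches the behavior described above: limited-instance posts stay out of the federated TL, but individual follows still work.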
This does raise a new Q though. Are we in violation of rules if we quote boost something in order to critique it? Are CWs sufficient?
@ntnsndr @flancian
@jotaemei off top (others with more knowledge of ActivityPub might be able to think of further examples), defederating would prevent Threads users from organising harassment campaigns invisibly in replies to s.c users, and prevent s.c. users' content from reaching unexpected audiences of hate groups on Threads by being boosted (or whatever Threads calls it there).
@3wordchant @jotaemei @ntnsndr these are good examples, thank you. Playing devil's advocate here a bit:
- The telephone can be used to organize a harassment campaign. Should numbers not be able to call each other freely because of this? Should the government tap all lines because of this? My gut feel says no to both. Does this intuition not apply here because of speed or some other factor in this particular network? I'm unsure.
@3wordchant @jotaemei @ntnsndr
- On boosts as a danger/weapon: I'm sorry, but I don't see how federation makes the problem significantly worse for what amount to public web posts that can already be scraped, etc. Maybe a visibility rule of 'only show to logged-in users from instances on a user-kept allowlist' would be needed for such cases?
Essentially user-defined per-post federation allowlists might be needed in the long term.
@flancian "authorized fetch" is part of what you're describing, and I hope its adoption continues to increase.
As for "you can still see the content on the web", sure, but there's a wide zone between "technically impossible" and "absolutely trivial to do" – surely you agree that putting *any* friction in the way of the bigots who demonstrably exist on Threads will reduce the amount of harm caused, even if not to zero?
@3wordchant @jotaemei @ntnsndr full disclosure: I am currently not into adopting authorized fetch on Social.coop either. IIUC it makes federation significantly more complex to implement, particularly for smaller/new servers (that don't run Mastodon). I'm happy to be shown wrong here, though; maybe I'm over-estimating the barriers to federation it would add.
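For context, "authorized fetch" is what Mastodon calls secure mode, and on the Mastodon side it is a one-line configuration change (the file path below assumes a standard from-source install; adjust for other deployments):

```shell
# .env.production -- enabling secure mode ("authorized fetch") in Mastodon.
# With this set, remote servers must sign even GET requests for public
# content; that signing requirement is the extra federation friction
# (and implementation burden for small non-Mastodon servers) discussed above.
AUTHORIZED_FETCH=true
```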
@3wordchant @jotaemei @ntnsndr On the principle of minimizing/obstructing harm: this point is of course valuable but it also reminds me of many conversations I've had about scraping the Fediverse. In the end I think there might be a philosophical gap here between camps 'the Fediverse should be part of the open web first' and 'the Fediverse should be a walled garden first' -- a more ethical and federated one, but a walled garden in the end.
@3wordchant @jotaemei @ntnsndr My (surely privileged, tech-bro-influenced) position is currently "open web first": if someone doesn't want their posts to be seen widely, they should use a non-open visibility setting.
This doesn't mean I think we shouldn't defederate from actively fascist instances, or that we shouldn't work to improve the paltry visibility settings we have now in Mastodon. We should do both. It's just that Threads doesn't seem like a fascist/troll instance to me, and I've seen plenty.
@3wordchant @jotaemei @ntnsndr now, if harmful accounts stay up once Threads has set up moderation/admin communication channels... then my position about them will change.
You pointed out earlier that this position might be inconsistent/irrational as the onus of work should be on them given their track record. That's fair. I'm still processing this and I might change my default position because of this.
@flancian Exactly. In February, Facebook will celebrate 20 years of having had the opportunity to set up effective moderation. The parent company's 2022 revenue was over $116 billion; Instagram (the business unit of which Threads is a part) had estimated revenue over $50 billion the same year. I think it's very fair to say that they have had a huge opportunity to improve their content standards, if they were going to.
@3wordchant @flancian @jotaemei I think the basic fact of the matter is that moderation at that scale is a fool's errand. You're always going to be either too restrictive or not restrictive enough for huge numbers of people. That's the beauty of the fediverse—we can be in a global network with more fine-grained moderation choices at the server level.
@ntnsndr while I agree with you about the benefits of decentralisation, I think framing this as a "basic fact" about scale ignores factors specific to FB: its organisational structure, the constituency of its investors, its business model, and the (lack of) legal regulation in its country of origin and in many of the countries where it is most popular.