social.coop is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Fediverse instance for people interested in cooperative and collective projects. If you are interested in joining our community, please apply at https://join.social.coop/registration-form.html.


#effectivealtruism


"A couple years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.

Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.

Habryka used the word “crazy” in the non-clinical sense, to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.

“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including of two people who were part of the Zizians’ circle.

Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”

A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me."

theguardian.com/global/ng-inte

The Guardian · They wanted to save us from a dark AI future. Then six people were killed · By J Oliver Conroy

@mojala @ms_eikku That's one reason why some people look for help from something like effective altruism: society isn't handling the tasks assumed to be its responsibility, and on your own you can't possibly calculate the consequences.

Here's the general page effectivealtruism.org/, here's the Finnish one altruismi.fi/, Wikipedia's take in Finnish fi.wikipedia.org/wiki/Efektiiv, and in the world language en.wikipedia.org/wiki/Effectiv; apparently there are by now versions adapted to various ideologies, too. I don't know on what grounds such projects have been criticized in different places, and I don't yet have a position on them myself.

Doing good anywhere is valuable, even if it's with money, but the new world situation, with Trump and all the rest, scrambles a head that isn't good at calculating to begin with (cf. cognitive biases). Donating out of a scarce income, I wouldn't want to end up giving merely to my own conscience or to a future I've imagined.

www.effectivealtruism.org · Effective Altruism

#EffectiveAltruism folx should work on their reading comprehension.

I got an #AI in 2024 retrospective from 80,000 Hours, an Effective Ventures project (related to EA). In it, they mention that "the o1 language model [developed by OpenAI] [...] has the ability to deliberate about its answers before responding."

The OpenAI o1 release says: "We introduce deliberative alignment, a training paradigm that directly teaches reasoning LLMs [...] safety specifications..."

Quite the leap of faith...

As a #philanthropy wonk, I've been an #effectivealtruism skeptic since I first learned about it. To many people it represents a kind of arrogance: the idea that morality can be distilled into utilitarian, quantitative calculations. There's some truth to that, but I think critics forget that it was a direct response to an earlier kind of moral arrogance, one that treated every local nonprofit 'pet cause' as unimpeachable and equally urgent while overlooking the most vulnerable populations, people suffering at the lowest socioeconomic rung in corners of the world largely out of sight and out of mind. If you donate to causes under the mantle of "impact", there's no right way to grapple with this, but you're going to have to grapple with it nonetheless.
vox.com/future-perfect/372519/

An illustration of a man handing something to another man. They’re surrounded by small scenes of people in need of food, clothing and shelter.
Vox · I give to charity — but never to people on the street. Is that wrong? · By Sigal Samuel
Wow this is beautiful, if bleak. Sorry to quote at such length but it's too good. He also pegs #ElonMusk well towards the end (though I'm not going to quote that bit).
Mars does not have a magnetosphere. Any discussion of humans ever settling the red planet can stop right there, but of course it never does.

...

The South Pole is around 2,800 meters above sea level, and like everywhere else on Earth around 44 million miles closer to the sun than any point on Mars. It sits deep down inside the nutritious atmosphere of a planet teeming with native life. Compared to the very most hospitable place on Mars it is an unimaginably fertile Eden. Here is a list of the plant-life that grows there: Nothing. Here is a list of all the animals that reproduce there: None.

Life on earth writ large, the grand network of life, is a greater and more dynamic terraforming engine than any person could ever conceive. It has been operating ceaselessly for several billions of years. It has not yet terraformed the South Pole or the summit of Mount Everest. On what type of timeframe were you imagining that the shoebox of lichen you send to Mars was going to transform Frozen Airless Radioactive Desert Hell into a place where people could grow wheat?
From https://defector.com/neither-elon-musk-nor-anybody-else-will-ever-colonize-mars

#Eugenics #Longtermism #EffectiveAltruism #Mars #MarsColony
defector.com · Neither Elon Musk Nor Anybody Else Will Ever Colonize Mars | Defector · Do you have a low-cost plan for, uh, creating a gigantic active dynamo at Mars’s dead core? No? Well. It’s fine. I’m sure you have some other workable, sustainable plan […]

EU Scream: Ep.110: Philosophy and Future Generations

euscream.com/philosophy-and-fu

MP3:

buzzsprout.com/178148/15694918

Discussion about the long-term future and how to be a good ancestor. No that doesn't mean longtermism or effective altruism. Don't let ancaps steal this too.

<💬>
Close your eyes. Imagine a young person you know and care about. Picture them at age 90. And then think about the kind of world you want to leave them. Is it ridden by conflict and chaos? Or is it peaceful and habitable? Such thought experiments can lead us to change behaviour and priorities. But they also have wider application to government and policymaking, says social philosopher Roman Krznaric, who wrote The Good Ancestor and is Senior Research Fellow at Oxford University’s Centre for Eudaimonia and Human Flourishing.

Roman’s thinking has become part of a push to get governments and leaders to make better policy choices by taking a far longer perspective. That push seems to be bearing fruit. President of the European Commission Ursula von der Leyen may create a portfolio for intergenerational fairness for her next five-year term, and UN Secretary-General António Guterres seems set to appoint a Special Envoy for Future Generations at a summit this month in New York.

But how a focus on future generations works in practice raises thorny questions, among them: how many generations of descendants should we plan for, and over what time spans? And how can the focus on future generations be kept separate from controversial ideas like Longtermism and Effective Altruism that are associated with jailed cryptocurrency mogul Sam Bankman-Fried?

Also in this episode: Roman introduces his new book History for Tomorrow, in which he explores the role of so-called radical flank movements, like Extinction Rebellion. “It’s too late to leave the problems of our time to simmer on the low flame of gradualism,” he says. “You need the disruptive movements to accelerate things.”
</💬>

EU Scream · Philosophy and Future Generations · The vogue for policy on future generations versus ideas like Longtermism and Effective Altruism.

Between EA and AI, we've been sold so many great tools to change the world in great ways. Can hardly wait to see the exciting, wonderful results —

"Behind the scenes of how ‘effective altruism’ went to die in Sam Bankman-Fried’s Bahamas penthouse"
fortune.com/2024/09/02/effecti
#EA #EffectiveAltruism #crypto #cryptocurrency #misinformation

Fortune · Behind the scenes of how ‘effective altruism’ went to die in Sam Bankman-Fried’s Bahamas penthouse · By Andrew R. Chow

I saw another negative toot about #EffectiveAltruism. What's with all the hate? Is it a philosophical concept that has been abused? Yes. Is God a philosophical concept that has been abused? Is the blockchain a great idea that has been abused? The car was a great idea, abused. Who wouldn't have one over a horse? Procreation is a great idea, abused. See a pattern here?

Corruption powers; Absolute corruption powers absolutely.

It's just human nature.

There are no bad ideas, only bad people.

Proud to have been contributing since last month as a volunteer backend developer to the success of Doneer Effectief. Doneer Effectief is an organization that passes 100% of the donations it receives on to genuinely effective causes. Did you know that the most effective charities can have as much as 100 times the impact of the average one? 100 times! So it's definitely worth pausing to consider where you donate!