social.coop is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Fediverse instance for people interested in cooperative and collective projects. If you are interested in joining our community, please apply at https://join.social.coop/registration-form.html.

Administered by:

Server stats:

479
active users

#generativeAI

61 posts49 participants0 posts today

"Amidst the chaos and upheaval at the Social Security Administration (SSA) caused by Elon Musk’s so-called Department of Government Efficiency (DOGE), employees have now been asked to integrate the use of a generative AI chatbot into their daily work.

But before any of them can use it, they all need to watch a four-minute training video featuring an animated, four-fingered woman crudely drawn in a style that would not look out of place on websites created in the early part of this century.

Aside from the Web 1.0-era graphics employed, the video also fails at its primary purpose of informing SSA staff about one of the most important aspects of using the chatbot: Do not use any personally identifiable information (PII) when using the assistant."

wired.com/story/social-securit

WIRED · Behold the Social Security Administration’s AI Training VideoBy David Gilbert
#USA#SSA#AI

"The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along.

And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Even OpenAI's latest reasoning models, dubbed o3 and o4-mini, tend to hallucinate even more than their predecessors, showing that the company is actually headed in the wrong direction.

Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users."

futurism.com/google-ai-overvie

Futurism · "You Can’t Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy SayingsBy Victor Tangermann

#GenerativeAI tool marks a milestone in #biology
Trained on a dataset of 100,000 organisms from #bacteria to #humans – and a few extinct ones – #Evo2 can predict the form and function of #proteins in the #DNA of all domains of life and run experiments in a fraction of the time it would take a traditional lab.The system can quickly determine what #gene mutations contribute to certain #diseases and what mutations are mostly harmless.
news.stanford.edu/stories/2025
#genetics

news.stanford.eduGenerative AI tool marks a milestone in biologyTrained on a dataset that includes all known living species – and a few extinct ones – Evo 2 can predict the form and function of proteins in the DNA of all domains of life.

Albert Burneko over at Defector has referred to ChatGPT as a "jumped-up Speak and Spell" and I will be using that from now on.
The entire article, "Henry Blodget Invents, Hires, Sexually Harasses, Blogs About Nonexistent AI Subordinate" is an only-child Easter basket of snark, and exactly what I needed on this particular Friday.
#generativeAI #chatgpt #defectormedia #FridayPetty

"The core problem here is designing for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated in people’s lives, they’ll become psychologically “irreplaceable.”
Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet disturbingly, the rulebook is mostly empty. The European Union’s AI Act, hailed a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or “confidante,” as Microsoft Corp.’s head of consumer AI has extolled. That loophole could leave users exposed to systems that are optimized for stickiness, much in the same way social media algorithms have been optimized to keep us scrolling.

“The problem remains these systems are by definition manipulative, because they’re supposed to make you feel like you’re talking to an actual person,” says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge."

bloomberg.com/opinion/articles

Even #US Gov Says #AI Requires Massive Amounts of #Water
A new gov illuminates environmental impact of #generativeAI.
47-page analysis found cooling #datacenters -- which demand between 100-1000 megawatts of power--constitutes 40% of their #energy consumption, a figure expected to rise as global temperatures increase.
Water usage varies dramatically by location, with geography significantly affecting both water requirements and carbon #emissions.
404media.co/even-the-u-s-gover
archive.ph/OepWn

404 Media · Even the U.S. Government Says AI Requires Massive Amounts of WaterA new government illuminates the environmental impact of generative AI.

"Inherent security flaws are raising questions about the safety of AI systems built on the Model Context Protocol (MCP).

Developed by Anthropic, MCP is an open source specification for connecting large language model-based AI agents with external data sources — called MCP servers.

As the first proposed industry standard for agent-to-API communication, interest in MCP has surged in recent months, leading to an explosion in MCP servers.

In recent weeks, developers have sounded the alarm that MCP lacks default authentication and isn’t secure out of the box — some say it’s a security nightmare.

Recent research from Invariant Labs shows that MCP servers are vulnerable to tool poisoning attacks, in which untrusted servers embed hidden instructions in tool descriptions.

Anthropic, OpenAI, Cursor, Zapier, and other MCP clients are susceptible to this type of attack..."

thenewstack.io/building-with-m

The New Stack · Building With MCP? Mind the Security GapsA recent exploit raises concerns about the Model Context Protocol, AI's new integration layer.

"This report outlines several case studies on how actors have misused our models, as well as the steps we have taken to detect and counter such misuse. By sharing these insights, we hope to protect the safety of our users, prevent abuse or misuse of our services, enforce our Usage Policy and other terms, and share our learnings for the benefit of the wider online ecosystem. The case studies presented in this report, while specific, are representative of broader patterns we're observing across our monitoring systems. These examples were selected because they clearly illustrate emerging trends in how malicious actors are adapting to and leveraging frontier AI models. We hope to contribute to a broader understanding of the evolving threat landscape and help the wider AI ecosystem develop more robust safeguards.

The most novel case of misuse detected was a professional 'influence-as-a-service' operation showcasing a distinct evolution in how certain actors are leveraging LLMs for influence operation campaigns. What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users. As described in the full report, Claude was used as an orchestrator deciding what actions social media bot accounts should take based on politically motivated personas. Read the full report here."

anthropic.com/news/detecting-a

Profile with Claude sunburst
www.anthropic.comDetecting and Countering Malicious Uses of ClaudeDetecting and Countering Malicious Uses of Claude

"Datacentres, vast warehouses containing networked servers used for the remote storage and processing of data, as well as by information technology companies to train AI models such as ChatGPT, use water for cooling. SourceMaterial’s analysis identified 38 active datacentres owned by the big three tech firms in parts of the world already facing water scarcity, as well as 24 more under development.

Datacentres’ locations are often industry secrets. But by using local news reports and industry sources Baxtel and Data Center Map, SourceMaterial compiled a map of 632 datacentres – either active or under development – owned by Amazon, Microsoft and Google.

It shows that those companies’ plans involve a 78% increase in the number of datacentres they own worldwide as cloud computing and AI cause a surge in the world’s demand for storage, with construction planned in North America, South America, Europe, Asia, Africa and Australia."

theguardian.com/environment/20

The Guardian · Revealed: Big tech’s new datacentres will take water from the world’s driest areasBy Luke Barratt
The thing about this is, I worked for bosses in the 1990s who'd spend an afternoon goofing around with FileMaker Pro, and then tell their entire staff they made a "database" that had to be put into "production" ASAP. My belief then was that the boss person went golfing with one of their equally-uninformed boss buddies, heard a bunch of tall tales about some software or another, and then had to mimic what their buddy did so they could brag next time.

This looks like that, except the outputs are creepier.

https://defector.com/henry-blodget-invents-hires-sexually-harasses-blogs-about-nonexistent-ai-subordinate

#AI #GenAI #GenerativeAI #LaborDisciplineAsAService #bosses
defector.com · Henry Blodget Invents, Hires, Sexually Harasses, Blogs About Nonexistent AI Subordinate | DefectorHave you ever been lonely? I mean really lonely. Think of the loneliest you ever were. Transpose that experience of loneliness to the bottom of a mineshaft. Oof, right? Anyway, bear that in mind. Former Business Insider CEO and co-founder Henry Blodget, these days the sole proprietor and staffer of the blog/media company Regenerator, published […]