
Luis Villa

Increasingly think that before LLMs become a genuine misinfo crisis because of the misinfo they themselves generate, they’re just going to stupidly burn out every human moderator on every platform, and allow a wave of human misinfo as a result. (Ongoing🧵) fosstodon.org/@VincentTunru/11

Fosstodon · Vincent Tunru (@VincentTunru@fosstodon.org) · Attached: 1 image
So we recently received a pull request on Firefox Relay that was clearly not written by a human. How could we tell? Well, there were some red flags...

(And I keep doing this thread in part because it is funny, but also because we can’t let speculative “AI will be horrible in the future” distract from the ways AI is horrible *now*. Whether that distraction is intentionally manipulative or just naive, the distraction is real.)

And today in this thread, StackOverflow mods go on strike in part because of AI spam: openletter.mousetail.nl/

openletter.mousetail.nl · Dear Stack Overflow, Inc.

@luis_in_brief Yeah probably.

What are your thoughts on issues like search engines volunteering info from LLMs that turns out to be misinfo? These sorts of situations: bbc.com/news/technology-652025

If you've written about this already, I'd love to read that. Been having a discussion with a couple people on the whole debate about banning LLMs and usage by tools to volunteer results from them when searching/looking up information, and how to address it all.

BBC News · ChatGPT: Mayor starts legal bid over false bribery claim
Brian Hood says chatbot ChatGPT spread false information about him serving time in prison.

@chipx86 They’re definitely going to happen, and they’re bad—we’re already seeing lawsuits and we should see more of them. But my hunch is that they’re also going to be easier to find/filter/control than the spam targeted at a single community. There’s no way for a generative engine to tell the difference between “I need a patch” and “I need a patch (that is going to overwhelm this maintainer)”.

@luis_in_brief Definitely looks like an annoyance for code generation/patch submission.

Having played with using ChatGPT to experimentally write code, as impressive as it's been, I certainly wouldn't want to accept code generated that way. The quality leaves much to be desired.

I could see a lot of people relying on it the way they've relied on StackOverflow to generate code, and in some cases getting themselves into trouble.

@luis_in_brief On a wider scale, I'm curious about the usefulness of bans and the challenges with privacy.

It's funny seeing what ChatGPT sometimes thinks about my career, but then, it hasn't accused me of something terrible, like it has with, say, the person in that article.

If one wanted to exercise a right to be forgotten, surely transcripts could be removed, but it doesn't seem like that's purgeable from the model? So then... train a filter? Or keep PII within a filter (extending GDPR issues)?
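(A minimal sketch of what "keep PII within a filter" could mean: a post-generation pass that redacts known PII patterns and suppressed names before the model's answer is shown. Everything here is illustrative; the patterns, the name list, and the function are assumptions, not any real system's implementation.)

```python
import re

# Illustrative PII patterns to scrub from generated text.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-shaped strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

# Hypothetical list of people who exercised a right-to-be-forgotten request.
SUPPRESSED_NAMES = {"Jane Example"}

def filter_output(text: str) -> str:
    """Redact PII patterns and suppressed names from model output."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[redacted]", text)
    for name in SUPPRESSED_NAMES:
        text = text.replace(name, "[redacted]")
    return text
```

The point of the sketch is only that erasure would live outside the weights: the model still "knows" the content, and the filter is what keeps it from surfacing, which is exactly why it extends rather than resolves the GDPR question.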

@luis_in_brief It seems, in my reading, to mostly be a response to decrees from the landlords at Stack Overflow, Inc. LLM-generated content is the focus of the decree being opposed, but the strike seems less about LLMs specifically and more about community independence.