Increasingly think that before LLMs become a genuine misinfo crisis because of the misinfo they themselves generate, they’re just going to stupidly burn out every human moderator on every platform, and allow a wave of human misinfo as a result. (Ongoing) https://fosstodon.org/@VincentTunru/110146668322885122
As I was saying https://mastodon.social/@rodhilton/110197338997037252
As I was saying https://twitter.com/conspirator0/status/1647671394476478467
(And I keep doing this thread in part because it is funny, but also because we can’t let speculative “AI will be horrible in the future” distract from the ways AI is horrible *now*. Whether that distraction is intentionally manipulative or just naive, the distraction is real.)
(cont.) @kissane adds another example to what I have been saying: very mundane LLM-generated spam is going to destroy online moderation teams well before the custom AI misinformation that some people are hyperventilating about: https://mstdn.social/@kissane/110328423077544500
And today in this thread, Stack Overflow mods go on strike in part because of AI spam: https://openletter.mousetail.nl/
Bringing back this classic “AI is going to destroy mods before anyone else” thread for this DDoS on several GH/FOSS projects (via @jarekpotiuk) https://fosstodon.org/@jarekpotiuk/113896395317444556
@luis_in_brief Yeah probably.
What are your thoughts on issues like search engines volunteering info from LLMs that turns out to be misinfo? This sort of situation: https://www.bbc.com/news/technology-65202597
If you've written about this already, I'd love to read that. Been having a discussion with a couple of people about the whole debate around banning LLMs, about tools volunteering LLM results when searching/looking up information, and how to address it all.
@chipx86 They’re definitely going to happen, and they’re bad; we’re already seeing lawsuits and we should see more of them. But my hunch is that they’re also going to be easier to find/filter/control than the spam targeted at a single community. No way for a generative engine to tell the difference between “I need a patch” and “I need a patch (that is going to overwhelm this maintainer)”.
@luis_in_brief Definitely looks like an annoyance for code generation/patch submission.
Having experimented with using ChatGPT to write code, as impressive as it's been, I certainly wouldn't want to accept code generated that way. The quality leaves much to be desired.
I could see a lot of people relying on it the way they've relied on Stack Overflow to generate code, and in some cases getting themselves into trouble.
@luis_in_brief On a wider scale, I'm curious about the usefulness of bans and the challenges with privacy.
It's funny seeing what ChatGPT sometimes thinks about my career, but then, it hasn't accused me of something terrible, like it has with, say, the person in that article.
If one wanted to exercise a right to be forgotten, surely transcripts could be removed, but it doesn't seem like that's purgeable from the model? So then... train a filter? Or keep PII within a filter (extending GDPR issues)?
@luis_in_brief In my reading, it seems mostly to be a response to decrees from the landlords at Stack Overflow, Inc. LLM-generated content is the focus of the decree being opposed, but the strike seems less about LLMs specifically and more about community independence.