
It's like a running joke that nobody likes AI but companies are shoving it into products anyway, but as far as I can tell that's not true: most (many?) people accept it, have largely replaced search engines with it, and trust whatever it tells them. I've had multiple people recently pull out their phone and say "ChatGPT says…" to answer simple questions. If I point out that it's wrong a lot, they get defensive and tell me how rare that is and how it's mostly right, without actually verifying this.

Normally I blame valuing convenience above everything else for this sort of thing, but as far as I can tell it's not even more convenient or easier for them: a web search, or Wikipedia, would have turned up more accurate answers to most of their queries (e.g. "What time is this local business open?"). I don't know how to counteract this, though, or how to convince people to care about the harm it does, or even convince them that it's not actually helping them.

@sam as an observer of college-level approaches, it's been disheartening to watch media literacy waver over the last 15 years: from showing distrust of convenient but unpredictably biased sources like Wikipedia, to a 2016-inflected ramping up of good basic approaches like Caulfield's SIFT, and then mostly throwing in the towel in the face of polarization post-2020. Throughout it all, students' default is to pick the first answer they see; critical evaluation is mostly not at hand at all.

@sam critical evaluation / media literacy is certainly opposed to convenience. The easy markers have also largely degraded: at this point we would all have increased hesitations about relying on any non-evaluated marker (academic journal? major news org? recognizable media brand domain? eep). That's not to absolve LLMs of their untrustworthiness, but they are more like the rest in needing cross-verification and applied critical thinking. It's asking folks to do more work, or they may as well take the first answer.

@loppear I think what sets LLMs apart from those is the conversational UI. People conflate "able to write complete sentences that are grammatically correct" (a thing LLMs are good at) with "able to synthesize ideas and context into something new" (a thing they can't do). They may not think about it in those terms, but it gives your brain an easy way to launder "I don't want to think critically" into "this machine has thought critically, and I trust it to have more expertise than me."

Luke O.

@sam and that they present as bias-washing, either explicitly as Google/Bing "summaries" of the results below them or by being marketed as a synthesis of all the sources. Easily (and intentionally) misinterpreted as having already done the hard work of cross-evaluating multiple results. So yes, in that sense (and plenty of others) they are much worse for the information ecosystem.