social.coop is one of the many independent Mastodon servers you can use to participate in the fediverse.

I continue to be negative about generative AI assistants insofar as every time someone has written a piece of any length "co-written by an AI", it has all sorts of errors that, when I point them out, the author hadn't even noticed were there

Most irritating is that MULTIPLE TIMES people have written drafts about my work that simply injected made-up history. If they had just published them, that history would be cyclically referenced in the future as if it were fact

That kind of thing just happens all the time now

Christine Lemmer-Webber

Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge *retention*. People, finding information isn't the biggest missing skillset in our population, it's CRITICAL THINKING, so this is fucked up

AI assistants also introduce more errors at high volume, and ones that are harder to spot, too

microsoft.com/en-us/research/u
slejournal.springeropen.com/ar
resources.uplevelteam.com/gen-
techrepublic.com/article/ai-ge
arxiv.org/abs/2211.03622
pmc.ncbi.nlm.nih.gov/articles/

@cwebber@social.coop There are definitely ways of using these things that are more or less dangerous. I've been piloting one of these at work (Claude 3.5 Sonnet with Aider, if anybody is curious), and the proper way is to actually code-review it, understanding every line it produces. It's really easy to just blast through, not review the changes, and turn the codebase into a generated mess that you can't understand, though.

The former path saves me a little bit of time, and makes some things slightly easier, particularly around having to look up method names in Ruby and Rails (where IDE completion behaves poorly). The latter way makes me like 4x faster, is significantly easier, and is by far the path of least resistance, which is really an issue, because it's objectively the wrong thing to do. It's equivalent to skipping code review on code from a very junior team member but still putting your approval on it.

I'm not surprised if most people are using the latter approach. Every output from this model (which is considered state of the art) has at least one error or subtle edge case that needs to be fixed. It can be impressive, but it's not perfect, and it's foolish to trust a single line of code that it produces without verifying it.

And that's before you even get into the energy use and copyright concerns.

@taylor @cwebber GenAI indeed looks especially impressive with dynamically typed programming languages, and when IDE tooling is lacking or poorly configured.

@taylor @cwebber And we know that everybody will use the second approach, because they will.

Human nature, managerial pressure, whatever. It will happen.
And it will seep from software into bridge building, aircraft design and construction, and everything else.

Extreme pedantry about processes can counteract this, but the current trend is to gut that.

What I'd love to know - who uses AI to build stuff :)

@catarinac I think this wasn't happening before AI

@ointersexo maybe not at such a large scale

@cwebber it’s amazing how many smart people have their heads in the sand about this. They’re so sure that GenAI must be useful for something. Or perhaps, their investors are. 🤦‍♂️

@cwebber Finding information is also a missing skillset. If it weren't, people wouldn't be using AI for that in the first place.

@zilti @cwebber

Finding information is easy. Finding truth is harder.

AI is just promoting junk science and junk education, at scale.

@cwebber i think even among people who have higher than average awareness of how these work, a lot of people are still unprepared for something that produces unreliable information in a way that so strongly resembles real information. like if someone's mental model was previously to associate decently structured writing with "the author at least has Some understanding of the actual subject matter" then they are already at a disadvantage to parse llm output imo

@cwebber yeah it’s pretty much use it or lose it situation

@cwebber And for this we are told - not just by Republicans - that we must be willing to give up any hope of reducing climate change to a manageable level.

@cwebber @PedestrianError because AGI robo-god will solve it for us. 🙄

… we already know how to solve it. They’re just doing the opposite of that. For profits.

@agentultra @cwebber @PedestrianError it was quite telling when "new model that might not have to burn the entire world for profit as much" tanked Nvidia stock, some really perverse incentives there.

@cwebber this is a feature. tech-barons need a malleable, dependent populace.

simon wardley had an excellent article where he likens ai to a new theocracy. while i disagree with his longer term ai optimism, it's a very well-reasoned piece:

"the change of language, medium and tools raises a concern because if you can gain control over these, you can change a person’s reasoning of the world around them. You can tell them how to think."

couple this with reduced critical thinking facilities, and you could "legitimately" capture a democracy for generations by managing truth through AI.

we are seeing this today in the US: language bans and a massive push for AI. this is key to the tech-barons' strategy, especially given the raw access people like elon have -- money, tech/x, influence, political access, physical access. the folks in control are exploiting the opportunity to cement their power.

the scary future of ai isn't robots rising up: it's the humans in control.

medium.com/mapai/ai-and-the-ne

MapAI · AI, Safety, Risks and the New Theocracies | by swardley | Medium

@cwebber @jstatepost

Having #AI do your work is exactly like bullying or paying the class #nerd to do your #homework.

In the short term, you pass, and in the long term, you failed to learn that lesson.

And, the more you do it, the less you learn.

@vor @cwebber @jstatepost

Some people will still do it. But maybe not after they realize that the nerd has a 3-27% error rate, so it won't pass. THEN, do they still take that bargain?

What if it wasn't junior high homework, but rather a medical diagnosis or driving your kid around?

@cwebber
People clamor for labor saving devices, but when they get one, they want it to think. That's not the labor you're saving.

@cwebber @hacks4pancakes Hear hear.

I’d been using perplexity.ai for a few months — it was recommended in an article I read for actually supplying straightforward answers to queries instead of supplying a list of related articles or topics, and usually did so in satisfying ways — but I had a few results from it that were blatantly wrong to anyone with even basic knowledge of the topic. That eroded any confidence about how correct it was on subjects I don’t know the answers to, which is of course a lot of what one would query.

That was when I deleted it.

@cwebber This is why I refuse to use them and did not participate in creating them.

@cwebber Oh, yeah.
Before AI we had to collaborate with people who were lazy thinkers, bad data analysts and liars. Now we have to collaborate with people who are lazy thinkers, bad data analysts and liars - and with an AI, which is a lazy thinker, a bad data analyst and a liar.
Nicely done humanity.

@cwebber "and harder to spot too"

This is a thing that I feel is underestimated by so many people: these tools are trained to generate output that is as *convincing* as possible, regardless of whether the output is *correct*.

@vanderZwan @cwebber well, that's exactly how my former boss worked.

It's a common behavior in [insert_corporation_name].

@Er_mecanico @cwebber I'm extremely lucky and privileged to have never experienced this first-hand, but based on the stories I hear from colleagues I'm not surprised.

Also tracks with the many variations I've seen of the meme that the reason a lot of bosses think LLMs can automate away their employees is that LLMs can actually automate away the kind of bullshitting that their own profession involves.

@cwebber yes, when you have a question, the ability to look at various sources and analyse the information to take your own conclusion is so important. The ability to research. AI clouds all of the sources, instead of looking at all the information available you’re looking at a wall.

@cwebber as somebody who actively takes coding risks to the point where I have to be mindful of things breaking and of my code not being easily understandable by outsiders, it seems perverse seeing a (growing?) body of people accepting increased chaos and brittleness of processes for instant convenience. En masse, it looks like a dangerous moral hazard experiment. A shame ML hadn't been explored by more people prior to the AI boom. Its adoption is akin to using caffeine, amphetamines or opiates as an industrial strategy.

@cwebber So, funny story, I'm a technical translator (mostly German to English, mostly pharma these days) and I've been through this story five years ago.

Machine translation *so very* erodes your translation skills if you're not paying attention. Catching the errors of MT is "less work" so it's paid significantly less (the more corporate the customer, the more likely they'll consider productivity gains to belong to them and them only).

@cwebber You can compensate, but that's a whole separate skillset. And you have to know that you need to do it.

@cwebber and even if you look at information gathering as the missing skillset, a vital piece of that skillset is source evaluation; a statistically plausible bullshit generator is not an adequate source for anything other than plausible sounding filler text

@cwebber At some level, it feels like we've already been going there anyway. But it is getting worse.

As a 40 year old who grew up alongside the Internet (which is a capitalised word, damn it!), I probably don't retain as much knowledge as older people do because "I'll search it when I need it" became prevalent and so the knowledge shifted slightly more towards "how to rediscover the information" rather than "how to retain everything".

BUT I think my cohort still a) needed critical thinking for parsing those results and b) at least retained the knowledge of the important/frequent stuff and a sense of what was out there.

LLM usage? Not so much. It's "Artificial Intelligence" therefore it can Just Do It™ 🙄

@cwebber Sadly, this is because #LLMs (which most people think are AI instead of just a particular tool in the toolset) are primarily designed to "fill in the blank" with the most statistically likely words.

Being a very computationally expensive type of Mad Libs makes them sound convincing even when they are spouting nonsense. People can do this too, but LLMs often excel at it.
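To make the "fill in the blank" idea concrete, here's a toy sketch (my own illustration, not code from any actual LLM): a bigram lookup table that always emits the statistically most likely next word. Real LLMs do the same kind of next-token prediction, just with a neural network over vast corpora instead of a counting table.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Return the word that most often followed `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (follows "the" twice, vs once each for "mat"/"fish")
```

The point of the toy: it always picks the most frequent continuation, with no notion of whether that continuation is *true* — only of whether it is statistically plausible.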

AI is great at a lot of things. As a natural language search of a given data set, it can be very effective and a heck of a lot easier to use than a search engine or regular expression, both of which require you to already have a good idea of what keywords you need to look for to find the information.
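A quick hypothetical example of the keyword problem described above: a regex search only finds documents that use the exact term you already know to look for, while a paraphrase slips through unmatched.

```python
import re

# Two hypothetical documents that say the same thing in different words.
docs = [
    "Use pthread_mutex_lock to guard shared state.",
    "Protect shared state with a mutual-exclusion lock.",
]

# A keyword/regex search for "mutex" only matches the first document;
# the paraphrase is invisible unless you already know every synonym.
hits = [d for d in docs if re.search(r"mutex", d)]
print(len(hits))  # 1
```

A natural-language search over the same data set could match both phrasings, which is the ease-of-use advantage the post is pointing at.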

Even the new chain-of-thought systems are really just layering another pass through the LLM to turn its search strategy into natural language. It's not really #XAI yet. Someday, but not today.

I'm glad you're calling out the critical thinking part! Using current AI tools is a lot like using a calculator: it can make things easier, but you have to already know what you're trying to do and have at least a ballpark idea of what a valid answer should be to use one. Metaphors are tricky, but "AI as a calculator" is my current go-to for explaining when not to use one.