#LLMs feel *exactly* like crypto did in 2017, with nearly daily articles about how they can't possibly work, and a die-hard community earnestly pleading "but you just don't UNDERSTAND!"
The main difference is that there *are* reasonable use cases. They're just far smaller than people want to admit.
The biggest problem with this rush to replace jobs with LLMs is a very naive view of what "the job" actually is. Replacing a very human process with an LLM, even when it works (and it usually doesn't), still misses the very human cost of using such a dehumanizing process.
This was well documented in "The Social Life of Information" by John Seely Brown and Paul Duguid. We've seen this naivete so many times that it's expected at this point.
@scottjenson yup. Everyone selling LLMs is aware that if their pitch is "here is a useful tool" then there's no billions of dollars in funding for that, so the pitch is 10% reality and 90% "change the world" hype. And everyone else can feel that, instinctively, and so rejects it without necessarily having good technical arguments as to why. Someone could start selling LLMs as the mildly useful small-use-case tool they are, but that would be breaking omerta and they'd be shunned by their allies.
@scottjenson I mean, there are plausible use cases for digital cash too (if they could figure out how to move away from the horribly inefficient implementations we have now).
It's just that 99% of the market is crime and speculation rather than people actually conducting commerce in lawful goods and services.
@azonenberg Oh, I totally agree. There is a reason that so many drank the Kool-Aid. But as we've seen over and over, vision isn't enough; execution is everything.
@scottjenson Honestly I would argue that buttcoin is more inherently useful than a spicy markov bot is.
Which says a lot about the size of the remaining market when all this nonsense implodes.
Been hearing about all the positive use-cases *in the abstract* for years now. Never specifics.
Certainly no mission-critical use cases (driving, medical diagnosis, weapons deployment), though, due to the endemic hallucination problem; not without expert human review of the output (senior programmers, scientists, and the like).
Which means, at best, a replacement for a *bunch* of junior-level jobs.
Meanwhile the anthropomorphizing of AI companions proceeds apace.
@Mark_Harbinger @scottjenson LLMs can very obviously replace the first tier of phone tech/chat support--basically walking a checklist but accounting for fuzzy input. They're doing this today on all sorts of websites.
They are almost ready to replace drive thru cashiers--some places rolled it out and it had glitches, but I honestly think they'll be there in the next 12 months. It's not mission critical if the bot screws it up and it's cheaper for the restaurant, so of course it will happen.
Basically they're a strong contender to replace a ton of low level jobs, which is not great.
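For the skeptics asking for specifics, here's a minimal sketch of what "walking a checklist but accounting for fuzzy input" might look like. It assumes the OpenAI Python SDK; the intent list, the model name, and the `triage` function are all illustrative, not anyone's shipped system.

```python
# Sketch of tier-one support triage: classify fuzzy input against a fixed
# checklist, answer what's on the list, hand everything else to a human.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# HANDLED_INTENTS and the model name are illustrative.
from openai import OpenAI

client = OpenAI()

HANDLED_INTENTS = {
    "password_reset": "Walk the user through the self-service reset page.",
    "order_status": "Look up the order number and read back its status.",
    "hours_and_location": "Answer from the published store information.",
}

def triage(user_message: str) -> str:
    """Label the request; anything off-checklist rolls over to a human."""
    prompt = (
        "Classify this support request as exactly one of: "
        + ", ".join(HANDLED_INTENTS)
        + ", or escalate. Reply with the label only.\n\n"
        + user_message
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you deploy
        messages=[{"role": "user", "content": prompt}],
    )
    label = resp.choices[0].message.content.strip()
    if label in HANDLED_INTENTS:
        return f"[bot] {HANDLED_INTENTS[label]}"
    return "[human agent]"  # the rollover path is the part that matters

print(triage("hey i can't log in, it keeps rejecting my password??"))
```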
Prediction: there *will* be a hard push to treat this very disruption (paired with the large portion of the population establishing relationships with their own AI companions) as evidence that AI is akin to AGI, and to use that as a reason to secure them rights as legal persons.
At that point, they become a liability buffer for all their corporate owners. That's the endgame.
@Mark_Harbinger @scottjenson I don't see how that would work. You can't punish an AI. You can't recoup damages or enforce jail time. It seems too obviously like a Get Out of Jail Free card.
How is that any different from the 'legal personhood' corporations have now?
@Mark_Harbinger @scottjenson Corporations can suffer consequences--you can take their money, or in cases of malfeasance the responsible parties can still go to jail.
@stilescrisis @Mark_Harbinger The problem is that most companies are looking at #LLMs as a way to save money instead of improving the product.
Most corporate boosters are falling all over themselves to slash jobs. These attempts have already failed badly and will likely continue to fail, because they don't understand what they're trying to replace (a classic #UX mistake).
If, as you suggest, it's an up-front triage that *still* leads you to a human, one who is now better prepared to help you, that's cool.
@scottjenson @Mark_Harbinger Not necessarily? A request like "reset my password" could be entirely level-one tech support serviceable, thus fully automatable. And of course, drive-thru ordering isn't up-front triage, it's the whole enchilada.
@stilescrisis @Mark_Harbinger I think we have a fundamental disagreement on what is "easy".
Note I'm not saying LLMs are doomed or useless! I'm saying "tasks are not as trivial as you think they are". I'm all for trying and learning. I've just been around the block enough to know that things are ALWAYS harder than you think they are.
@scottjenson @Mark_Harbinger Who said anything about easy?
This positive use-case is already being accomplished by regular algorithms...there is no need to conflate those types of jobs with LLM machine learning AI.
So, again...I'm still waiting for the positive use-case (that inures to the benefit of the general population/work force, not the corporate bottom-line).
@Mark_Harbinger @scottjenson There's no existing algorithm which allows someone to pull up to a drive-thru, say "I want a six pack of McNuggets and a large coke" and automate the point-of-sale entry. I don't know what you're talking about.
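To make that concrete, a minimal sketch of the drive-thru case, assuming speech-to-text has already happened and the order maps onto a fixed menu. The menu, model name, and JSON shape are illustrative assumptions; a real system would validate the model's output before it ever touched the POS.

```python
# Sketch: free-form drive-thru speech -> structured point-of-sale line items.
# Assumes speech-to-text already ran; MENU, the model name, and the JSON
# shape are illustrative. A real system would validate before charging anyone.
import json

from openai import OpenAI

client = OpenAI()

MENU = ["McNuggets 6pc", "McNuggets 10pc", "Coke (large)", "Coke (medium)", "Fries"]

def parse_order(utterance: str) -> list[dict]:
    """Map an utterance onto menu items as [{"item": ..., "qty": ...}, ...]."""
    prompt = (
        "Map this order onto the menu. Reply with only a JSON array of "
        '{"item": <menu item>, "qty": <integer>} objects.\n'
        f"Menu: {MENU}\nOrder: {utterance}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed
        messages=[{"role": "user", "content": prompt}],
    )
    # json.loads will raise on malformed output; production code needs
    # validation and a retry/human-fallback path here.
    return json.loads(resp.choices[0].message.content)

print(parse_order("I want a six pack of McNuggets and a large coke"))
# e.g. [{"item": "McNuggets 6pc", "qty": 1}, {"item": "Coke (large)", "qty": 1}]
```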
My response was specifically to your "password recovery" example. But, yes, there are automated POS kiosks all over the world, too, so...?
@Mark_Harbinger @scottjenson At drive thrus? Are there?
Well, I guess it would depend on whether you drove up to it or not. Clearly the technology exists. And, my point is, it has nothing to do with machine learning LLMs...
@Mark_Harbinger @scottjenson You're just being reductive.
An LLM is strictly better than a phone tree for front-line tech support and will solve many more issues without needing to summon a human agent. If you doubt this, you're in the extreme minority.
There's no algorithmic substitute for an LLM when it comes to handling voice-based interactions with humans. We've had simple voice-based menu trees for many, many years and they are universally reviled. LLMs solve this.
FWIW I don't actually like LLMs.
LLMs can certainly be better than IVR systems. But even then, just like IVRs, they can be horrible (and many likely will be).
I'm not making a categorical point about the tech but how so many companies are going to screw it up. Execution is 90% of the problem.
Entire books have been written about the dangers of oversimplifying the business problem to be solved. This is a near-universal issue with any tech.
But I'm seeing it far more with LLMs today
@scottjenson @Mark_Harbinger This is where cloud providers like Azure and GCloud can add value. Integrating an LLM with an existing system is going to be challenging work for folks who aren't experienced at it, but the cost savings almost certainly make it worth the effort.
@stilescrisis @Mark_Harbinger you're focusing on the low level tech. Take a simple voice line application. What tasks do you give it, how do you roll over to humans, do you even offer humans? All have simplistic (and cheap) answers that can create a terrible experience.
Almost always, companies choose the cheap option believing in the hype, thinking "it can't be that bad" and it almost always is.
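Those design choices can be made explicit. Here's a small sketch, with entirely hypothetical names, of the questions a voice line has to answer up front: which tasks the bot owns, whether a human is on offer at all, and when to roll over. The "cheap option" is just this policy with `offer_human` switched off.

```python
# Sketch of the voice-line design choices as an explicit policy object
# rather than implicit defaults. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VoiceLinePolicy:
    handled_tasks: set = field(default_factory=lambda: {"order_status", "password_reset"})
    offer_human: bool = True   # the cheap option sets this to False
    max_bot_turns: int = 4     # failed turns before rolling over

def next_step(policy: VoiceLinePolicy, task: str, failed_turns: int) -> str:
    if task in policy.handled_tasks and failed_turns < policy.max_bot_turns:
        return "bot continues"
    if policy.offer_human:
        return "roll over to a human agent"
    return "bot loops forever"  # the terrible experience in question

print(next_step(VoiceLinePolicy(), "billing_dispute", failed_turns=0))
# -> "roll over to a human agent" (or, with offer_human=False, the loop)
```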
The premise of your first paragraph doesn't agree with the premise of your third one.
@Mark_Harbinger @scottjenson Sure it does. Companies have multiple tiers of support agents. If you can radically shrink the lowest tier, you save a ton of money. It doesn't mean all the tiers have to vanish overnight.
No, I just meant that you need to be careful about conflating the goals of the org. In one part, you say their goal is money savings, whereas in the other you say it is to help consumers. Not the same thing.
@Mark_Harbinger @stilescrisis Let me spell it out for you.
The first paragraph was a BAD THING.
The third paragraph was a GOOD THING.
We need more good things for products to succeed.
@scottjenson Almost everyone thinks that almost everything - except that which they have deep personal experience in - is "simple".
It never is. Because what they think of as an external "that thing is simple" is actually an internal "my understanding of that is simplistic".
And lo, humanity repeats systemic mistakes endlessly. Human nature is to assume we know things better than we do, as long as our surface understanding is all _we_ need.
@mattwilcox Yeah, whether it's Dunning-Kruger, the narrative fallacy, 1st- vs 2nd-order thinking, or Chesterton's fence, I feel humans have a LITANY of thinking challenges that make us usually get it wrong the first time.
@scottjenson I'm pretty convinced it's just a manifestation of evolutionary pressure and not unique to humans - other than we're the peak "smart" we know of atm.
Evolution does tons of different things - but it always favours the lowest energy state required to do any specific job as long as it's "good enough". That applies to thinking as well.
We need a higher level of cognition for the exact niche we each are in... but outside of that "it'll do" rules. No need to spend the energy for more.
@mattwilcox That's a really interesting point! I agree
@scottjenson Does make it a really hard problem to do anything about on a "whole system" and "for any real length of time" level though. We're battling how the universe works.
I think the first step is to make sure that everyone is aware of it, in an "I make mental shortcuts _all the time_ about _almost everything_ I think I understand" way.
Then "and I should know the limits of that, and why I should then trust many topics to other people who know more".
Then "how do I choose who/what to tust?"
@scottjenson ... all of which takes more energy than "eh, it'll do as it is, I'm OK".
Which is the issue.
@mattwilcox yeah I think lots of people understand this at some level. It's why we do prototypes after all. It's never a perfect process of course, but people ARE trying
@scottjenson @mattwilcox I didn't see why Chesterton's fence belongs in this list. Can you remove it?
@scottjenson LLMs are potentially transformational in accomplishing work that was previously unreasonable to tackle. They can be extremely helpful in assisting with some tasks. But my word, by its nature it can't actually "do" the work.
@codatory Exactly. Well said. We are in a hype cycle and people are just being stupid and greedy. There is a reasonable way to use this tech, it'll just take a bunch of expensive failures for people to catch on. We saw this when mobile first appeared. So many bad, stupid websites/apps.
@scottjenson I definitely enjoy throwing AI at a problem where the fidelity is verifiable (writing code) or where the fidelity isn't important (filtering information that otherwise would never be looked at). But it's never gonna approve mortgages.
@codatory But it could improve the mortgage process by doing some of the REALLY basic stuff while still keeping a human in the loop. Don't replace people, support them with this tech.
@scottjenson yes. There's a ton of correspondence, processing unstructured data, fraud detection, etc. that it can absolutely accelerate and make nicer for a human.
AI is fantastic at toil.
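A minimal sketch of that kind of toil with the human kept in the loop: pre-extracting fields from unstructured correspondence for a person to confirm, never approving anything on its own. The field names, model, and review flag are illustrative assumptions.

```python
# Sketch of AI-at-toil with a human in the loop: pre-extract fields from
# unstructured correspondence for a person to confirm. Field names, model,
# and the review flag are illustrative; nothing is auto-approved.
import json

from openai import OpenAI

client = OpenAI()

FIELDS = ["applicant_name", "loan_number", "requested_action"]

def pre_extract(letter_text: str) -> dict:
    """Draft a structured summary; a human reviews it before anything happens."""
    prompt = (
        f"Extract {FIELDS} from this letter as a JSON object; "
        "use null for anything you're unsure of.\n\n" + letter_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed
        messages=[{"role": "user", "content": prompt}],
    )
    draft = json.loads(resp.choices[0].message.content)
    draft["status"] = "needs_human_review"  # the human stays in the loop
    return draft
```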
@scottjenson I just ignore people who make unilateral assertions that a technology is the future or doesn't work. They're usually oversimplifying what's going on.