#LLMs feel *exactly* like crypto did in 2017, with nearly daily articles about how they can't possibly work, and a die-hard community earnestly pleading "but you just don't UNDERSTAND!"
The main difference is that there *are* reasonable use cases. They're just far smaller than people want to admit.
The biggest problem with this rush to replace jobs with LLMs is that it rests on a very naive view of what "the job" is. Replacing a very human process with an LLM, even if it works (and it usually doesn't), still misses the very human cost of using such a dehumanizing process.
This was well documented back in 2000 in "The Social Life of Information" by John Seely Brown and Paul Duguid. We've seen this naivete so many times, it's expected at this point.
@scottjenson Almost everyone thinks that almost everything - except that which they have deep personal experience in - is "simple".
It never is. Because what they think of as an external "that thing is simple" is actually an internal "my understanding of that is simplistic".
And lo, humanity repeats systemic mistakes endlessly. Human nature is to assume we know things better than we do, as long as our surface understanding is all _we_ need.
@mattwilcox Yeah, whether it's Dunning-Kruger, the Narrative Fallacy, 1st- vs 2nd-order thinking, or Chesterton's fence, I feel humans have a LITANY of thinking challenges that usually make us get it wrong the first time.
@scottjenson @mattwilcox I don't see why Chesterton's fence belongs in this list. Can you remove it?