Color me surprised.
Basically a walk-through of a recent paper showing that classifier performance flattens out as you add more data, which means things are NOT going to exponentially explode into general intelligence (at least with current models)
https://www.youtube.com/watch?v=dDUC-LqVrPU
#LLM #ChatGPT #gai
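The flattening the paper describes is often modeled as a power-law learning curve with an irreducible error floor. A toy sketch (the constants here are illustrative assumptions, not numbers from the paper):

```python
# Illustrative power-law learning curve: error(n) ~ a * n^(-b) + c,
# where c is an irreducible error floor. Constants are made up for
# demonstration; the point is the shrinking marginal gain per 10x data.

def error(n, a=1.0, b=0.3, c=0.05):
    """Test error as a function of training-set size n."""
    return a * n ** (-b) + c

for n in [10**3, 10**4, 10**5, 10**6]:
    gain = error(n) - error(10 * n)
    print(f"n={n:>8}: error={error(n):.4f}, gain from 10x more data={gain:.4f}")
```

Each additional 10x of data buys a smaller improvement, and error never drops below the floor `c`, which is the "flattens out" shape rather than an exponential takeoff.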
@scottjenson I'm not sure more data is what's going to cause AGI though. We're not letting the models "live": https://blog.troed.se/2023/03/13/the-delta-between-an-llm-and-consciousness/
@troed Exactly. I'm just shocked that we've all seen Gartner's Hype Cycle literally 100s of times, yet every few years ANOTHER killer tech that will CHANGE EVERYTHING comes along and no one expects it to have scaling issues.
I'm not saying LLMs aren't important, I'm saying the hyperbole about them is predictable and frustrating.
@scottjenson I believe that in statistics this is called regression to the mean :-). Until there are some fundamental design changes, all the current models are going to give us is "better" stochastic parrots