ChatGPT went all in on scaling up training data to boost its capabilities, but just like in other tech sectors, it seems pretty clear that each doubling of the data brings diminishing returns.
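To make that concrete, here is a minimal sketch of what diminishing returns per doubling can look like, assuming a simple power-law relationship between dataset size and loss. The shape loosely mirrors published scaling-law fits, but every constant below is invented purely for illustration.

```python
# Hypothetical power-law scaling: loss(D) = E + B * D**(-beta).
# E, B and beta are invented for illustration, not fitted to any real model.
E, B, beta = 1.7, 250.0, 0.3

def loss(tokens: float) -> float:
    """Irreducible loss plus a power-law term that shrinks as data grows."""
    return E + B * tokens ** (-beta)

# Double the training data repeatedly and watch the improvement shrink.
tokens = 1e9  # start at one billion tokens (arbitrary)
for step in range(8):
    gain = loss(tokens) - loss(tokens * 2)
    print(f"{tokens:.0e} -> {tokens * 2:.0e} tokens: loss drops by {gain:.4f}")
    tokens *= 2
```

Each doubling shrinks the remaining power-law term by the same constant factor, so the absolute improvement per doubling keeps getting smaller even as the cost of that doubling keeps getting bigger.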
We have now reached the point where only trillion-dollar companies are capable of building better and better AI quickly.
Even now, general AI can regurgitate the best existing answers, but it has not shown the capacity to create something truly new, the kind of novelty that comes from the randomness of biological evolution.
General AI is limited by the total knowledge of humanity that is fed into it, and beyond that it cannot grow, no matter what. It is the world's smartest answering machine.
Will AI be able to replace humanity? No.
Will AI be able to build robotic equivalents of human labor? Yes.
Today, the most advanced AI in the world is Gemini, by Google.
It is taking the efforts of the fourth most valuable company in the world to keep developing and upgrading it further.
The current AI boom came out of a chance discovery, and it will slow down as fast as it rose.
General AI will very likely peak at being 2-3 times smarter than the smartest human, then go no further.
This is because it cannot create anything new; it can only pattern-match the data that already exists out there.
Until and unless we give AI the ability to "mutate" the way human DNA does, it will hit an upper limit and stagnate there.
Conclusion:
AI won't take over the world. It will be another tool that helps humans create more things faster and clear the current backlog of research projects.
It was always obvious to me that, barring any major technological revolutions, this technology was going to plateau fairly quickly. The initial models were already trained on most of the information on the internet, which represents a very large portion of useful human knowledge, and neural nets (like most if not all ML algorithms) are notorious for being extremely difficult to improve further once they have been trained on large amounts of data. I don't know the exact data complexity, but anyone who has ever tried to train a neural net knows that the utility of each additional data point falls off very quickly: at least quadratically, probably even faster.
There simply isn't enough data in the world to feed the machine to improve its effectiveness by any significant amount.
And this is ignoring the already monumental computational requirements to train and run these models, which very likely also grow polynomially as the models get bigger.
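For a rough sense of scale, here is a sketch using the commonly quoted rule of thumb that training a dense transformer costs roughly 6 × parameters × tokens FLOPs; the model sizes, token counts, and accelerator throughput below are arbitrary round numbers, not figures for any particular system.

```python
# Rough training-cost estimate using the common "6 * N * D" FLOPs rule of thumb
# for dense transformers. All sizes below are arbitrary round numbers.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for params, tokens in [
    (1e9, 20e9),     # ~1B parameters, ~20B tokens
    (10e9, 200e9),   # 10x bigger model, 10x more data
    (100e9, 2e12),   # 100x bigger model, 100x more data
]:
    flops = training_flops(params, tokens)
    # An accelerator sustaining ~1e15 FLOP/s of useful throughput (an assumed
    # figure) would need this many days of continuous work:
    days = flops / 1e15 / 86_400
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.1e} FLOPs "
          f"(~{days:,.0f} accelerator-days)")
```

Since model size and data tend to be scaled up together, the total compute grows roughly with the square of either one, which is the polynomial growth described above.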
I agree with you.