ChatGPT went all-in on scaling up training data to boost AI capabilities, but just like in every other tech sector, it seems pretty clear that we hit diminishing returns with each doubling of the data.
We have now reached the point where only trillion-dollar companies are capable of building better and better AI quickly.
General AI even now can regurgitate the best answers, but it hasn't shown the capacity to create something truly new; that takes the kind of randomness that emerges from biological evolution.
General AI is limited by the total knowledge of humanity being fed into it, beyond which it cannot grow no matter what. It is the world's smartest answering machine.
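The diminishing-returns claim matches the commonly cited power-law picture of scaling. Here is a toy sketch; the constants `a` and `b` are made up for illustration and not fitted to any real model:

```python
# Toy power-law scaling curve: loss(D) = a * D**(-b).
# a and b are illustrative placeholders, not fitted values.
a, b = 10.0, 0.1

def loss(tokens):
    """Hypothetical loss after training on a given number of tokens."""
    return a * tokens ** (-b)

gains = []
d = 1e9  # start at one billion tokens
for _ in range(6):
    gains.append(loss(d) - loss(2 * d))  # absolute gain from one doubling
    d *= 2

# Each successive doubling of data buys a strictly smaller absolute gain.
assert all(g1 > g2 > 0 for g1, g2 in zip(gains, gains[1:]))
```

Under a pure power law each doubling cuts the loss by a constant *ratio*, but the *absolute* improvement per doubling keeps shrinking, which is the practical sense in which returns diminish.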
Will AI be able to replace humanity? No.
Will AI be able to make robotic equivalents to human labor? Yes.
Today, the most advanced AI in the world is Gemini by Google.
It is taking the efforts of the fourth-most-valuable company in the world to keep developing and upgrading AI further.
The current AI boom was a random discovery and it will slow down as fast as it rose up.
General AI will very likely peak at being 2-3 times smarter than the smartest human, then go no further.
This is because it cannot create anything new, it can only pattern match all the data that already exists out there.
Until and unless we give AI the ability to "mutate" like human DNA does, it will reach an upper limit and stagnate at that point.
Conclusion:
AI won't take over the world. It will be another tool to help humans create more things faster, and finish up all the current backlog of research projects.
Lots of people agree with you
They should incorporate rdrama into their memory banks!
Neurodivergent enough to independently reach correct niche expert opinion, too neurodivergent to make money.
Yes, surely now AI has plateaued forever. This post will age well.
I never said forever, I said it has slowed down.
Well it hasn't even done that.
It has. GPT-5 won't come out in 2025.
The US is scrambling to find the next Moore's law and nothing is working.
It'll come from some random sexy Indian dude intern at google, just like the current boom
Disagree. Human IQ increases 2 points per decade in the best case scenario.
Tech development scales up at far higher rates.
The only reason tech continues to scale up so fast is not an increase in human intelligence but the aggregation of smart humans into ever tighter organizations.
The world will run out of geniuses to stack together before we get another technological revolution on the scale of the internet and the mobile phone.
The US already has trouble keeping Boeing and SpaceX competent at the same time. There are just not enough competent people at the high end to spread around.
Microsoft buying up all the smaller companies (hundreds of billions instead of trillions of dollars) is the only way ahead.
The current boom is not due to stacking raw intelligence, but to random trial and error by many, many people, where one idea happened to be the next level. This is simple evolution on a memetic scale.
The world has 8 billion people, finding individuals a few standard deviations above the mean is ultimately not that hard.
Alright, hear me out.
An IQ of 147 puts you in the top 0.1% of humans out there.
The average IQ of a person with a PhD is 125, which is classed as superior. That's the top 4.8% of humans.
Top 0.1% of humans would be about 8 million humans.
Top 4.8% of humans would be 384 million humans.
Today, less than 2% of the world has a PhD.
So we could probably increase that number by about 2.5 times.
The geniuses that actually make real progress are generally in the top 0.1% though. Honestly, the US has probably already brain drained 90-99% of those top 0.1% group.
So again, the cream of the crop has already been used up, now the US is stuck bringing in the top 5% of humans from around the world, where they are competing with the EU, China, Japan, Oceania, and Canada, for those people.
The world just doesn't produce enough geniuses per year to feed the machine. We are down to the high-hanging fruit. The only thing that still improves every two years is semiconductors, and who knows how long that is going to hold.
I really do believe there aren't enough geniuses around the world to keep the gravy train going, and the number of fields growing exponentially will keep declining over the years as more and more geniuses have to be focused together on acquiring a single high-level fruit.
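The percentile arithmetic above checks out under the standard normal model of IQ (mean 100, SD 15). A quick sketch:

```python
from math import erf, sqrt

def top_fraction(iq, mean=100.0, sd=15.0):
    """Fraction of people at or above a given IQ, assuming IQ ~ N(mean, sd^2)."""
    z = (iq - mean) / sd
    # Survival function of the standard normal, written via erf
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

world = 8_000_000_000
for iq in (147, 125):
    frac = top_fraction(iq)
    print(f"IQ >= {iq}: top {frac:.2%}, ~{frac * world / 1e6:.0f} million people")
```

IQ 147 comes out at roughly the top 0.1% and IQ 125 at roughly the top 4.8%, matching the figures quoted above.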
K
Claude 3.5.
Barely above ChatGPT 4. Worse in some areas. Gemini is the peak currently, and that has a multi-trillion-dollar company behind it.
What a bummer, there was some hope that this wouldn't be the case
https://gwern.net/scaling-hypothesis
I wonder whether this actually invalidates the scaling hypothesis, or whether it is just a result of us using up all the high-quality training data and now trying to throw the trash into the model too.
I think of it more as a contraction towards a core. Just as we see with empires, technologies, and nature, you get a large boom, then a stagnation, then a receding phase that is really a regrouping of the better pieces: for example, people figuring out how to get similar-quality AI output with fewer parameters to create a smaller model. The main issue currently is that there isn't enough data and storage out there to feed a single AI forever. Sure, Google could afford enough storage to increase the number of parameters exponentially a few more times, but even they would run out of space at some point.
At least that's my take on it.
As I understood it, there was no curated high-quality data; they just threw in everything they could from the beginning. The issue is, to give an AI the internet's worth of data, you would have to create a second internet's worth of storage. You would also need a supercomputer powerful enough to process all that data.
Basically you are going to have one super specialized company leading in AI like we do with semiconductors.
These are text based models, storage isn't that much of an issue. Text is very easy to store and compress. Even storing all images on the internet probably wouldn't be a problem and could be done with at most a few million dollars (archive.org manages to do that just fine and they're not rich). Video is the real beast that will be too expensive for most but the biggest corpos.
Of course you can't exclude all low-quality data and go through it by hand, but you can definitely control its ratio heuristically. One trivial way to lower the average quality would be to include the YouTube Kids comment section. Each source has a degree of trust (established newspapers, Reddit, Twitter, IRC logs, and spam emails), bottoming out at text that is most likely already machine-generated, or code by junior programmers, both of which are somewhat cheap to filter.
AI-generated text especially is becoming an issue as SEO spammers pick up the pace.
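That kind of trust-weighted filtering can be sketched crudely like this; the source names and trust scores below are invented for illustration, not taken from any real pipeline:

```python
import random

# Hypothetical per-source trust scores; higher means a document from that
# source is more likely to survive filtering. Purely illustrative numbers.
TRUST = {
    "newspaper": 0.9,
    "reddit": 0.5,
    "twitter": 0.4,
    "irc_log": 0.3,
    "spam_email": 0.05,
}

def keep(source, rng):
    """Keep a document with probability equal to its source's trust score."""
    return rng.random() < TRUST.get(source, 0.1)

rng = random.Random(0)
docs = [("newspaper", "text a"), ("spam_email", "text b")] * 1000
kept = [src for src, _ in docs if keep(src, rng)]
# Most newspaper text survives; almost all spam is dropped.
print(kept.count("newspaper"), kept.count("spam_email"))
```

In a real pipeline the scores would come from classifiers or manual audits, but the principle is the same: you tune the ratio of quality tiers rather than inspecting documents one by one.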
I think Grandma answers your question better than I do.
https://rdrama.net/h/nerdshit/post/281855/ai-development-is-going-to-slow/6650211#context
What you said makes sense. Thanks for educating me.
AI turned out to be a half-blown bubble, just like autonomous cars.
I think we have reached that point where meaningful progress can only be made when the entire planet is collaborating on a project like we did with the ozone layer and are now trying to do with climate change and plastic pollution.
I have bet a good chunk of my net worth that successful AI use cases will be essentially a replacement for industry/operation analysts. Feed large sets of data into the system and it gives you optimal results or multiple options for directives.
Generative AI for writing media is a fun parlor trick but like the article discusses, it misses the little quirks that fundamentally come from human error/ambiguity.
Faster thought horsepower is the real use case and will long term eat away at the middle management tier decision makers, which is why I am invested in Palantir, Microsoft, C3.ai and Oracle. Okay, that last one is because SAP exists only by rent seeking strategies and their AI development is smoke and mirrors (personal experience).
This is a good bet, but it's also not really AI specific. Computers have been crunching data better, faster for generations. Excel meant a lot of analysts had to upskill or lose their jobs. How many hedge funds trade mainly based on signals from their computer models? Was it Renaissance Technologies that pretty much said "we just buy what the computer tells us to and it somehow makes money"?
Anyway, I guess this is another chapter of defining-down AI so we can claim what the computer is doing isn't "intelligence".
That is a very good point. The "AI" that I've seen as actually useful is machine learning models that are able to collect otherwise inaccessible data. For instance, using machine learning to scan every purchase order and invoice a given company processes in its supply chain, where every vendor and customer uses different formats. Now that data can be readily synthesized, and then it's just a matter of defining the objective function and constraints, and that's just linear algebra. But wrapped all together, I know many would call that "AI".
This shit drives me fricking crazy.
Yeah, predictive AI, even though it's not ChatGPT, was and still is better than ChatGPT.
!ghosts
Make note of this for your investments.
@JimieWhales
People are going to scream insider trading and other nonsense at me when stocks like Palantir pop off, but the knowledge is here, today.
It seems like LLMs best case will basically be able to return something in the convex combination of what's in the data that they're trained on. This will expand human knowledge in the sense that it will be able to combine previously separated ideas that weren't yet connected thereby filling in "interior" gaps we haven't filled yet, but it won't be able to expand beyond the convex hull (except as a research tool that complements a human, e.g., aiding in coding up data analysis.)
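The convex-combination picture can be made concrete with a toy example; the 2D points below are arbitrary stand-ins for training data:

```python
import random

def convex_combination(points, weights):
    """Blend points using non-negative weights that sum to 1."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    dim = len(points[0])
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(dim))

# Toy "training set": the corners of a triangle in 2D.
data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

rng = random.Random(42)
for _ in range(100):
    raw = [rng.random() for _ in data]
    w = [r / sum(raw) for r in raw]
    x, y = convex_combination(data, w)
    # Every blend lands inside the triangle: x >= 0, y >= 0, x + y <= 1.
    assert x >= 0 and y >= 0 and x + y <= 1 + 1e-9
```

Blending fills interior gaps cheaply, but no choice of non-negative weights ever produces a point outside the hull, which is the "can't expand beyond the training data" intuition in miniature.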
Exactly my belief. Great minds think alike.
Wrong
great minds think independently and reach the same conclusion
Oh yeah? Did Einstein come up with relativity by reaching the same conclusion as everyone else?
Yes? He just got there first.
So you would have come up with relativity, huh?
yes
AI will become really hilarious when a significant amount of the data it's trained on is AI generated.
It will be like a singularity, just utterly r-slurred.
Either r-slurred or ever smarter. No third option this time.
Google's AI tells you to make mustard gas to get rid of stains on the floor, or shit like that.
I wouldn't bet on the "ever smarter" path.
It was always obvious to me that, barring any major technological revolutions, this technology was going to plateau fairly quickly. The initial models already contained most of the information on the internet, which represents a very large portion of useful human knowledge, and neural nets (like most if not all ML algorithms) are notorious for being extremely difficult to improve after training on large amounts of data. I don't know the exact complexity of the data requirement, but anyone who has ever tried to train a NN knows that the utility of each additional data point falls off very quickly: at least quadratically, probably even faster.
There simply isn't enough data in the world to feed to the machine, to improve its effectiveness by any significant amount.
And this is ignoring the already monumental computational requirements to train and run these models, which would very likely also grow polynomially as you increase the size of the model.
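The compute side can be ballparked with the common back-of-the-envelope rule of roughly 6 FLOPs per parameter per training token. The model sizes and the 20-tokens-per-parameter ratio below are rough heuristics for illustration, not official figures for any named model:

```python
def training_flops(params, tokens):
    """Rough estimate: ~6 FLOPs per parameter per token seen in training."""
    return 6 * params * tokens

# Illustrative model sizes, not official figures for any named model.
for n_params in (1e9, 1e10, 1e11):
    tokens = 20 * n_params  # rough 'compute-optimal' tokens-per-parameter heuristic
    print(f"{n_params:.0e} params -> ~{training_flops(n_params, tokens):.1e} training FLOPs")
```

Under this rule, each 10x jump in parameters costs roughly 100x the training compute, because the token budget is scaled up alongside the parameter count.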
I agree with you.
AI can already write a better essay than this
so can a human
12 replies by some random dude from 2 weeks ago.
Are you just clickbaiting too?
What?
You are fricking wrong. Emergent properties are what make LLM-style AI so fricking great. They can solve tasks that models with fewer parameters couldn't solve, without any extra programming.
Read this: https://arxiv.org/abs/2206.07682
Nothing I said disagrees with what you said.
The issue is they are going to run out of data to feed the machine at some point. GPT-4 had 10x the parameters of GPT-3. GPT-5 will probably need another 10x increase in parameters. This is clearly not sustainable.
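The 10x-per-generation claim can be turned into a quick projection. Every number below is a hypothetical stand-in (starting parameter count, tokens-per-parameter ratio, and assumed stock of usable human-written text), chosen only to show how fast a geometric growth in parameters exhausts a fixed data pool:

```python
# Hypothetical projection: if each generation needs ~10x the parameters,
# and training tokens scale roughly in proportion, the token budget
# explodes geometrically. All starting figures below are illustrative.
params = 1.75e11          # stand-in starting parameter count
tokens_per_param = 20     # rough compute-optimal heuristic, assumed
available_tokens = 1e14   # assumed stock of usable human-written text

generation = 0
needed = params * tokens_per_param
while needed <= available_tokens:
    generation += 1
    params *= 10
    needed = params * tokens_per_param
print(f"runs out of text after ~{generation} more 10x generation(s)")
```

Whatever the exact inputs, geometric growth against a fixed data pool runs dry within a handful of generations, which is the unsustainability being argued here.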
Now you are talking about orders of magnitude and arguing that they might run out of data. Those are things I agree with.
Your initial statement was "diminishing returns when we put in double the data", which is the opposite of what happens.
It's a technical gotcha.
Sometimes it be like 3 but other times it's more 1 or 2
Yes. Also, I think the world only has enough geniuses to make noticeable advances in at most 6-7 industries at the same time.
Currently we are leading in:
1. Mobiles and other consumer electronics
2. Batteries (mobile phones that can stay charged for a day or two even with constant use)
3. Renewable energy
4. Space tech (Elon Musk is the only guy making a real difference)
5. Biotech (brain interfaces, cancer survival improvements, robot pills)
6. AI (slowing down already)
7. Semiconductors (I have no clue how they are going to keep advancing for another decade; it feels like the field is already at its limits)
Everything else, including physics, math, and chemistry, appears to be stuck in place, waiting for the next billions of dollars to discover something new by mistake.
Batteries are probably the most magical, underappreciated thing about modern life. I just have little earbuds in my pocket that can connect to my phone and play music for 6-8 hours, and I don't even need to think about charging them. They're 4 or 5 years old at this point. Old phone, iPod, and Zune batteries always lost their ability to hold a charge after 2 years max. It's not often that you get to see such huge jumps in technology within 10 years or so. How neat is that?
That's true. One thing I like about China is that they haven't yet reached the peak of their technological capabilities; maybe once they start falling into demographic decline, they can once again cooperate with the West to push human technology forward.
I bet the world could go back to a high rate of innovation and technological development if the Chinese and the West weren't busy trying to develop the same things separately.
I don't care about intelligence, I care about scaling it down for shitty hardware.
One day your Windows Vista machine should be able to run PS5 games, or society has failed us.
This but unironically. Windows XP should be enough for everyone
I hate that I agree. It went downhill after XP.
Chat bots are a dead end. I don't get the hype.
why do you consider them a dead end?
What can they actually do? They're not at a quality level high enough for them to replace anything, and they can't meaningfully improve. They're text predictors, not knowledge aggregators.
AI art? AI movies? Anything where they can be used as an additional tool? Robot training?
Have you looked at AI art recently? It's pretty awful and only getting worse. Robot training is and always has been a meme.
Its uses amount to improving bad CGI in cheap movies and cheating on high school homework.
I disagree. It is improving on the simulation side of things. It just isn't effective enough without a human operator for now, beyond very simple tasks such as moving stacks of boxes.
THIS YOU?
Yes?
Yeah, no duh. GPT-4 came like 6 months after ChatGPT, and now it's been 2 years and we've gotten nothing.
Snapshots:
https://community.openai.com/t/new-articles-are-saying-chat-gpt-5-not-coming-out-until-2025-this-is-way-too-long-as-4-refuses-at-coding-10s-of-thousands-of-lines/835858: