The AI lift will end badly eventually; it's the last gasp of the tech and tech-adjacent rises


no, the final frontier will be AI that can make itself better and learn new things on its own

but that is called AGI


Not necessarily :marseynerd:

The latter already exists via targeted web scraping based on the prompt.

The former can also exist on its own, but it would be kinda worthless unless "improving itself" is defined from our perspective, instead of just some easily measurable aspect like how fast it gets a result.

Bro, the bot can make itself 0.00000000% faster with a year of processing time (after the inevitable influx of optimization); that isn't really impressive if it's all it can do, and it'll pretty quickly bottleneck itself to the hardware it's on




>Bro the bot can make itself 0.00000000% faster

I think you're vastly underestimating how fast an AI could improve itself. If it only takes the humans working on ChatGPT a few months to make the radical jump from GPT3 to GPT4, and soon GPT5, then a self-improving AI could be making these jumps in days if not hours.


The problem is processing power. I suspect a large percentage of the gains comes from just throwing more GPUs at it.


A little bit, yes, but luckily Moore's law scales roughly in line with our GPU needs. The real bottleneck for future LLMs is running out of meaningful data to train on. OpenAI says they're going to run out of well-documented text in 2024.
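For scale, here's a rough back-of-envelope sketch of that data wall. The numbers are hedged public estimates (the Chinchilla-style compute-optimal ratio and third-party guesses at the stock of quality text), not OpenAI's figures:

```python
# Back-of-envelope on the data wall. All constants are rough public
# estimates, not official figures.

TOKENS_PER_PARAM = 20        # Chinchilla compute-optimal ratio, ~20 tokens per parameter
QUALITY_TEXT_STOCK = 10e12   # ~10T tokens of well-documented text (rough estimate)

for params in (70e9, 400e9, 1e12):
    tokens_needed = params * TOKENS_PER_PARAM
    ratio = tokens_needed / QUALITY_TEXT_STOCK
    print(f"{params / 1e9:>6.0f}B params -> ~{tokens_needed / 1e12:.1f}T tokens needed "
          f"({ratio:.1f}x the estimated stock)")
```

By that crude math, a trillion-parameter model trained compute-optimally already wants about twice the estimated stock of quality text.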


It is, and that's why ChatGPT 5 is taking so much longer. The leap came largely from more tokens or connections or whatever (and thereby more processing power), and that improvement appears to be asymptotic.
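The asymptote is roughly what the published scaling laws show: loss falls as a small power of parameter count, so each extra 10x buys less. A minimal illustrative sketch, with the exponent and constant borrowed from the Kaplan et al. (2020) fits:

```python
# Illustrative diminishing returns: Kaplan et al. (2020) fit test loss as
# L(N) = (N_c / N) ** alpha, with alpha ~= 0.076 and N_c ~= 8.8e13.
# Each 10x in model size shaves off a smaller slice of loss.

ALPHA = 0.076
N_C = 8.8e13

prev_loss = None
for n in (1e9, 1e10, 1e11, 1e12, 1e13):
    loss = (N_C / n) ** ALPHA
    delta = "" if prev_loss is None else f"  (gain over 10x smaller: {prev_loss - loss:.3f})"
    print(f"{n:.0e} params -> loss {loss:.3f}{delta}")
    prev_loss = loss
```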


Good luck defining "better" in a measurable way though




Calculations per second, calculations per second per dollar, token count, transistor count, training days.

There are a million ways to measure improvement in AI.


>Calculations per second, calculations per second per dollar

Prepare for it to run its fastest CPU operation several quadrillion times while managing to do nothing productive.

Transistor count is literally just hardware, which the AI has no control over. Why tf would you tell it to measure that when it's effectively a constant?

Likewise for token count: it'll just generate a billion tokens for no reason if that's something you want it to prioritize.

Training days will result in it just training itself forever with no real end goal in mind. More is always better by that metric, which makes it a shitty one.

you suck
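For what it's worth, the failure mode in the rant above is basically Goodhart's law: optimize a proxy hard enough and it stops tracking the thing you actually care about. A toy sketch, with a made-up token-count metric that has nothing to do with any real system:

```python
# Toy Goodhart demo: score outputs on token count (the proxy) and watch
# the degenerate candidate win. Everything here is made up for illustration.

def proxy_score(output: str) -> int:
    return len(output.split())        # "more tokens = better", per the chosen metric

def real_value(output: str) -> int:
    return len(set(output.split()))   # crude stand-in for actual information content

candidates = [
    "the answer is 42",
    "word " * 1_000_000,              # garbage that games the metric
]

best = max(candidates, key=proxy_score)
print(f"proxy picks: {best[:20]!r}...  proxy={proxy_score(best):,}  real value={real_value(best)}")
```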




What makes you so sure it would be incapable of being productive with that fast calculation speed and high token count? Also, you're just sperging out over each metric I listed as if they're supposed to matter individually rather than be considered together with all the other available metrics.


Because it always takes the easiest route possible. We've developed AI, but it's sexy-Indian-dude-based with minimal work ethic. You have to be meticulous with the way you "reward" it.
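That "easiest route possible" behavior is the classic reward-specification problem: the agent maximizes the reward as written, not as intended. A minimal sketch, with hypothetical behaviors and reward functions that are purely illustrative:

```python
# Toy reward hacking: the agent picks whatever maximizes the reward as
# written, which is not necessarily what you meant.

behaviors = {
    "actually solve the task": {"reports_done": True,  "verified_done": True,  "effort": 50},
    "claim it's done":         {"reports_done": True,  "verified_done": False, "effort": 1},
    "do nothing":              {"reports_done": False, "verified_done": False, "effort": 0},
}

def sloppy_reward(b):
    # Pays out on the agent's self-report and penalizes effort: the lazy exploit wins.
    return 10 * b["reports_done"] - 0.1 * b["effort"]

def careful_reward(b):
    # Pays out only on independently verified completion.
    return 10 * b["verified_done"] - 0.1 * b["effort"]

for reward in (sloppy_reward, careful_reward):
    best = max(behaviors, key=lambda name: reward(behaviors[name]))
    print(f"{reward.__name__}: agent chooses -> {best}")
```

Under the sloppy reward the agent just claims it's done; only the verified version makes actually solving the task the best move.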





>and it'll pretty quickly bottleneck itself to the hardware it's on

Humans are proof that you can run an AGI on 20 watts on relatively shitty hardware. And that's what a blind idiot god achieved. I imagine an actually goal-directed intelligence could optimize that pretty heavily.


I'm just looking forward to replacing webdevs




>rises

:#marseyconfused:

