Complex Systems Won't Survive the Competence Crisis


Reported by:
  • Fresh_Start: Literally me, but the imposter syndrome consumed me.

This guy is smart enough to fake it until he makes it. He's learning on the job, which isn't ideal for whoever hired him, but he's going to be fine after a while. I doubt he's as bad at coding as he makes it sound in his post; he's probably just having a case of impostor syndrome from relying on AI as a crutch too much. After he meets enough :marseytunaktunak: in the field he'll feel better.

Pretty soon there are going to be "ghosts in the shell": NPCs who just plug things into generative AI and don't produce a single thought of their own, just proofreading what was generated.

White-collar jobs will be filled with these subhumans, and literal blade runners will have to test them to filter out the leeching.

Don't talk about project managers like that. It's offensive to women.

I want you to write a three-paragraph summary about what your job actually is, where you replace all proper nouns with the N-word.

https://i.rdrama.net/images/17282522963860228.webp

Yeah, that's the scary part. Especially because LLMs can't produce genuinely new information; if everyone is just using GPT to produce their output, it completely kills innovation in their field.

Most new information is just a combination of existing ideas.

OK sure, but when LLMs propose "new ideas" they're mashing shit together because it appeared in many existing texts, and they have no way of verifying whether the new ideas are correct, because they're not really intelligent and can't prove concepts. People in the AI field already have a term, hallucination, for when LLMs make untrue assumptions based on probability. If you asked one to prove something, it might confidently state that it knows something is true just because the volume of text it's been fed leads it to believe it's probably right.

That's the same with the code it produces: once you get past the toy examples, it's suggesting what it thinks you want by combining ideas. New ideas are often built by combining a few very simple concepts as well; look how many systems use a log + something to get the behavior they want.

It's not a stretch to think you could prompt one into creating something new that works, if you had the right base knowledge to correct the smaller errors.
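For what it's worth, the "log + something" idea really is that simple at its core: an append-only log plus an in-memory index gets you a working key-value store, which is roughly where designs like Bitcask start. A minimal sketch (all names here are illustrative, not from any real system):

```python
# Toy "log + something" store: an append-only log plus an in-memory
# index mapping each key to the offset of its latest record.
class LogStore:
    def __init__(self):
        self.log = []    # append-only list of (key, value) records
        self.index = {}  # key -> offset of the newest record for that key

    def put(self, key, value):
        # Writes only ever append; overwriting a key is just another
        # append that moves the index pointer forward.
        self.index[key] = len(self.log)
        self.log.append((key, value))

    def get(self, key):
        offset = self.index.get(key)
        return None if offset is None else self.log[offset][1]

store = LogStore()
store.put("a", 1)
store.put("a", 2)      # overwrite = append + index update
print(store.get("a"))  # 2
```

Swap the list for a file and the dict for a hash table or a sorted structure and you're most of the way to a real storage engine.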

Yeah, I don't really trust ChatGPT'd code, since you always have to do a double pass over it; the AI will gloss over edge cases that are completely obvious to anyone with an above-room-temperature IQ.
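A hypothetical example of the kind of thing the double pass catches (the function names are made up for illustration):

```python
# The kind of helper generated code tends to produce: looks fine,
# but raises ZeroDivisionError on an empty list.
def average(xs):
    return sum(xs) / len(xs)

# What the second pass turns it into: the edge case handled explicitly.
def average_checked(xs):
    if not xs:
        raise ValueError("average() of an empty sequence")
    return sum(xs) / len(xs)

print(average_checked([1, 2, 3]))  # 2.0
```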

> Pretty soon there are going to be "ghosts in the shell": NPCs who just plug things into generative AI and don't produce a single thought of their own, just proofreading what was generated.

Literally just wrote a Firefox addon like this. All I had to do was edit the permissions.
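(For anyone curious: in a WebExtension the permissions live in manifest.json. The keys below are the standard WebExtension ones; the addon name and host pattern are made up for illustration.)

```json
{
  "manifest_version": 2,
  "name": "hypothetical-addon",
  "version": "1.0",
  "permissions": ["activeTab", "storage", "https://example.com/*"],
  "background": { "scripts": ["background.js"] }
}
```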
