
x86 Chads :marseywereback: :chadcryguy:

https://news.ycombinator.com/item?id=41440567

But RISC is more efficient. It just is!

:#marseygigaretardtalking:


Why?

:#chadcryguytalking:


https://media.tenor.com/CO15JfKFIP0AAAAx/idiocracy-electrolytes.webp

:#marseygigaretardtalking:

Source on being back: an Intel press release with no specs :marseyemojilaugh:


I can never find the source for this, but supposedly modern x86 CPUs have a "RISC-y" internal architecture that traditional x86_64 is translated into.

In that sense a lot of discussion around CISC vs. RISC in the x86 vs. ARM war is moot.

I believe ARM has just had more R&D put into low-power situations, which turned out to scale way better than people expected.


What you're looking for are called "micro-operations" (µops). Every modern x86 CPU decodes legacy x86 ops into a series of smaller internal pieces, so it's effectively turning one CISC instruction into a bunch of RISC-like ones.
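As a rough sketch (my own illustration, not something from the thread; the exact µop split varies by microarchitecture), a single read-modify-write instruction typically gets cracked into separate load, ALU, and store µops:

```c
/* add_in_place.c: illustrative only.
 * A read-modify-write in C usually compiles to one memory-destination
 * x86 instruction, e.g.  add [rdi], rsi  (register names assume the
 * SysV calling convention). Inside the core, the decoder typically
 * cracks that one CISC instruction into RISC-like µops, roughly:
 *   1. load   tmp   <- [rdi]
 *   2. add    tmp   <- tmp + rsi
 *   3. store  [rdi] <- tmp
 */
#include <stdint.h>

void add_in_place(uint64_t *counter, uint64_t delta) {
    *counter += delta;   /* likely a single add-to-memory instruction */
}
```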


that's nice but have you ever had even a crumb of kitty in your life?


Nope :marseyno:


Stay on the gay and narrow, brother :marseylgbtflag3:


He actually sucks gock and fricks bussy, for your information. :marseyindignant:


The extra translation is costly in terms of performance. Also, all of the extra cruft from decades ago (real mode, unreal mode, the trusted computing coprocessor, etc.) is a major vector for malware, in the same way that a heckin' chonker's :marseychonkmaxx: fat folds are a vector for bacteria


> The extra translation is costly in terms of performance.

Is it, though? :marseyquestion:

High-level instructions have the upside that the translation can be microarchitecture-specific. If it made sense to compile more directly to microarch µops (or even to cache the translation at scale), I suspect at least one company would have attempted it. GPUs got big into shader caches, so it's not like low-level, device-specific generated data is off limits for optimization.
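To make the shader-cache analogy concrete, here's a minimal sketch (entirely my own, with a made-up `cache_key` helper, not how any real driver does it): content-address the compiled blob by hashing the source together with the exact device/driver revision, so each microarchitecture gets its own translation and a stale blob is never reused.

```c
/* translation_cache_key.c: toy sketch of a device-specific cache key. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Tiny FNV-1a hash; real caches use something stronger (SHA, xxHash, ...). */
static uint64_t fnv1a(const void *data, size_t len) {
    const unsigned char *p = data;
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Hypothetical cache key: source text mixed with a device/driver identifier. */
static uint64_t cache_key(const char *source, const char *device_rev) {
    return fnv1a(source, strlen(source)) ^ fnv1a(device_rev, strlen(device_rev));
}

int main(void) {
    /* Same source, different device revision -> different key, so each
     * device gets its own translated blob in the cache. */
    printf("%016llx\n", (unsigned long long)cache_key("add r0, r1", "gpu-rev-A"));
    printf("%016llx\n", (unsigned long long)cache_key("add r0, r1", "gpu-rev-B"));
    return 0;
}
```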

Itanium's biggest flaw was how explicit it was; pushing microop sequencing and synchronization all the way back to the compiler proved inflexible.


Even if that's true, you can just use an intermediate representation that does the same thing you're describing but more flexibly. They just didn't have LLVM back in like 2001. The difference would only be that the translation from the IR happens transparently at the compiler level rather than at the hardware/µcode level.
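A minimal sketch of that IR idea (my example, not the commenter's; the IR text is approximate and varies by clang version): the compiler lowers C to target-neutral LLVM IR, and a per-target backend then lowers the IR to machine code, which is the compiler-level analogue of the hardware decode into µops.

```c
/* add.c
 * Compile to LLVM IR with:  clang -O1 -S -emit-llvm add.c -o add.ll
 * The emitted IR looks roughly like this (value names vary):
 *
 *   define i32 @add(i32 %a, i32 %b) {
 *     %sum = add nsw i32 %a, %b
 *     ret i32 %sum
 *   }
 *
 * Any backend (x86-64, AArch64, RISC-V, ...) can lower that same IR to
 * its own machine code. */
int add(int a, int b) {
    return a + b;
}
```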

P.S. Edit:

> GPUs got big into shader caches, so it's not like low-level, device-specific generated data is off limits for optimization.

This is kind of absurd and fricked up because the GPU manufacturer can alter your shader without you knowing. As a graphics programmer I hate this.

You know what else GPUs did? GPGPU stuff. You can compile to SPIR-V, AMDGPU, etc. with LLVM now. These kinds of optimizations should be done at the compiler level these days; the optimized code could just be linked in from a library or something.


> Even if that's true, you can just use an intermediate representation that does the same thing you're describing but more flexibly.

Modern x64 instructions are basically that. There's a big difference between a legacy x86 instruction that's inefficiently implemented in microcode (to preserve compatibility) and a modern, possibly vectorized, CISC instruction that's high-level but still efficiently implementable.

The x64 instruction set has evolved considerably over time.
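For a concrete example of the "modern, vectorized" side (my illustration, not the commenter's): a single AVX2 instruction such as vpaddd adds eight 32-bit lanes at once and maps straight onto wide execution units instead of bouncing through a slow microcode sequence.

```c
/* avx2_add.c: build with  gcc -O2 -mavx2 avx2_add.c
 * Illustrative only: _mm256_add_epi32 compiles to a single vpaddd,
 * a modern wide x86 instruction rather than a legacy microcoded one. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256i a = _mm256_set1_epi32(1);        /* eight lanes of 1  */
    __m256i b = _mm256_set1_epi32(41);       /* eight lanes of 41 */
    __m256i c = _mm256_add_epi32(a, b);      /* one vpaddd: lane-wise add */

    int out[8];
    _mm256_storeu_si256((__m256i *)out, c);  /* spill the lanes to memory */
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);               /* prints 42 eight times */
    printf("\n");
    return 0;
}
```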


x86 needs to die because every single x86 processor has backdoors the government can get into. ARM may not be much better, but RISC-V gives me hope :marseybegging:


>implying they won't embed those backdoors once riscv gets widespread


RISC management engine would be much catchier than IME :marseythinkorino:

