Intel is in complete panic and disarray. Sold all of its stake in ARM Holdings

https://www.reuters.com/markets/deals/intel-sells-stake-chip-designer-arm-holdings-2024-08-13/

Intel, which is cutting thousands of jobs as it struggles to stay relevant in the chip industry, sold its 1.18 million share stake in British chip firm Arm Holdings in the second quarter, a regulatory filing showed on Tuesday.

Intel would have raised about $146.7 million from the sale, based on the average price of Arm's stock between April and June, according to Reuters calculations.
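That $146.7 million figure is easy to sanity-check: it implies an average sale price of roughly $124 per share. A back-of-the-envelope sketch (the per-share number is derived from the article's figures, not something Intel or Reuters reported directly):

```python
# Sanity check of Reuters' estimate, assuming the entire 1.18M-share
# stake went out at Arm's Q2 average price. The per-share figure below
# is derived, not reported.
shares = 1_180_000
proceeds = 146_700_000  # USD, Reuters calculation

implied_avg_price = proceeds / shares
print(f"${implied_avg_price:.2f} per share")  # about $124 per share
```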

The chipmaker said earlier this month that it would cut more than 15% of its workforce and suspend its dividend amid a pullback in spending on traditional data center semiconductors and a shift towards AI chips, where it lags rivals such as Nvidia.

So Intel is doubling down on AI chips despite the AI balloon popping? Legit business plan.

Intel has said it is focused on developing advanced AI chips and building out its for-hire manufacturing capabilities, as it aims to recoup the technological edge lost to Taiwan's TSMC, the world's largest contract chipmaker.

The push to energize that contracting foundry business under CEO Pat Gelsinger has increased Intel's costs and pressured profit margins, forcing it to seek cost cuts.

Intel and ARM both declined to comment on Tuesday when contacted by Reuters about the share sale.

"This looks to be consistent with the restructuring plan and the renewed focus on liquidity and efficiency that Gelsinger laid out from the last conference call," said Benchmark Co analyst Cody Acree.

huh? https://media.tenor.com/Fg7Jrii-17kAAAAx/james-franco-wait-what.webp

Put things in perspective:

1. Intel wants to be leading techshit (who doesn't really, :marseyclueless: ?).

2. Intel wants to shift towards AIshit like N(word)vidya

3. Intel wants to shift away from catching up to TSMC's 2 nm → 1.4 nm node processes because it realized it can't catch up?

But barely a year ago, Intel was signing a memorandum of understanding with ARM to boost its SoC business:

Intel Foundry and Arm Announce Multigeneration Collaboration on Leading-Edge SoC Design

https://www.intel.com/content/www/us/en/newsroom/news/intel-foundry-arm-announce-multigeneration-collaboration-leading-edge-soc-design.html

April 12, 2023 – Intel Foundry Services (IFS) and Arm today announced a multigeneration agreement to enable chip designers to build low-power compute system-on-chips (SoCs) on the Intel 18A process. The collaboration will focus on mobile SoC designs first, but allows for potential design expansion into automotive, Internet of Things (IoT), data center, aerospace and government applications. Arm® customers designing their next-generation mobile SoCs will benefit from leading-edge Intel 18A process technology, which delivers new breakthrough transistor technologies for improved power and performance, and from IFS's robust manufacturing footprint that includes U.S.- and EU-based capacity.

Earlier this year, Intel Foundry Services head Stu Pann explained how Intel planned to build Arm chips and move more manufacturing to the U.S.

https://www.tomshardware.com/pc-components/cpus/intel-foundry-head-stu-pann-explains-companys-plan-to-build-arm-chips-move-more-manufacturing-to-the-us

Also, how the 5 Nodes in 4 Years plan was coming along right on schedule, and they were ready to take the edge away from TSMC:

https://www.tomshardware.com/pc-components/cpus/intel-announces-new-roadmap-at-ifs-direct-connect-2024-new-14a-node-clearwater-forest-taped-in-five-nodes-in-four-years-remains-on-track

https://i.rdrama.net/images/17236300045229952.webp

Just a week ago, Intel's new head of IFS (hired just 3 months ago) was talking enthusiastically about its 1.8 nm (18A) process

https://www.intel.com/content/www/us/en/newsroom/news/kevin-obuckley-talks-progress-intel-18a.html#gs.dlvpr9

By August 5th, the techshit world was gushing like a hussy about how Intel's 18A was going to blast past TSMC

https://pokde.net/system/pc/cpu/intel-panther-lake-power-on

https://i.rdrama.net/images/17236300043544395.webp

Blah blah etc https://www.patentlyapple.com/2024/02/intel-is-set-to-release-the-worlds-first-14nm-chip-by-2027-and-18nm-in-2025-to-compete-with-tsmcs-2nm-that-apple-will-be.html

So what seems to be the problem?

Is it just that Intel is building its own foundries and fabs in the US while Nvidia outsources all its production to partners, so in the short run it's eating losses... or is it stuck in limbo? It can neither let go of its foundry and embrace the AI bubble, nor let go of the AI lure.

Again, on Aug 6th it announced that its RibbonFET and 18A are on track and that it will overtake TSMC by 2025. (EVERYTHING AS PLANNED)

https://www.intel.com/content/www/us/en/newsroom/artificial-intelligence.html?filters[-7300596454,-8111542043]#gs.dlwi0i

Yeah I can't find anything on Intel's press releases in AI section about GPUs.

All in all, I have no clue what is going on. Someone explain.

!pirates !r-slurs !techshit

Some tech giant is going to go tits up chasing this AI dragon :marseynorm:


https://i.rdrama.net/images/17250659971309795.webp

I don't understand what an AI chip is supposed to be. Isn't AI just software running on hardware, like digital computing always has been?

It's somewhere between a GPU and a CPU.

I'm absolutely not an expert on the subject, but basically, just like CPUs can be better at handling lots of smaller processes or fewer larger ones, or how a GPU has different parts for shading, geometry and textures, you can make a chip's architecture better at AI tasks.

The problem is Nvidia's been in the game for nearly ten years now, Intel's not gonna catch up anytime soon and AMD would be fools to even try.

Google's TPUs (with help from Broadcom and TSMC) are developed/designed/built independently from NVIDIA, and outperform state of the art GPUs.

Well good thing I didn't mention Google at all

*According to Google's internal tests that they won't let anyone replicate. Just like AlphaZero is totally better than Stockfish, but we won't release it or even do a live demonstration, just publish a paper and hope nobody notices how badly our rules gimped Stockfish.

maybe u missed it but intel has been doing GPUs for ages also

Intel Arc is not remotely competitive with Nvidia's usual offerings, not yet at least.

I was talking about AI specifically, but Intel's ARC lineup is really weak. The fact that despite now having an actual dedicated graphics unit their iGPUs are still miles behind AMD shows that they're not there yet.

It's just stuffing GPU and "NPU" (a GPU that does algorithmslop?) cores into a chip. Similar idea to how Qualcomm started doing 4+4-core ARM-based SoC chips 15 years ago, with separate cores optimized for low-power and high-performance use, but this optimizes for people telling Copilot to write their quarterly review for them and give them JOI in the voice of Saddam Hussein

>JOI in the voice of Saddam Hussein

:#arousedpizzashill:

A focus on highly parallel architecture is a common trait.

A chip with architecture optimised for matrix multiplications. It's more efficient than a CPU.
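For what it's worth, the core workload really is that narrow: a dense neural-network layer boils down to one matrix multiplication, where every output is a dot product of an input row with a weight column. A toy pure-Python sketch (sizes and numbers made up) of the operation these chips bake into hardware:

```python
# Toy matrix multiply: the single operation an "AI chip" is built
# around. Each output value is a dot product of a row of a with a
# column of b. The inputs below are illustrative, not real weights.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

x = [[1.0, 2.0],
     [3.0, 4.0]]          # 2 inputs, 2 features each
w = [[0.5, -1.0, 2.0],
     [1.5,  0.0, -0.5]]   # a 2-in, 3-out "layer" of weights

print(matmul(x, w))  # [[3.5, -1.0, 1.0], [7.5, -3.0, 4.0]]
```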

I can do that in my head

CPUs go really fast but can only truly do like 32 things in parallel. AI number crunching involves solving a huge pile of math problems with no dependencies on each other, so it can make use of much higher concurrency, and in fact requires it.

GPUs are ideal for this because they've always been purpose built for calculating the area of some bullshit triangles, which is easy as piss so you can fill the card with a thousand shitty processors and fat pipes to move the data in and out.
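The "no dependencies" point is the whole trick: each output cell of a matrix product reads one row and one column and never another output cell, so the work shards across as many processors as you can throw at it. A toy sketch (pure Python, illustrative sizes; a thread pool stands in for the thousands of GPU cores):

```python
from concurrent.futures import ThreadPoolExecutor

a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]

def cell(i, j):
    # Each output cell depends only on row i of a and column j of b,
    # never on any other output cell -- so every cell can be computed
    # concurrently, in any order.
    return sum(a[i][k] * b[k][j] for k in range(len(b)))

with ThreadPoolExecutor() as pool:
    futures = {(i, j): pool.submit(cell, i, j)
               for i in range(2) for j in range(2)}
c = [[futures[(i, j)].result() for j in range(2)] for i in range(2)]
print(c)  # [[58, 64], [139, 154]]
```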

implying only one will.

:#marseyxd:

Imagine if Intel manages to fail. One of only two companies legally able to manufacture x86 chips and you frick it up.

PowerPC (OpenPOWER) is back baybee! :marseywholesome: :motherfucker:

All according to keikaku.

https://www.zdnet.com/article/if-intel-cant-come-up-with-a-qualcomm-killer-soon-its-game-over-for-x86-pcs/

The retrospectives in a few years will probably point at the stagnation around x86 as a major reason for Intel's failure.

Ironically that was my rationale for buying AMD shares back when they were like $3.

>only one

ohnononono
