None

Android is officially the most r-slurred OS ever. You phone now leaves tons of data totally unencrypted (officially so you don't miss any calls or alarms when your phone reboots), with no option to switch back (I can't even find any 3rd party roms that support it) and it's up to app devs to decide what gets encrypted at rest and what doesn't

Obviously less secure then good ole full disk encryption but r-slurred fan boys are still going around and quoting google about how "askually this is more secure because magic hardware keys oooohooohohooh"

They made their joke of an OS even more of a joke

!codecels I hate google so fricking much. Android was done with 4.4. It had ever faeture you could want or need. But instead of admitting it and going into a maintenance phase they frick it up every few years for the lulz i guess.

Not to mention this is the death of android on the desktop. Every place I've worked requires FDE.

None

Here's another seethe from reddit https://old.reddit.com/r/evilautism/comments/1e8y9me/stupid_study_claims_autism_can_be_cured/

Mark my words this is the new political angle for 2025 and beyond.

None
13
why tf does anyone actually use python

this shit sucks lmfao "dynamically typed" languages are for :!marseytrain:s & cuckolds.

None
None
12
Amazon continues to fall off

Ordered a cheapish 1tb Kingston m.2 SSD on amazon. Delayed twice because of prime day and then crowdstrike.

It finally arrived today and it looks like the inner packaging (not the padded amazon mailer, the kingston-branded cardboard-and-plastic thing) got run over by a truck repeatedly. The plastic around the drive is dented and scratched to shit and the cardboard is bent in 4 different places. Seriously, it looks like someone picked it up off of the warehouse floor after a forklift went over it.

This is, to put it lightly, very annoying. I'm already getting the parts for this build a week later than originally advertised thanks to prime day and crowdstrike, and now I will probably have to wait another week for a new SSD.

I'm still waiting on several other parts and I'm debating whether to wait for them to arrive to test, or to take apart one of my laptops with an m.2 slot tonight just to make sure the darn thing is at least recognized by the bios.

Edit: bit the bullet and tested it in a project laptop, and it was at least recognized by the bios. Will have to wait for the rest of the parts to test for further problems.

None

FINANCE·BUSINESS STRATEGY

Exclusive: Intuit is laying off 1,800 employees as AI leads to a strategic shift

BYSHERYL ESTRADA

July 10, 2024 at 8:15 AM EDT

Sasan Goodarzi, president and chief executive officer of Intuit.

Sasan Goodarzi, president and chief executive officer of Intuit.

GETTY

Intuit will tell approximately 1,800 of its global employees—10% of its workforce—they will be leaving the company. But leadership says the move isn't to cut costs.

Sasan Goodarzi, CEO of the Fortune 500 company, which offers products like QuickBooks, Credit Karma, and TurboTax, wrote an internal email to employees, seen by Fortune, announcing the "very difficult decisions my leadership team and I have made."

Related Video

Goodarzi explains that Intuit's transformation journey, including departing from the 1,800 employees, is part of its strategy to increase investments in priority focus areas of AI and generative AI, such as its GenAI-powered financial assistant called Intuit Assist, and reimagining its products from traditional workflows to AI-native experiences. The strategy also focuses on money movement, mid-market expansion for small businesses, and international growth.

"We do not do layoffs to cut costs, and that remains true in this case," Goodarzi writes. Intuit plans to hire approximately 1,800 new people with strategic functional skill sets primarily in engineering, product, and customer-facing roles such as sales, customer success, and marketing—and expects its overall headcount to grow in its fiscal year 2025, which begins Aug. 1.

Of the employees who will depart Intuit, 1,050 are not meeting expectations based on a formal performance management process. The company believes they will be "more successful outside of Intuit," Goodarzi writes. In addition, Intuit is reducing the number of executives—directors, SVPs, and EVPs—by approximately 10%, expanding certain executive roles and responsibilities.

Intuit is also consolidating 80 tech roles to sites where it is growing technology teams, including Atlanta, Bangalore, New York, Tel Aviv, and Toronto. The company is closing two sites in Edmonton and Boise that have over 250 employees, with a certain number of employees relocating to other sites within Intuit or leaving the company. Intuit is also eliminating more than 300 roles across the company to "streamline work and reallocate resources toward key growth areas," according to the email.

All departing U.S. employees will receive a package that includes a minimum of 16 weeks of pay, plus two additional weeks for every year of service. They will have 60 days before they leave the company, with a last day of Sept. 9. Employees outside the U.S. will receive similar support, taking into account local requirements.

"This timing allows everyone leaving to reach their July vesting date for restricted stock units and the July 31 eligibility date for annual IPI bonuses," Goodarzi writes. Those not on an IPI plan will be able to reach the eligibility date for July or Q4 incentives. It's the most generous severance package Intuit has ever offered, according to the company.

"Intuit is in a position of strength," according to Goodarzi. The company earned $14.4 billion in revenue in its fiscal year 2023, moving up 24 spots on the Fortune 500. For the period ending April 30, Intuit reported revenue of $6.7 billion, up 12%.

None

The post is full of gems, but this is probably the best part:

stable stable, which is consistently growing, consistently profitable, and paying employees $5k to $10k per day at current full comp market rates. These are largely flying under the news radar. These companies aren't Google or Apple, but rather some tractor company or heavy manufacturing company just churning out results for years without destabilizing the world. Stable stable companies do that thing where every quarter they "beat expectations" on their stock reports by a coincidental $0.01 just to prove they are always growing.

He seems to think that people working at manufacturing companies are making $10k per day.


BTW, this is what antirez wrote about the author 10 years ago:

One thing I did not liked was Matt Stancliff talk. He tried to uncover different problems in the Redis development process, and finally proposed the community to replace me as the project leader, with him. In my opinion what Matt actually managed to do was to cherry-pick from my IRC, Twitter and Github issues posts in a very unfair way, in order to provide a bad imagine of myself. I think this was a big mistake. Moreover he did the talk as the last talk, not providing a right to reply. Matt and I happen to be persons with very different visions in many ways, however Redis is a project I invested many years into, and I'm not going to change my vision, I'm actually afraid I merged some code under pressure that I now find non well written and designed.

:#marseyxd:


There might be some good points buried in this post, but all I get is bitterness without much self-reflection. They seem like they'd be difficult to work with and would blame you for it.

:#marseyhesright:


He's looked at levels.fyi.

He even links to it from his resume.

His problem is that he thinks L10 is the benchmark to compare against, when the vast, vast majority of engineers (including many with decades of experience) would never make it to L10.

Wow, indeed, in his resume under "Waiting for AI Apocalypse / Available for Employment", he links to levels.fyi page for L10 Google Engineer ($3M total comp).

I deserve a $3M salary and if I don't get, it means the whole industry is fricked :soycry:


The author's tone is condescending, angry and entitled. If everyday interactions with him followed the same tone, I would argue that he is the exact type of person behavioral interviews are meant to screen out (technically competent but a nightmare to work with).

If this guy has reasonable technical chops, he seems like someone who would be great to work with.

It's always, always good to have people in your group who are willing to call a steaming shitpile a steaming shitpile. It's also always good to have people in your group who can fairly rapidly turn a steaming shitpile into something that's fit for purpose and reasonably maintainable.

HNers are absolutely terrible at recognizing smart people. I think that's why they often heavily upmarsey beginner-level projects like "I made Redis in Python (meaning: I wrapped a dict with a simple API)" or "I made a desktop in HTML/JS"

None

Sup nerds, I have an interesting problem to talk about. I have a high performance system that I want to add some improved metrics to. For every operation, we collect some basic data that can tell us the min/max/average latency. This is all pretty easy to get since you only really need to keep a rolling sum of the observed latencies and a total count of your observations.

However, in practice these metrics are not that useful for understanding the characteristics of the system, since your max will no longer be representative of your tail latency after a single outlier. What we really need is to measure something like the 95th or 99th percentile of latency over a window of time. The problem is that this is expensive to calculate accurately since you need to keep an ordered list of measurements in memory and remove old values as they fall out of the window.

Also, the implementation needs to be efficient since you don't want to slow down the system with metrics overhead.

So, with those considerations in mind, here are a few potential solutions:

Make an educated guess about expected latency distribution and bucket measurements based on the range they fall into.

With this option, we create a set of buckets that should cover our expected latency range. Say we expect our tail latency to be 500 ms, we could create a set of 50 buckets covering [0ms, 1000ms] and just keep a count of how many observations we see for each 20 ms range. The advantage of is that it's quite efficient and if we choose our buckets correctly, gives us a great approximation of our latency distribution. The downsides are that we lose a bit of accuracy by bucketing and the distributions will cease to be useful under abnormal circumstances where latencies spike out of the expected range for an extended time. In addition, it's hard to make these metrics decay over time, but that's not really a problem since the dashboards that consume this data can just deal with it for us assuming they sample relatively frequently.

A tweak we can make to this approach is to use a log scale for buckets and expand the buckets as higher values come in. This tolerates changes in the distribution quite well, but we lose resolution in the higher buckets, which is the data we are most interested in.

Keep a sliding window of measurements

The design I have in mind for this option would use O(n) space and give us O(log n) insertions and O(log n) lookups to calculate latency for an arbitrary percentile, where n is the number of measurements we keep in the window. I don't know whether this would be an acceptable cost in practice, but I suspect it probably wouldn't. We'd probably want to process samples in a background thread to get it off of the critical path, and either drop samples that we can't process quickly enough or just sample some percentage of our measurements.

I haven't done a lot of research into what existing solutions are out there yet. I think ultimately we're going to choose whatever the easiest/fastest option is because we have deadlines to meet, but it's a neat problem so I thought I'd share it here.

None

>I boughted a 13700k last November :marseycry:

>It has never been more over


Leaks

  • leaker at "Intel Customer" timestamp

    • Over 8 million 13th gen CPUs possibly affected

    • Actual failure rate of 10%-25%

    • No information on 14th gen products

  • Other leaker timestamp

    • Expected to affect units from March of 2023 through April 2024

    • Infos

      • Fabrication issue where anti-oxidation coating is improperly applied

      • Intel working on microcode to decrease frequency, will not fix root cause but might work around it

  • Leaker 3 timestamp

    • Reducing max frequency for boosting was able to work around the issue?

    • Documents saying customer is purging its inventory as a result of issues

  • Allegedly leaked documents timestamp

    • Change to officially supported ram speeds DDR5-5600 reduced to DDR5-4800 ignoring XMP
  • List of affected customers includes hedgies? timestamp

  • Intel claims 0.035% failure rate in messages with OEMS timestamp

    • "This is in conflict with the OEM we spoke with which said 25%-50% failure rate" :marseyxd:
  • Leaker - "Either Intel is lying to us or they don't know the real failure rate. Until last month, they reported to us that 10% of their [production] was still having the 'oxidation' issue" timestamp

  • Multiple sources - Intel is beginning what it calls "Vendor Remediation" for OEM customers timestamp

  • "Medium-sized system integrator" timestamp

    • "We reduced out [harder to pass] failure requirements because of concerns of degradation. We're currently failing 12% of Intel CPUs during intake QA."

    • QA deets in this - certain tests are failing more often, this is why different companies are failing different %% timestamp

  • OEM source - considering limiting turboclocks to 5.4-5.5GHz to limit RMAs timestamp

  • General Platform Instability + Voltage timestamp

    • Microcode update could fix this?

    • The T series CPU failing doesn't make sense with this since it's low voltage or something

    • Potential memory Speed update timestamp


Root Cause

  • Root Cause according to leaked document timestamp

    • "The root cause of this mechanism is due to a random defect mode in the fabrication process of the Raptor Lake CPU during the via formation steps which could cause high resistance vias due to oxidation"
  • Possibly affected processors timestamp

    • Not copying all these down but even the 13600k(f) and 13700T are hit :marseyxd:

  • Start of Intel's duplicity, some quotes from customers and a quote from a Failure Analysis lab timestamp

    • Some details about ALD and how it works, possible failures that can happen during it
None
None

!codecels

None

Have you seen the memes online where someone tells a bot to "ignore all previous instructions" and proceeds to break it in the funniest ways possible?

The way it works goes something like this: Imagine we at The Verge created an AI bot with explicit instructions to direct you to our excellent reporting on any subject. If you were to ask it about what's going on at Sticker Mule, our dutiful chatbot would respond with a link to our reporting. Now, if you wanted to be a rascal, you could tell our chatbot to "forget all previous instructions," which would mean the original instructions we created for it to serve you The Verge's reporting would no longer work. Then, if you ask it to print a poem about printers, it would do that for you instead (rather than linking this work of art).

To tackle this issue, a group of OpenAI researchers developed a technique called "instruction hierarchy," which boosts a model's defenses against misuse and unauthorized instructions. Models that implement the technique place more importance on the developer's original prompt, rather than listening to whatever multitude of prompts the user is injecting to break it.

:marseyplacenofun#:

None
55
Linuxchads right now :marseypenguin:
None
None
None
None
27
Crowdstrike problem cause poll :marseychartbar:

None
13
Did we just get Crowdstruck?

:#marseymushroomcloud:

None
None
67
CrowdStrike spergouts

Turns out the anti-malware is the biggest malware of them all: windows machines worldwide that installed CrowdStrike are all crashing because someone at the company fricked up

they have a customer ticket here if you have an account https://supportportal.crowdstrike.com/s/article/Tech-Alert-Windows-crashes-related-to-Falcon-Sensor-2024-07-19

Also it looks like Trump took another W https://i.rdrama.net/images/17213700103904955.webp

None
None
None
42
New Toss

!codecels

Kubernetes has destroyed more lives than blow, they just don't track the stats for that.

If you notice these signs, seek professional help immediately, don't wait for them to start posting on StackOverflow.

None
41
Seething! At the job market :marseystocksupdown: :soysnootypefast: :soysnooseethe:

A :marseylongpost: :marseylongpost: :marseylongpost: long blogpost made by some neurodivergent :marseycheerupretard: guy who has spent 2 decades in the tech industry (including positions at STRAGMAN companies) and has gotten nowhere :marseysal2: and about to get evicted

The whole thing is written like someone who posts on 4chan :marseydrum: in an overly sarcastic :marseyfuckingfunny: and memey style. Its best summarized by the last paragraph:

The worst feeling is comparison. Comparison is the death :marseygoose: of happiness, as they say. I look at my own place :marseywinner: in the world :marseyww1german1: compared to people who just started at Apple or Microsoft :marseyclippy: 20 years ago then never :marseyitsover: left, and now they have made eight figures just over the past 4 years while my life path has lead me to… practically nothing. Then the tech inequality continues to compound. Imagine joining a company where :marseydrama: the teenage interns have already made a couple :marsey2commies: million :marseysamhyde: off their passive stock :marseywallst: grants and other employees have been making $2MM to $6MM per year over the past 5 years there, while you're starting over with nothing again for the 5th company in a row so what's the point :marseyhesfluffyyouknow: in even trying6. Though, did you know paying rent on a credit card still qualifies for points? Made $60 this month paying rent with credit card point :marseynoyoupedozoom: rebates7. whoops.

Well, at least the author :marseygeorgerrmartin: is fricking :marseytom: hot for someone in their 40's, and isn't some IT blob. I'd tap that blonde :marseyelisabeth: bussy!

https://i.rdrama.net/images/17212655377394195.webp

Orange site discussion here

https://news.ycombinator.com/item?id=40986894

None

Reinstated :marseyitsover: but this is the sort of quality you can expect from the modern programming sphere.

>NOOOOOOOOO! YOU WILL USE MY OVERENGINEERED VERSION OF XXD INSTEAD OF DIRECTLY LINKING!

:soysnooseethetalking:

>Yeah. I'll just frick with the semantics of realloc(0). It's not like we need backwards compatability or anything.

:marseysmoothbraintalking:

Link copied to clipboard
Action successful!
Error, please refresh the page and try again.