UnitedHealthcare used AI that denies 90% of claims

!codecels

>if client.condition=obesity;

>deny client.claim

:gigachad2:

Lol I had a boss from the Philippines whose brother, still living there, had the gas tank of a motorbike explode in his face while he was trying to work on it. He was brought to their local ER with life-threatening burns and injuries. According to her, the process there is:

>if client.condition=money.up.front

>then:admit.treat

>else:contact.relative

>loop until:money.up.front

>else.relative=0: patient.dies.on.floor

Yup, same thing in China. A lot of Asian countries do not value human life at all

a pluralistic society if you will

It says 90% error rate.

UnitedHealthcare used AI determinations technically reviewed by humans that deny 90% of claims

Optum Health told Ars Technica: "The NaviHealth predict tool is not used to make coverage determinations. The tool is used as a guide to help us inform providers, families, and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home. Coverage decisions are based on CMS coverage criteria and the terms of the member's plan. This lawsuit has no merit, and we will defend ourselves vigorously."

This is actually correct lol, they have no case. NOT using AI here would actually be grossly negligent toward your average taxpayer, and it would be a massive scandal worth suing the government over for utterly wasteful spending, when all the computer does is spit out a nice summary of why the legally established guidelines for care coverage are or are not met.

  • case file lands on me dashboard

  • AI made a judgment

  • Click "verify judgment"

  • Go back to playin solitaire

  • simple as

Dammit, do you even know how much money these bad boys cost, or the effort it takes to get them to confirm a denial?

https://i.rdrama.net/images/17007078682133079.webp

:marseyindignant:

  • appeal comes in

  • click "deny appeal"

  • cash paycheck

eVErY CaSe iS rEvIeWeD bY a HumAn

Somewhat surprised Ars would have that poor of an article.


Follower of Christ :marseyandjesus: Tech lover, IT Admin, heckin pupper lover and occasionally troll. I hold back feelings or opinions, right or wrong because I dislike conflict.

They should just stick to vidya game dev interviews :derpwhy:

I thought the same thing. "90% error rate" because 90% of the claims that get appealed are overturned… what is selection bias


:#marseyastronaut:

o ya, let's take the glorified computer software someone programmed to do this and blame it all on {some vague notion of a computer program mysteriously acting on its own}

Yep, a series of if-else statements written from insurance bean counters' input did this. Not some farcical killer AI that wants to deny patients their basic human right to expensive healthcare.

Regular people don't even know how regular programs work, so I guess it's normal that they conflate the two, but it's unnerving to see them conflated by people who obviously do know the difference.

Does "nh predict" not involve machine learning?

I'm not an expert, so disclaimer here if I say something that's way off the mark, but machine learning is essentially software with instructions for tuning its own variables based on data, so it's not some completely autonomous magical 'thing' that just does everything on its own.

But to simplify, and to kinda show u what I mean by conflating the two: I can write a program that sets the page background color based on how long users stay on the website. It starts by showing random colors to visitors, changing them every so often, and it will eventually find the color range that correlates most strongly with a user's time on the site. By the modern definition of 'AI' I could then market this as "we used AI to pick the best background color," and 95% of people would not question it.
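
A rough sketch of that toy example in Python, with all names and structure my own invention: the "learning" is just bookkeeping over observed session times, plus a bit of random exploration.

```python
import random

COLORS = ["#ff0000", "#00ff00", "#0000ff", "#ffffff"]

class ColorPicker:
    """Picks a background color and learns which one keeps visitors longest."""

    def __init__(self, explore=0.2):
        self.explore = explore                      # chance of trying a random color
        self.stats = {c: [0, 0.0] for c in COLORS}  # color -> [visits, total seconds]

    def choose(self):
        averages = {c: total / n for c, (n, total) in self.stats.items() if n}
        if not averages or random.random() < self.explore:
            return random.choice(COLORS)            # explore: show something random
        return max(averages, key=averages.get)      # exploit: best average so far

    def record(self, color, seconds):
        n, total = self.stats[color]
        self.stats[color] = [n + 1, total + seconds]
```

Run it against simulated visitors who linger on one color and it converges on that color, and per the comment, nothing stops you from calling this "AI" in the marketing copy.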

To me, a heuristic becomes AI when you can no longer explain an individual outcome from the source code alone.

In your example the relationship between session length and color value is explicit in the source. AI is when the answer is just "the screen said so."

Even in my simplified example, though, you cannot predict the outcome. It depends on an external factor (whichever color correlates with the longest online time).

Expand this overly simplified example to consider more information and more ways an outcome can be expressed, and you have essentially what you describe.

But what I'm arguing is that the source code for the claim denials was intentionally biased, and further manipulated by guiding the answer, and they are probably going to try to blame it on the "AI" to worm their way out of responsibility.

Oh, you're describing the training data for an ML program with one dimension. If you need access to all the training data, in addition to the source code, to understand how an input becomes an output, it's AI.

The UnitedHealth executive overseeing NaviHealth, Patrick Conway, was quoted in a company podcast saying: "If [people] go to a nursing home, how do we get them out as soon as possible?"

Why would AI do this?

The AI replaced all of the employees who would recognize an incorrect determination, so it's at least partially responsible

You could sell an AI model to insurance that is just a script that always outputs No
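
In the same spirit, here's a hypothetical "model" whose entire inference step is returning no; the class and method names are made up for illustration.

```python
class ClaimsModel:
    """A hypothetical 'AI model' for claims review: always denies."""

    def predict(self, claim: dict) -> str:
        # The input is ignored entirely; the output is always a denial.
        return "deny"
```

Wrap it in an API, add a dashboard, and the denial rate is 100% with perfect consistency.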

AI is basically a racism machine so we can finally have enough racism to meet the demand.

>forced to pay a combined $210,000

was a choice.

:#marseyrasta:
