Jump in the discussion.

No email address required.

Oh yeah, let's blame the glorified computer software someone programmed to do this, pinning it on {some vague notion of a computer program mysteriously acting on its own}.

The UnitedHealth executive overseeing NaviHealth, Patrick Conway, was quoted in a company podcast saying: "If [people] go to a nursing home, how do we get them out as soon as possible?"

Why would AI do this?

Yep, a series of if-else statements written based on insurance bean counters' input did this. Not some farcical killer AI that wants to deny patients their basic human right to expensive healthcare.

Regular people don't even know how regular programs work, so I guess it's normal that they conflate the two, but it's unnerving to see them conflated by people who obviously do know the difference.

Does "nh predict" not involve machine learning?

I'm not an expert, so disclaimer here if I say something that's way off the mark, but machine learning is essentially software that has instructions for creating its own variables. It's not some completely autonomous magical 'thing' that just does everything on its own.

But to simplify it and kinda show you what I mean by conflating the two: I could create a program that sets the background color based on how long a user stays on the website. It would display random colors to each visitor, changing every so often, and eventually find the color range that correlates most strongly with users' online time. By the modern definition of 'AI' I could then market this as "we used AI to pick the best background color" and 95% of people would not question it.
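The background-color idea above can be sketched in a few lines. This is a toy simulation of my own invention (the hues, the fake session times, and the visitor count are all made up for illustration), not code from any real product: show each visitor a random color, record how long they stay, and keep whichever color has the longest average stay.

```python
import random

random.seed(0)  # deterministic for the demo

HUES = ["red", "green", "blue"]

def simulated_session_seconds(hue):
    # Fake ground truth for the demo: visitors "prefer" blue,
    # so blue sessions run longer on average.
    base = {"red": 30, "green": 45, "blue": 70}[hue]
    return base + random.uniform(-10, 10)

def pick_best_hue(n_visitors=300):
    totals = {h: 0.0 for h in HUES}
    counts = {h: 0 for h in HUES}
    for _ in range(n_visitors):
        hue = random.choice(HUES)  # each visitor gets a random color
        totals[hue] += simulated_session_seconds(hue)
        counts[hue] += 1
    # The whole "learning" step is just an argmax over observed averages.
    return max(HUES, key=lambda h: totals[h] / counts[h])

print(pick_best_hue())  # with this fake data, "blue" wins
```

The point is that nothing here is mysterious: the "AI" is an average and an argmax, yet it could honestly be marketed as a system that "learned" the best color.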

To me, a heuristic becomes AI when you can no longer explain an individual outcome using the source code.

In your example the relationship between session length and color value is explicit in the source. AI is when the only explanation is "the screen said so."

Even in my simplified example, though, you cannot predict the outcome. It depends on an external factor (whichever color correlates with the longest online time).

Expand this overly simplified example to consider more information and allow more possibilities for how an outcome is expressed, and you have essentially what you describe.

But what I'm arguing is that the source code for the claim denials was intentionally biased, and even manipulated further by guiding the answer, and they are probably going to try to blame it on the "AI" to worm their way out of responsibility.

Oh, you're describing the training data for an ML program with one dimension. If you need access to all the training data, in addition to the source code, in order to understand how an input becomes an output, it's an AI.
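That distinction can be shown concretely. In this toy sketch (entirely made up, no connection to any real claims system: the "model" is just a midpoint between two class means), the source code is identical in both cases, yet the same input gets opposite answers depending on the training data, so reading the code alone cannot explain an individual outcome.

```python
def train_threshold(examples):
    # Tiny 1-D "model": the decision threshold is the midpoint
    # between the mean of approved values and the mean of denied values.
    approved = [x for x, label in examples if label == "approve"]
    denied = [x for x, label in examples if label == "deny"]
    return (sum(approved) / len(approved) + sum(denied) / len(denied)) / 2

def predict(threshold, days_in_care):
    return "deny" if days_in_care > threshold else "approve"

# Identical source code, two different (invented) training sets:
lenient = train_threshold([(10, "approve"), (20, "approve"), (60, "deny"), (70, "deny")])
strict = train_threshold([(5, "approve"), (10, "approve"), (20, "deny"), (25, "deny")])

print(predict(lenient, 30))  # approve (threshold is 40.0)
print(predict(strict, 30))   # deny (threshold is 15.0)
```

Same input (30 days), same code, opposite outputs: the behavior lives in the data, which is exactly the "need the training data to explain it" criterion.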

The AI replaced all of the employees who would have recognized an incorrect determination, so it's at least partially responsible.
