
AI - The good, the bad, the...future of AI and your thoughts about it

(@danielle)
Estimable Member
Joined: 5 years ago
Posts: 26
 

@polarberry For sure these AI creations will be able to subvert the "I am not a robot" check. But it will still keep the garden-variety spammers away.



   
PamP and Lauren reacted
(@polarberry)
Illustrious Member
Joined: 7 years ago
Posts: 1082
 

@danielle 

I was joking about me secretly being a robot.



   
PamP, ghandigirl and Lauren reacted
(@laura-f)
Illustrious Member
Joined: 9 years ago
Posts: 1966
 

Here is an article that describes what happens when medical insurance companies decide to use AI to make blanket denials without a human eyeball ever seeing the claim or request for authorization. They deny up to 90% of claims and requests, knowing that less than 5% of patients will bother to appeal.

How Cigna Rejects Claims

And it's not just Cigna - ALL the big medical insurers are doing this. This article struck close to home for me because in the last 6 months I was denied 2 surgeries as "not medically necessary" even though my doctors said they were. I appealed both denials for prior authorization (because I know better than to try to reverse a claim denial) and was still denied. In one case, it's clear the algorithm scans for the words tumor and cancer - without those words, and despite significant other key phrases and diagnoses, it's like posting a résumé on Indeed - in other words, it goes nowhere except into the electronic trash bin. (I have Anthem Blue Cross)



   
PamP, Isabelle, Vesta and 3 people reacted
(@melmystery)
Prominent Member
Joined: 6 years ago
Posts: 108
 

@laura-f 

I think that one of the things we need to do when discussing AI is to define exactly what AI is and what it is not. 

This article did not mention AI specifically, but suggested that Cigna has an "algorithm" that determines whether claims fit their specific criteria.  As you mentioned, this appears to be essentially a keyword search of claim applications for certain terms in much the same way that employers use computer programs to screen resumes for key words.
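To make that concrete, here is a minimal sketch of such a keyword screen (hypothetical Python; the keyword list and rule are invented for illustration and are not any insurer's actual criteria):

```python
# Hypothetical keyword screen of the kind described above. The keyword
# list and rule are invented for illustration; this is not any insurer's
# actual criteria.
REQUIRED_KEYWORDS = {"tumor", "cancer"}

def screen_claim(claim_text: str) -> str:
    """Forward a claim for review only if a required keyword appears.

    There is no understanding here at all: a claim full of other serious
    diagnoses is denied simply because the magic words are absent.
    """
    words = set(claim_text.lower().split())
    if REQUIRED_KEYWORDS & words:
        return "forward for review"
    return "deny"

print(screen_claim("biopsy confirms malignant tumor in left lung"))
# -> forward for review
print(screen_claim("severe spinal stenosis, surgery medically necessary"))
# -> deny
```

Note that this is just searching and flagging - exactly the kind of "basic algorithm" behavior described here, with no intelligence involved.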

I would argue that this is not true AI.  According to Oxford Languages dictionary on Google, an algorithm is "a process or set of rules to be followed in calculations and other problem-solving operations, especially by a computer." And Artificial Intelligence is "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages."

My perception of AI is that keyword searches of claims or resumes don't require any real semblance of intelligence - they do just basic searching, sorting, and flagging.  An AI may use algorithms to make decisions, but it is much more complex.  I would think that a true AI used to approve or deny claims or resumes would have the ability to look at the larger picture, piece together a person's medical or employment history, and make predictions and inferences that go beyond whether certain key words are used or not.  Such an AI would be able to look at these more "intelligently" than a basic algorithm. 

That said, I do feel that AIs will need human oversight for the foreseeable future.  In this case, the AI could potentially identify claims that need a human to look at in further detail.  It could even summarize a claim or work history for a human to read over before making the final decision. 
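That human-in-the-loop idea can be sketched as follows (hypothetical Python; the scoring heuristic, field names, and threshold are all invented for illustration, not any real insurer's system):

```python
# Hypothetical human-in-the-loop triage. model_score stands in for a real
# learned model; the heuristic, field names, and threshold are invented.
def model_score(claim: dict) -> float:
    """Stand-in for a model's approval confidence in [0, 1]."""
    # Toy heuristic: more supporting documents -> higher confidence.
    return min(len(claim.get("supporting_docs", [])) / 5.0, 1.0)

def triage(claim: dict, auto_approve_at: float = 0.9) -> str:
    """Auto-approve only high-confidence claims; never auto-deny."""
    if model_score(claim) >= auto_approve_at:
        return "auto-approve"
    # Uncertain or low-scoring claims always get a human reader.
    return "human review"

print(triage({"supporting_docs": ["mri", "surgeon_letter"]}))
# -> human review
```

The key design choice is that the model only ever speeds up approvals; a denial still requires a human.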

 



   
PamP, Isabelle, Vesta and 3 people reacted
(@tgraf66)
Illustrious Member
Joined: 6 years ago
Posts: 815
 

@melmystery Your explanation of the difference between the two is excellent, but unfortunately, any automated process done by computers will now be referred to as AI, out of sheer journalistic laziness, for the purposes of the usual 10-second sound bites that accommodate the 15-second American attention span. 😂



   
PamP, Isabelle, melmystery and 2 people reacted
(@isabelle)
Noble Member
Joined: 7 years ago
Posts: 196
 

A beneficial effect of AI:  Apparently AI can detect lung cancer way before the traditional CT scan can. Please see below:

 

https://www.aol.com/news/promising-ai-detect-early-signs-210112022.html



   
PamP, ghandigirl, raincloud and 1 person reacted
(@isabelle)
Noble Member
Joined: 7 years ago
Posts: 196
 

Per an interesting article in today's NY Times, the Writers Guild of America is in tense contract negotiations with the Alliance of Motion Picture and Television Producers, hoping to avoid a strike.  For the first time in its history, the union is insisting on inserting a clause into all its contracts to protect the union's writers and voice artists from being displaced by AI-generated writers/voice artists, which could disrupt their entire job market.

From the article: "However, the unions have legal cards to play, Mr. Crabtree-Ireland of SAG-AFTRA said, like the U.S. Copyright Office's pronouncement in March that content created entirely by algorithm is not eligible for copyright protection. It is harder to monetize a production if there is no legal obstacle to copying it."

 

https://www.nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html



   
PamP, Maggieci, Lauren and 2 people reacted
(@isabelle)
Noble Member
Joined: 7 years ago
Posts: 196
 

There is increasing concern in the modeling world that AI-dedicated modeling firms (such as Lalaland, Inc.) may greatly reduce the need for live models and/or disrupt the entire modeling industry, gradually pushing live models out of the business.  AI images are far cheaper, easier to produce, and virtually indistinguishable from images of live humans.  Current law leaves some gray area around models' rights to organize unions, resulting in limited labor protections.  There is hope that regulators will step in.

https://www.nbcnews.com/business/business-news/ai-models-levis-controversy-backlash-rcna77280



   
PamP, Maggieci and Lauren reacted
(@ghandigirl)
Illustrious Member
Joined: 8 years ago
Posts: 1094
 

@melmystery 

I agree. We are trained to be fearful of new things. 



   
PamP, Lauren and melmystery reacted
(@raincloud)
Noble Member
Joined: 5 years ago
Posts: 334
 

In the NYT today.

From one of the developers of AI who is leaving Google to sound an alarm:

"But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said."

The full article:

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

 



   
PamP, Maggieci, Isabelle and 2 people reacted
Page 2 / 6