AI - The good, the bad, the...future of AI and your thoughts about it

(@jeanne-mayell)
Illustrious Member Admin
Joined: 8 years ago
Posts: 7962
Topic starter  

@Isabelle, one of our readers asked me to start this topic. Exploring AI has been at the back of my mind ever since @unkp started that fun thread to talk to a chatbot. So what do people think about it? 

When I think of AI, my first thought is Stanley Kubrick's 1968 masterpiece 2001: A Space Odyssey, which spans the length of human existence, from the prehistoric moment humans start using tools to the moment the ultimate tool, a computer running a spaceship, tries to take over and kill the crew. If you have never seen it, the film is even more relevant today than it was in 1968.

My second thought about AI is that Stephen Hawking warned that AI is dangerous if not controlled, and I don't know how humans will ever be able to control it. We can't seem to stop virologists from creating deadly viruses that can jump from bats to humans. Someone out there will always want to build an even more powerful machine.

Of course, AI can do amazing things, and it will likely be able to help cure diseases, help solve the climate crisis, and much more. Also, AI helped create NotMilk, the only plant-based milk product that satisfies my milk cravings. So thank you, AI.

But whether or not I am concerned about the future of AI, the cat is out of the bag.

I am interested in others' thoughts about it. Positive, negative, any thoughts and/or intuitions?


   
(@journeywithme2)
Illustrious Member Registered
Joined: 4 years ago
Posts: 1965
 

@jeanne-mayell   I see AI as much like the discovery of fire: enormous potential for Good... saving lives, making life better... as well as for Evil and Destruction... killing, maiming, and causing great pain. It all depends on how one uses it. There will always be... the ones who use it for Good, for Higher Purposes... as well as those... who misuse it and cause great harm. TBS? It is... here... that horse has already left the barn.


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

@jeanne-mayell 

 

I am actually quite concerned about the development of AI. While it could have beneficial effects, it also has the potential to disrupt life as we know it: to play havoc with financial and job markets, infiltrate banks and brokerage houses, co-opt medical databases, and compromise our individual security and privacy even further while we stand helplessly by.

As we know, companies like Microsoft, Google, IBM and others are now market leaders, have spent a fortune developing this technology, and are, in effect, in an "arms race" to develop the most advanced AI as quickly as possible -- yet there is virtually no current regulation or legal/ethical oversight in this area, which is desperately needed. We have already unleashed the genie, and it will be impossible to put it back in the bottle! Google has already acquired DeepMind, whose researchers have built neural networks that mimic the short-term memory of the human brain. It is only a matter of time before the technology advances further.

Perhaps one day (in the not-too-distant future) AI may qualify for "sentience" in a legal sense? Its algorithms may become so densely coded and sophisticated that, for all intents and purposes, it becomes virtually indistinguishable from your best friend, your husband, or your neighbor! K-pop's hugely successful band SuperKind now has 4 real human singers and one pink-haired AI singer who is indistinguishable from the others -- and the band regularly sells out huge stadiums to its fan base. ChatGPT's chatbot is taking off like crazy... yet, as of now, thankfully, it is still fairly unreliable at recognizing patterns and making correct predictions... but it will no doubt be continuously upgraded and refined.

Law firms are now using AI in low-level e-discovery/document review, and the incorporation of AI into law firm life may become inevitable, since rote memory, the spotting of fact patterns, and logical deduction are exactly the strengths of AI.

I can foresee, in a worst-case scenario, humans creating their own built-in obsolescence: being completely out-maneuvered and out-thought by self-learning, self-generating, amoral AI with which we are increasingly unable to compete, and becoming, in effect, "second-class citizens" and little more than "domesticated pets." Anything a human can do, AI will in time be able to do FAR BETTER, in a millisecond of processing time, and far, far more cheaply too. And, vitally, unless we require mandatory ethical and moral subroutines/algorithms and human values to be built into AI as it advances, we may be dealing with a new kind of dangerous, uncontrollable, sociopathic technology -- one built on pure logic, efficiency, and deduction, but omitting the crucial factors of human values, compassion, and life affirmation, which are uniquely human and to which we owe the advancement of human civilization. In short, we may be left far, far behind, and our values may, in effect, become extinct.

So where does this lead us? It's possible that part of society may eventually break away and form "Luddite" communities, getting back to the land, using barter, growing their own food, and spurning the Digital Life that has suddenly become ubiquitous and unmanageable.

I do not mean to upset anyone here. I do not consider myself to be an "intuitive" or "psychic." I am only making predictions based upon what I have been reading. Are there any intuitives among us who have feelings about this potentially fundamental, transformational step that mankind is about to embark upon?

 

https://en.wikipedia.org/wiki/DeepMind

https://koreajoongangdaily.joins.com/2022/07/28/entertainment/kpop/superkind-watch-out-SAEJiN/20220728161533247.html

https://www.americanbar.org/news/abanews/publications/youraba/2017/september-2017/7-ways-artificial-intelligence-can-benefit-your-law-firm/

 


   
(@tgraf66)
Illustrious Member Registered
Joined: 4 years ago
Posts: 949
 

I literally just watched this about an hour ago and then came here and saw this new thread. I'm going to link to this video on the subject by Adam Conover. He's a well-known comedian who does (or used to do) a video series called "Adam Ruins Everything." I'm not saying he's an expert, but he does raise some really good points about the current state of AI. Just a word of caution: he does use a lot of swear words. ;-)

https://www.youtube.com/watch?v=ro130m-f_yk


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

Please forgive me if I sound so negative. My fears may not prove true! AI developers may begin to realize the ethical responsibilities they have, step in, and deliberately shape AI for exclusively beneficial purposes such as drug discovery and genomic research. My point is that I feel we are at a crossroads in our human evolution -- only now we are taking a major step from exclusively organic life forms to entirely new types of potentially sentient, inorganic life forms. To me, it feels like a social transformation of the most profound kind, and we are in the earliest stages. Let's see where it goes.


   
(@polarberry)
Illustrious Member Registered
Joined: 5 years ago
Posts: 1210
 

All I know is I recently watched the movie M3GAN and it scared the crap out of me. I think there is potential for good and also for things to go very wrong.


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

I believe this is a clear case where unimpeded capitalism must be reined in ASAP. The AI field must become highly regulated. There must be a groundswell of public demand that all AI research STOP for now. (Actually, top AI experts are currently demanding a six-month "moratorium" to reconsider and re-evaluate the longer-term ramifications of AI research -- but that time period is way too short! It should be something like five to ten years of forbidding AI research to continue until further safety protocols are devised.) Please see below. We need to call our Senators and Representatives and demand a halt (for now) to continued AI advancement... as well as in-depth research into its profound social implications!

 

https://www.dw.com/en/tech-experts-call-for-6-month-pause-on-ai-development/a-65174081


   
(@matildagirl)
Famed Member Registered
Joined: 3 years ago
Posts: 393
 

A sci-fi TV show with the theme of AI robots and what happens when a scientist gives five of them human feelings and others want to exploit that. This could become a scenario in the near future; who knows what's going on behind the scenes. I watched this recently, and it's very good and thought-provoking. It's much more complex in its storyline than this brief description suggests.

https://m.youtube.com/watch?v=BV8qFeZxZPE

https://en.m.wikipedia.org/wiki/Humans_(TV_series)

Humans is a science fiction television series that debuted on Channel 4. Written by Sam Vincent and Jonathan Brackley, based on the Swedish science fiction drama Real Humans, the series explores the themes of artificial intelligence and robotics, focusing on the social, cultural, and psychological impact of the invention of anthropomorphic robots called "synths". The series is produced jointly by Channel 4 and Kudos in the United Kingdom, and AMC in the United States.

Regards to all

PS: I don't think I want that type of future.


   
(@melmystery)
Noble Member Registered
Joined: 4 years ago
Posts: 135
 

I'm much more optimistic about AI. I don't have the developed psychic and intuitive abilities that many on this forum have, so I am interested in hearing those views. I grew up with both scary AI scenarios like 2001: A Space Odyssey and Skynet in The Terminator, but also friendly and helpful AIs like R2-D2, Data in Star Trek, and KITT from Knight Rider.

AI is largely just a tool, and as with any tool it can be used (and programmed) for good or ill. Any time we've had a major advancement in technology, from steam engines to microwave ovens to airplanes to computers, there have been those who railed against the new technology as the end of civilization, or at least the biggest threat to our way of life. This turns out to be mostly fear of change and fear of things we don't know about. I remember in my lifetime all the fears about computers and automation taking over people's jobs. That has happened in some industries, but new jobs and industries have also been created. And for most of us, computers actually help us do our jobs better and more efficiently.

I might be wrong, but I don't see the current state of AI as one where the AI is self-motivated for good or ill. It basically does whatever the programmer or user asks. Chatbots may be able to convince you they're human (and that might be scary to some), but they still only chat or carry out tasks like gathering information (even incorrect information), making recommendations, creating art, or writing. They don't have control over [name your fear -- medical databases, the stock market, the military, your car, your house] unless humans install them in those systems and give them the authority to make those decisions. I would think that most current AIs have limited control over the tangible world around them -- except perhaps those installed in vehicles, robots, and automation systems, and even those have limited reach and do what they are asked or programmed to do.
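To make that concrete, here is a minimal Python sketch (hypothetical names, not any real product's API) of what "given authority by humans" means in practice: the model only ever produces text, and nothing happens in the outside world unless a human has explicitly wired an action in and put it on an allow list.

# Minimal, hypothetical sketch: a chat model's output is just text; it touches
# the outside world only through actions a human has wired in and allowed.
# ALLOWED_ACTIONS and lookup_weather are invented names for illustration.

ALLOWED_ACTIONS = {"lookup_weather"}         # the human-approved allow list

def lookup_weather(city: str) -> str:
    # placeholder for a real integration a human chose to connect
    return f"(pretend forecast for {city})"

def handle_model_output(text: str) -> str:
    """Treat model output as plain text unless it names an allowed action."""
    if text.startswith("ACTION:"):
        name, _, arg = text[len("ACTION:"):].strip().partition(" ")
        if name in ALLOWED_ACTIONS:
            return lookup_weather(arg)       # runs only because a human allowed it
        return f"[refused: '{name}' is not an authorized action]"
    return text                              # an ordinary chat reply has no side effects

print(handle_model_output("Here is a poem about rain."))
print(handle_model_output("ACTION: transfer_funds everything"))

The point of the sketch is that the system's reach is whatever the allow list says it is, and a human writes the allow list.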

Certainly, there's room for regulation and even human oversight, but the same could be said of trains and other things. How many train derailments have we had within the past few months? AIs will probably continue to get smarter and will be used more and more in society, so it's a good idea to start thinking about these things now. At the same time, many regulations and safety precautions will be industry- and application-specific. An AI personal assistant or chatbot may need to be regulated more for personal privacy, whereas an AI designed for self-driving cars might be regulated more for road safety.


   
(@polarberry)
Illustrious Member Registered
Joined: 5 years ago
Posts: 1210
 

I cracked up this morning when I realized that sometimes when you log in, you have to check a little box that says, "I'm not a robot."

I always check it, but.... 😎  


   
(@danielle)
Reputable Member Registered
Joined: 3 years ago
Posts: 36
 

@polarberry For sure these AI creations will be able to subvert the "I am not a robot" check. But it will still keep the garden-variety spammers away.


   
(@polarberry)
Illustrious Member Registered
Joined: 5 years ago
Posts: 1210
 

@danielle 

I was joking about me secretly being a robot.


   
(@laura-f)
Illustrious Member Participant
Joined: 7 years ago
Posts: 2137
 

Here is an article that describes what happens when medical insurance companies decide to use AI to make blanket denials without a human eyeball ever seeing the claim or request for authorization. They deny up to 90% of claims and requests, knowing that fewer than 5% of patients will bother to appeal.

How Cigna Rejects Claims

And it's not just Cigna -- ALL the big medical insurers are doing this. This article struck close to home for me because in the last six months I was denied two surgeries as "not medically necessary" even though my doctors said they were. I appealed both denials of prior authorization (because I know better than to try to reverse a claim denial) and was still denied. In one case, it's clear the algorithm scans for the words tumor and cancer -- without those words, and despite significant other key phrases and diagnoses, it's like posting a résumé on Indeed: it goes nowhere, except into the electronic trash bin. (I have Anthem Blue Cross.)


   
(@melmystery)
Noble Member Registered
Joined: 4 years ago
Posts: 135
 

@laura-f 

I think that one of the things we need to do when discussing AI is to define exactly what AI is and what it is not. 

This article did not mention AI specifically, but suggested that Cigna has an "algorithm" that determines whether claims fit their specific criteria.  As you mentioned, this appears to be essentially a keyword search of claim applications for certain terms in much the same way that employers use computer programs to screen resumes for key words.

I would argue that this is not true AI.  According to Oxford Languages dictionary on Google, an algorithm is "a process or set of rules to be followed in calculations and other problem-solving operations, especially by a computer." And Artificial Intelligence is "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages."

My perception of AI is that keyword searches of claims or resumes don't require any real semblance of intelligence -- they just do basic searching, sorting, and flagging. An AI may use algorithms to make decisions, but it is much more complex. I would think that a true AI used to approve or deny claims or resumes would be able to look at the larger picture, piece together a person's medical or employment history, and make predictions and inferences that go beyond whether certain keywords appear. An AI would be able to look at these more "intelligently" than a basic algorithm.
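To make the contrast concrete, here is a minimal, purely hypothetical Python sketch of the kind of keyword rule being described (the terms and function names are invented for illustration; this is not any insurer's actual system). It never reads the claim at all; it only checks whether certain words appear.

# Hypothetical sketch of a keyword-screening rule, not any insurer's real system.
# The rule never reads the claim; it only checks whether certain words appear.

FLAG_TERMS = {"tumor", "cancer"}              # words the rule looks for

def screen_claim(diagnosis_text: str) -> str:
    """Pure keyword rule: forward or deny with no understanding of the case."""
    words = set(diagnosis_text.lower().split())
    if words & FLAG_TERMS:
        return "forward to human reviewer"
    return "deny as not medically necessary"  # no human ever reads this claim

print(screen_claim("MRI shows a suspected tumor in the left lung"))   # forwarded
print(screen_claim("severe spinal stenosis, surgery recommended"))    # denied

Something that actually met the dictionary definition of AI quoted above would, at minimum, weigh the whole record rather than a handful of trigger words.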

That said, I do feel that AIs will need human oversight for the foreseeable future. In this case, the AI could potentially identify claims that need a human to look at them in further detail. It could even summarize a claim or work history for a human to read over before making the final decision.

 


   
(@tgraf66)
Illustrious Member Registered
Joined: 4 years ago
Posts: 949
 

@melmystery Your explanation of the difference between the two is excellent, but unfortunately, any automated process done by computers will now be referred to as AI out of sheer journalistic laziness, for the purposes of the usual 10-second sound bites that accommodate the 15-second American attention span. 😂


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

A beneficial effect of AI:  Apparently AI can detect lung cancer way before the traditional CT scan can. Please see below:

 

https://www.aol.com/news/promising-ai-detect-early-signs-210112022.html


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

According to an interesting article in today's NY Times, the Writers Guild of America is in tense contract negotiations with the Alliance of Motion Picture and Television Producers, hoping to avoid a strike. For the first time, the union is insisting on a clause in its contracts to protect writers and voice artists from being displaced by AI-generated work, which could disrupt their entire job market.

However, the unions have legal cards to play, Mr. Crabtree-Ireland of SAG-AFTRA said, like the U.S. Copyright Office’s pronouncement in March that content created entirely by algorithm is not eligible for copyright protection. It is harder to monetize a production if there is no legal obstacle to copying it.

 

https://www.nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html


   
(@isabelle)
Famed Member Registered
Joined: 5 years ago
Posts: 261
 

There is increasing concern in the modeling world that AI-dedicated modeling firms (such as Lalaland, Inc.) may greatly reduce the need for live models and/or disrupt the entire modeling industry, gradually pushing live models out of the business. AI-generated images are far cheaper, easier to produce, and virtually indistinguishable from images of live humans. Current law leaves a gray area around models' rights to organize unions, resulting in limited labor protections. There is hope that regulators will step in.

https://www.nbcnews.com/business/business-news/ai-models-levis-controversy-backlash-rcna77280


   
(@ghandigirl)
Illustrious Member Registered
Joined: 6 years ago
Posts: 1011
 

@melmystery 

I agree. We are trained to be fearful of new things. 


   
(@raincloud)
Famed Member Registered
Joined: 3 years ago
Posts: 361
 

In the NYT today.

From one of the developers of AI who is leaving Google to sound an alarm:

"But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said."

The full article:

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

 


   