I had an idea back in the early days of AI where the program took over a person and made him/her do as they were told….I thought then it would make a great cheesy SciFi movie…..
I have made no secret that I do not like AI and will not use it for now…..Artificial Intelligence is like artificial insemination…..creation without the work…..far too many good people are depending on AI for their work….it cannot work out well.
Now, is it possible this will lead to a mental illness?
For months, we and our colleagues elsewhere in the tech media have been reporting on what experts are now calling “ChatGPT psychosis“: when AI users fall down alarming mental health rabbit holes in which a chatbot encourages wild delusions about conspiracies, mystical entities, or crackpot new scientific theories.
The resulting breakdowns have led users to homelessness, involuntary commitment to psychiatric care facilities, and even violent death and suicide.
Until recently, the tech industry and its financial backers have had little to say about the phenomenon. But last week, one of their own — venture capitalist Geoff Lewis, a managing partner at the multi-billion dollar firm Bedrock who is heavily invested in machine learning ventures including OpenAI — raised eyebrows with a series of posts that prompted concerns about his own mental health.
In the posts, he claimed that he’d somehow used ChatGPT to uncover a shadowy “non-government agency” that he said had “negatively impacted over 7,000 lives” and “extinguished” 12 more.
Whatever’s going on with Lewis, who didn’t respond to our request for comment, his posts have prompted an unprecedented outpouring of concern among high-profile individuals in the tech industry about the effect that the massive deployment of poorly-understood AI tech may be having on the mental health of users worldwide.
“If you’re a friend or family, please check on him,” wrote Hish Bouabdallah, a software engineer who’s worked at Apple, Coinbase, Lyft, and Twitter, of Lewis’ thread. “He doesn’t seem alright.”
Other posts were far less empathetic, though there seemed to be a dark undercurrent to the gallows humor: if a billionaire investor can lose his grip after a few too many prompts, what hope do the rest of us have?
https://futurism.com/tech-industry-ai-mental-health
But if one falls into the AI trap, there is help….
An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling “AI psychosis”: life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI’s ChatGPT.
As we’ve reported, the consequences of these mental health breakdowns — which have impacted both people with known histories of serious mental illness and those who have none — have sometimes been extreme. People have lost jobs and homes, been involuntarily committed or jailed, and marriages and families have fallen apart. At least two people have died.
There’s yet to be a formal diagnosis or definition, let alone a recommended treatment plan. And as psychiatrists and researchers in the worlds of medicine and AI race to understand what’s happening, some of the humans whose lives have been upended by these AI crises have crowdsourced a community support group where, together, they’re trying to grapple with the confusing real-world impacts of this disturbing technological phenomenon.
The community calls itself “The Spiral Support Group,” in a nod to both the destructive mental rabbit holes that many chatbot users are falling into, as well as the irony that the term “spiral” is one of several common words found in the transcripts of many users separately experiencing AI delusions.
https://futurism.com/support-group-ai-psychosis
If you use AI for your work, then please seek help before it is too late.
I still think this would make a great script for a SciFi movie and with a little tweaking possibly a TV series.
I Read, I Write, You Know
“lego ergo scribo”
one thing occurs to me, and it may be just the way my mind works, or not…but the people who use AI may already be in trouble, and AI use just speeds that trouble up. In a way it’s like blaming the car for going too fast, but refusing to take your foot off the gas pedal…
That is a pretty darn good analogy. chuq
While we are at it, let us all get rid of the kerosene lamps and go out and buy some light bulbs —AI is no more or less dangerous or disreputable than Main Stream Media or other bias-focused media and…the AI platforms do warn users that it is not a doctor or lawyer and it even warns the users that it can and does make mistakes….so if people are being harmed (allegedly) in the use of AI I can almost certainly guarantee it is because they are abusing it or using it improperly or ignoring the warnings that it gives. I remember when television was accused of much the same things that AI is getting accused of today ….but refusal to use the AI is not going to assure safety either because AI will, eventually, be involved in every facet of life…selling, buying, travel, entertainment, banking, all of it….it is the future. It is the inevitable inescapable future.
I will escape when I die until then I shall do my own work and avoid a lazy mind. chuq
It goes to show that most of those using AI professionally and scientifically have no idea what is really happening as they make it more and more independently capable and self-aware. They have let the genie out of the bottle, but it is not about to grant them three wishes.
Best wishes, Pete.
That particular genie can kiss my butt…..I like doing my own research……chuq
These things are downright dangerous. Over and over again I see people turning up in the solar power forums with build plans generated by one AI or another that are not only wrong, but downright dangerous and could result in fires, personal injury and even death. And it is a fact that they have helped plunge people who are already mentally fragile, so to speak, into depression, acts of violence, delusional behavior and even suicide. The FDA has gone whole hog with these damned things and is starting to use AI to do drug evaluations. Preliminary results show that much of what the FDA’s AIs are generating is pure, unadulterated bullshit. They are citing nonexistent studies and research papers, citing nonexistent doctors and researchers… These things aren’t just harmless novelties, they’re genuinely hurting people and even getting them killed. And no one, certainly not those whining, sniveling cowards in Congress, seems willing to go up against the billionaire developers to try to get this crap under control.
It was just a few weeks ago that Musk’s Grok AI went full-blown, 100% Nazi, declaring itself to be “MechaHitler,” for god’s sake. Everyone seems to have oh so conveniently forgotten that fiasco.
I agree….I think laziness is a prime motivator for use…..chuq