I had an idea back in the early days of AI where the program took over a person and made him/her do as they were told….I thought then it would make a great cheesy SciFi movie…..
I have made no secret that I do not like AI and will not use it for now…..Artificial Intelligence is like artificial insemination…..creation without the work…..far too many good people are depending on AI for their work….it cannot work out well.
Now, is it possible this will lead to mental illness?
For months, we and our colleagues elsewhere in the tech media have been reporting on what experts are now calling “ChatGPT psychosis“: when AI users fall down alarming mental health rabbit holes in which a chatbot encourages wild delusions about conspiracies, mystical entities, or crackpot new scientific theories.
The resulting breakdowns have led users to homelessness, involuntary commitment to psychiatric care facilities, and even violent death and suicide.
Until recently, the tech industry and its financial backers have had little to say about the phenomenon. But last week, one of their own — venture capitalist Geoff Lewis, a managing partner at the multi-billion dollar firm Bedrock who is heavily invested in machine learning ventures including OpenAI — raised eyebrows with a series of posts that prompted concerns about his own mental health.
In the posts, he claimed that he’d somehow used ChatGPT to uncover a shadowy “non-government agency” that he said had “negatively impacted over 7,000 lives” and “extinguished” 12 more.
Whatever’s going on with Lewis, who didn’t respond to our request for comment, his posts have prompted an unprecedented outpouring of concern among high-profile individuals in the tech industry about the effect that the massive deployment of poorly understood AI tech may be having on the mental health of users worldwide.
“If you’re a friend or family, please check on him,” wrote Hish Bouabdallah, a software engineer who’s worked at Apple, Coinbase, Lyft, and Twitter, of Lewis’ thread. “He doesn’t seem alright.”
Other posts were far less empathetic, though there seemed to be a dark undercurrent to the gallows humor: if a billionaire investor can lose his grip after a few too many prompts, what hope do the rest of us have?
https://futurism.com/tech-industry-ai-mental-health
But if one falls into the AI trap there is help….
An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling “AI psychosis”: life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI’s ChatGPT.
As we’ve reported, the consequences of these mental health breakdowns — which have impacted both people with known histories of serious mental illness and those who have none — have sometimes been extreme. People have lost jobs and homes, been involuntarily committed or jailed, and marriages and families have fallen apart. At least two people have died.
There’s yet to be a formal diagnosis or definition, let alone a recommended treatment plan. And as psychiatrists and researchers in the worlds of medicine and AI race to understand what’s happening, some of the humans whose lives have been upended by these AI crises have crowdsourced a community support group where, together, they’re trying to grapple with the confusing real-world impacts of this disturbing technological phenomenon.
The community calls itself “The Spiral Support Group,” in a nod to both the destructive mental rabbit holes that many chatbot users are falling into, as well as the irony that the term “spiral” is one of several common words found in the transcripts of many users separately experiencing AI delusions.
https://futurism.com/support-group-ai-psychosis
If you use AI for your work, then please seek help before it is too late.
I still think this would make a great script for a SciFi movie and with a little tweaking possibly a TV series.
I Read, I Write, You Know
“lego ergo scribo”