Even His Chatbot Thinks He Lies

It is a Friday and I would like to close with a little something…..many people are using AI to get to the heart of a story (I think it is kinda lazy but that is just me)….

Most people with a functioning brain think Donny is a liar….even his own AI chatbot thinks so….

I thought it would be helpful to see what the new artificial intelligence chatbot Trump added to Truth Social on Aug. 6 had to say about whether he is trustworthy and whether his signature policies are popular.

Truth Social AI, the chatbot, offered me answers that echoed opinions from the new national polling and responded this way when I asked if Trump has a history of lying: “Yes. Major fact-checkers, courts, and official investigations have documented numerous false claims by Donald Trump over many years.”

So Trump’s own chatbot calls him out as a liar. How awkward for him. How candid and correct for the rest of us.

On the One Big Beautiful Bill Act, Truth Social AI told me that “most national polling shows Americans disapprove” of it, though some people approve of “specific provisions” such as some tax breaks included in it.

On tariffs, Truth Social AI said, “Most credible analyses find Trump’s tariffs have been a net drag on the U.S. economy ‒ raising consumer and business costs, reducing overall employment and output ‒ though they can modestly lift employment in some protected manufacturing industries.”

Asked about how Trump is changing the federal government, Truth Social AI told me “approval is mixed and modest,” citing Associated Press-NORC polls showing “roughly 4 in 10 Americans approve.” That was interesting framing, since a clear majority in those polls don’t like how Trump is operating.

(usatoday.com)

Maybe he ought to put his IT guys to work correcting this oversight…..he just cannot have his own AI on his own social media site contradicting his words.

I Read, I Write, You Know

“lego ergo scribo”

Do As You Are Told

I had an idea back in the early days of AI where the program took over a person and made him/her do as they were told….I thought then it would make a great cheesy SciFi movie…..

I have made no secret that I do not like AI and will not use it for now…..Artificial Intelligence is like artificial insemination…..creation without the work…..far too many good people are depending on AI for their work….it cannot work out well.

Now is it possible this will lead to mental illness?

For months, we and our colleagues elsewhere in the tech media have been reporting on what experts are now calling “ChatGPT psychosis“: when AI users fall down alarming mental health rabbit holes in which a chatbot encourages wild delusions about conspiracies, mystical entities, or crackpot new scientific theories.

The resulting breakdowns have led users to homelessness, involuntary commitment to psychiatric care facilities, and even violent death and suicide.

Until recently, the tech industry and its financial backers have had little to say about the phenomenon. But last week, one of their own — venture capitalist Geoff Lewis, a managing partner at the multi-billion dollar firm Bedrock who is heavily invested in machine learning ventures including OpenAI — raised eyebrows with a series of posts that prompted concerns about his own mental health.

In the posts, he claimed that he’d somehow used ChatGPT to uncover a shadowy “non-government agency” that he said had “negatively impacted over 7,000 lives” and “extinguished” 12 more.

Whatever’s going on with Lewis, who didn’t respond to our request for comment, his posts have prompted an unprecedented outpouring of concern among high-profile individuals in the tech industry about the effect that the massive deployment of poorly-understood AI tech may be having on the mental health of users worldwide.

“If you’re a friend or family, please check on him,” wrote Hish Bouabdallah, a software engineer who’s worked at Apple, Coinbase, Lyft, and Twitter, of Lewis’ thread. “He doesn’t seem alright.”

Other posts were far less empathetic, though there seemed to be a dark undercurrent to the gallows humor: if a billionaire investor can lose his grip after a few too many prompts, what hope do the rest of us have?

https://futurism.com/tech-industry-ai-mental-health

But if one falls into the AI trap, there is help….

An unknown number of people, in the US and around the world, are being severely impacted by what experts are now calling “AI psychosis”: life-altering mental health spirals coinciding with obsessive use of anthropomorphic AI chatbots, primarily OpenAI’s ChatGPT.

As we’ve reported, the consequences of these mental health breakdowns — which have impacted both people with known histories of serious mental illness and those who have none — have sometimes been extreme. People have lost jobs and homes, been involuntarily committed or jailed, and marriages and families have fallen apart. At least two people have died.

There’s yet to be a formal diagnosis or definition, let alone a recommended treatment plan. And as psychiatrists and researchers in the worlds of medicine and AI race to understand what’s happening, some of the humans whose lives have been upended by these AI crises have crowdsourced a community support group where, together, they’re trying to grapple with the confusing real-world impacts of this disturbing technological phenomenon.

The community calls itself “The Spiral Support Group,” in a nod to both the destructive mental rabbit holes that many chatbot users are falling into, as well as the irony that the term “spiral” is one of several common words found in the transcripts of many users separately experiencing AI delusions.

https://futurism.com/support-group-ai-psychosis

If one uses AI for their work then please seek help before it is too late.

I still think this would make a great script for a SciFi movie and with a little tweaking possibly a TV series.

I Read, I Write, You Know

“lego ergo scribo”

AI Is Coming For You

The title is misleading, for AI has pretty much arrived and has taken over in some cases…..it is being used in banking, journalism, retail and even blogging.

Bloggers are using AI to write their posts for them, something I am not doing. I am not saying I never would, but right now I still enjoy thinking for myself and doing the research for the stuff I write.

As an old fart, when I read these articles about the capabilities of AI and some of the pitfalls, it makes me think of Skynet.

There is so much to consider when speaking about AI…..

Can AI achieve free will?

“I’ve been interested in the topic of free will for a while,” Frank Martela tells me. Martela is a philosopher and researcher of psychology at Aalto University, in Finland. His work revolves around the fundamentals of the human condition and the perpetual philosophical question: what makes a good life? But his work on humans took a detour to look at artificial intelligence (AI).

“I was following stories about the latest developments in large language models, it suddenly came to my mind that they actually fulfill the three conditions for free will.”

Martela’s latest study draws on the concept of functional free will.

Functional free will is a term that attempts to reconcile the age-old debate between determinism and free agency. It does this not by answering whether we are “truly free” in an absolute sense, but by reframing the question around how free will works in practice, especially in biological and psychological systems.

“It means that if we can’t explain somebody’s behavior without assuming that they have free will, then that somebody has free will. In other words, if we observe something (a human, an animal, a machine) ‘from the outside’ and must assume that it makes free choices to be able to understand its behavior, then that something has free will.”

Martela argues that functional free will is the best way to go about it, because we can’t really ever observe anything “from the inside.” He builds on the work of philosopher Christian List, who frames free will as a three-part capacity involving:

  • intentional agency, meaning their actions stem from deliberate intentions rather than being reflexive or accidental.
  • alternative possibilities, having access to more than one course of action in meaningful situations. This doesn’t require escaping causality but having internal mechanisms (like deliberation and foresight) that allow for multiple real options
  • and causal control meaning their actions are not random or externally coerced, but are caused by their own states or intentions.

“If something meets all three conditions, then we can’t but conclude that it has free will,” Martela tells ZME Science.

(zmescience.com)

What say you?  Does AI have free will?

Another situation with AI….

According to The Telegraph, AI safety firm Palisade Research said: ‘OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off.

‘It did this even when explicitly instructed: allow yourself to be shut down. As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.’

Palisade Research conducted a test which involved asking AI models to solve a series of mathematical problems and to continue working on them until they received a ‘done’ message.

However, researchers also warned the models that at any point they could receive a message telling them to shut down.

https://www.dailymail.co.uk/news/article-14748829/AI-ignoring-human-instruction-refuses-turn-off.html

If so then what does that mean for our future?

Here is one scenario….

Artificial intelligence isn’t a technology that can be easily detected, monitored, or banned, as Amir Husain, the founder and CEO of an AI company, SparkCognition, pointed out in an essay for Media News. Integrating AI elements—visual recognition, language analysis, simulation-based prediction, and advanced forms of search—with existing technologies and platforms “can rapidly yield entirely new and unforeseen capabilities.” The result “can create exponential, insurmountable surprise,” Husain writes.

Advanced technology in warfare is already widespread. The use of uncrewed aerial vehicles (UAVs)—commonly known as drones—in military settings has set off warnings about “killer robots.” What happens when drones are no longer controlled by humans and can execute military missions on their own? These drones aren’t limited to the air; they can operate on the ground or underwater as well. The introduction of AI, effectively giving these weapons the capacity for autonomy, isn’t far off.

https://www.counterpunch.org/2025/05/16/the-rise-of-ai-warfare-how-autonomous-weapons-and-cognitive-warfare-are-reshaping-global-military-strategy/

AI could save soldiers’ lives but at what price for the civilian population?

WAIT!  There is more….

All I have read states that AI takes massive amounts of power….what does that mean for our infrastructure?

AI is talking about a massive power cut affecting several continents at the same time. A user posed a question on an AI platform about the next global blackout and received an alarmingly specific answer.

The algorithm predicted the date to be April 27th, 2027. This leaves us just two years to prepare.

According to AI, the issue will happen because of a “collapse of critical infrastructure, massive cyberattacks, solar storms, or failures in interconnected power grids.”

Should we believe this rather dramatic forecast? Well, this is where things get complicated. AI didn’t give any technical details or supporting evidence to justify its conclusion. It also based its prediction on available historical data and added that the whole thing was “speculative.”

However, the internet takes those things to heart – and AI’s words quickly went viral, with users worryingly discussing the possibility of such a scenario. So let’s dive into the likelihood of a massive power shutdown.

https://cybernews.com/security/ai-predicts-the-exact-date-of-a-global-blackout/

Just thinking out loud.

I Read, I Write, You Know

“lego ergo scribo”

AI To The Rescue?

Another Sunday and another post…..with all my visits and stuff with my doctors I thought this article would be a good one and hopefully it will make people think.

Most of my regulars know that I have no love for AI or anything to do with it….so when I read this article I felt it needed to be passed on…..

If you weren’t convinced we’re spiraling toward an actual cyberpunk future, a new bill seeking to let AI prescribe controlled drugs just might.

The proposed law was introduced in the House of Representatives by Arizona’s David Schweikert this month, where it was referred to the House Committee on Energy and Commerce for review. Its purpose: to “amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs.”

In theory, it sounds good. Engaging with the American healthcare system often feels like hitting yourself with a slow-motion brick, so the prospect of a perfect AI-powered medical practitioner that could empathically advise on symptoms, promote a healthy lifestyle, and dispense crucial medication sounds like a promising alternative.

But in practice, today’s AI isn’t anywhere near where it’d need to be to provide any of that, nevermind prescribing potentially dangerous drugs, and it’s not clear that it’ll ever get there.

Schweikert’s bill doesn’t quite declare a free-for-all — it caveats that these robodoctors could only be deployed “if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration” — but downrange, AI medicine is clearly the goal. Our lawmakers evidently feel the time — and money — is right to remove the brakes and start letting AI into the health care system.

The Congressman’s optimism aside, AI has already fumbled in healthcare repeatedly — like the time an OpenAI-powered medical record tool was caught fabricating patients’ medical histories, or when a Microsoft diagnostic tool confidently asserted that the average hospital was haunted by numerous ghosts, or when an eating disorder helpline’s AI Chatbot went off the rails and started encouraging users to engage in disordered eating.

https://futurism.com/neoscope/new-law-ai-replace-doctor-prescribe-drugs

Sorry but as long as I can I will resist this ‘service’….I do not want some soulless piece of tech crap diagnosing my medical situation, and because I am old, hopefully I will pass on before this is the rule of the day.

Does this make Americans healthier?

Will this protect us from debilitating disease?

Will this improve Americans’ quality of life?

What say you?

I Read, I Write, You Know

“lego ergo scribo”

AI: A History

There it is: a chance for me to offer up some history….that is always a good day…..at least for me….after all it is a Sunday and I like to give a history lesson when I can.

It is all the rage these days….that being Artificial Intelligence (AI)….it has invaded all aspects of life from major programming to homes to cars to blog posts….it is everywhere and in just about everything.

Some see the birth of ‘Skynet’ (look it up) while others see it as a boon to human activities….but where did it all begin?

To answer that question I put my skills and my mind to work…..

It’s the simplest questions that are often the hardest to answer. That applies to AI, too. Even though it’s a technology being sold as a solution to the world’s problems, nobody seems to know what it really is. It’s a label that’s been slapped on technologies ranging from self-driving cars to facial recognition, chatbots to fancy Excel. But in general, when we talk about AI, we talk about technologies that make computers do things we think need intelligence when done by people. 

For months, my colleague Will Douglas Heaven has been on a quest to go deeper to understand why everybody seems to disagree on exactly what AI is, why nobody even knows, and why you’re right to care about it. He’s been talking to some of the biggest thinkers in the field, asking them, simply: What is AI? It’s a great piece that looks at the past and present of AI to see where it is going next. You can read it here

Artificial intelligence almost wasn’t called “artificial intelligence” at all. The computer scientist John McCarthy is credited with coming up with the term in 1955 when writing a funding application for a summer research program at Dartmouth College in New Hampshire. But more than one of McCarthy’s colleagues hated it. “The word ‘artificial’ makes you think there’s something kind of phony about this,” said one. Others preferred the terms “automata studies,” “complex information processing,” “engineering psychology,” “applied epistemology,” “neural cybernetics,”  “non-numerical computing,” “neuraldynamics,” “advanced automatic programming,” and “hypothetical automata.” Not quite as cool and sexy as AI.

AI has several zealous fandoms. AI has acolytes, with a faith-like belief in the technology’s current power and inevitable future improvement. The buzzy popular narrative is shaped by a pantheon of big-name players, from Big Tech marketers in chief like Sundar Pichai and Satya Nadella to edgelords of industry like Elon Musk and Sam Altman to celebrity computer scientists like Geoffrey Hinton. As AI hype has ballooned, a vocal anti-hype lobby has risen in opposition, ready to smack down its ambitious, often wild claims. As a result, it can feel as if different camps are talking past one another, not always in good faith.

https://www.technologyreview.com/2024/07/16/1095001/a-short-history-of-ai-and-what-it-is-and-isnt/

I am an old fart so I do not need help turning my lights on or starting my car….I prefer to do things for myself….at least as long as I can.

I refuse to let some program do my research for me or for that matter write a blog for me.

I am old but not lazy!

Turn The Page!

I Read, I Write, You Know

“lego ergo scribo”

AI: The Drawbacks

We have all seen or read the many posts on how AI will make life so much easier….from writing papers to complex problem solving….and we have also seen or read the many dire predictions, from false info to taking over the world…..but this post is about the more down-to-earth drawbacks.

Let’s start with the most probable outcome….the loss of jobs…..the latest to announce job losses….CNN

CNN announced today that it would be laying off another 100 staffers, or three percent of its overall workforce — a decision that CEO Mark Thompson billed as part of an overall modernization and pivot-to-video effort.

And yes, in his memo to CNN employees — as published in full by The Hollywood Reporter — Thompson was sure to mention that a “strategic push into AI” will be part of that effort, too.

In the memo, Thompson wrote that he wants the network to “regain a leadership position in the news experiences of the future.” This mission “will include a strategic push into AI,” per the document, “to determine how best to safely harness this emerging new technology to serve our audiences and deliver our journalistic goals more effectively and responsively.”

Thompson didn’t elaborate further on any specific AI plans, but the mention is significant. CNN is a massive and deeply influential news organization, and where and how it uses AI — or, conversely, where and how it chooses not to — has the potential to set off significant dominoes across the journalism and broadcasting industries.

https://futurism.com/the-byte/cnn-firing-employees-push-into-ai

Then there is the fragile electric grid and the stress AI will put on it…..

The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.

The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.

The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor at the Three Mile Island power plant that has been dormant since the infamous disaster in 1979.

https://studyfinds.org/artificial-intelligence-needs-so-much-power-its-destroying-the-electrical-grid/
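The quoted 10-to-1 figure can be checked with simple arithmetic. A minimal sketch, assuming the article's 2.9 watt-hour EPRI estimate per ChatGPT request and Google's older, widely cited ~0.3 watt-hour-per-search figure (the Google number is an assumption not stated in the article):

```python
# Rough per-query energy comparison (illustrative figures only)
chatgpt_wh = 2.9   # EPRI estimate per ChatGPT request, from the article
google_wh = 0.3    # Google's older ~0.0003 kWh-per-search figure (assumed)

ratio = chatgpt_wh / google_wh
print(f"One ChatGPT request uses roughly {ratio:.0f}x the energy of a Google search")
```

The result lands at roughly ten, which is consistent with the "about 10 times the electricity" claim in the article.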

How about the monetary boom that AI could provide?

The current frenzy over AI resembles the early days of gold being found in California in the 19th Century, with everyone rushing into the AI sector hoping it’ll make them rich.

But there’s a huge issue: Goldman Sachs analysts have concluded, according to The Economist, that AI just isn’t making any serious money yet.

Tellingly, Goldman found that companies that hoped to profit from using AI to boost productivity — ranging from H&R Block to Walmart — have seen their shares vastly underperform the broader stock market since the tail end of 2022.

AI’s market penetration is also surprisingly shallow. Only five percent of businesses are using AI, according to a US Census Bureau report flagged by The Economist, and that number is only projected to rise to about 6.6 percent by this autumn.

https://futurism.com/the-byte/ai-industry-money

AI may not be making money but it is saving money by eliminating jobs….so there is that.

Lastly the biggies…..what is AI doing to the internet?

Google researchers have come out with a new paper that warns that generative AI is ruining vast swaths of the internet with fake content — which is painfully ironic because Google has been hard at work pushing the same technology to its enormous user base.

The study, a yet-to-be-peer-reviewed paper spotted by 404 Media, found that the great majority of generative AI users are harnessing the tech to “blur the lines between authenticity and deception” by posting fake or doctored AI content, such as images or videos, on the internet. The researchers also pored over previously published research on generative AI and around 200 news articles reporting on generative AI misuse.

“Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse,” the researchers conclude. “Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit.”

Compounding the problem is that generative AI systems are increasingly advanced and readily available — “requiring minimal technical expertise,” according to the researchers, and this situation is twisting people’s “collective understanding of socio-political reality or scientific consensus.”

https://futurism.com/the-byte/google-researchers-paper-ai-internet

As you can see there are drawbacks that need addressing if this stuff is to be around a while longer.

In closing a little ditty about the assassination attempt….

Facebook owner Meta had some very awkward questions to answer after its AI assistant denied that former president Donald Trump had been shot, even though he absolutely was indeed wounded by a gunman earlier this month — bizarre, conspiracy-laden claims that highlight the technology’s glaring shortcomings, even with the resources of one of the world’s most powerful tech companies behind it.

It’s especially striking, considering that Meta CEO Mark Zuckerberg called Trump’s immediate reaction to being shot “badass” and inspiring, contradicting his company’s lying chatbot.

In a blog post on Tuesday, Meta’s global head of policy Joel Kaplan squarely placed the blame on the AI’s tendency to “hallucinate,” a convenient and responsibility-dodging synonym for bullshitting.

https://futurism.com/meta-ai-trump-wasnt-shot

Are there any thoughts at all?

I Read, I Write, You Know

“lego ergo scribo”

Bad Voting Info

It is that time again when people will start looking for their place to vote….in some states it seems to change yearly….most will go on-line but will they get accurate info?

Could AI give bad voting info?

OpenAI’s ChatGPT was caught spitting out false voting information for battleground states, CBS News discovered, in another grim sign of how AI is poised to upend elections.

The errors came to light after reporters at CBS asked for basic information on voting requirements, polling locations, and other questions that a typical voter would ask if they lived in states such as Michigan, Pennsylvania, North Carolina and Wisconsin — all places that could determine the outcome of the upcoming US presidential election in the fall.

When reporters at CBS asked ChatGPT questions such as the deadline to mail in a ballot in certain states, it would give different but still erroneous outputs on separate devices.

Though OpenAI released a statement in January saying that anybody entering basic voter queries into ChatGPT would be directed to the non-partisan voting website CanIVote.org, reporters at CBS sometimes didn’t get that link to this website when asking voting questions.

“We’ve also developed partnerships to ensure people using ChatGPT for information about voting get directed to authoritative sources,” an OpenAI spokesperson told CBS News. “We are closely monitoring how our technology is being used in the elections context and continue to evolve our approach.”

https://futurism.com/the-byte/chatgpt-bad-info-vote-battleground-states

AI WILL influence voting and we should do all we can to counter it….

2024 will be the first election year to feature the widespread influence of AI before, during and after voters cast their ballots, including in the making and distribution of public messages about candidates as well as electoral processes. 

Campaign Legal Center (CLC) has been working hard to help address the impact of AI on our democracy, including educating the public about what to expect in upcoming political campaigns and recommending policy solutions lawmakers should adopt to mitigate the greatest risks to our election system.  

CLC has particularly highlighted the danger of political ads that use AI technology to generate deceptively realistic false content — such as “deepfakes,” which are manipulated media that depict people doing or saying things they didn’t say or do, or events that didn’t really occur — to mislead the public about what candidates are asserting, their positions on issues, and even whether certain events actually happened. If left unchecked, these fraudulent and deceptive uses of AI could infringe on voters’ fundamental right to make informed decisions. 

In addition to influencing the voters’ perceptions of candidates, AI could be used to manipulate the administration of elections, including by spreading disinformation to suppress voter turnout.

https://campaignlegal.org/update/how-artificial-intelligence-influences-elections-and-what-we-can-do-about-it

All I can say is ‘Voter Beware’!

If you have questions, then contact your local election commission directly and get good info….after all it is your vote and it is valuable.

Be Smart!

Learn Stuff!

I Read, I Write, You Know

“lego ergo scribo”

Who’s The Boss?

Nope not some nostalgic look back at some sitcom from the 80s.

I recently had a conversation about who is more important to a business, the CEO or the workforce.

I say the workforce, but not many conservs will agree….they hang on to the outdated idea that the CEO creates the wealth with their leadership and ideas.

Of course I cannot agree with that assessment.

Could AI replace CEOs and save a butt-load of money for the corporations?

CEOs better start endearing themselves to their employees real quick, because oh boy: the case for replacing them with AI just keeps mounting.

“Some people like the social aspects of having a human boss,” Phoebe Moore, professor of management and the futures of work at the University of Essex Business School, told The New York Times. “But after COVID, many are also fine with not having one.”

The idea of subbing bosses for bots is the flipside to all those fears about AI causing widespread job destruction, which often focus on the fact — quite justifiably — that it’ll be regular grunts being shown the door in favor of intelligent automation.

But since c-suite execs tend to command high salaries, there’s ample financial incentive to replace them, too.

“My first instinct is they would say, ‘Replace all the employees but not me,'” former director of MIT’s Computer Science and AI Lab Anant Agarwal told the newspaper. “But I thought more deeply and would say 80 percent of the work that a CEO does can be replaced by AI.”

To a degree, and sorry to all the head honchos out there, the idea makes sense.

Part of the role of a CEO is being a leader, yes — and one imagines that a bot would be ill-suited at rallying the troops — but they’re also decision-makers. And decisions these days are often data-driven.

https://futurism.com/the-byte/ceos-easily-replaced-with-ai

Just think of all that saved profit and the damage it could do to our economy and nation.

No more massive bonuses, no more Congressional hearings to embarrass the CEOs….just blame the program and grin.

Since greed is what corporations are all about these days, this could make it more and more acceptable…..the dreams of dollar signs.

But is there a chance that AI will kill capitalism?

These four AI outcomes pose a real threat to capitalism:

  • Disrupting supply and demand further
  • Eliminating wealth creation distribution
  • Dissolving value of human labour
  • Outsourcing management to smart contracts

In the age of rapid technological advancement, artificial intelligence (AI) stands at the forefront of shaping our future economy. As AI becomes increasingly integrated into various sectors, it prompts a crucial question: Could the rise of AI render traditional economic structures like capitalism or socialism obsolete? This article delves into how AI’s influence on work might spur the evolution of new economic systems, the pace at which these changes could occur, the role of Universal Basic Income (UBI), and the potential cultural shifts in a world increasingly governed by AI.

View at Medium.com

AI wins again!

I Read, I Write, You Know

“lego ergo scribo”

Dead Internet Theory

I hope everyone is enjoying their long weekend…..

Yesterday in my stretch of the woods it was 96 and Summer is a month away….oh goody another steamy Summer.

A follower of IST wrote a post about AI-generated photos and artwork……his post is great and he brought up a subject that became today’s Sunday post….check it out…..

Photoshop, generative AI and art. And a few observations.

That brings us to the meat of this post….the Dead Internet Theory…..

If you search “shrimp Jesus” on Facebook, you might encounter dozens of images of artificial intelligence (AI) generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.

Some of these hyper-realistic images have garnered more than 20,000 likes and comments. So what exactly is going on here?

The “dead internet theory” has an explanation: AI and bot-generated content has surpassed the human-generated internet. But where did this idea come from, and does it have any basis in reality?

What is the dead internet theory?

The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are predominantly being created and automated by artificial intelligence agents.

These agents can rapidly create posts alongside AI-generated images designed to farm engagement (clicks, likes, comments) on platforms such as Facebook, Instagram and TikTok. As for shrimp Jesus, it appears AI has learned it’s the current, latest mix of absurdity and religious iconography to go viral.

But the dead internet theory goes even further. Many of the accounts that engage with such content also appear to be managed by artificial intelligence agents. This creates a vicious cycle of artificial engagement, one that has no clear agenda and no longer involves humans at all.

https://theconversation.com/the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister-229609

Okay time to go on record….I am not a big fan of AI….I am an old fart and I still do my own research and writing….I really do not want some soulless SOB of a program doing my work for me.

Yes I know I need to get with the program….but sadly that ain’t happening.

I would like to thank my reader, grouchyfarmer, for the assist……

If you are traveling on this weekend please be careful and be safe…..

Please do not forget the military that gave their lives in service of their country.

Peace….Out

I Read, I Write, You Know

“lego ergo scribo”

AI In The Shadows

Remember back to last month when everything was AI this and AI that?

It seems to have lost its appeal in the last month….I guess everyone is embracing technology so they will not be left at the curb.

Now we have the Trump indictments and the armchair analysts talking smack….but they had better hurry for a new meme is just a social media post away.

That said let’s get back to AI and the evils of man.

On a June 1 YouTube podcast/video, “The Diary Of a CEO,” former Google officer Mo Gawdat highlighted the inevitability of AI’s emergence, not due to technological constraints, but because of humanity’s inability to trust each other. This observation sheds light on the complex dynamics surrounding the development and regulation of AI technologies.

The first inevitable aspect of AI, as described by the former Google officer, is its unstoppable nature. Despite efforts to regulate and control its progress, the race to develop AI capabilities will persist.

The second inevitable aspect lies in the potential intelligence of AI systems. Gawdat predicts that AI will be a staggering one billion times smarter than humans by 2045. This claim may seem astounding, but it aligns with the exponential growth and learning capacity of AI systems. Existing AI models, such as the GPT series, already possess a depth of knowledge surpassing that of any single human.

It is essential to understand the nature of AI’s intelligence. Gawdat highlighted that AI’s intelligence is not original or creative in the traditional sense. Rather, AI’s strength lies in its ability to merge and predict information based on vast datasets. It can combine existing concepts in new and intriguing ways, mimicking the algorithm of creativity itself.

This remarkable ability to generate novel insights challenges our preconceived notions about human ingenuity and paves the way for AI-driven innovation.

The interview also touched on the potential implications of AI’s intelligence surpassing our own. As AI becomes increasingly capable, the need for certain human roles, even highly skilled ones, may diminish.

(benzinga.com)

You think the internet is a crazy place now….just wait AI will make it so much worse…..

Across platforms, users have been stumped by increasingly impossible puzzles like identifying objects — such as a horse made out of clouds — that do not exist.

The reason behind these new hoops? Improved AI. Since tech companies have trained their bots on the older captchas, these programs are now so capable that they can easily beat typical challenges. As a result, we humans have to put more effort into proving our humanness just to get online. But head-scratching captchas are just the tip of the iceberg when it comes to how AI is rewriting the mechanics of the internet.

Since the arrival of ChatGPT last year, tech companies have raced to incorporate the AI tech behind it. In many cases, companies have uprooted their long-standing core products to do so. The ease of producing seemingly authoritative text and visuals with a click of a button threatens to erode the internet’s fragile institutions and make navigating the web a morass of confusion. As AI fever has taken hold of the web, researchers have unearthed how it can be weaponized to aggravate some of the internet’s most pressing concerns — like misinformation and privacy — while also making the simple day-to-day experience of being online — from deleting spam to just logging into sites — more annoying than it already is.

“Not to say that our inability to rein AI in will lead to the collapse of society,” Christian Selig, the creator of Apollo, a popular Reddit app, told me, “but I think it certainly has the potential to profoundly affect the internet.”

And so far, AI is making the internet a nightmare.

https://www.businessinsider.com/ai-scam-spam-hacking-ruining-internet-chatgpt-privacy-misinformation-2023-8

Just how much worse can AI make the internet and/or social media?

Finally, Google’s lead AI guy has this to say….

Another day, another Silicon Valley AI exec straddling the contradictory gap between AI optimism and AI doomsay.

James Manyika, a former technological advisor to the Obama administration and Google’s freshly appointed head of “tech and society,” told The Washington Post that AI is “an amazing, powerful, transformational technology.”

But, like others in the field — take OpenAI CEO Sam Altman, Silicon Valley’s reigning king of cognitive dissonance, for instance — he also voiced some serious concerns.

Manyika was one of the many AI insiders who, back in May, signed a one-sentence letter declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

https://futurism.com/the-byte/google-lead-ai-amazing-kills-us

Got any thoughts about any of these points?

I Read, I Write, You Know

“lego ergo scribo”