A lot has been written in recent months about AI….some of it positive and some negative.
I have noticed that in the past month or so, posting about AI has declined to the point that it is barely mentioned….so does that mean it has become acceptable?
Although I am not AI’s biggest fan, I still read up on what is going on with the social monster.
One of the most popular platforms is ChatGPT….so what’s going on with it?
If you think AI platforms like OpenAI’s ChatGPT seem dumber than before, you aren’t alone.
In a blistering opinion column for Computerworld, writer Steven Vaughan-Nichols says he’s noticed that all the major publicly accessible AI models — think brand-name flagships like ChatGPT and Claude — don’t work as well as their previous versions.
“Indeed, all too often, the end result is annoying and obnoxiously wrong,” he writes. “Worse still, it’s erratically wrong. If I could count on its answers being mediocre, but reasonably accurate, I could work around it. I can’t.”
In a Business Insider article that he flagged, users posting to the OpenAI developer forum had also noticed a significant decline in accuracy after the latest version of GPT was released last year.
“After all the hype for me, it was kind of a big disappointment,” one user wrote in June this year.
https://futurism.com/the-byte/ai-dumber
With that said….is AI slowly killing itself?
AI-generated text and imagery are flooding the web — a trend that, ironically, could be a huge problem for generative AI models.
As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes those models to erode. In short, training on AI content produces a flattening cycle similar to inbreeding; the AI researcher Jathan Sadowski last year dubbed the phenomenon “Habsburg AI,” a reference to Europe’s famously inbred royal family.
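If you want to see that flattening cycle in miniature, here is a toy Python sketch. To be clear, this is my own construction, not code from any of the cited research, and the vocabulary, corpus size, and generation counts are made up purely for illustration. Each generation, the “model” simply learns the token frequencies of its corpus, and the next corpus is then sampled entirely from that model…training on its own output, in other words:

import random
from collections import Counter

random.seed(0)

VOCAB = ["tok%d" % i for i in range(20)]  # 20 distinct "tokens"
CORPUS_SIZE = 200

# Generation 0: a "human-written" corpus in which every token appears.
corpus = [random.choice(VOCAB) for _ in range(CORPUS_SIZE)]

for generation in range(2001):
    counts = Counter(corpus)
    if generation % 400 == 0:
        print("gen %4d: %2d distinct tokens survive" % (generation, len(counts)))
    # "Train" on the corpus: the model's distribution is just the empirical
    # token frequencies. Then generate the next corpus purely from the model.
    tokens = list(counts.keys())
    weights = list(counts.values())
    corpus = random.choices(tokens, weights=weights, k=CORPUS_SIZE)

The point of the toy: once a rare token fails to show up in one generation’s output, no later generation can ever produce it again, so the count of surviving tokens only ratchets downward. That one-way loss of the rare stuff is a cartoon version of the erosion the “Habsburg AI” researchers describe in real models.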
And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.
AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web in order to train the ravenous programs. As it stands, though, neither AI companies nor their users are required to put AI disclosures or watermarks on the AI content they generate — making it that much harder for AI makers to keep synthetic content out of AI training sets.
“The web is becoming increasingly a dangerous place to look for your data,” Rice University graduate student Sina Alemohammad, who coauthored a 2023 paper that coined the term “MAD” — short for “Model Autophagy Disorder” — to describe the effects of AI self-consumption, told the NYT.
https://futurism.com/ai-slowly-killing-itself
Since I am not lazy enough to use AI, I will ask: does anyone here have any thoughts on these two reports?
Let us know.
I Read, I Write, You Know
“lego ergo scribo”