A lot has been written in recent months about AI…some of it positive and some of it negative.
I have noticed that in the past month or so posting on AI has declined to the point that it is barely mentioned…so does that mean it has become acceptable?
Although I am not AI’s biggest fan, I still read about what is going on with the social monster.
One of the most popular platforms is ChatGPT…so what is going on with it?
If you think AI platforms like OpenAI’s ChatGPT seem dumber than before, you aren’t alone.
In a blistering opinion column for Computerworld, writer Steven Vaughan-Nichols says he’s noticed that all the major publicly accessible AI models — think brand-name flagships like ChatGPT and Claude — don’t work as well as previous versions.
“Indeed, all too often, the end result is annoying and obnoxiously wrong,” he writes. “Worse still, it’s erratically wrong. If I could count on its answers being mediocre, but reasonably accurate, I could work around it. I can’t.”
In a Business Insider article that he flagged, users posting to the OpenAI developer forum had also noticed a significant decline in accuracy after the latest version of GPT was released last year.
“After all the hype for me, it was kind of a big disappointment,” one user wrote in June this year.
https://futurism.com/the-byte/ai-dumber
With that said….is AI slowly killing itself?
AI-generated text and imagery are flooding the web, a trend that, ironically, could be a huge problem for generative AI models.
As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes the models to erode. In short, training on AI content produces a flattening cycle similar to inbreeding; the AI researcher Jathan Sadowski last year dubbed the phenomenon “Habsburg AI,” a reference to Europe’s famously inbred royal family.
And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.
AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web in order to train the ravenous programs. As it stands, though, neither AI companies nor their users are required to put AI disclosures or watermarks on the AI content they generate — making it that much harder for AI makers to keep synthetic content out of AI training sets.
“The web is becoming increasingly a dangerous place to look for your data,” Rice University graduate student Sina Alemohammad, who coauthored a 2023 paper that coined the term “MAD” — short for “Model Autophagy Disorder” — to describe the effects of AI self-consumption, told the NYT.
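The “Habsburg AI” / MAD effect described above can be seen in a toy simulation. This is my own sketch, not code from the cited research, and the numbers (vocabulary size, samples per generation) are arbitrary; it just illustrates the flattening cycle: a model fit repeatedly on its own output loses diversity, because anything the model fails to sample once is gone for good.

```python
import random
from collections import Counter

random.seed(0)

def train_on_own_output(vocab_size=20, samples_per_gen=30, generations=50):
    """Toy "model autophagy": a categorical distribution refit each
    generation on a finite sample drawn from itself."""
    # Generation 0: uniform over the whole vocabulary (diverse "human" data).
    probs = {w: 1.0 / vocab_size for w in range(vocab_size)}
    support_sizes = [len(probs)]
    for _ in range(generations):
        words, weights = zip(*probs.items())
        # The "model" generates a finite corpus of synthetic data...
        draws = random.choices(words, weights=weights, k=samples_per_gen)
        # ...and the next model is fit on that corpus alone.
        counts = Counter(draws)
        probs = {w: c / samples_per_gen for w, c in counts.items()}
        support_sizes.append(len(probs))
    return support_sizes

sizes = train_on_own_output()
print("distinct tokens, generation 0 vs final:", sizes[0], "->", sizes[-1])
```

The key property is that the support can only shrink: a token that is never sampled in one generation has probability zero in the next and can never come back. Real model collapse is more subtle than this categorical toy, but the one-way loss of the distribution’s tails is the same mechanism the MAD paper describes.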
https://futurism.com/ai-slowly-killing-itself
Since I am not lazy enough to use AI, I ask: does anyone here have any thoughts on these two reports?
Let us know.
I Read, I Write, You Know
“lego ergo scribo”
I think as more people try it, the less they’ll fear it.
I agree, because basically Americans are lazy and it will do their work for them. chuq
It is either the wave of the future or a bust and burnout– we shall see….
It is the future….people are too goddamn lazy for it not to be used. chuq
The lack of news coverage is not really an indication of people becoming more accepting of it. I think there are a couple of things going on.
For years now the news media has been driven by algorithms, not by what’s important for us to know. At least with the large media organizations. Everything in mainstream media is about getting eyeballs on tv or computer screens because the more eyes on their content, the more money they make. And they believe that they need to stay “fresh”. If they keep saturating the airwaves or internet with the same stories day after day they’re afraid people will get bored and start tuning them out. So they’re constantly moving on to TNBT (The Next Big Thing). Basically reporting on AI has become, oh, stale. People are used to it now and aren’t following it as closely as they did, traffic to stories about AI has shrunk, so the media stops covering it, moving on to something they think will attract attention.
The media also has another problem in that they don’t really know what to write or say about it any more. There isn’t anything new to report, basically.
As for comments about AI somehow becoming worse, that is, less capable than it was: that’s actually true. It is. There are two things going on there.
First is that AI needs to be trained, and the only way to train it is to feed it massive amounts of information. Up until recently, AI developers were shoveling everything into these AIs with total disregard for things like legality, copyright, etc. Needless to say, real writers, artists, photographers, etc. were not pleased to have their work basically stolen to train AIs. Several lawsuits have now gone through the courts and come down on the side of the human writers and artists. So AI developers now have to be much more cautious about what they train these things with, and even have to pay for the material they use for training. Various technical countermeasures are also being tried to “poison the well,” so to speak: inserting fake data into the material, isolating data sources from AIs, etc.
The second thing is that after seeing how badly these things can be abused, the developers have been forced to build restrictions into their models to prevent them from being used for illicit purposes, such as generating CSAM or images that put the faces of real people into bizarre or even criminal situations. This has had some unintended consequences, especially where AI developers have gone totally overboard with these precautions: the restrictions on the output are so strict that they hamper legitimate uses.
The media is driven by ad revenue…so Taylor Swift gets more ink than, say, 40,000 dead in Gaza…so sad.
AI is still the anti-Christ for me….I think it makes people lazy and I am old and want my mind to work as long as it can….but that is just me. chuq
Oh, I agree. AI could be very useful, but people being people, there is a 100% chance that it will be used in ways that are harmful to us. I love being able to do things in Photoshop like circle an ugly power pole in an otherwise lovely photo and just type “remove pole” and it’s done, rather than having to spend time trying to do it the old-fashioned way. But I wonder if the potential harm is going to be worth it.
AI has its uses; I just do not need it…others will use it…chuq