It seems the new internet trend taking the place of the crypto craze is AI, on which many words have been typed.
So I shall jump on this bandwagon because I missed the ‘New Coke’ thing (that is humor, y’all)…..
From around the ‘Net on AI….
BuzzFeed is now using AI…..
The struggling media company BuzzFeed told investors this week that its readers spend 40 percent more time with its AI-facilitated quizzes than traditional ones, Bloomberg reports.
While we have yet to see a more detailed breakdown of the numbers — the company would obviously be incentivized to present the stats in as flattering a way as possible — it is interesting to see it doubling down on AI after shutting down its entire Pulitzer-winning news division last month, laying off around 120 of its 1,200 total employees.
Earlier this year, BuzzFeed announced it would be letting human employees create quizzes that made use of AI chatbots — which, to be fair, was kind of a fun idea.
Despite those early promises, it soon turned out that BuzzFeed was using AI to generate more than just quizzes. Dozens of SEO-driven travel guides started appearing on the site that made heavy use of hackneyed writing and repeated phrases. In a statement to Futurism at the time, BuzzFeed said it was “continuing to experiment with AI to ‘enhance human creativity,’” and “trying new formats that allow anyone (with or without a formal background in writing or content creation) to contribute their ideas and unique perspectives on our site.”
Is there a rudimentary conscience?
With AI chatbots’ propensity for making things up and spewing bigoted garbage, one firm founded by ex-OpenAI researchers has a different approach — teaching AI to have a conscience.
As Wired reports, the OpenAI competitor Anthropic’s intriguing chatbot Claude is built with what its makers call a “constitution,” or set of rules that draws from the Universal Declaration of Human Rights and elsewhere to ensure that the bot is not only powerful, but ethical as well.
Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
Will it actually work in practice? It’s tough to say. After all, OpenAI’s ChatGPT tries to steer away from unethical prompts as well, with mixed results. But since the misuse of chatbots is a huge question hovering over the nascent AI industry, it’s certainly interesting to see a company confronting the issue head-on.
Only the tip of the iceberg….
Books almost entirely generated by AI are flooding Amazon’s marketplace, The Washington Post reports, a trend that’s turning out to be a huge headache for human authors.
It’s a growing problem, making it more difficult to distinguish real authors from the AI-generated bylines of non-existent writers.
One publisher identified by the WaPo lists dozens of books on Amazon on surprisingly niche topics, with suspicious five-star reviews propping up the operation.
And AI-generated books on Amazon are only the tip of the iceberg, with other AI content flooding the rest of the internet with dubiously sourced material as well, which could easily trigger a pandemic of misinformation.
These are just a few of the articles I found this morning….I am sure there are many more….I will try to keep my readers updated as much as possible…..
I Read, I Write, You Know
“lego ergo scribo”