Remember back to last month, when everything was AI this and AI that?
The buzz seems to have faded since then… I guess everyone is embracing the technology so they will not be left at the curb.
Now we have the Trump indictments and the armchair analysts talking smack… but they had better hurry, for a new meme is just a social media post away.
That said, let's get back to AI and the evils of man.
In a June 1 episode of the YouTube podcast "The Diary Of a CEO," Mo Gawdat, formerly chief business officer of Google X, argued that AI's emergence is inevitable, not due to technological constraints, but because of humanity's inability to trust one another. That observation sheds light on the tangled dynamics surrounding the development and regulation of AI technologies.
The first inevitability, as Gawdat describes it, is AI's unstoppable nature: despite efforts to regulate and control its progress, the race to develop AI capabilities will persist.
The second inevitable aspect lies in the potential intelligence of AI systems. Gawdat predicts that AI will be a staggering one billion times smarter than humans by 2045. This claim may seem astounding, but it aligns with the exponential growth and learning capacity of AI systems. Existing AI models, such as the GPT series, already possess a depth of knowledge surpassing that of any single human.
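Gawdat's "one billion times smarter" figure is easier to digest as compound doubling. A minimal back-of-the-envelope sketch (my arithmetic, not his) shows that a billion-fold gap works out to roughly thirty doublings, so the claim is consistent with AI capability doubling about once a year for three decades:

```python
# Back-of-the-envelope check (an illustrative assumption, not Gawdat's math):
# how many doublings does it take to reach a billion-fold improvement?
factor = 1_000_000_000  # "one billion times smarter"
years = 0
capability = 1.0
while capability < factor:
    capability *= 2  # assume capability doubles once per year
    years += 1
print(years)  # 30 doublings reach ~1.07 billion
```

Whether AI capability actually doubles yearly is, of course, the contested part; the arithmetic only shows the claim is not as exotic as it first sounds.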
It is essential to understand the nature of AI's intelligence. Gawdat stressed that AI's intelligence is not original or creative in the traditional sense. Rather, its strength lies in merging and predicting information drawn from vast datasets; it can combine existing concepts in new and intriguing ways, mimicking the algorithm of creativity itself.
This remarkable ability to generate novel insights challenges our preconceived notions about human ingenuity and paves the way for AI-driven innovation.
The interview also touched on the potential implications of AI’s intelligence surpassing our own. As AI becomes increasingly capable, the need for certain human roles, even highly skilled ones, may diminish.
(benzinga.com)
You think the internet is a crazy place now… just wait: AI will make it so much worse.
Across platforms, users have been stumped by increasingly impossible captcha puzzles, like identifying objects that do not exist, such as a horse made out of clouds.
The reason behind these new hoops? Improved AI. Since tech companies have trained their bots on the older captchas, these programs are now so capable that they can easily beat typical challenges. As a result, we humans have to put more effort into proving our humanness just to get online. But head-scratching captchas are just the tip of the iceberg when it comes to how AI is rewriting the mechanics of the internet.
Since the arrival of ChatGPT last year, tech companies have raced to incorporate the AI tech behind it. In many cases, companies have uprooted their long-standing core products to do so. The ease of producing seemingly authoritative text and visuals with a click of a button threatens to erode the internet’s fragile institutions and make navigating the web a morass of confusion. As AI fever has taken hold of the web, researchers have unearthed how it can be weaponized to aggravate some of the internet’s most pressing concerns — like misinformation and privacy — while also making the simple day-to-day experience of being online — from deleting spam to just logging into sites — more annoying than it already is.
“Not to say that our inability to rein AI in will lead to the collapse of society,” Christian Selig, the creator of Apollo, a popular Reddit app, told me, “but I think it certainly has the potential to profoundly affect the internet.”
And so far, AI is making the internet a nightmare.
Just how much worse can AI make the internet and/or social media?
Finally, Google’s lead AI guy has this to say….
Another day, another Silicon Valley AI exec straddling the contradictory gap between AI optimism and AI doomsaying.
James Manyika, a former technology advisor to the Obama administration and Google's freshly appointed head of "tech and society," told The Washington Post that AI is "an amazing, powerful, transformational technology."
But, like others in the field — take OpenAI CEO Sam Altman, Silicon Valley’s reigning king of cognitive dissonance, for instance — he also voiced some serious concerns.
Manyika was one of the many AI insiders who, back in May, signed a one-sentence letter declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
https://futurism.com/the-byte/google-lead-ai-amazing-kills-us
Got any thoughts about any of these points?
I Read, I Write, You Know
“lego ergo scribo”