(LifeSiteNews) — In a Monday night interview with Fox News’ Tucker Carlson, billionaire tech mogul Elon Musk said he was labeled a “speciesist,” a pejorative he gladly accepted, after he responded to a Google co-founder’s ambition to create a “digital superintelligence,” that is, a “digital god,” by expressing concern about the risks it posed to humanity.
In the exclusive sit-down interview with Carlson that aired Monday at 8:00 p.m., Musk said he had previously been “close friends” with Google co-founder Larry Page and had numerous conversations with him about artificial intelligence safety.
The tech entrepreneur, who said he “unfortunately” had “played a critical role in creating” OpenAI, which has since launched the artificial intelligence chatbot ChatGPT, said he feared Page “wasn’t taking AI safety seriously enough.”
According to Musk, Page wanted to create a type of “digital superintelligence. Basically, ‘digital god,’ if you will, as soon as possible.” Musk added that Page has “made many public statements over the years that the whole goal of Google is what’s called AGI, Artificial General Intelligence, or artificial superintelligence.”
As referenced by Elon, here’s a clip of Larry Page saying Google’s endgame is AGI https://t.co/26bdxHzITM
— Alt Man Sam (@mezaoptimizer) April 18, 2023
In his conversation with Carlson, Musk recounted an incident in which Page called him a “speciesist” for raising his concerns about making sure “humanity’s okay here” in the face of superintelligent technological systems that could easily outsmart humans.
“Did he use that term?” Carlson asked, laughing.
“Yes, and there were witnesses, I wasn’t the only one there when he called me a ‘speciesist,’” Musk responded. “I was like, ‘okay, that’s it. Yes, I’m a speciesist, okay? You got me. What are you? Yeah, I’m fully a speciesist. Busted.’”
Musk told Carlson that OpenAI has gone from being an open-source non-profit to a closed-source for-profit company that’s become “closely allied with Microsoft.” Meanwhile, he pointed out that Google has acquired a separate AI company known as DeepMind.
“In effect, Microsoft has a very strong say in, if not directly controls, OpenAI at this point, so you really have an OpenAI-Microsoft situation, and then Google DeepMind are the two sort of heavyweights in this area,” Musk said.
The tech entrepreneur said he plans to launch an alternative to OpenAI and DeepMind that would be “pro-human.”
“I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe,” he said, adding that he believes this “might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.”
Musk has warned about the civilizational threats of AI for years, comparing it to the development of nuclear weapons. In a 2018 interview, he said the “danger of AI is much greater than the danger of nuclear warheads by a lot.”
“Mark my words: AI is far more dangerous than nukes, by far,” he said. “So why do we have no regulatory oversight? This is insane.”
Musk, who also founded brain-chip company Neuralink, has argued that developing technology to merge the human mind with computers may be necessary to compete with artificial intelligence, a theory that’s given rise to serious concerns centering on ethics and personal liberty.
“I cannot imagine a scenario in which there would not be an endless number of governments, advertisers, insurers and marketing folks looking to tap into the very biological core of our cognition to use it as a means of thwarting evildoers and selling you stuff,” Christopher Markou of the University of Cambridge wrote in 2017.
“[W]hat if the tech normalizes to such a point that it becomes mandatory for future generations to have a whole-brain implant at birth to combat illegal or immoral behavior (however defined)?” Markou asked.