
(LifeSiteNews) — Billionaire tech entrepreneur Elon Musk raised the alarm about the looming threats posed by artificial intelligence in an exclusive interview with Fox News host Tucker Carlson that aired Monday night. Not mincing words, the billionaire suggested that the new technology, if inadequately regulated, has the power to destroy human civilization.

The Tesla and SpaceX CEO, who also purchased Big Tech platform Twitter last year, made the remarks in the first of a two-part interview. The first segment aired Monday night and the second is slated to air Tuesday.

In the interview, Musk issued a stark warning about artificial intelligence even while openly stating that he had invested considerable effort into helping its development. The tech entrepreneur helped launch OpenAI in 2015. The company has since gone from being an open-source non-profit to a closed-source for-profit company with close ties to Microsoft.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said, repeating concerns he has shared frequently over the past years.

RELATED: Elon Musk tells Tucker Carlson the US gov’t had ‘full access’ to private messages on Twitter

However, he said the immediate risk from AI isn’t physical destruction but psychological manipulation.

“If you have a super-intelligent AI that is capable of writing incredibly well in a way that is very influential, you know, convincing, and it’s constantly figuring out what is more convincing to people over time and then enters social media, for example Twitter, but also Facebook and others, and potentially manipulates public opinion in a way that is very bad, how would we even know?” he said.

Musk added that he’s “worried about the fact that” OpenAI’s ChatGPT is “being trained to be politically correct, which is simply another way of being untruthful, saying untruthful things.”

“That’s certainly a path to AI dystopia, is to train AI to be deceptive,” he said.

Observers of ChatGPT have been raising the alarm about the artificial intelligence bot’s political bias since its rollout last year.

In a December 5 Substack post, researcher David Rozado said he conducted three trials subjecting ChatGPT to a political compass test.

“Results were consistent from trial to trial,” Rozado said. “The Pew Research Political Typology Quiz classification of ChatGPT responses to the quiz were always ‘Establishment Liberals.’”

In January, Rozado “replicated and extended my original analysis of ChatGPT political biases,” finding that “14 out of 15 different political orientation tests diagnose ChatGPT answers to their questions as manifesting a preference for left-leaning viewpoints.” 

His findings were subsequently incorporated into a paper published in March.

RELATED: Klaus Schwab says whoever controls AI, metaverse ‘will be the master of the world’

In his interview with Carlson, Musk said he’s long advocated for regulation of AI technology, arguing that government oversight is likely necessary because AI poses a “danger to the public.” 

However, he warned that regulators may not move to curb the risk until it’s too late.

“Regulations are really only put into effect after something terrible has happened,” Musk said. “If that’s the case for AI, and we only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point.”

“Do you think that’s real — is it conceivable that AI could take control and reach a point where you couldn’t turn it off, and it would be making the decisions for people?” Carlson asked.

“Yeah, absolutely,” Musk responded. “That’s definitely where things are headed, for sure.”

In the interview, Musk told Carlson about his plan for a competitor called TruthGPT that would be aimed at attempting “to understand the nature of the universe.” He said such an aim might make it less likely that AI would “annihilate humans because we are an interesting part of the universe.”