
WASHINGTON, D.C. (LifeSiteNews) — The CEO of a major artificial intelligence (AI) company told lawmakers during a U.S. Senate hearing Tuesday that he is worried his field could cause “significant harm to the world” through unregulated development of AI systems.

During the hearing, the AI executive addressed questions concerning the impact of the rapidly emerging technology on jobs and its potential to wreak havoc during an election cycle.

“We have tried to be very clear about the magnitude of the risks here,” said Sam Altman, CEO of OpenAI, the artificial intelligence corporation that spawned the well-known predictive text system ChatGPT. “My worst fears are that we cause significant — we, the field, the technology industry — cause significant harm to the world. I think that could happen in a lot of different ways.”

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” he said.

Billionaire tech mogul Elon Musk, who helped found OpenAI in 2015 and has frequently warned about the potential for harm from unregulated AI, responded to Altman’s admonitions by tweeting simply: “Accurate.”

During the Tuesday hearing, Democratic U.S. Sen. Richard Blumenthal of Connecticut, who said his “nightmare” is that AI could have a destructive impact on jobs, asked Altman to weigh in on a 2015 blog post in which Altman argued that the “[d]evelopment of superhuman machine intelligence … is probably the greatest threat to the continued existence of humanity.”

“Like with all technological revolutions, I expect there to be significant impact on jobs,” Altman responded. He acknowledged that GPT-4, which launched March 14, “will, I think, entirely automate away some jobs,” though he posited it will also “create new ones we believe will be much better.”

READ: The simple reason why AI will never be smarter than human beings

But senators weren’t just concerned about the potential impact of AI on jobs.

Republican U.S. Sen. Josh Hawley of Missouri cited an April MIT paper that found that artificial intelligence systems were capable of predicting public opinion, and asked Altman whether he believed such technology could have an outsized impact on voters in presidential elections.

“It’s one of my areas of greatest concern,” Altman responded, going on to flag the “more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.”

Ahead of the upcoming 2024 presidential contest, Altman recommended that government regulations be implemented to ensure that people “know if they’re talking to an AI,” or whether “content they’re looking at might be generated or might not.”

Another witness who took part in the Tuesday hearing, New York University professor emeritus of psychology and neural science Gary Marcus, echoed Altman’s concerns, noting that Hawley’s example of AI’s ability to predict human behavior is just the tip of the iceberg.

For Marcus, the more disturbing possibility is that AI could actually “manipulate” people’s political beliefs rather than simply predict them.

“One of the things that I’m most concerned about with GPT-4 is that we don’t know what it’s trained on,” Marcus said. “What it is trained on has consequences for essentially the biases of the system.”

The concerns expressed by Altman and Marcus mirror similar worries previously and repeatedly expressed by Musk, who has warned AI could pose an existential threat to civilization.

“If you have a super-intelligent AI that is capable of writing incredibly well in a way that is very influential, you know, convincing … and potentially manipulates public opinion in a way that is very bad, how would we even know?” Musk said in an April interview with then-Fox News host Tucker Carlson.

Musk added that he’s “worried about the fact that” OpenAI’s ChatGPT is “being trained to be politically correct, which is simply another way of being untruthful, saying untruthful things.”

“That’s certainly a path to AI dystopia,” he said.

READ: Elon Musk warns of AI’s power to cause ‘civilizational destruction’ in interview with Tucker Carlson

Other observers of artificial intelligence have been raising similar alarms about its potential to cause significant harm since ChatGPT’s public rollout late last year.

This month, AI pioneer Dr. Geoffrey Hinton told The New York Times he felt it would be “hard to see how you can prevent the bad actors from using it [AI] for bad things.”

Critics have also pointed out that the predictive text system exhibits a political bias, something Altman has previously acknowledged. In a December 5 Substack post, researcher David Rozado said he subjected ChatGPT to a political compass test in three trials and found that its responses consistently aligned with an “Establishment Liberal” viewpoint.

During the Tuesday hearing, Altman repeatedly told senators he believes government regulation will be necessary to rein in “increasingly powerful models,” and that OpenAI as a corporation wants “to work with the government to prevent” serious harm from occurring.

In his comments to Carlson, however, Musk worried it might already be too late to impose meaningful restrictions on AI.

“Regulations are really only put into effect after something terrible has happened,” Musk said. “If that’s the case for AI, and we only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point.”

“Do you think that’s real, is it conceivable that AI could take control and reach a point where you couldn’t turn it off and it would be making the decisions for people?” Carlson asked.

“Yeah, absolutely,” Musk responded. “That’s definitely where things are headed, for sure.”

Democratic U.S. Sen. Dick Durbin of Illinois said Tuesday’s hearing was “historic” and that he couldn’t recall another time when corporations urgently called on the government “to regulate them.”
