
(LifeSiteNews) — The “Godfather” of Artificial Intelligence (AI) warned about the potential dangers of the technology after resigning from his position at Google.

Dr. Geoffrey Hinton, a pioneer in the AI field, said in a New York Times interview that “It is hard to see how you can prevent the bad actors from using it [AI] for bad things.” 

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” Hinton told the BBC.

“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals,” like “I need to get more power.” 

According to the New York Times, Hinton’s immediate concern is that AI-generated videos, images, and texts will make it so that average people will “not be able to know what is true anymore.” 

Hinton also expressed concern that AI could disrupt the labor market by making many jobs obsolete and eventually becoming more intelligent than humans. 

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” the 75-year-old scientist said. 

“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton stated. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” 

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way,” Hinton told the BBC. “In terms of reasoning, it’s not as good, but it does already do simple reasoning.”

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” Hinton said, according to the BBC. 

“We’re biological systems, and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.” 

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.” 

The former Google executive also expressed his concern about the competitive race in developing AI technology between Microsoft and Google without adequate regulations in place. 

“I don’t think they should scale this up more until they have understood whether they can control it,” he said. 

Hinton, currently a professor of computer science at the University of Toronto, is often referred to as “the Godfather of AI” for his pioneering work on “neural networks.” In 2012, he and two of his graduate students built a neural network that became the intellectual foundation of today’s AI systems. In 2018, he and two of his colleagues received the Turing Award for their work on AI, an honor often dubbed “the Nobel Prize of computing.”

Hinton joins a growing number of technology leaders who have warned about the dangers of AI. In March of this year, more than 1,000 of them, led by Elon Musk, signed an open letter calling for a temporary moratorium on the development of AI due to “profound risks to society and humanity.” 

READ: Elon Musk warns of AI’s power to cause ‘civilizational destruction’ in interview with Tucker Carlson 

Yoshua Bengio, a Canadian computer scientist who won the Turing Award alongside Hinton in 2018, also recently warned that “we need to take a step back” on AI development because “what is very dangerous – and likely – is what humans with bad intentions or simply unaware of the consequences of their actions could do with these tools and their descendants in the coming years.” 

While AI is certainly a tool that can be used for evil, thinkers like the Catholic philosopher Edward Feser have argued that “Artificial Intelligence” is in fact not “intelligent” in the way that humans are and therefore cannot possibly become smarter than human beings. 

“It’s the user of a slide rule who does the calculations, not the instrument itself,” Feser wrote in his review of the book “The AI Delusion.” “Similarly, it’s the designers and users of a computer who understand the symbols it processes. The intelligence is in them, not in the machine.” 

He stressed that while computer programs can, for instance, detect the word “betrayal” in a text, they lack “the concept of betrayal.” 

“Similarly, image-recognition software is sensitive to fine-grained details of colors, shapes, and other features recurring in large samples of photos of various objects: faces, animals, vehicles, and so on,” Feser wrote. “Yet it never sees something as a face, for example, because it lacks the concept of a face. It merely registers the presence or absence of certain statistically common elements.” 

“AI might end up being dangerous for the same sorts of reasons that other technologies can be dangerous,” Feser stated. “For example, we might become too dependent on it, or it might become too complex to control, or there might be glitches that lead to horrible accidents, and so forth.” 

“However, it will not become dangerous by virtue of becoming literally more intelligent than us, because it is not literally intelligent at all.”