
(LifeSiteNews) — The recent hype around “Artificial Intelligence” (AI), caused in large part by the popular ChatGPT software, has many people worried that the machines are soon going to become smarter than us and take over the world.

The recent warning by the “Godfather of AI,” Dr. Geoffrey Hinton, that AI can be used by bad people to do bad things certainly addresses a legitimate concern. However, mixed in with these warnings is often the fear that we will soon reach the so-called “singularity,” the point at which an artificial general intelligence (AGI) becomes a super-intelligent “digital god” that is far more intelligent than human beings.

READ: Elon Musk says Google co-founder wanted to create a ‘digital god’ in Tucker Carlson interview

We tend to believe that such a scenario is possible because what AI can do seems very impressive to us, and we have likely watched too many science fiction movies like “I, Robot,” starring Will Smith, in which the machines develop a will of their own.

However, as Gary Smith, author of the book The AI Delusion, explains, AI programs do not actually understand how the world works at all.

AI does not understand anything

Computer programs are very useful for things like spellcheck because they can compare words that we write with words in a dictionary, but they do not understand what these words actually mean.

This is illustrated perfectly by the example of Nigel Richards, the greatest Scrabble player of all time. Smith uses Richards’ technique of memorizing words in a dictionary to show how AI works.

After winning countless Scrabble championships in his native English language, Richards began to memorize over 380,000 words from a French dictionary and subsequently went on to win Scrabble championships in French, without speaking or understanding the language at all.

This is exactly how a computer program works. “They can put letters together, and they can spell check whether they are in a dictionary, but they have absolutely no idea what the words mean,” Smith said.
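To make the dictionary-lookup point concrete, here is a minimal sketch in Python (a toy of my own, not code from Smith’s book): the program can verify that a string of letters is a valid word without attaching any meaning to it.

```python
# Toy illustration: word validation is pure set membership.
# The program attaches no meaning whatsoever to the words.

# A hypothetical, tiny word list; Richards memorized over 380,000
# French words the same way -- as bare letter sequences.
DICTIONARY = {"cat", "chat", "betrayal", "trahison"}

def is_valid_word(letters: str) -> bool:
    """Return True if the letters form an entry in the word list."""
    return letters.lower() in DICTIONARY

print(is_valid_word("chat"))  # True -- valid, though the program cannot
print(is_valid_word("xqzt"))  # False   know that "chat" is French for "cat"
```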

In his review of Smith’s book, Catholic philosopher Edward Feser noted that while computer programs can scan texts for the word “betrayal,” they cannot recognize the concept of betrayal in a story if the word is not used.

“Due to the rough correlation that exists between contexts in which the word ‘betrayal’ appears, and contexts in which the concept is deployed, the computer will loosely simulate the behavior of someone who understands the word — but, says Smith, to suppose such a simulation amounts to real intelligence is like supposing that climbing a tree amounts to flying,” Feser wrote.

“Similarly, image-recognition software is sensitive to fine-grained details of colors, shapes, and other features recurring in large samples of photos of various objects: faces, animals, vehicles, and so on,” he continued. “Yet it never sees something as a face, for example, because it lacks the concept of a face.”

This leads to funny results, such as an AI program failing to identify a person because he is wearing large, strangely colored glasses. Even as facial recognition software becomes better and better, the point remains that AI just analyzes pixels and makes “no effort whatsoever to understand what it sees,” as Smith put it.
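As a crude illustration of what “just analyzing pixels” means (a toy sketch of my own, not any real face-recognition algorithm), a program can compare raw grayscale values between two images; an odd pair of glasses changes the numbers, so the “match” fails, because the program never saw a face in the first place.

```python
# Toy pixel comparison -- not a real face-recognition system.
# The "match" is purely numerical; there is no concept of a face.

def average_pixel_difference(img_a, img_b):
    """Average absolute difference between two same-sized grayscale images."""
    total = sum(
        abs(pa - pb)
        for row_a, row_b in zip(img_a, img_b)
        for pa, pb in zip(row_a, row_b)
    )
    return total / (len(img_a) * len(img_a[0]))

face = [[120, 130], [125, 128]]                  # tiny stand-in "photo" of a face
same_face_with_glasses = [[10, 15], [125, 128]]  # same face, loud glasses over the eyes

# A big numerical difference can make the program reject the match,
# even though any human would still see the same person.
print(average_pixel_difference(face, same_face_with_glasses))  # 56.25
```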

AI will never be intelligent

Many will argue that these glitches will be resolved over time through further refinement and machine learning. It is certainly possible that facial recognition software will become much better in the future, but the point of showing these problems in AI programs was to demonstrate that computer programs do not, and cannot, perceive objects the way humans do.

“The software doesn’t grasp an image as a whole or conceptualize its object but merely responds to certain pixel arrangements,” Feser wrote. “A human being, by contrast, perceives an image as a face — even when he can’t make out individual pixels.”

Furthermore, contrary to a common misconception, computer programs, even very impressive ones, do not work like human brains.

As Feser explains in his blog, AI programs work through so-called logic gates that are “designed by electrical engineers in a way that will make them suitable for interpretation as implementing logical functions.”

In contrast, no one is programming the neurons in our brains to do something like that.
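To illustrate what Feser means by logic gates implementing logical functions only under an interpretation, here is a minimal sketch (my own toy, not how real circuitry is built): the gates merely map signal levels to signal levels, and it is the designer’s convention that 1 means “true” and 0 means “false.”

```python
# Toy gates: each one just maps input signals to an output signal.
# Reading 1 as "true" and 0 as "false" is the engineer's stipulation,
# not something the circuit itself understands.

def and_gate(a: int, b: int) -> int:
    # Electrically: output is high only when both inputs are high.
    return 1 if a == 1 and b == 1 else 0

def not_gate(a: int) -> int:
    # Electrically: output is the inverse of the input.
    return 0 if a == 1 else 1

# Under the designer's interpretation, these signal mappings can be
# read as the logical functions AND and NOT.
print(and_gate(1, 1))  # 1 ("true AND true is true")
print(not_gate(1))     # 0 ("NOT true is false")
```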

Feser further explains that a computer and its software are metaphysically different from a human brain. While computer software is a man-made artifact, “a brain (or, more precisely, the organism of which the brain is an organ) is a substance, in the Aristotelian sense.”

“A substance has irreducible properties and causal powers, i.e. causal powers that are more than just the sum of the properties and powers of its parts,” the philosopher wrote. “Artifacts are not like that. In an artifact, the properties and causal powers of the whole are reducible to the aggregate of the properties and causal powers of the parts together with the intentions of the designer and users of the artifact.”

Therefore, an AI computer program will never be able to perceive and understand the world as we do, since it works in a fundamentally different way from the human brain.

AI is dangerous, but not because it’s smarter than us

AI can certainly be dangerous and can be used for bad things, just like a knife can be used to stab someone or a gun can be used to commit mass murder.

AI might indeed become too complex to control and fully understand, or there may be glitches in the system that lead to catastrophic results. And here also lies the danger of believing that AI is smarter than us.

“The real danger is that we think computers are smarter than us and we let them make important decisions that should not be made,” Smith stated.

The fact that BlackRock, the largest asset manager in the world, lets its computer software Aladdin make investment decisions may be an example of this dangerous trend.

READ: Everything you need to know about BlackRock, the company that owns the world

There are many scenarios where things can go terribly wrong with AI, and the warnings about it may very well be justified in that sense.

However, we do not have to fear that AI will become more intelligent than us because it is really not intelligent at all.

For a more in-depth explanation of why AI is not actually intelligent, I recommend reading Edward Feser’s blog post “Artificial intelligence and magical thinking.”
