
STANFORD, California, January 21, 2021 (LifeSiteNews) — A group of researchers from California’s Stanford University has published a paper claiming it is possible to teach a computer to recognize a person’s political leanings purely from scanning their face. In the same paper, the researchers warn of the technology’s possible real-world applications.

The research team was led by Michal Kosinski, who famously created a program in 2017 that purported to enable machines to accurately determine a person’s sexual orientation, again from facial cues.

Building upon the alleged success of his prior machine-learning program, Kosinski “used an open-source facial-recognition algorithm,” this time to determine a person’s political affiliation through “naturalistic facial images.” To teach his computer to distinguish conservatives from liberals, Kosinski fed over 1 million images of individuals, along with details of their political leanings, into the system. Kosinski’s team was able to gather this information freely from dating websites and from Facebook.

The paper, published in Nature Research’s Scientific Reports, claims that the machine correctly predicted political orientation 72% of the time, which is “remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).” The machine makes its determination by converting “facial images into face descriptors,” numerical representations of 2,048 features, which it then classifies using the patterns learned from the million-plus labeled faces in its training data.
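
That two-step pipeline, in which a pretrained network condenses each photo into a fixed-length descriptor and a simple classifier is then fit on those descriptors, is a standard one in machine learning. As a rough sketch only, and not the study’s actual code, it might look like the following in Python (the file names and label encoding are hypothetical):

    # Illustrative sketch only -- not the study's actual code.
    # Assumes each photo has already been reduced to a 2,048-feature face
    # descriptor by a pretrained face-recognition network, and that
    # self-reported orientation is encoded as 0 (liberal) / 1 (conservative).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    descriptors = np.load("face_descriptors.npy")  # hypothetical file, shape (n, 2048)
    labels = np.load("political_labels.npy")       # hypothetical file, shape (n,)

    # Fit and score a simple linear classifier over the descriptors;
    # the paper reports roughly 72% held-out accuracy for its own pipeline.
    model = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(model, descriptors, labels,
                               cv=5, scoring="accuracy").mean()
    print(f"Cross-validated accuracy: {accuracy:.1%}")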

Some of the common descriptors attached to conservatism are hardly subtle: “white people, older people, and males are more likely to be conservatives,” the report states. However, even when the system was restricted to comparing people of the same age, gender, and ethnicity, it still maintained a high success rate of around 68% accuracy, indicating that “faces contain many more cues to political orientation than just age, gender, and ethnicity.”
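
Continuing the hypothetical sketch above, one way to run that kind of check is to re-evaluate the classifier within a single demographic cell, so that age, gender, and ethnicity cannot be driving the prediction (the demographics file and its column names are, again, assumptions for illustration):

    # Continuation of the sketch above (all data files hypothetical).
    # Restrict evaluation to one demographic cell; if accuracy stays well
    # above chance, the descriptors must carry cues beyond age, gender,
    # and ethnicity.
    import pandas as pd

    demo = pd.read_csv("demographics.csv")  # assumed columns: age_band, gender, ethnicity
    cell = ((demo["age_band"] == "30-39")
            & (demo["gender"] == "male")
            & (demo["ethnicity"] == "white")).to_numpy()

    within = cross_val_score(model, descriptors[cell], labels[cell],
                             cv=5, scoring="accuracy").mean()
    print(f"Accuracy within one demographic cell: {within:.1%}")  # paper: ~68%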

According to Kosinski’s notes, the facial cues that give away one’s political beliefs include “[h]ead orientation and emotional expression.” He also found that liberals “tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust.” Kosinski did not divulge any more details on what facial identifiers the system relied on.

The researchers presented their findings with a note of caution, flagging their concerns regarding use of the program for nefarious ends: “[O]ur findings have critical implications for the protection of privacy and civil liberties. Ubiquitous CCTV cameras and giant databases of facial images, ranging from public social network profiles to national ID card registers, make it alarmingly easy to identify individuals, as well as track their location and social interactions.”

“Moreover,” they warn, “unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.”

The implications, nevertheless, are far-reaching, for it seems that “[f]acial images can be easily (and covertly) taken by a law enforcement official or obtained from digital or traditional archives, including social networks, dating platforms, photo-sharing websites, and government databases,” and most people will, at some point, have uploaded an image of themselves to at least one of these.

“They are often easily accessible; Facebook and LinkedIn profile pictures, for instance, are public by default and can be accessed by anyone without a person’s consent or knowledge. Thus, the privacy threats posed by facial recognition technology are, in many ways, unprecedented.”

While the authors admit that many may not consider the system’s reported accuracy a privacy threat, they add that their “estimates unlikely constitute an upper limit of what is possible,” meaning the system could continue to learn and hone its current “abilities.”

The researchers warn that “even modestly accurate predictions” can bear great effects when applied widely enough: “[E]ven a crude estimate of an audience’s psychological traits can drastically boost the efficiency of mass persuasion. We hope that scholars, policymakers, engineers, and citizens will take notice.”

Kosinski added a personal disclaimer, asserting that his research merely brings to light what artificial intelligence makes possible. “Don’t shoot the messenger,” he said. “In my work, I am warning against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits.”