Featured image: Shutterstock.com

June 7, 2019 (LifeSiteNews) – In a recent report on how pornography is turning high schools into a “battlefield,” I noticed one particularly disturbing testimonial from a male student named William, aged 17:

People just talk rubbish about other people online. But then banter turns into something worse. Like Photoshopping certain people’s mothers into poses. Then the pictures go round the whole school. You’ve got no chance of stopping that. People just do it and the school has no authority over it because it’s what goes on in your personal time. Parents can’t stop it. That’s not realistic. They don’t know what’s going on, so it doesn’t really matter to them.

If you shudder to imagine a photoshopped pornographic image of someone you love making the rounds, you should know that things are about to get much, much worse in the world of digital pornography. As the UK’s Metro pointed out recently, “With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera. Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.”

Deepfakes, for those of you who haven’t yet heard of them, are images and videos that utilize “deep learning AI to forge something not actually there.” This can be used in all sorts of ways, including the creation of nonexistent speeches that feature politicians or others saying things that they did not say (Bloomberg created an example of this to show how easy it is)—or to create pornographic videos featuring people who had nothing to do with the sex acts being performed in them. This has already happened to a number of female celebrities, with Selena Gomez, Emma Watson, and Scarlett Johansson all having had their faces edited into porn videos. As it turns out, this isn’t tremendously difficult to do:

One method uses a generative adversarial network, or GAN. This is a type of AI that has two parts: one that creates the fake images, and one that works out how realistic they are, learning from its past mistakes. Autoencoders are another way to create deepfakes. These are neural networks that can learn all the features of a given image, then decode those features so they can change the image. These methods vary in efficacy and quality, with GANs giving less blurry results but being trickier to train.

Samsung recently created an AI that was able to make deepfake videos using single images, including the Mona Lisa and the Girl With A Pearl Earring. We saw these iconic paintings smiling, talking, and looking completely alive. In recent weeks, there has been an explosion of face-swapping content, with Snapchat and FaceApp (among others) releasing realistic filters that allowed you to see your looks as the opposite gender, as well as previous ageing filters going viral once more.
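To make the autoencoder idea above concrete: the network squeezes an image down to a small set of features (the "encoding"), then reconstructs the image from those features (the "decoding"). Face-swap tools exploit this by training a shared encoder with a different decoder per face. The following is a minimal toy sketch, assuming nothing beyond NumPy and random data standing in for images; it is an illustration of the encode/decode training loop, not any real deepfake software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 20 flattened 8x8 patches (64 pixel values each).
X = rng.random((20, 64))

# A single hidden layer of 16 units serves as the compressed feature code.
W_enc = rng.normal(0, 0.1, (64, 16))
W_dec = rng.normal(0, 0.1, (16, 64))

def forward(X):
    code = np.tanh(X @ W_enc)   # encode: 64 pixels -> 16 features
    recon = code @ W_dec        # decode: 16 features -> 64 pixels
    return code, recon

lr = 0.05
losses = []
for step in range(200):
    code, recon = forward(X)
    err = recon - X                               # reconstruction error
    losses.append(np.mean(err ** 2))
    # Plain gradient descent on the mean squared reconstruction error.
    grad_dec = code.T @ err / len(X)
    grad_code = (err @ W_dec.T) * (1 - code ** 2) # tanh derivative
    grad_enc = X.T @ grad_code / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

As training proceeds, the reconstruction loss falls: the network is learning which features of the inputs to keep. The deepfake trick is that once the features are learned, a different decoder can reconstruct them as a different face.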

Experts already say that it is becoming increasingly difficult to distinguish between a deepfake video and a genuine one. Additionally, most revenge porn laws ban only the distribution of explicit images, not the creation of fictional ones, leaving the targets of deepfake porn with no legal recourse for the time being. Experts are already asking: what happens if a deepfake porn video starts to circulate, and everybody believes that it is genuine? How can an ordinary person without the resources of, say, a high-profile actress, fight back?

Especially considering the ugly misogyny of online culture and the propensity of trolls to resort to sexual harassment, it seems likely that this has the potential to become a commonplace practice, with people targeting their opponents with deepfake porn imagery. This was already done to a prominent female journalist in India, who vomited when she saw the video and said, correctly, that she was being targeted in order to “silence” her. For many, simply the fact that there are people using these sorts of videos for sexual purposes is profoundly disturbing and can feel like a violation all on its own. Experts are saying that governments should make pre-emptive moves to head off this incoming trend by passing laws dealing specifically with online sexual harassment.

As technology is increasingly recruited to feed our society’s insatiable sexual appetites, things are going to get worse. Sexting and the distribution of intimate images are already destroying a generation of young people, and many of them have made decisions that will haunt them for the rest of their lives before they can even drive or vote. I agree with the experts on this one: Everything within the legal realm of possibility should be done to prosecute, fine, and penalize those who create and distribute deepfake porn imagery as well as revenge porn. This has gone too far, and we will see suicides if something is not done.


Jonathon Van Maren is a public speaker, writer, and pro-life activist. His commentary has been translated into more than eight languages and published widely online as well as in print newspapers such as the Jewish Independent, the National Post, the Hamilton Spectator, and others. He has received an award for combating anti-Semitism in print from the Jewish organization B’nai Brith. His commentary has been featured on CTV Primetime, Global News, EWTN, and the CBC, as well as dozens of radio stations and news outlets in Canada and the United States.

He speaks on a wide variety of cultural topics across North America at universities, high schools, churches, and other functions. Some of these topics include abortion, pornography, the Sexual Revolution, and euthanasia. Jonathon holds a Bachelor of Arts Degree in history from Simon Fraser University, and is the communications director for the Canadian Centre for Bio-Ethical Reform.

Jonathon’s first book, The Culture War, was released in 2016.