
October 2, 2020 (LifeSiteNews) – Big data expert Kalev Leetaru accused social media platforms like Facebook and Twitter of being “Orwellian walled gardens” in which “ever-changing rules of ‘acceptable speech’ dictate what Americans are permitted to say and see.” 

For Leetaru, Big Tech in Silicon Valley has “become the de facto Ministry of Truth.” 

In an opinion piece for RealClearPolitics, the senior fellow at Auburn University’s Center for Cyber and Homeland Security explained that social media companies, “in the absence of governmental regulations requiring greater transparency about their operations, … are able to shield their growing power from public scrutiny.”

“In response to questions, the companies answer with either silence or carefully worded statements that avoid transparency,” Leetaru wrote September 30, adding, “These companies increasingly operate today as black boxes.”

As a remedy to the social media platforms’ actions, he suggested Congress could “convene annual external review panels chosen by bodies such as the National Academies of Science, Engineering and Medicine — without input from the companies — to review a randomized sample of the social media platform’s moderation and fact-checking actions, including for unconscious racial and cultural bias.”

He also mentioned the option of establishing “a centralized ethical review board under the National Academies or similar body that would be required to review and approve each major research initiative like Twitter’s ‘Healthy Conversation’ effort or Facebook’s new elections study, ensuring genuine external review without exemptions for ‘pre-existing datasets.’”

The big data expert emphasized that the public doesn’t even know the exact size of Facebook and Twitter, let alone more critical information about their operations.

“When these companies do release actual statistics, they are carefully worded in ways that can cause confusion,” he continued, stressing the lack of transparency in communication coming from Facebook and Twitter.


The implementation of fact-checking programs, while widely regarded as an improvement in keeping false content from going viral, was similarly criticized for its lack of transparency.

Leetaru pointed out that a spokesperson for Facebook said last May that third-party “fact-checking partners operate independently from Facebook and are certified through the non-partisan International Fact-Checking Network. Publishers appeal to individual fact-checkers directly to dispute ratings.”

However, “last month, Fast Company magazine reported that ‘Facebook may intervene if it thinks that a piece of content was mistakenly rated, by asking fact-checkers to adjust their ratings, a spokesperson acknowledged to Fast Company.’”

Asked about the apparent discrepancy between the two statements, “a spokesperson clarified that the company doesn’t actually change the ratings itself and that publishers can appeal directly to fact-checkers to dispute ratings. Yet the spokesperson added a third acknowledgement missing from the company’s previous responses: that Facebook may also ask fact-checkers to change their ratings when the company believes they are not in keeping with its definitions.”

For Leetaru, this shows that Facebook was hiding key information while not technically lying.

“Both times (Facebook) was asked whether it had ever intervened in a rating, it didn’t deny doing so; it merely issued statements that disputes are at fact-checkers’ ‘discretion’ and that publishers must appeal directly to the fact-checkers in disputes,” he wrote. “It simply left out the fact that there was a third route: the company requiring a fact-checker to change its rating. Facebook’s notable omission offers a textbook example of how the company’s silence and carefully worded statements allow it to hide its actions from public scrutiny.”

Another instance of this lack of transparency, the big data expert wrote, is social media companies’ conduct in relation to research studies.

“Take Facebook’s 2014 study in which it partnered with Cornell University to manipulate the emotions of more than 689,000 users,” Leetaru explained. “When the researchers submitted it for publication, the journal — Proceedings of the National Academy of Sciences — was initially ‘concerned’ about the ‘ethical issu(e)’ of manipulating users’ emotions without their consent, until it ‘queried the authors and they said their local institutional review board had approved it’ and thus the journal would not ‘second-guess’ the university.”

“Only after public outrage erupted,” he continued, “did it emerge that the university’s review board had determined that since only Facebook employees had access to raw user data, with Cornell University’s researchers having access only to the final results of the analysis, that ‘no review by the Cornell Human Research Protection Program was required.’”

He also mentioned another study currently being conducted, with Facebook actively manipulating volunteers’ accounts in the lead-up to Election Day.

Leetaru argued that “what appears at first glance to be an open, ‘transparent’ academic research initiative turns out to be just as opaque as every other Facebook effort. Indeed, by choosing a private university as the ethics review board for its pre-election effort, Facebook is even able to shield the project from Freedom of Information Act requests.”

Going beyond mere suggestions to improve the situation, the U.S. Department of Justice recently proposed legislation aimed at reining in social media platforms’ censorship of lawful speech. A draft legislative text to reform Section 230 of the federal Communications Decency Act was sent to Congress in September.

Section 230, as it currently stands, immunizes websites from being held liable for the third-party content they host, such as posts or tweets by social media users. This provision has been credited with helping the internet thrive, and in recent years, it has been seen as a potential means of addressing the increasing control that internet platforms exert over conservative speech.

“The current interpretations of Section 230 have enabled online platforms to hide behind the immunity to censor lawful speech in bad faith that is inconsistent with their own terms of service,” the DOJ explained in a press release.

“To remedy this, the department’s legislative proposal revises and clarifies the existing language of Section 230 and replaces vague terms that may be used to shield arbitrary content moderation decisions with more concrete language that gives greater guidance to platforms, users, and courts.”

“The legislative proposal also adds language to the definition of ‘information content provider’ to clarify when platforms should be responsible for speech that they affirmatively and substantively contribute to or modify,” the statement continued.