(Reclaim the Net) – The World Economic Forum (WEF) continues to beat the drum of the need to somehow merge “AI” [artificial intelligence] and humans, as a supposed panacea to pretty much any ill plaguing society and economy.
It’s never a sure bet whether this Davos-based mouthpiece of the elite comes up with its outlandish “solutions” and “proposals” to reinforce existing narratives or introduce new ones – or simply to appear busy and earn its keep from those bankrolling it.
Nevertheless, here we are, with the WEF turning its attention toward what’s apparently the burning issue in everybody’s life right now.
No – it’s not runaway inflation, energy costs, or even the food insecurity afflicting many parts of the world. For a group so dedicated to globalization, the WEF is strangely tone-deaf to what is actually happening around the globe.
And as people struggle to pay their bills and dread the coming winter, the WEF obliviously talks about “the dark world of online harms.”
The group seems to be hard at work squaring the circle of combating internet trolls – that is, solving the broadly and vaguely defined problem of “online abuse.” The stated targets are what you’d expect: “child abuse, disinformation, extremism, and fraud.”
Once a reader wades through the weeds of verbal and narrative smokescreens, though, the big takeaway from the article posted on the group’s website is that neither human censors nor censorship driven by “AI” (in reality, just machine-learning algorithms) is considered enough any longer.
“By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision,” says the WEF.
But what is this supposed to mean?
At some point toward the end, the WEF finally spits it out (spoiler: it still doesn’t make a whole lot of sense). Instead of relying on what the article continuously and erroneously refers to as “AI,” the WEF says it is proposing “a new framework: rather than relying on AI to detect at scale and humans to review edge cases, an intelligence-based approach is crucial.”
It’s well worth quoting in full the techno-babble word salad that serves as the write-up’s sales pitch:
By bringing human-curated, multi-language, off-platform intelligence into learning sets, AI will then be able to detect nuanced, novel abuses at scale, before they reach mainstream platforms. Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in.
In this way, “trust and safety teams can stop threats rising online before they reach users,” writes the WEF.
Finally, one can start to discern the argument here – once converted into “human-readable” format – as simply pressuring social networks to move toward “preemptive censorship.”
And if that’s true – what an argument it is.
UPDATE: As if on cue, and after apparently facing backlash over the article, the WEF appended a message reading:
“Readers: Please be aware that this article has been shared on websites that routinely misrepresent content and spread misinformation. We ask you to note the following: 1) The content of this article is the opinion of the author, not the World Economic Forum. 2) Please read the piece for yourself. The Forum is committed to publishing a wide array of voices and misrepresenting content only diminishes open conversations.”
Reprinted with permission from Reclaim the Net