On Thursday, the New York Times — yes, THAT New York Times — reported that an examination of “more than 1,400 pages” from Facebook’s rulebooks for policing speech online revealed “numerous gaps, biases and outright errors” in the way moderators determine what users are allowed to say on the platform.

According to the Times report, the rulebooks were provided “by an employee who said he feared that the company was exercising too much power, with too little oversight — and making too many mistakes.”

The Times explained the methods Facebook uses to create the byzantine rules by which it censors speech around the world:

Every other Tuesday morning, several dozen Facebook employees gather over breakfast to come up with the rules, hashing out what the site’s two billion users should be allowed to say. The guidelines that emerge from these meetings are sent out to 7,500-plus moderators around the world.

The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself, The New York Times has found.

The Daily Caller summed it up this way:

The Facebook employees, many of whom are young, attempt to distill complex issues into concrete yes-or-no categories. Much of the post-by-post moderation is outsourced to companies that enlist unskilled workers, the report states, citing documents from an employee who worried that the rule book was too intrusive.

Moderators often rely on Google Translate for the mind-numbing work. They must recall countless rules and apply them to hundreds of posts a day, with the cultural context largely stripped away. They sift through emojis, smiley faces and sometimes innocuous comments to determine what is dangerous and what is not.

Facebook executives say it’s all part of the daily grind of trying to keep the platform a safe place.

The results?  The Times gave a few examples:

Moderators were once told, for example, to remove fund-raising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion.

We and others have reported on incidents in which Facebook declared innocuous posts — things like a portion of the Declaration of Independence or a picture of Santa Claus kneeling before the baby Jesus — to be either hateful or violent, all the while permitting actual hateful and violent content from leftists.

Part of the problem is that there is no single guide or rule — only a “patchwork of rules” put out by different parts of the social media company.

The Times says each individual rule might make sense, “but in their byzantine totality, they can be a bit baffling.” And moderators are expected to follow them blindly.

The report explains:

One document sets out several rules just to determine when a word like “martyr” or “jihad” indicates pro-terrorism speech. Another describes when discussion of a barred group should be forbidden. Words like “brother” or “comrade” probably cross the line. So do any of a dozen emojis.

There’s more:

The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.

“There’s a real tension here between wanting to have nuances to account for every situation, and wanting to have a set of policies we can enforce accurately and we can explain cleanly,” said Monika Bickert, Facebook’s head of global policy management.

Though the Facebook employees who make the rules are largely free to set policy however they wish, often deciding questions on the spot in those meetings, they also consult with outside groups.

In short, Facebook has essentially become a digital government unto itself, something yours truly and American-Israeli Adina Kutnicki explained in our 2016 book, “Banned: How Facebook Enables Militant Islamic Jihad.”

Things have gotten much worse since that book was first published. As we and others reported, Facebook yanked more than 800 pages and accounts — many of them political in nature — right before the 2018 midterm elections. The social media giant has also crippled conservative sites with its algorithms while helping more liberal outlets like CNN.

As a result of the ongoing censorship, many have turned to alternative outlets that support free speech online. Even so, Facebook remains the proverbial 800-pound gorilla, with enough power to influence elections and control speech on a global scale.

“Facebook’s role has become so hegemonic, so monopolistic, that it has become a force unto itself,” Jasmin Mujanovic, an expert on the Balkans, told the Times. “No one entity, especially not a for-profit venture like Facebook, should have that kind of power to influence public debate and policy.”

Which is why efforts like GOP Rep. Louie Gohmert’s bill and the Stop Social Media Censorship Act are critical and need to be passed.

The bottom line: We live in a dream world if we think our freedoms are secure merely because they’re engraved on parchment somewhere in Washington, D.C. They’re not. It’s time for Republicans in Congress and in state legislatures to step up to the plate and do the right thing — before it’s too late.

Cross-Posted With Conservative Firing Line