Flagbot: The Censorship Protocol That Will Change YOUR Behaviour

Once upon a time, we all used to post on old-style, moderated forums. Forums were far from perfect, but one thing we could be sure of was that if we observed the forum rules, our accounts and profiles were safe. On most forums, moderators would even privately warn members if they were sailing close to a penalty of some kind. How different things are now that massive social media and publishing sites have become the hub of our online activity.

ENTER THE FLAGBOT…

Almost all large user-generated-content platforms now use ‘flagbots’ to moderate their sites. A flagbot is simply a piece of software designed to detect breaches of the Terms of Service, so that a human moderator doesn’t have to. Great! It’s labour-saving, and that means the sites can spend their money on more important stuff, like… Well, spying on us and selling their findings to corporate thugs, obviously.

There are just a couple of snags. Snag 1: flagbots are engineered not only to detect ToS breaches, but also to take immediate, shoot-on-sight punitive action. Snag 2: flagbots have no sense, so they get it wrong, a lot. Put the two together, and masses of innocent users are punished for good behaviour.
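To make those snags concrete, here’s a minimal sketch, in Python, of the shoot-on-sight pattern – a hypothetical toy, not any platform’s real code. The trigger list and the suspend() call are invented for illustration; the structural flaw is the one described above: detection and punishment happen in a single step, with no human in between.

```python
# A minimal sketch of the shoot-on-sight pattern. Everything here is
# hypothetical – the trigger list, the suspend() call – but the flaw
# is structural: detection and punishment happen in one step.

ABUSE_KEYWORDS = {"scum", "idiot", "worthless"}  # hypothetical trigger list


def looks_like_abuse(post: str) -> bool:
    """Naive detector: any trigger word counts as 'abuse', with no
    grasp of context, quotation or intent."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & ABUSE_KEYWORDS)


def suspend(user: str, reason: str) -> None:
    # Stand-in for the platform's punitive action.
    print(f"ACCOUNT SUSPENDED: {user} ({reason})")


def moderate(user: str, post: str) -> None:
    if looks_like_abuse(post):
        # No confirmation step: punishment on suspicion alone.
        suspend(user, reason="suspected abuse")


# A user quoting abuse in order to condemn it is punished exactly
# as if they had written it themselves:
moderate("alice", 'How dare he call you "worthless scum"? Ignore him.')
```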

Flagbot autonomy is one of big tech’s most insidious practices. As standard on very large publishing and social sites, flagbots suspend, censor or lock out users purely on suspicion, before a human has confirmed that the user actually did anything wrong.

It’s then left to innocent users to jump through often time-consuming hoops in order to restore perfectly legitimate content, access or accounts.

In some cases, users’ profiles are not only disabled, but actually damaged, with processes that seek to diminish the audience they’ve already worked to build. So even when the case is reviewed by a human and the profile is restored, the user has been irreparably punished.

WELL-BEHAVED USERS ARE THE LAB RATS

Tech insiders, or former insiders, have suggested that flagbots are often deployed with a hugely over-zealous catchment of possible violations, and then trained, through ‘machine learning’, to tone down and become more accurate. In other words, the burden of training flagbots is placed on innocent site users, who the tech companies know will submit a report every time the software gets it wrong. Nice, eh? This certainly appeared to be true when Tumblr recently introduced its adult content flagbot.
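If the insiders are right, the loop looks something like the toy sketch below. Every name, trigger and data structure in it is hypothetical; the point is purely the shape of the incentive – launch with an over-wide net, then let the appeals filed by innocent users supply the corrective labels for free.

```python
# A toy sketch of the 'lab rats' loop the insiders describe. All names
# and triggers are hypothetical; only the incentive structure matters.

class TriggerModel:
    """Stand-in for a real classifier: just a set of trigger phrases."""

    def __init__(self, triggers):
        self.triggers = set(triggers)

    def flags(self, content: str) -> bool:
        return any(t in content.lower() for t in self.triggers)

    def learn_from_appeals(self, appeals):
        # Each upheld appeal is a confirmed false positive – a
        # human-verified label an innocent user was forced to provide.
        for appeal in appeals:
            if appeal["upheld"]:
                self.triggers -= {t for t in self.triggers
                                  if t in appeal["content"].lower()}


# Launch with a hugely over-zealous catchment...
model = TriggerModel({"adult", "nude", "classical sculpture"})

# ...then tone it down using the appeals that innocent users file.
model.learn_from_appeals(
    [{"content": "Photos from a classical sculpture gallery", "upheld": True}]
)
print(model.flags("Photos from a classical sculpture gallery"))  # now False
```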

THE CREEPING MENACE

Punishing innocent users, and giving them the inconvenience of training flagbots to work properly, is bad enough. But there’s a greater menace. These automated suspension, censorship, shadowban and naughty-step programmes intimidate responsible users into avoiding legitimate topics of discussion. That’s right, we are being taught, by flagbots, to fear doing and saying certain things, and even associating with certain people, because we think we might be punished by a robot that doesn’t have any sense.

Worse still, we’re never actually told what we did to falsely trigger the flagbot’s wrath, because revealing the trigger might help genuinely malicious users evade it. So we come to fear not only the thing that actually triggered the flagbot, but lots of other things that might have triggered it besides.

Perhaps one of our keywords triggered the flag. But which one? We don’t know, and no one will tell us, so we start to avoid using all the keywords we think might have triggered it. And it may not have been a keyword at all. It may have been something else.

We may even come to assume that our browser is causing us to be flagged, because its maker is in competition with the organisation that runs the site which flagged us. That’s the way a lot of us think in the absence of fact.

The result is that we make assumptions, and we stop doing one or more things we know can be controversial. For example, if someone was auto-flagged and suspended after swearing, they may conclude they were suspended for abuse, even though they didn’t abuse anyone. Remember, we’re talking here about people who have NOT breached the ToS, and who thus get their account back with a “sorry”, but no explanation beyond “mistake”. So those people are not being conditioned to worry about what they actually did wrong. They’re being conditioned to worry about what a piece of dumb software thinks they did wrong. We’re trying to second-guess robots.

In the user’s eyes, the bot saw them swearing, and interpreted that as abuse. So the user stops swearing. And stops saying other things that can feasibly be considered “abuse” by a piece of dumb software. That means they stop quoting people who swear. They stop quoting people who called someone a piece of worthless scum, even though they may have wanted to quote in order to disagree and support the abused party. They may also spread their fears to other people…

“Don’t say this or that, because you get suspended.”

I’ve seen people doing just that on Twitter.

A creeping menace indeed. Especially given that the false suspension could have been based on suspicion of something else entirely. Maybe the person often uses a couple of keywords that are common among spammers, and was flagged as a spammer. When you have a system where people are punished without reason or explanation, it plays dangerous mind games, which can change people’s behaviour in negative ways.

IMPACT ON FREE SPEECH AND PEER SUPPORT

It’s an enormous danger not only to responsible free speech, but also to peer support mechanisms (“I would have stepped in to help, but I don’t want to risk suspension”), and to some sites’ own income. Sites that offer paid upgrades need to be especially careful with their flagbots. Would I ever pay for an upgrade to a site that’s suspended me in error? Obviously not. Nothing petulant about it – it’s simply a lack of buyer confidence. What’s to say you won’t suspend me “by mistake” once you’ve taken my money?

In particular, comedy is a hazard, because bots do not have a sense of humour. I’ve had real problems with satirical projects, and innovative comedy in general is dishwatered by most of big tech. Google isn’t interested in it, and it’s extremely difficult to SEO comedy posts. But comedy was an instant hit for me on old-style forums, where the moderators are human beings, and the posts are much more visible than they ever would be on Google. In fact human moderators sometimes moved my comedy posts to a section where they would be more visible, even though it wasn’t technically the right section for the post.

REMEDY?

Flagbots are in themselves a good thing. No one can expect human beings to actively monitor sites the size of Facebook, Twitter or WordPress. What’s not a good thing is awarding flagbots the autonomy to punish on suspicion only. That’s where the problem lies, and big tech should get to work sorting the problem out.
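What might ‘detection without autonomous punishment’ look like? Here’s a minimal sketch, again with every name hypothetical: the flagbot keeps doing the detection work that no human workforce could manage, but the only action it can take on suspicion is to queue the case for a human verdict.

```python
# A minimal sketch of the remedy: the flagbot still detects, but it
# cannot punish. The queue, the reviewer and suspend() are all
# hypothetical stand-ins for real review tooling.

from collections import deque

review_queue = deque()


def flag(user: str, post: str) -> None:
    """Flagbot output: record the suspicion, take no punitive action."""
    review_queue.append((user, post))


def suspend(user: str) -> None:
    print(f"ACCOUNT SUSPENDED: {user}")


def process_reviews(human_confirms) -> None:
    """Punishment only follows a confirmed human verdict."""
    while review_queue:
        user, post = review_queue.popleft()
        if human_confirms(user, post):
            suspend(user)
        else:
            # A false alarm costs the user nothing; they never know.
            print(f"No action: {user} cleared.")


# Example: a human reviewer who sees that quoting abuse is not abuse.
flag("alice", 'How dare he call you "worthless scum"?')
process_reviews(lambda user, post: False)
```

The obvious cost is review latency at scale, but that’s the trade: the bot still does the watching; only the punishment waits for a human.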

It’s a testament to the restrictive paranoia caused by big tech’s increasingly aggressive flagbots that this year I have considered going back to posting on moderated forums for the first time in over six years. Years ago, I posted on numerous forums, and was never locked out, suspended or censored. Ever. All these things have, however, happened to me multiple times on flagbot-monitored sites – “by mistake”. Maybe it’s time for us all to support more localised resources once more, before we’re reduced to posting about gardening products and the fluffy weft of bath rugs. I might have a bit of catching up to do on the old ‘rep points’, but returning to smaller communities may well be my remedy. Still got my logins, and I know I won’t get redirected to a “lock” screen when I type in my password.
