Stemming the tide of Twitter abuse

The barrage of death threats and rape threats recently tweeted to journalist Caroline Criado-Perez and Labour MP Stella Creasy – all because Criado-Perez dared to propose Jane Austen as the new face of the £10 note – would be absurd if it weren’t so malevolent. Just two recent examples in a long line of abominable trolling attacks against women in the public eye, these incidents drew the standard inadequate response from Twitter – Criado-Perez and Creasy were advised to report threats to the police, and use the online form on the Twitter site to complain about individual abuses.

Public anger seems to have spurred Twitter into more substantive action. They announced on Monday that their iPhone platform will now include a “flag abuse” function. They claim they will soon expand that to other platforms. Fine. But they need to make sure it has teeth.

Twitter is worth about $US11bn. The company made $288m in advertising revenue last year, a figure set to double this year and triple the next. Meanwhile, the US is at 7.6% unemployment, and the rest of the world isn’t doing so great, either.

Imagine Twitter hired an adequate staff, on fair terms and with proper training, to check abuse reports. A flagged tweet could first be run through algorithms checking for @ appellations, the names of frequently targeted individuals, and a handful of vile keywords (including aster*sk variants). Messages caught by those algorithms could go to a priority response team, while the other staff dealt with the rest. Every flagged tweet would still need to be read by a real human (though those that fail the keyword algorithms might be automatically hidden, pending review). Managers with brains, decent salaries and high accountability would need to be on hand to field ambiguous cases. There would have to be an avenue for appeal, if someone felt that a harmless or justified tweet had been incorrectly censored.
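The triage described above can be sketched in a few lines of code. This is a minimal illustration only, assuming hypothetical keyword lists, placeholder target names and invented queue labels – not Twitter’s actual systems or data:

```python
import re

# Placeholder lists -- hypothetical entries standing in for the real ones.
VILE_KEYWORDS = ["vileword", "slurword"]
FREQUENT_TARGETS = ["@frequenttarget"]

def keyword_pattern(word):
    """Regex matching the word or any asterisk-masked variant (e.g. 'vile*ord')."""
    return re.compile("".join(f"[{re.escape(c)}*]" for c in word), re.IGNORECASE)

PATTERNS = [keyword_pattern(w) for w in VILE_KEYWORDS]

def triage(tweet):
    """Route a flagged tweet. 'priority' goes to the fast-response team;
    keyword hits are also hidden pending review. Every flagged tweet is
    still read by a human either way."""
    has_mention = "@" in tweet
    hits_target = any(t.lower() in tweet.lower() for t in FREQUENT_TARGETS)
    hits_keyword = any(p.search(tweet) for p in PATTERNS)
    return {
        "queue": "priority" if (has_mention or hits_target or hits_keyword) else "standard",
        "hide_pending_review": hits_keyword,
    }
```

The asterisk-variant pattern simply allows a `*` in place of any letter of the keyword, so trivially masked abuse still lands in the priority queue rather than slipping past the filter.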

Imagine if Twitter users who threatened, for example, to “plant a dick” in a woman’s mouth, or force her to fellate strangers at knife point, or who opined that she was a “fat, ugly cunt” whom “not even a prison escapee would rape,” saw their tweets taken down immediately. A copy kept in Twitter’s records, accessible to the target recipient on request. The accounts associated with such tweets summarily deleted. Imagine if trolls had to start an account from scratch, and tediously re-follow all their disgusting troll friends, every time they indulged in a casual threat of violence. Imagine the message that would send.

Twitter could cover the cost of all these measures for a fraction of its annual profits, restoring the reputation of its service while greatly increasing the quality of comment and debate hosted by the site.

If I’m wrong about the sheer scale of the moderation that would be required – and it’s hard to get clear data about social media moderation, because companies don’t want to draw attention to the volume of offensive material on their sites or the poor working conditions of their outsourced moderators – it would still be better to automatically hide all flagged tweets that fail the keyword algorithms, leaving it to the user to appeal, than to do nothing. Users found to be routinely making false abuse reports – anti-abortion activists who aimed to “flag” pro-choice tweeters out of existence, for example – could be suspended after three offences, and deleted after five. As brilliant recent campaigns like Everyday Sexism’s Facebook advertiser appeal have demonstrated, we don’t yet know what effective moderation of social media sites looks like. But it’s time we found out. Women cannot contribute to public debate on an equal footing until aggressively sexist trolling is brought to heel.

Site-led moderation is not the only solution that has been proposed. Some people have called for an end to online anonymity, arguing it is a temptation to anti-social behaviour. But in an era in which any online comment, photograph or interpersonal connection can be turned up in a moment with a keyword search, people might have any number of just reasons for keeping some distance between their online activities and their public identities. This is true not only for dissidents and activists but for anyone who doesn’t think that everything they do online is the business of their employer, their insurance company, their high school acquaintances or their government (good luck with that last one, obviously). Women and children accessing domestic violence services, people discussing embarrassing medical problems, artists testing out side projects and alter egos, young people who might not want their unsupportive family or school to know they are gay – any number of us might want to have a separate life online, and that’s alright. What’s not alright is the garbage these trolls are blurting at any woman who dares to put her head above the parapet of public conversation.

The hateful idiots making internet rape threats and death threats mostly aren’t contemplating carrying out those threats. That’s not how the violence of trolling works. It works by overwhelming the target with a thousand short sharp bursts of searing aggression, in an attempt to persuade them that they are small and vulnerable, and that they will not win. It’s a baying mob, without requiring anyone to put their pants on and drag the pitchfork out of the shed. But we need not make it so easy for them. We’re at a strange moment in history when a handful of clever individuals can build a popular communications platform out of nothing, grow it into a vast profit factory, and then sit back and watch it run. I realise nobody’s quite clear on what the rules are where new technology, big business and mass communications meet. But it seems obscene that the people who built Twitter, and the people who are making a fortune off it, are doing so little to intervene when they see it being used as a crude instrument of intimidation.

So. Abuse button. Given teeth with a proper staff of moderators. The immediate removal of comments hateful and abusive enough to land the speaker in jail if they’d been uttered face to face; and the ejection of offenders from the site. It can be done, and it should be done.

5 responses to “Stemming the tide of Twitter abuse”

  1. Proper, effective moderation is definitely important. I wonder if there’s a greater role for social media platforms within their CSR strategies though – taking the lead in understanding and addressing bullying and abusive behaviour? After all they have an incredible data source at their fingertips, and are in a position to access and influence a lot of people from right across society. Perhaps this is happening already? I did a very brief search but couldn’t find anything.

  2. Really interesting point, Lily. I don’t think I’ve seen anyone suggest that sites like Twitter should actually develop a conscious strategy, not just about addressing bullying among their users, but analysing the behaviour. You’re right that this stuff is structural. Lindy West did a great piece over at Jezebel pointing out that trolls aren’t a few bad apples and their bile isn’t random; rather, they’re the thugs of the status quo. Suzanne Moore’s Guardian comment was interesting on this point, too. Gave me visions of the trolls descending on any instance of a woman thinking publicly like a crazed teen playing Whack-a-Mole. The internet allows huge numbers of people to be mobilised, at no cost, to achieve collective ends (see, e.g., Wikipedia), and it’s often a bit baffling as to why they give their time and energy so readily to an impersonal cause. The malevolent vigour that trolls bring to trolling is like the photo negative of that phenomenon. Anything that helps us understand that better would surely be useful.

  3. Great article that cuts straight to the point. Twitter is a huge company; they can afford a proper staff. In tandem with automated flagging systems and ultimate referral to the police if necessary, the job is surely possible.
    It’s also super-refreshing to read about the issue in terms of solid proactive suggestions. I’m tired of reading that normal codes of behaviour can’t be enforced online. In a shopping centre, if you’re behaving offensively, the security guards don’t ask for your ID and unnecessarily stress about your right to free speech before moving you off their turf – so why should it be different on the net? Companies need to protect their clients and work within the law. Yes, there are more people online than down at the shops, but they can also be analysed at near light-speed by filter algorithms. And their interactions are all less than 140 characters!
    This article in the Guardian by Tanya Gold also confused me. She seems to be acknowledging that making rape threats is illegal, but then debating whether the law should still be enforced when it happens online. She writes, “We cannot ask social networking sites to police our debate – there has been much talk of a report button on Twitter – and do nothing else. … We must be wary of cosmetic, or over-specific change.” Yes, but I’d counter it’s a false dichotomy: we can start with online policing of violent or hate speech, and *then also* tackle the long arduous process of societal change. Surely.

  4. I agree, the “lawless zone of the internet” thing gets overstated. Mary Beard has commented that online bullying uses much the same tactics as bullying in real life – humiliating people, trying to cut them off from any sense of support etc. Bullying is hard to deal with in real life, too. Changing the culture of turning a blind eye seems to be key. Doing nothing while others get barraged with vitriol is tacitly accepting that dickhead-troll-might is right.

  5. I know I know I know I shouldn’t do this, but I read through the comments people left on some of the threats Stella Creasy received. Someone commented: “Why feed the trolls? Wasting police time with Internet Losers”. Another chided the person who had posted the abusive comment, calling his action just not very “helpful”.

    While the internet is taken very seriously as a business platform (and, as you pointed out, makes very real money indeed), the abuse that takes place within that space is not considered to be quite as real. The two comments above suggest that Creasy should ignore the threats (I can imagine they must be so very easy to ignore) and shrug off abuse as just a little silly and unhelpful. It seems she’s told to look at the bright side (which would be what, exactly?) and get on with her life. Ignore the haters!

    I think this is why Lily’s point is incredibly important. Of course abusive internet behaviour must be easy to report – and it is baffling to think that the very clear solution you pointed out has not been put in place already – but it must also simply stop. Taking down abuse goes hand in hand with Twitter actively addressing the problem of bullying. Because even when we have the flag option in place, even when abuse can be deleted with one click, the potential pain and upset of having been exposed to such abuse remains.
