The barrage of death threats and rape threats recently tweeted to journalist Caroline Criado-Perez and Labour MP Stella Creasy – all because Criado-Perez dared to propose Jane Austen as the new face of the £10 note – would be absurd if it weren’t so malevolent. Just two recent examples in a long line of abominable trolling attacks against women in the public eye, these incidents drew the standard inadequate response from Twitter – Criado-Perez and Creasy were advised to report threats to the police, and use the online form on the Twitter site to complain about individual abuses.
Public anger seems to have spurred Twitter into more substantive action. The company announced on Monday that its iPhone app will now include a “flag abuse” function, and says it will soon extend that to other platforms. Fine. But it needs to make sure the button has teeth.
Twitter is worth about $US11bn. The company made $288m in advertising revenue last year, a figure set to double this year and triple the next. Meanwhile, the US is at 7.6% unemployment, and the rest of the world isn’t doing so great, either.
Imagine Twitter hired an adequate staff, on fair terms and with proper training, to check abuse reports. A flagged tweet could first be run through algorithms checking for @ appellations, the names of frequently-targeted individuals, and a handful of vile keywords (including aster*sk variants). Messages caught by those algorithms could go to a priority response team, while the other staff dealt with the rest. Every flagged tweet would still need to be read by a real human (though those that fail the keyword algorithms might be automatically hidden, pending review). Managers with brains, decent salaries and high accountability would need to be on hand to field ambiguous cases. There would have to be an avenue for appeal, if someone felt that a harmless or justified tweet had been incorrectly censored.
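The triage described above – keyword and @-mention checks routing flagged tweets to a priority queue, with keyword hits (including aster*sk variants) hidden pending human review – could be sketched like this. Everything here is illustrative: the blocklist, the watched handles and the function names are placeholders, not anything Twitter actually runs.

```python
import re

# Placeholder blocklist and frequently-targeted handles (illustrative only).
VILE_KEYWORDS = ("rape", "kill")
WATCHED_TARGETS = {"@example_target"}


def _matches_obfuscated(text: str, word: str) -> bool:
    """Match a keyword even when letters are masked with '*' (e.g. 'r*pe').
    Builds a character class per letter, so each letter may also be '*'."""
    pattern = "".join(f"[{re.escape(c)}*]" for c in word)
    return re.search(pattern, text) is not None


def triage(tweet: str) -> dict:
    """Route a flagged tweet: priority queue (and auto-hide pending human
    review) if it trips the keyword or mention checks; otherwise the general
    queue. Every tweet would still be read by a real human either way."""
    text = tweet.lower()
    mentions = set(re.findall(r"@\w+", text))
    hit = (
        any(_matches_obfuscated(text, w) for w in VILE_KEYWORDS)
        or bool(mentions & WATCHED_TARGETS)
    )
    return {"queue": "priority" if hit else "general",
            "hidden_pending_review": hit}
```

A real system would need far more careful de-obfuscation and far broader lists than this toy, which is exactly why the human reviewers and the appeals process matter.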
Imagine if Twitter users who threatened, for example, to “plant a dick” in a woman’s mouth, or force her to fellate strangers at knife point, or who opined that she was a “fat, ugly cunt” whom “not even a prison escapee would rape,” saw their tweets taken down immediately. A copy kept in Twitter’s records, accessible to the targeted recipient on request. The accounts associated with such tweets summarily deleted. Imagine if trolls had to start an account from scratch, and tediously re-follow all their disgusting troll friends, every time they indulged in a casual threat of violence. Imagine the message that would send.
Twitter could cover the cost of all these measures for a fraction of its annual revenue, restoring the reputation of its service while greatly increasing the quality of comment and debate hosted by the site.
Perhaps I’m wrong about the sheer scale of the moderation that would be required – clear data is hard to come by, because companies don’t want to draw attention to the volume of offensive material on their sites or to the poor working conditions of their outsourced moderators. Even so, automatically hiding every flagged tweet that fails the keyword checks, and leaving it to the user to appeal, would be better than doing nothing. Users found to be routinely making false abuse reports – anti-abortion activists who aimed to “flag” pro-choice tweeters out of existence, for example – could be suspended after three offences, and deleted after five. As brilliant recent campaigns like Everyday Sexism’s Facebook advertiser appeal have demonstrated, we don’t yet know what effective moderation of social media sites looks like. But it’s time we found out. Women cannot contribute to public debate on an equal footing until aggressively sexist trolling is brought to heel.
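The suggested penalty for serial false reporters – suspension after three offences, deletion after five – amounts to a simple strike counter. A minimal sketch, with the thresholds taken from the article and every name hypothetical:

```python
from collections import defaultdict

# Hypothetical thresholds from the proposal: suspend at three false
# reports, delete the account at five.
SUSPEND_AT, DELETE_AT = 3, 5

# Running count of confirmed false abuse reports per user (toy in-memory store).
false_reports = defaultdict(int)


def record_false_report(user: str) -> str:
    """Increment a user's false-report count and return their new status."""
    false_reports[user] += 1
    count = false_reports[user]
    if count >= DELETE_AT:
        return "deleted"
    if count >= SUSPEND_AT:
        return "suspended"
    return "active"
```

The point of the escalating thresholds is that a moderator’s occasional misjudged flag costs nothing, while a coordinated flagging campaign quickly costs the campaigners their accounts.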
Site-led moderation is not the only solution that has been proposed. Some people have called for an end to online anonymity, arguing it is a temptation to anti-social behaviour. But in an era in which any online comment, photograph or interpersonal connection can be turned up in a moment with a keyword search, people might have any number of just reasons for keeping some distance between their online activities and their public identities. This is true not only for dissidents and activists but for anyone who doesn’t think that everything they do online is the business of their employer, their insurance company, their high school acquaintances or their government (good luck with that last one, obviously). Women and children accessing domestic violence services, people discussing embarrassing medical problems, artists testing out side projects and alter egos, young people who might not want their unsupportive family or school to know they are gay – any number of us might want to have a separate life online, and that’s alright. What’s not alright is the garbage these trolls are blurting at any woman who dares to put her head above the parapet of public conversation.
The hateful idiots making internet rape threats and death threats mostly aren’t contemplating carrying out those threats. That’s not how the violence of trolling works. It works by overwhelming the target with a thousand short sharp bursts of searing aggression, in an attempt to persuade them that they are small and vulnerable, and that they will not win. It’s a baying mob, without requiring anyone to put their pants on and drag the pitchfork out of the shed. But we need not make it so easy for them. We’re at a strange moment in history when a handful of clever individuals can build a popular communications platform out of nothing, grow it into a vast profit factory, and then sit back and watch it run. I realise nobody’s quite clear on what the rules are where new technology, big business and mass communications meet. But it seems obscene that the people who built Twitter, and the people who are making a fortune off it, are doing so little to intervene when they see it being used as a crude instrument of intimidation.
So. Abuse button. Given teeth with a proper staff of moderators. The immediate removal of comments hateful and abusive enough to land the speaker in jail if they’d been uttered face to face; and the ejection of offenders from the site. It can be done, and it should be done.