
Social media sites still don't do enough to combat abuse

Twitter, Google and Reddit's new policies to reduce abuse simply aren't enough to combat the exploitation they've let fester for years.

Every day, countless individuals and groups are victimized on social networks. The abusers, detached and cloaked in anonymity, often take on different personas as they shame, troll, incite and denigrate others with relative impunity. The ramifications can be devastating and, until recently, the majority of social media companies failed to acknowledge -- let alone confront -- the vulgarity and vicious threats that fly so freely on their platforms.

Twitter, Google and Reddit all recently introduced new policies, filters and tools designed to help block groups of abusive users, remove revenge porn from search results and prohibit other attacks and harassment on their platforms. Victim advocacy groups applaud the changes but also implore these companies to continue finding better ways to protect vulnerable users.


"There's no reason for people to be mean, but because they're anonymous and can hide behind a screen they feel, 'This is OK, I can say anything about politics, celebrities, anything I want,'" says Ross Ellis, founder and CEO of Stomp Out Bullying. "They wouldn't do that if they were face-to-face with these people."

Cruelty and anonymity go hand-in-hand on social sites

The Internet is a dangerous place, according to Ellis, and it's not necessarily the fault of Google, Twitter or Reddit if someone posts something hateful. However, these companies do have a responsibility to combat misuse of their services. "I think it's becoming a public health crisis at this point, and what they're not doing is providing experts ... so that if someone's in trouble they can get help," she says.

The communal benefits of social media are relentlessly challenged, sometimes usurped, by people who want to turn online communities, such as Twitter and Snapchat, into bullying playgrounds. Former Twitter CEO Dick Costolo admitted as much earlier this year when he took personal responsibility for what he called an "embarrassing" response to pervasive abuse, in an internal memo obtained by TheVerge.com.

"We suck at dealing with abuse and trolls on the platform, and we've sucked at it for years," Costolo wrote in February. "I'm frankly ashamed of how poorly we've dealt with this issue during my tenure as CEO. It's absurd. There's no excuse for it." Costolo also promised that Twitter would take a more effective approach to combatting abuse.


When major incidents of public abuse occur, such as the highly publicized hack of more than 100 celebrity iCloud accounts last September, critics often blame the victims and sling self-righteous proclamations about how people could have avoided the invasions of privacy. Similarly, social companies routinely place the burden on victims. Instead of working to protect users from abuse and ensure the most egregious attacks are quickly cut off, the major social platforms willingly distribute abuse.

The unfortunate reality is that no one is safe on social sites, and no amount of money, power or influence can change that. When former professional baseball player and current ESPN commentator Curt Schilling recently congratulated his 17-year-old daughter on Twitter for selecting a college and joining its softball team, the two received threats of sexual violence, less than a month after Costolo promised that things would change. Schilling quickly responded with a bit of "doxing" of his own, publicly identifying some of the individuals who had abused his daughter.

Whack-a-mole in the Internet's Wild West

If social companies are going to take responsibility for what happens on their sites, they need to offer more meaningful resources and simpler tools to help combat ongoing abuse, according to Ellis.

"It really is like the Wild West frontier because anything can happen," Ellis says. "[Abusers] think that Twitter is not watching, Google isn't watching and Reddit's not watching"

Katie Ray-Jones, CEO of the National Domestic Violence Hotline, says online abuse is particularly widespread among young people. While some social companies do take steps to combat bullying, hate speech and other abuse, it's a constant challenge to stay ahead of the people who want to use their technologies to attack others, she says.

"Until there are clearer definitions and laws around cybercrimes, there will be little recourse for victims," Ray-Jones says. "There is a lot of onus on victims to report and block for each instance of abuse." Social media companies could help victims by making it as simple as possible to report abuse and block violators.


Ray-Jones says social companies could also push to pass laws that support and protect victims. "Working closely with legislators and law enforcement, helping to educate them on this issue, and expressing the seriousness and scope of online harassment and cyberstalking could go a long way toward helping create a more effective response on multiple levels."

Stomp Out Bullying's Ellis says she still hears from children who received no response or appropriate action after reporting incidents of abuse on social networks. "As wonderful as these sites are, they have to protect their audiences."