When Exactly Does Something On the Web Cross the Line From Being a Non-Offensive Word Or Picture Into a Hate-Crime? And Who Gets To Decide?


Staycation, chillax and Brangelina are just a few popular neologisms that have found a home in America's modern-day lexicon. Perhaps the last one is on its way out, but most people know exactly what it means. And while it's essential for the English language to remain flexible and evolve, some new terms might be more trouble than they are worth.

For example, what exactly is cyber-hate? A handful of civil rights and technology groups characterize it as using technology to "spread homophobic, anti-Semitic, racist, bigoted, extremist or terrorist messages," but no commonly agreed definition exists.

Any time spent perusing posts or comment sections on some of the internet's most popular sites, let alone the dregs of the digital realm, will turn up plenty of angry, racist or insulting talk, as anyone who has spent time on Twitter or reddit hardly needs reminding.

And when exactly does something on the web cross the line from being a non-offensive word or picture into a hate-crime? Earlier this month, the Anti-Defamation League added Pepe the Frog, a cartoon frog that was created as an inside joke among college friends, to its Hate Symbols Database, which also includes the Confederate flag and the word HATE. The cartoon frog first went viral as a popular and largely innocuous meme before becoming loosely connected, for reasons that aren't entirely clear, to controversial right-wing groups and a symbol of white nationalism. But can internet jokes and pictures, which often mutate as they spread virally on the web, be representative of any one thing, hateful or otherwise?

The internet by its very nature is an open platform that encourages free expression. The First Amendment provides broad protection to offensive, repugnant and hateful expression, and political speech, however vulgar or misguided, is among the most protected forms of speech.

But there is an important distinction to be made between free speech, regardless of how profane it might be, and harassment. Last year, in Elonis v. United States, the Supreme Court ruled that an online diatribe, no matter how reprehensible, is not criminal speech unless the author intended it as a threat and understood that others would take it as such.

This means no matter how hateful or misogynistic an online rant is, the Supreme Court says it's not illegal unless it was made and intended to be viewed as a threat. While some free speech advocates applauded this ruling, Justice Clarence Thomas, the lone dissenter, wrote that it "throws everyone from appellate judges to everyday Facebook users into a state of uncertainty."

Things can get even murkier when law enforcement starts monitoring or arresting people because of the words they use online.

In cities across the country, police departments increasingly turn to computer programs to monitor and track social media to find the ostensible connections between online speech and offline crime. In Baltimore City and in surrounding counties, police departments employ a service from a private company called Geofeedia to map out people's posts from Instagram, Twitter, Facebook, YouTube, Flickr and other social media outlets to track their actions and predict potential criminal activity based on those posts. That level of scrutiny would be right at home in many Philip K. Dick novels.

In Chicago, CPD says it's mapping the online relationships among 14,000 gang members and ranking how likely those people are to be involved in a homicide, either as victims or offenders. CPD claims that social media played a major role in many of the 127 reported shootings during the first two weeks of 2016, but did not specify how.

“It’s the modern way of gang graffiti,” said CPD’s Interim Superintendent John Escalante, describing how police officers view online speech.

But why stop at gang members? Various online posts created by people attending protests, parades and holiday celebrations were monitored and stored by Baltimore police, too. In advance of large sporting and community events, police in Huntington Beach, California, used Geofeedia to scan the web for online text that included certain key words such as "fight," "riot," "gun," "bomb," "shoot," and "drink."
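To see how blunt an instrument key-word scanning is, consider a toy sketch. Geofeedia's actual implementation is proprietary, so this is purely illustrative; the posts and word list here (drawn from the words reported above) are invented for the example. Even this minimal version flags entirely innocuous speech.

```python
# Toy illustration of key-word scanning: NOT Geofeedia's actual code.
# The watch list mirrors the words reported above; the posts are invented.
KEYWORDS = {"fight", "riot", "gun", "bomb", "shoot", "drink"}

def flag_post(text: str) -> set:
    """Return the watch-list words found in a post."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return words & KEYWORDS

posts = [
    "Meeting friends to drink coffee before the parade",
    "That was a riot of color at the festival!",
    "Quiet night in, reading a book",
]
flagged = [p for p in posts if flag_post(p)]
# Two of three harmless posts trip the filter: "drink" (coffee) and
# "riot" (of color). Word matching cannot see context or intent.
```

The point is not that real systems are this crude, but that any word-triggered scan sweeps in vast amounts of protected, innocent speech along with whatever it is meant to catch.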

And it's not just local police departments who are examining your Facebook posts for future crimes. The Department of Justice recently funded an $800,000 research project at Cardiff University in Wales to create a program that scans social media to predict outbreaks of hate crime. Over the next three years, the Financial Times reports, "the new algorithm will analyze the language used in Tweets referring to events such as the US presidential election, map them to city districts and cross-reference this with reported hate crimes on the streets."
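The cross-referencing step the Financial Times describes is, at bottom, an aggregation problem: tally flagged posts per district and line them up against offline reports. A minimal sketch, with district names and counts invented for illustration (the Cardiff project's actual data and methods are not public):

```python
# Sketch of the district cross-referencing step described above.
# All districts and counts are invented; this is not the Cardiff algorithm.
from collections import Counter

# (district, post) pairs already flagged by some language model upstream
flagged_posts = [
    ("north", "..."), ("north", "..."), ("south", "..."),
]
# Offline hate-crime reports per district, from a hypothetical police feed
reported_crimes = Counter({"north": 3, "south": 1})

posts_by_district = Counter(d for d, _ in flagged_posts)

# Pair each district's online signal with its offline reports: the raw
# material for any claimed correlation between speech and street crime.
comparison = {
    d: (posts_by_district[d], reported_crimes[d])
    for d in set(posts_by_district) | set(reported_crimes)
}
```

Whether such district-level correlations actually predict anything is exactly the open question the researchers quoted below cannot yet answer.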

Peter Burnap, one of the project leaders at Cardiff, described how the program would work:  “It doesn’t always have to use derogatory words associated with racism: it could be much more nuanced…We are using natural language processing to identify cyber hate in all its forms.” (emphasis added)
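Burnap's point about nuance is worth unpacking: the simplest baseline behind text classifiers is a word-count ("bag of words") model, which sees only which words appear, never what the speaker means. The tiny training set and scoring rule below are invented for illustration; the Cardiff team's actual model is surely more sophisticated, but it faces the same underlying problem.

```python
# Minimal bag-of-words sketch, NOT the Cardiff project's model.
# Training data and the scoring rule are invented for illustration.
from collections import Counter

def train(labelled_posts):
    """Count how often each word appears in posts labelled 'hate' vs. 'ok'."""
    counts = {"hate": Counter(), "ok": Counter()}
    for text, label in labelled_posts:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Crude hate score: net count of words seen more often in hateful posts."""
    return sum(
        counts["hate"][w] - counts["ok"][w]
        for w in text.lower().split()
    )

data = [
    ("go back where you came from", "hate"),
    ("we love our diverse neighborhood", "ok"),
]
model = train(data)
# A word-count model sees tokens, not intent: sarcasm, quotation, and
# coded language (the "much more nuanced" speech Burnap mentions) either
# evade it entirely or drag innocent speech into its net.
```

Quoting a slur to condemn it scores the same as using it, and coded language with no flagged words scores zero. That gap between words and meaning is what makes "identifying cyber hate in all its forms" such a fraught promise.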

While these social media-tracking programs don't themselves criminalize online speech, in the increasingly digital world we live in, they create an incredibly dangerous slippery slope. First, it's important that police don't confuse actual online criminal or terrorist activity with the angry rantings of a Twitter user. And law enforcement has repeatedly demonstrated that it uses its expanding surveillance capabilities to harass and monitor law-abiding activists.

Second, the DOJ's emphasis on hate crimes is not a coincidence. It is a not-so-clever way to open the door to criminalizing and harassing protected free speech activities. This type of law enforcement doesn't simply chill online free speech; it plunges it into a deep freeze. Who would feel comfortable expressing themselves or sharing their location knowing that special police bots are scanning their posts for "key words" or for where they will be?

Furthermore, where is the evidence that any of this works? Lee Rowland, senior staff attorney with the ACLU’s Speech, Privacy & Technology Project, said the science behind social media monitoring by law enforcement has not been settled.

“There is absolutely no evidence that pervasive social media monitoring is effective at all,” she said. Rowland warned it can target religious and ethnic minorities disproportionately. “It floods agencies with information on innocent individuals and conduct which just makes it more difficult to identify and respond to actual threats,” she said.

We live in an age of social media where online behemoths such as Facebook and LinkedIn assiduously map our daily activities, pictures and friendships and use that knowledge to shape our preferences and behavior. But the connections between online speech and future offline crime are hard to draw with certainty and consistency, and they shouldn't be the focus of police departments.