Anti-LGBT hate groups are using social media to spread their message, and platforms such as Facebook and Twitter do little to rein them in — by design.

The American Family Association, Coral Ridge Ministries, Concerned Women for America and the Family Research Council, for example, all have active Facebook pages. The Southern Poverty Law Center, an organization dedicated to monitoring extremist activity, classifies them all as hate groups.

What makes them hate groups?
Mark Potok is an editor and senior fellow at the Southern Poverty Law Center.

The Southern Poverty Law Center has specific criteria for what constitutes a hate group. “First of all, it needs to actually be a group, not just a website,” said Mark Potok, editor and senior fellow at SPLC. “It can’t be an individual. We don’t base anything on criminality, not on violence, but strictly on the ideology of the group, so if the group says an entire group of humans are somehow less important simply based on their characteristics – that’s it.”

Potok said the process of designating anti-LGBT hate groups is a bit more refined. “We’re not going to base [it] on someone’s opposition to gay marriage or religious beliefs,” he said. “But we will base our designation on regularly putting out defamatory falsehoods about the LGBT community.”

Potok said SPLC has created an explainer, “10 Myths About Gay Men and Lesbians,” to help educate the public on the truth behind LGBT issues. The article discusses myths such as “Same-sex partners harm children” and “Homosexuals don’t live nearly as long as heterosexuals” by presenting the argument and debunking it with facts. SPLC prints this and other explainers as hard copy “as a way to arm activists and allies in battle,” Potok said.

Facebook, though, isn’t about to remove their pages. And Parry Aftab, executive director of WiredSafety, can explain why. WiredSafety is the oldest and largest online safety, education and help nonprofit organization. Aftab sits on Facebook’s Safety Advisory Board, which advises Facebook employees in formulating their policies.

The face behind Facebook’s abuse policy

Facebook has a huge department dedicated to addressing reported abuse, said Aftab: “They’re all over the world, they’re people who work 24/7/365, they’re specially trained, they have background checks, they have very good oversight, and they’re located in most major time zones, so this is a big, big operation Facebook has.”

Parry Aftab is executive director of WiredSafety and a member of Facebook’s Safety Advisory Board.

Facebook’s Community Standards policy specifies that the company will remove content, disable accounts and work with law enforcement when it believes there is a genuine risk of physical harm or direct threats to public safety.

“When creating policy, it’s easy to look at it in the sense that ‘it’s offensive, it should be taken down,’” Aftab said. “But we need to look at this on a global spectrum. We don’t want to let governments around the world censor [their] citizens by reporting unwanted content on Facebook, but we also want to protect individuals who may be getting harassed. It’s very complicated.”

Aftab said millions of posts do get taken down; if something violates the terms of use and is reported, Facebook is pretty good at removing it.

“So much of what violates the terms [is] in context, and it’s very hard to deal with that, but when you’re dealing with a billion users, it’s pretty hard to get things right for every person in every place, and you know that’s the challenge,” Aftab said.

Simply being hateful isn’t enough

Posts targeting a specific individual could be taken down, but Facebook does not assess the credibility of an article, Aftab said. Under the terms of use, a post being anti-LGBT, even one from a hate group, even one with inaccurate information, is not by itself cause for removal.

“It’s terrible, but probably will not violate the terms of service,” Aftab said. “What happens is, that requires a lot more care to determine whether it is credible research or not, whatever is being posted. And that’s not a thing that a moderator can look at and say, ‘A-ha, this is clearly stupid and shouldn’t be up.’ That’s not what they do.”

For example, Aftab said, users “can say 2+2=7 and Facebook doesn’t care, and there may be things about a fake Holocaust and a lot of other issues that may or may not violate their terms of service at that time.”

“So it’s challenging and I know a lot of people would love to have Facebook step in and arbitrate, and say, ‘You know, this is ridiculous, take it down,’ but that’s not their role,” Aftab said.

She said a direct threat or plan of attack against a specific individual qualifies as a personal attack that Facebook can act on.

“But if it’s just against one particular class, a general minority, and doesn’t try to get people to attack some person, that is just information, and it’s awful and it’s hateful and we want the world to not have that in it, but Facebook’s role is not to arbitrate that,” Aftab said.

Different tiers of customer service support operate within Facebook, Aftab said. “If there’s pornography that’s taken down, that’s something cut and dried that can be removed, but if there’s child pornography, that is handled in a very special way and they have very sophisticated technology to deal with that and they work with law enforcement.”

“Then if it gets more difficult and more serious, it moves up in a triage system that moves it to escalation teams that look at it to make some of the hard decisions, and people who are expert in the legal content versus the inappropriate content, etcetera,” said Aftab.

She said Facebook changes its Terms of Use only when absolutely necessary.

Twitter’s approach to faulty information

Twitter’s Terms of Service also try to balance diverse expression with a safe environment for every user. In a section addressing offensive content, Twitter states it will take action only on:

• Violent threats
• Abuse and harassment
• Self-harm
• Private information
• Impersonation

Twitter reserves the right to remove or refuse to distribute any content, but is not obligated to do so.

So the hate groups are going to stay on social media platforms.

Where does this leave journalists?

Kelly McBride, a media ethicist at the Poynter Institute, said professional journalists aren’t getting taken in by the hate groups’ social pages and their links to illegitimate research. Still, that doesn’t mean the misinformation doesn’t circulate.

“I have seen a lot of bad information spread on social media, but that is coming more from the general public rather than credible journalists,” said McBride.

Attendees at the NLGJA Convention can tune up their source-evaluation skills by attending Saturday’s session on fact checking, “Bulletproof Your Story.”

Bulletproof your story

Professionals in any field can always benefit from refreshing and updating their skills, and we journalists are no exception. There’s always room to tune up source-evaluation and fact-checking skills around LGBT and other issues.

Andrew Seaman of Reuters, Shane Allen of PersonalTrainerfood.com and Bob Connelly of National Geographic will show attendees not only how journalists can prevent major errors, but also how to respond when an error has been made, and what tools are available. Join other convention attendees in Elizabethan B from 2:45 to 4 p.m. Saturday to look into examples such as Rolling Stone’s much-criticized coverage of rape allegations at the University of Virginia.