Relax. An army of content reviewers some 10,000 strong is busy weeding out malicious posts and fake news from your favorite social media. Soon, it will double in size.
Feel better? You shouldn't. The social media platforms nearly all of us use teem with "bots," artificial intelligence modules that impersonate human beings and tirelessly fill our heads with propaganda. Facebook admits its pages may harbor as many as 270 million fake accounts. That's more than all the people in Brazil. Or Indonesia. Or Pakistan. It's more than the adult population of the United States.
Twitter, too, is rife with fakes. The short-message platform has proven ideal for pummeling targets with crude, virulent takedowns and disinformation.
"We're only beginning to grapple with this," says the University of California-Berkeley's Ming Hsu, who leads a team working on the challenge. "There's a whole world out there that we don't understand well." We'll return to Hsu later for some ideas on how to respond.
Trolls (online purveyors of hateful content) can vastly fortify their bile using bots. The comedian Leslie Jones closed her Twitter account in July 2016 after being barraged by vile tweets. The messages, which compared her to an ape, falsely imputed homophobic quotes to her, and made sexual threats, appeared to come from aggrieved fans of the film Ghostbusters who objected to a remake with a female cast that included Jones.
The dust-up foreshadowed a Twitter tornado during last fall's presidential election. According to the cybersecurity firm FireEye, tens of thousands of Russian Twitter bots worked around the clock to push hashtags like #WarAgainstDemocrats and #DownHillary into Twitter's trending zone. But there's worse. Much worse.
Recent Congressional hearings revealed that Russian operatives were able to foment street clashes in Texas by using fake activist sites to steer American Muslims and right-wing anti-Muslims to the same spot at the same time for--you got it--fake rallies.
Social media offer cheap and easy ways to stir mischief. Investigations led by ProPublica have shown that automated advertising tools on Facebook, Twitter, and Google can be used to target and inflame bigots, or to target or exclude whole races and ethnic groups. Starting with Facebook, ProPublica tested whether the social media giant would let it target "Jew haters." The ad buy was approved in fifteen minutes. A similar test at Google found the search system offered up additional targeting suggestions such as "black people ruin neighborhoods" and "Jewish parasites." The companies all proclaim that such targeting violates their standards, but the artificial intelligence (AI) they deploy remains mostly blind to invidious content.
Among the ads that have since been traced back to Russian origins were many that used vicious stereotypes and scare tactics: they mocked gays, smeared immigrants, invoked the devil, and portrayed Hillary Clinton as in league with Muslim terrorists.
"The social media ads, posts and pages that have been revealed to come from Russian agencies or operatives in 2016 used explicitly anti-Black, anti-Muslim and anti-immigrant stereotypes to undermine the American electoral system, suppress voter turnout and fan the flames of racist hatred and violence," Malkia Cyril, executive director of the Center for Media Justice, said in a statement to The...