This year’s general election brings up new challenges, as lines between hateful speech and political rhetoric are blurred, especially with the proliferation of deep fakes. (Photo by Guillaume de Germain via Unsplash)
Is Social Media Fomenting Hate?
By Sunita Sohrabji
San Francisco, CA
Social media companies say they are actively fighting to ban content that promotes hate on their platforms, amid the rise of inflammatory speech in public discourse.
Giants of the industry have increasingly come under fire for their lack of oversight of content uploaded by users. Three weeks ago, the Senate Judiciary Committee held a hearing with the CEOs of Meta, Google, TikTok, and X — formerly known as Twitter — over concerns about content that could be harmful to children and teens.
Artificial Intelligence
Of particular concern in a deeply divisive election year is the proliferation of “deep fakes,” artificial intelligence-created content used to misrepresent a candidate’s position. President Joe Biden is often the target of deep fakes. In one recent incident, his voice was used to generate robocalls instructing New Hampshire Democrats not to vote in the presidential primary. In another, he purportedly verbally attacked transgender women.
Last year, Biden issued an executive order which — in part — tasks the Commerce Department with developing standards to clearly watermark and label AI-generated content. On February 16, 20 companies — including Meta, Google, TikTok, Microsoft, OpenAI, and Amazon — signed an agreement stating they would monitor their content to protect against deep fakes. The agreement, which was unveiled at the Munich Security Conference, stopped short of calling for a ban on artificially derived content.
Hate Crimes Summit
At a recent hate crimes summit here, spearheaded by the Justice Department, senior counsels for Google and Meta said their employees were actively monitoring all content posted to their platforms. “In 2023, we were able to catch 93 million pieces of hate content that we took down before anyone saw it,” said Cynthia Deitle, Director, Associate General Counsel of the Civil Rights team at Meta, the parent company of Facebook, Instagram, and WhatsApp. “We have hundreds of employees all over the world flagging offensive content,” she said, noting that Meta is the only prominent social media company with a civil rights division.
Deitle, a former FBI agent, said she is working with law enforcement to ensure there is no implicit bias in the content they upload to their websites. “Are you putting up too many photos of Black suspects?” she asked the room packed with law enforcement officials, including local police chiefs, district attorneys, and FBI agents.
YouTube
More than 500 hours of content are uploaded every minute to YouTube, said Michael Maffei, senior security counsel at Google, which owns YouTube. “It is impossible to identify all bad content before it is viewed,” he said. YouTube has a strictly enforced policy of not allowing hate speech on its platform. It prohibits hate-driven content that attacks individuals on the basis of race, gender identity, nationality, ethnicity, age, disability, and other protected categories.
“We will remove hateful content targeting specific groups,” said Maffei, noting that a similar policy exists for violent or extremist content. Offensive content uploaded by users from outside the US is sent to Interpol for further investigation.
Paul Cheung, chief executive officer at the Center for Public Integrity, questioned whether companies were truly invested in rooting out hate speech on their platforms. “Hateful content gets the clicks. It gets monetized,” he said.
EMS’ Stop The Hate initiative is made possible with funding from the California State Library (CSL) in partnership with the California Commission on Asian and Pacific Islander American Affairs (CAPIAA). The views expressed on this website and other materials produced by EMS do not necessarily reflect the official policies of the CSL, CAPIAA or the California government.