While Facebook has routinely maintained that its artificial intelligence (AI) can police problems like violent content and hate speech on its platform, a new report suggests the tech isn’t that advanced.
The Wall Street Journal (WSJ) reported Sunday (Oct. 17) on a review of internal Facebook documents that show that the social media giant’s AI can’t consistently identify subject matter like racist rants and first-person videos of shootings.
The documents also show Facebook employees have estimated the platform removes just a fraction of the posts that violate its rules, according to the report. When the algorithm cannot determine whether content breaks Facebook's rules, it shows the material less often but does not delete the post or sanction the account that posted it.
The documents also show that two years ago, Facebook cut the time human reviewers spent handling hate speech complaints and made other changes that reduced the number of complaints. These moves made the company more dependent on AI and inflated the technology's apparent success, at least in the public's view, the report stated.
The documents show that the Facebook employees charged with keeping offensive or dangerous content off the platform said the company has a long way to go before it can effectively screen for such content, according to the report.
“The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas,” wrote a senior engineer and research scientist in mid-2019, per the report.
The engineer estimated that the AI removed posts accounting for just 2% of the views of hate speech that violated Facebook's rules, the report stated.
“Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10% to 20% in the short-medium term,” he wrote, according to the report.
Last year, Facebook drew considerable unwanted attention over its handling of news about the pandemic and social unrest across the U.S. The company also faced advertiser boycotts over what critics called a lackluster approach to removing hate speech.