TikTok is rushing to build teams around the world to moderate its streams of viral videos, amid growing concerns that its young users are being exposed to the same sort of toxic content that has plagued YouTube and Facebook. The Chinese-owned app, which has amassed more than 1bn monthly users in just three years, will also devolve all decision-making about which videos are acceptable to its local teams in the US, Europe and India, after running into criticism over whether it censors content.
In the US, where TikTok has topped the download charts at times this year, responsibility for content has been fully localised, and there are plans to add more “subject matter experts” in 2020, said Eric Han, its US head of safety. But the company lags well behind its Silicon Valley rivals, which have had more time to grapple with how to protect young users from disturbing and illegal posts.

Working with third-party analysts, the Financial Times found evidence of violence, hate speech, bullying and sexually explicit content on TikTok, in some cases as part of trending topics with millions of posts. Several current and former ByteDance employees told the FT that a lack of experience among its policy teams in particular leaves them ill-equipped to deal with the app’s moderation problems.
“It can be quite a dangerous place for [young people],” said Darren Davidson, editor in chief at Storyful, a social media intelligence agency. “Broadly, we are seeing what we saw on the traditional platforms four or five years ago.”