TikTok Allegedly Directs Children's Profiles to Explicit Material In Just a Few Taps
According to a new study, the widely used social media app directs children's accounts to explicit material after only a few taps.
How the Study Was Conducted
Global Witness created fake accounts using a date of birth for a minor and enabled the platform's content restriction feature, which is designed to limit exposure to inappropriate content.
Investigators observed that TikTok recommended inappropriate and adult-themed search terms to seven test accounts that were set up on new devices with no search history.
Concerning Search Suggestions
Keywords recommended under the "you may like" feature contained "extremely revealing clothing" and "very rude babes" – and then progressed to keywords such as "hardcore pawn [sic] clips".
For three of the accounts, the adult-oriented recommendations appeared immediately.
Quick Path to Pornography
After a "small number of clicks", the study team found pornographic content, from partial exposure to explicit intercourse.
Global Witness reported that those posting the content sought to evade detection, usually by embedding it within an innocuous picture or video.
For one account, reaching this material took two interactions after logging on: one tap on the search feature and another on the recommended term.
Compliance Requirements
The research entity, whose mandate includes investigating big tech's impact on human rights, reported performing two batches of tests.
The first batch was conducted before child protection rules under Britain's online safety legislation took effect on July 25th; the second after the regulations came into force.
Serious Findings
Researchers noted that two videos featured someone who appeared to be underage; these were reported to the child protection organization that tracks harmful material involving minors.
The research organization alleged that TikTok was in breach of the Online Safety Act, which obligates social media firms to prevent children from encountering harmful content such as adult material.
Regulator's Position
A spokesperson for Britain's media watchdog, which is tasked with overseeing the legislation, said: "We appreciate the research behind this study and will examine its conclusions."
The regulator's guidance on complying with the act states that digital platforms posing a significant risk of displaying harmful material must "configure their algorithms to block inappropriate videos from children's feeds".
TikTok's own rules prohibit adult content.
TikTok's Statement
The social media company stated that, after being notified by the organization, it had removed the problematic material and made changes to its recommendation system.
"Upon learning of these assertions, we took immediate action to investigate them, take down videos that breached our guidelines, and implement enhancements to our search prompt functionality," stated a spokesperson.