Children are targeted with graphic online content sometimes within hours of setting up social media accounts, a report has revealed.
The researchers created avatars based on information from real teens aged 13 to 17, including who they follow and which posts they like.
But despite their stated ages, it did not take long for the fake profiles to receive an array of inappropriate content.
“We saw a lot of very graphic self-harm images, images of razors, of cuts,” said Abi Perry, a 24-year-old researcher at Revealing Reality, who did the work.
“They got to see content that promoted diets and saw a lot of very sexualized images.
“We were able to search for porn, for example, and click on content that showed explicit images.”
Many avatars were contacted by unknown adults within hours of signing up.
In one day, the fake profile of “Justin”, 14, received three separate direct messages pointing to sites offering paid pornography.
The experiment was commissioned by the 5Rights Foundation child safety group and the Children’s Commissioner for England.
They are calling for rules on the design of online services.
Tony Stower, director of external engagement at the foundation, said: “In the offline world, we put a whole range of protections in place for children, so that they cannot get into R18 movies. Of course, we don’t give them access to pornography, knives, and alcohol.
“But in the online world, these services are designed specifically to enable that.
“What we are asking is that these services put the same kind of protections we have in the offline world in place in these digital services, so that children are protected from the moment they go online.”
The research also found that platforms target children with ads specific to their age, such as information about college courses, while at the same time making sexual or self-harm content available to them.
And “a kid who clicks on a diet tip, by …
Published: 2021-07-19 21:54:00