Making the Internet a safer place for children is not easy

The Kids Online Safety Act, a bill that would require any online platform or service used by children under the age of 16 to make changes limiting the potential harms minors face on the Internet, was introduced in the US Senate on February 16. It is just one of several bills on the subject under discussion at the federal level in the United States: the demand for greater protection of minors online is one of the rare issues on which Democrats and Republicans agree.

The topic of child protection online is almost as old as the internet itself, but it attracted renewed political attention following the revelations of former Facebook employee Frances Haugen in October 2021. According to Haugen, the confidential documents she made public showed, among other things, that Mark Zuckerberg's company was aware of the negative effects of its platforms – especially Instagram – on the mental health and self-image of young people.

Meta, the company that owns Facebook, dismissed the charge, arguing that Haugen had taken its internal research out of context and that its platforms also have very positive effects on teenagers. Several experts, for their part, point out that we do not have enough data to really understand the effects of contemporary technologies on young people, for better or for worse.

“The problem is that researching the harms caused by the web is difficult; it is hard to identify causality and to obtain good metrics,” Professor Nejra Van Zalk of Imperial College London recently explained. “Online platforms change rapidly, which makes research quickly obsolete. It is also difficult to define what the specific harms are: the internet is a very big place, and talking about the harms caused by the web as a single phenomenon means lumping together disparate things like eating disorders, radicalization and filter bubbles.”

Nonetheless, the documents published by Haugen and the media attention that followed prompted the US Congress to address the issue, first by questioning the head of Instagram, Adam Mosseri, and then with three different bills introduced by lawmakers from both the Republican party and the Democratic party, which currently controls both chambers.

– Read also: The effects of Instagram on the very young

The fact that the specific need to protect minors online is a priority for both major American parties is not at all obvious: as Cecilia Kang wrote in the New York Times, normally “very bright dividing lines appear when it comes to writing rules defining the amount of data that can be collected by platforms, whether consumers can sue sites for defamation or whether regulators can slow down the dominance of Amazon, Apple, Google and Facebook.”

On the other hand, a series of more visceral factors converge in the concern for the safety of minors online. There is the moral panic sown by the QAnon conspiracy theory, quite widespread in the United States, which has made the fight against child pornography and human trafficking a top priority for many voters (the entirely invented accusation that supposed progressive elites abuse children is among the foundations of the theory). Then there is the suspicion adults feel towards digital spaces that they do not frequent and do not understand, but on which their children spend a great deal of time. According to Professor Laurence Steinberg, “blaming Facebook for a teenager's malaise can become a convenient way to avoid other more inconvenient but equally plausible explanations, such as family dysfunction, substance abuse and school stress.”

And then there is trust in tech companies, almost completely eroded by years of scandals and errors, some specifically linked to the subject of these laws. In 2017, for example, it emerged that YouTube Kids was failing to promptly identify videos that were deeply disturbing for children, showing, for instance, Mickey Mouse lying dead in a pool of blood or puppies catching fire after a car accident. In 2019, a bug in the children's version of Facebook Messenger allowed children to join group chats with strangers.

Since then – pressured by growing public scrutiny and by laws like the UK's Age Appropriate Design Code, which requires digital services with underage users to comply with minimum privacy standards – companies have acted. TikTok (the most used app among 10- to 19-year-olds), Instagram, YouTube and Google have all introduced new features and guidelines to make young people's presence on their platforms safer, minimize the circulation of child pornography and limit problematic content. The new bills, however, call for more to be done.

The Kids Online Safety Act was introduced by Tennessee Republican Senator Marsha Blackburn and Connecticut Democratic Senator Richard Blumenthal. The bill would impose new obligations on companies that offer online services and have users under the age of 16. Platforms would be required to prevent the promotion of harmful behaviors – such as suicide, self-harm, substance abuse and eating disorders – but also to allow minors and their parents to “control their experience and personal data,” by letting them opt out of algorithm-driven content recommendations, reduce the data that can be collected about them, or even limit the time they spend on the platform.

The companies concerned would also have to publish an annual report on the potential risks their products pose to minors and make the relevant data more accessible to outside researchers. The bill also asks the National Telecommunications and Information Administration, an agency that advises the President of the United States on telecommunications, to study how platforms can best verify the age of their users.

The bill is awaiting discussion in the Senate, and it is not the only one. Last August, Florida Democratic Representative Kathy Castor introduced the Protecting the Information of our Vulnerable Children and Youth Act, which would update a 1998 law – the Children's Online Privacy Protection Act (COPPA), which protects the online privacy of minors under 13 but has rarely been enforced over the past 20 years – to extend it to all teens up to age 18.

– Read also: We should better study the effects of social networks on collective behavior

If it passed, companies would have to obtain much more informed consent from young people before collecting their data, and could no longer use that data for targeted advertising. Despite the various legal subterfuges tech companies have used to escape the consequences of COPPA, the law has been applied in some striking cases over the years, such as in 2019, when the Federal Trade Commission fined YouTube 170 million dollars, leading the site to change its practices around children's videos.

In September, Castor – together with two other Democrats, Senators Ed Markey and Richard Blumenthal – introduced the Kids Internet Design and Safety (KIDS) Act, which seeks to ban the autoplay of videos on sites and apps aimed at children and adolescents, push notifications to the profiles of the youngest users, and the amplification of content related to sex, violence, gambling and other material intended for adults.

In recent weeks, the EARN IT Act has also been revived: another bipartisan bill, first introduced in 2020, that would amend Section 230 of the Communications Act of 1934 – which, among other things, shields website operators from legal liability for what their users post – to force technology platforms to act more proactively against material depicting child abuse.

The EARN IT Act would strip these federal protections from tech companies that “knowingly allow” their users to share child pornography on their services. According to most cybersecurity experts, however, the bill's current wording would discourage companies from using end-to-end encryption, which protects personal data and communications like no other encryption system.

Communications protected by end-to-end encryption cannot be viewed even by the companies that host them or by governments that ask to inspect them, and they allow everyone (including criminals) to communicate securely: for this reason, several governments have opposed the technology for years. In January, for example, the UK spent over 600,000 euros on a campaign to push Facebook not to adopt end-to-end encryption in its messaging service, claiming that otherwise “14 million reports of alleged sexual abuse of minors online could be lost every year.”
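To give a rough idea of why this is so: in an end-to-end encrypted exchange, each participant holds a private key that never leaves their device, so the server relaying the messages only ever handles ciphertext it cannot read. The following is a minimal sketch using the open-source PyNaCl library; the names and message are purely illustrative, and real messaging apps layer far more elaborate protocols on top of primitives like these:

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; the private half never leaves their device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts using her private key and Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"see you at noon")

    # The relay server only ever sees this ciphertext; without one of the
    # private keys, neither the platform nor anyone else can read it.

    # Bob decrypts with his private key and Alice's public key.
    receiving_box = Box(bob_key, alice_key.public_key)
    assert receiving_box.decrypt(ciphertext) == b"see you at noon"

Because the platform holds neither private key, it cannot comply with a request to hand over the plaintext of such messages even if it wanted to, which is precisely what makes the technology contentious for governments.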

When asked about it, Senator Blumenthal has long maintained that the EARN IT Act “has nothing to do with encryption” and that “big tech is using it as a subterfuge to oppose the bill.” However, he recently declined to explicitly exclude the use of end-to-end encryption from the grounds on which a state could sue a company under the law.

– Read also: The US Congress against the “amplification” of social media

When it comes to protecting young people online, end-to-end encryption is just one issue on which the recommendations of digital rights and security experts clash with the priorities of governments, which are increasingly disinclined to believe in platforms' capacity to self-regulate and eager to show they are doing something to address public concerns. Some point out, for example, that the more detailed and accurate age verification required by some of these bills – including the Kids Online Safety Act – represents a massive intrusion into privacy and opens the road to even greater surveillance than exists today.

In some cases, the fear is that behind the apparent good intentions lie more worrying ulterior motives. This is the case with Russian President Vladimir Putin, who for years has threatened to introduce new laws forcing platforms to remove unwanted posts, and who in January ordered his administration to begin work on a new “register of toxic online content to protect minors.” The same goes for India, which has long pressured Western tech companies to remove content critical of the government and now wants to promote “new international standards for social networks” focused on child safety and content moderation.

In other cases, experts have doubts about the very wording of the laws, which, by making platforms responsible for recognizing and removing dangerous content while defining it only vaguely, could lead to excessive censorship of legal content online. This criticism was recently raised by a committee of the British House of Commons regarding the Online Safety Bill, introduced in May 2021 and still under discussion, which would force platforms to limit content that is “legal but harmful.”

“There are obviously threats to the safety of children, both online and offline. However, the conception of these threats is deeply situated in a historical, racialized and gendered panic over the perception of childhood innocence and safety,” says Professor Jacqueline Ryan Vickery, author of Worried About the Wrong Things: Youth, Risk, and Opportunity in the Digital World.

“Conversations about online safety are often separated from conversations about offline safety, when in fact the two are closely related. Children who are more vulnerable offline tend to face the most threats online too,” Vickery continues.

“It's easier, politically, to talk about children's online exposure to violence, predators, sexual material, self-harm and so on, framing these threats in public discourse as problems of the internet, rather than having broader, holistic discussions about the spaces and places where children are most likely to suffer the greatest harm. But the data shows that adults they trust, in places they trust, are more likely to harm children than strangers or peers, both online and offline.”