Symposium | Democracy and Technology: Allies or Enemies?

Tech’s Danger to Teens and Children

By Stephen Balkam

When Frances Haugen revealed herself in 2021 to be the Facebook whistleblower who had disclosed thousands of internal company documents to The Wall Street Journal and the Securities and Exchange Commission, the long-simmering tech backlash boiled over. Her testimony on Capitol Hill, combined with a dramatic appearance on “60 Minutes,” brought social media’s effect on teens, and in particular young girls, into sharp relief. Here was an insider able to talk about internal research conducted by her previous employer on the deleterious effects that apps like Instagram have on young people’s mental health.

Congress was keen to give Haugen a platform, and her disclosures further fueled a growing interest in protecting young people from online harms. During the last Congress, lawmakers advanced several bills aimed at the harms she described, including the Kids Online Safety Act, the Children and Teens’ Online Privacy Protection Act (often called COPPA 2.0), and the Children and Media Research Advancement Act.

At the state level, the California legislature adopted the Age-Appropriate Design Code Act. The legislation, modeled after a UK regulation of the same name, will compel online platforms to proactively assess children’s privacy and safety in the design of the digital products and services they offer. The law takes effect in July 2024.

Last October, the EU passed the Digital Services Act (DSA), which among other things establishes new rules governing how platforms monitor and manage harmful online content such as disinformation and hate speech, as well as illegal content, including child sexual abuse material. And it is widely expected that the British Parliament will finally pass the Online Safety Bill, which will require platforms to undertake a “duty of care” toward their users, particularly children, and to introduce systems that allow users to better filter out harmful content they don’t want to see.

In the meantime, back in the United States, a Texas state lawmaker has introduced draconian legislation that would effectively ban teens from being on social media altogether. This would be enforced through age verification using photo identification as well as by giving parents a way to report that their kids are on the banned apps.

On the regulatory and enforcement front, Epic Games, maker of the wildly popular game “Fortnite,” recently agreed to pay a record $275 million penalty to the Federal Trade Commission (FTC) for violating the rule implementing the Children’s Online Privacy Protection Act (COPPA), which remains the only federal children’s online privacy law ever enacted. The FTC has clearly signaled, through a more aggressive enforcement approach, that it intends to hold tech firms accountable for ignoring COPPA’s requirements.

So what does the growing tech policy agenda, both here and abroad, mean for kids and teens? How do existing and proposed laws protect minors, and what more needs to be done? And how do we balance keeping kids safe with recognizing their own rights to access and create content as well as to expect certain levels of privacy?

It is worth beginning with the unique pressures that minors are under. To a large degree, young people don’t distinguish between being online and offline; their lives are spent in a near-continuous state of connectedness. Not only do they multitask across various apps, but most have multiple screens open and available to them at once. It’s common for a teen to do homework on her laptop while watching TV and keeping in touch with friends on her phone, or to play a video game with remote “friends” while trash-talking on Twitch. Life is lived through seamless transitions from one device to another and one platform to another, all the while coexisting “in real life.”

With that hyper-connectedness come considerable benefits—along with some serious concerns. On the plus side, this generation of kids and teens has unparalleled access to information, entertainment, and one another in ways unthinkable just two decades ago. They also have unprecedented means to record and post any conceivable idea, action, or movement they can think up. We wouldn’t know about Greta Thunberg and her climate-change advocacy without Twitter, Instagram, and YouTube. Kids creating content for kids and distributing it around the world has become mundane, though in earlier generations it would have been considered miraculous.

The recent experience of the pandemic and the early lockdowns demonstrated the vital role that digital technology plays in our children’s lives, from remote schooling to Zoom calls with grandparents. Since the turn of the century, broadband internet has come to be seen as nearly as necessary as food, water, and shelter. So essential do we now consider a connection, for young and old alike, that massive government projects are underway to reach those still offline. Access to the web is vital to fully function and participate in an advanced society. Schools increasingly rely on kids having internet access for classes, homework, projects, and even organizing sporting events. A disconnected kid is at a serious disadvantage on many levels.

However, even with all these benefits, being connected brings with it some serious risks and demonstrable harms. To take just one example, schoolyard bullying used to be restricted to a certain time and place. Now, a child can be hounded at home, at night, on the weekend, and even on vacation through social media, texts, and online games. Whereas a physical bully can be identified and stopped, it can be much more difficult to find out who is behind the hateful taunts in an Instagram post or a TikTok comment. All the leading social media platforms have created blocking and reporting mechanisms for teens to use, but many young people remain unaware of these tools or too afraid of retaliation from their peers to use them.

Overuse, or what some term “internet addiction,” is another big concern of parents and lawmakers. A recent study found that teens are consuming nearly nine hours of content on screens a day—and that doesn’t include time for school or homework. (The study counted each screen separately, meaning an hour of using two screens would be considered two hours total.) The rise in screen time may well be a product of the isolation so many young people felt during the pandemic, as well as the need to be online for classes and school projects. Whatever the cause, problematic use of devices remains a top concern. In the meantime, it behooves adults to put down their own phones and model the kind of behavior they’d like to see in their kids.

At the center of Frances Haugen’s revelations was the negative impact that apps like Instagram have on teen girls’ body image and self-esteem. She described a rapid progression from posts about healthy eating to ones promoting anorexia, all fed by the algorithm to increase engagement and keep kids on the app. While Instagram does not directly cause kids to starve themselves, the constant stream of images and exhortations can contribute to a susceptible young person’s willingness to do just that. According to Haugen, Facebook was aware of this and did not do enough to change its lucrative business practices.

Yet another problem is the vexed issue of mis- and disinformation. Teens are vulnerable to messages that seek to convince them of untruths, whether they are YouTube videos “proving” that the world is flat, Facebook posts stating that COVID is a hoax, or tweets claiming the last election was stolen. While politicians of every hue agree that something must be done to curb disinformation, there is little to no agreement on which legislative remedies should be used to deal with it.

Given that minors have mental and emotional developmental needs that differ from those of adults, lawmakers and tech companies alike have tried to create special privacy and safety measures for them. COPPA was enacted in 1998 to ensure that the personal information of kids under 13 was not collected online without a parent’s verifiable permission. Baked into the law is the assumption that children are more impressionable and have less impulse control than teens or grown-ups. Tech companies have adapted their services to create more teen-friendly privacy and safety controls that default young users to more restrictive and safer settings. And in recent years we have seen a growing number of online safety tools, as distinct from parental controls, that teens and young people can use themselves to block abusers, report harmful content, or keep their accounts private.

In the field of digital and media literacy, there have been significant developments in the way children are taught to be more discerning as they consume content and thoughtful about what they post. Although grossly underfunded, media literacy programs have had some modest success in schools and libraries in teaching critical media skills and promoting digital civility. Far more needs to be invested in this space if we are to arm our kids with the knowledge and the means to filter out the worst of the web while contributing to healthy and productive online dialogues.

While the impulse to protect our kids from the harms of the internet is laudable and understandable, the issue of what rights children should have in this space is highly contentious. There are those, lawmakers included, who feel that minors have no privacy rights and should be protected through a combination of laws and technology tools that give parents total oversight of their children. This line of thinking presupposes that a 17-and-a-half-year-old has the same right to privacy as a newborn child—i.e., none—and that only upon reaching the age of majority does a young person obtain the rights of an adult.

There is another school of thought, particularly in Europe, which holds that while an infant does not enjoy any inherent privacy rights, children should be afforded gradually increasing autonomy as they grow. By the time they reach high school age, this thinking goes, kids should enjoy more freedom from their parents and caregivers, particularly if they are exploring religious beliefs, sexual orientation, or other potentially controversial topics. The children’s rights movement, while well established in Nordic countries, is relatively new in the United States, where parental rights reign supreme. We need a robust debate about young people’s agency and their rights to explore and engage online, one not dominated by fear-based rhetoric or overbearing legislation.

It is certainly not easy to balance the competing needs of online safety and privacy, even for the adult population. Intractable issues such as the role of government, the need for regulation, and the tech industry’s own efforts all come into play. When kids and young people are involved, other contentious questions—such as different styles of parenting, from authoritarian to permissive—make the balance even more difficult to strike.

Efforts to promote “safety by design” are showing signs of promise. Child development experts are being brought in at an early stage in the design of apps, platforms, and services, rather than being asked to retrofit a product for safety after it has shipped. The UK’s Age Appropriate Design Code now requires companies to prioritize the best interests of the child when designing new products, including by minimizing data collection, maintaining the strongest privacy safeguards, and making clear online safety tools available.

Unfortunately, in the United States, absent a national privacy framework, individual states are creating their own regulations. The result is a patchwork of laws and regimes that will make life much harder for companies large and small, leading to less innovation and fewer rich experiences for kids. Even as the prospect of a federal bill recedes into the background, it cannot be stated enough that a national privacy framework is a prerequisite for the more specific online safety laws that might follow.

One encouraging development was the passage last December of the Children and Media Research Advancement Act, which provides multiple years of funding for the National Institutes of Health to research the impact of digital media on children and young people. This is the kind of longitudinal academic research that my organization has long championed, and its results can be used to create enlightened public policy based on factual evidence rather than the emotional clamor of a New York Post headline. The research may well validate what Frances Haugen revealed, or it may point to other areas of concern that have yet to be addressed.

Whatever the research finds, we will be in a much better position to craft laws, create new tech tools, and develop media literacy curricula to keep kids safe while empowering them to take control of their online lives.

Stephen Balkam is the Founder and CEO of the Family Online Safety Institute, an international nonprofit organization whose mission is to make the online world safer for kids and their families. Balkam has worked in the online safety space since the mid-1990s and appears regularly in the media on issues related to public policy and good digital parenting.
