If you are, like me, a parent of children under the age of 18, you may well have been following the recent Wall Street Journal revelations of Instagram's toxicity to teens, pre-teens and children with a growing sense of foreboding and dread. To recap: a recent investigation by the Wall Street Journal (WSJ) uncovered internal Facebook documents showing that the company knew its Instagram service was damaging to the mental health of teenage girls. "We make body image issues worse for one in three teen girls," read one slide from a 2019 research presentation posted to Facebook's internal message board. Among teens who reported suicidal ideation, 13 percent of British users and 6 percent of American users traced the desire to take their own lives to Instagram.
To be fair, we probably knew it was going to be bad, given the precedent set by other egregious social network activities uncovered over the years. But this may stand as the current high-water mark: a social media platform disclosed, by its own research no less, to be a psychological and even physical threat to children.
And yet, crickets...
It may be that our attention spans are simply in outrage gridlock, or that, collectively, the desire to see the size of Kourtney Kardashian's engagement ring somehow dissipates our anger at social media's inability to protect children. Or perhaps it is simply an overall resignation that we still can't effectively police the internet.
The latter is mostly true. There are myriad age assurance and privacy protection regulations in place: COPPA, GDPR, AVMSD and, most recently, the UK's AADC. And while they carry punitive powers, the fines levied amount to a drop in the ocean against the revenues of some of the key violators. For instance, TikTok's parent company, ByteDance, more than doubled its annual revenue to $34.3 billion in 2020, a 111% YoY increase. Against that, the $5.7 million FTC fine levied in 2019 for illegally collecting children's data is probably not going to cause much lost sleep. And while platforms espouse corporate mantras such as "don't be evil" and "build social value," these can probably be considered more bumper stickers than actual tenets of these organizations. So, if regulation can't provide the necessary guardrails, can technology?
A plethora of technologies have attempted to provide an effective form of age assurance for children, with varying degrees of success. The challenge derives from the rather specific and somewhat contradictory regulatory requirements of definitively knowing the age of children while also preserving privacy, either through zero data capture or by gaining consent from an adult. There are probabilistic data that can be used to estimate age, such as behavioral biometrics, user profiling / data inference, and facial and voice analysis. While somewhat unobtrusive, these methods are in many cases passive and therefore fail to gain user consent for data capture. There are also deterministic data that can provide accurate information on the age of an individual, such as driver's licenses, passports and credit cards, but these invariably overshare other forms of user PII, such as name, address and more. There is also the significant problem that many under-18 sub-segments, such as under-13s, simply don't have these documents.
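To make the oversharing problem concrete, here is a minimal sketch in Python. The ScannedIdDocument record is hypothetical, but it captures the asymmetry: to compute the single over/under boolean an age gate actually needs, a deterministic document check hands the verifier an entire identity record.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure of a scanned identity document. A real
# driver's license or passport exposes all of these fields at once.
@dataclass
class ScannedIdDocument:
    full_name: str
    address: str
    document_number: str
    date_of_birth: date

def is_over(doc: ScannedIdDocument, threshold_years: int, today: date) -> bool:
    """Derive the single fact the age gate needs: an over/under boolean."""
    dob = doc.date_of_birth
    had_birthday_this_year = (today.month, today.day) >= (dob.month, dob.day)
    age = today.year - dob.year - (0 if had_birthday_this_year else 1)
    return age >= threshold_years

# The privacy problem in miniature: to compute one boolean, the verifier
# has been handed a name, an address, and a document number it never needed.
doc = ScannedIdDocument("Jane Doe", "1 Example St", "D1234567", date(2010, 6, 1))
print(is_over(doc, 13, today=date(2021, 10, 1)))  # False, but PII overshared
```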
This leaves a third category: self-managed / self-attested data. Historically, this has been the laughably weak self-declaration test of clicking a button to confirm that you are old enough to enter a site. However, the technology is becoming more nuanced, with capacity testing showing some potential. One example is a test of knowledge or aptitude that would be difficult for a child to complete in an allotted time, but that an adult would find relatively straightforward, not unlike the familiar CAPTCHA tests for proving human rather than machine intelligence. Even so, these tests would still only provide an estimated age range, and could exclude low-capacity adults while letting high-capacity children through the age gate.
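Purely as an illustration, a capacity-test gate might look something like the sketch below. The questions and time limit are hypothetical, and the caveats above apply in full: the output is a coarse estimate, never a verified age.

```python
import time

# Hypothetical question bank: items an adult answers quickly but a young
# child likely cannot complete within the time limit.
QUESTIONS = [
    ("What is 17 x 6?", "102"),
    ("How many minutes are in two and a half hours?", "150"),
]
TIME_LIMIT_SECONDS = 20

def run_capacity_test() -> bool:
    """Return True if all answers are correct within the time limit."""
    start = time.monotonic()
    for prompt, expected in QUESTIONS:
        answer = input(prompt + " ").strip()
        if answer != expected:
            return False  # a wrong answer fails the gate outright
    return time.monotonic() - start <= TIME_LIMIT_SECONDS

if __name__ == "__main__":
    # Note the failure modes from the prose: a low-capacity adult may be
    # excluded, while a high-capacity child may pass.
    print("Likely adult" if run_capacity_test() else "Age range not established")
```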
So what's left? There are some emerging solutions that could be powerful in providing robust age assurance along with privacy-preserving consent, notably state-issued electronic IDs (eIDs) and mobile driver's licenses (mDLs). Europe is currently leading the way with multiple state and regional initiatives that could provide definitive proof of age while protecting other data attributes. There are also multiple self-sovereign verifiable credential initiatives that could likewise prove to be powerful age assurance technologies. The problem: none of these has reached a level of adoption that would be meaningful in determining age for mass-market services such as social media platforms.
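What makes eIDs, mDLs and verifiable credentials attractive is selective disclosure: the issuer attests to a single predicate such as "over 18," and the relying party verifies that attestation without ever seeing a birth date or name. The toy sketch below conveys the shape of that exchange; the HMAC stands in for a real digital signature, the issuer key and claim names are hypothetical, and production schemes (ISO/IEC 18013-5 mDLs, W3C Verifiable Credentials) use proper public-key cryptography.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for this sketch only; a real issuer signs with a
# private key and the verifier checks against a published public key.
ISSUER_KEY = b"state-dmv-demo-key"

def issue_age_credential(over_18: bool) -> dict:
    """The issuer attests to a single predicate, not the full identity record."""
    claim = json.dumps({"age_over_18": over_18}, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "proof": proof}

def verify_age_credential(credential: dict) -> bool:
    """The relying party learns one boolean: no name, address, or birth date."""
    expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["proof"]):
        return False  # tampered or forged credential
    return json.loads(credential["claim"]).get("age_over_18", False)

cred = issue_age_credential(over_18=True)
print(verify_age_credential(cred))  # True, with zero extra PII disclosed
```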
The online protection of children cannot come at the sacrifice of their digital privacy, and any technology or method implemented must not encroach upon the data privacy rights of all users. As it stands, however, protecting children is clearly not yet possible from a technological perspective: here we are in late 2021, still unable to definitively know whether you're a dog, a child, or a refrigerator on the internet, and even less able to determine age. That we are currently in a tenuous equilibrium between big tech and regulators does not augur well for an internet that meets the needs of all digital citizens, a third of whom, according to UNICEF, are currently children.
Age assurance will be a core component of the future internet, and one that will be fundamental in paving the way for a whole host of services that enable the seamless fusion of the digital and physical domains. However, it will be incumbent upon all stakeholders to make the internet a child-safe environment. We have some work to do.