
The Privacy-Enhancing Technologies We Need Today

12/7/2022

By: Gilad Rosner

Since at least the mid-90s, practitioners, technologists, engineers and researchers have been discussing Privacy-Enhancing Technologies. Given that long history and highly varied group of interested parties, it’s unclear whether the term remains useful. Consider the image atop this blog post: the humble curtain on a voting booth is a privacy-enhancing technology, and so are masks worn in public. Is it beneficial to lump those in with homomorphic encryption under one heading? We shall see.

It should come as no surprise that the field of Identity Management has been in the vanguard of privacy for decades. That’s in part because of the special role that digital identities play in surveillance, or ‘dataveillance’ as some have called it. Academics and professionals saw the dangers of reusable, trackable identities immediately, and, fortunately, so did the standards and protocol designers, who built in strong capabilities for pseudonymity, the blinding of relying parties, and unobservable identity systems. But that’s only one part of the story.

Ultimately, privacy is a social value that introduces friction into interactions. You’ve likely heard of the so-called ‘privacy paradox’ – that people express preferences for strong privacy but then behave in ways that seem to show they don’t care. This paradox, though, is a false one, because the ability to act on one’s preferences is tied to the affordances of a system: the breadth of choices presented and the amount of cognitive burden involved. I know that most of you are aware that user interface design and defaults are inseparable from privacy behavior.

And while the American standard of the ‘reasonable expectation of privacy’ is a terrible one on which to base court decisions, people’s expectations are tied to a vital dimension of privacy: norms and values. Constant improvement in the way interactions present privacy control ideas will slowly, slowly, help people to see what’s possible and what they should be demanding. Not just users, consumers, citizens, or however we wish to speak about them, but also regulators. The very role of regulators marks a fissure between US and European modes of governing technology: the US is almost completely reliant on courts to safeguard privacy, while Europe relies heavily on data protection authorities. After years of studying the European model of regulation I can tell you that identity management is not on the minds of regulators. When they want to champion the fundamental right to data protection, they don’t immediately point to the powerful architectures of ID systems. Nor does the FTC, and nor do the state attorneys general.

In pushing the identity community’s ethos for privacy, the data protection authorities of Europe are underused allies. In the long run, I believe the UI and UX decisions of identity management technology will be crucial ways that people evolve their own privacy norms and values. The term ‘privacy-by-design’ hasn’t been all that helpful, but ID systems are by their very nature privacy-by-design, and both the regulatory and privacy communities are simply not as familiar with – or as excited by – advanced ID architectures as they should be. For those communities, identity management remains an opaque and obscure art.

In discussing privacy, I have found great value in using Eve Maler’s framing of ‘positive’ and ‘negative’ privacy. Not positive and negative as in good and bad, but as in two poles. ‘Negative’ privacy is the kind we most hear about – confidentiality, secrecy, hiding identity information and people’s online activity. ‘Positive’ privacy is active control over the intentional sharing of personal data, architectures of permission, and delegation. These are two sides of the same coin, and are both captured in one of the most well-known definitions of privacy, from 1967: “Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”

I am also drawn to defining privacy as ‘boundary management,’ an idea that emerges from social psychology in the mid-70s. “Privacy is a boundary control process whereby people sometimes make themselves open and accessible to others and sometimes close themselves off from others. . . [T]he issue centers around the ability of a person or a group to satisfactorily regulate contact with others.” Human-computer interaction scholars would later observe, “Privacy management is not about setting rules and enforcing them; rather, it is the continual management of boundaries between different spheres of action and degrees of disclosure...”

You can view all of these definitions through the lens of Eve’s Positive and Negative privacy separation – preventing others from knowing things about you, or actively determining what they know and for how long they can have the information. This is a good way to think about privacy-enhancing technologies:

Encryption appears on both sides because it enables both the hiding of information and the intentional release of verified attributes. In terms of consumer technology and, critically, regulator focus, the emphasis has historically fallen on the negative privacy technologies – and of those, mainly encryption and de-identification. But the positive privacy technologies are the forward-looking privacy-by-design solutions that move us from the secrecy model to the control model, towards the concept of informational self-determination, a kind of gold standard for data protection and privacy.
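The post doesn’t tie that claim to any particular mechanism, but one concrete illustration of the ‘positive’ use of cryptographic tools is selective disclosure via salted hash commitments – the idea underneath formats like SD-JWT: an issuer commits to salted, hashed claims, and the holder later reveals only the claims they choose. A minimal Python sketch, with function names and structure that are mine rather than any standard’s API:

```python
import hashlib
import json
import secrets

def commit(claims: dict) -> tuple[dict, list[str]]:
    """Issuer side: salt and hash each claim.

    Returns the holder's disclosure material (salt + value per claim) and the
    digests an issuer would sign into a credential (signing omitted here).
    """
    disclosures, digests = {}, []
    for name, value in claims.items():
        salt = secrets.token_hex(16)
        digest = hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()
        disclosures[name] = (salt, value)
        digests.append(digest)
    return disclosures, digests

def present(disclosures: dict, reveal: list[str]) -> dict:
    """Holder side: release only the chosen claims; everything else stays hidden."""
    return {name: disclosures[name] for name in reveal}

def verify(presented: dict, signed_digests: list[str]) -> bool:
    """Verifier side: recompute each digest and check it was committed to."""
    return all(
        hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest() in signed_digests
        for name, (salt, value) in presented.items()
    )

# The holder proves one verified attribute (over_18) without revealing the rest.
disclosures, digests = commit({"over_18": True, "birthdate": "1990-01-01", "home_address": "221B Baker St"})
shown = present(disclosures, ["over_18"])
assert verify(shown, digests) and "birthdate" not in shown
```

In a real credential the issuer would sign the digests; the sketch leaves signatures out to keep the reveal-only-what-you-choose idea visible.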

Consider what I call ‘the Airbnb problem’: You’re renting out your apartment or house through Airbnb. You have five smart lightbulbs made by three different manufacturers, a smart thermostat, a smart lock on your front door, a Ring doorbell, a smart bathroom scale for some reason, a connected coffee maker, and a smart TV. You want to give access and some control rights to your Airbnb renters – they can change the house temperature but not turn off the heat, they can use the smart bathroom scale but not see everyone else’s weights, the doorbell should let them see the video, and the smart lock should let them in and out until 5pm on the day they are supposed to leave.

How do you do it? This isn’t easy. And yet, it’s clear that if smart homes mean anything we’ll need to be able to do this, right? And do it in a way where we don’t have concentrated gatekeepers determining everything that’s possible. So where do we go from here? 
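To make those requirements concrete, here is a rough, vendor-neutral sketch of the Airbnb scenario as one set of scoped, time-bounded grants – the resource names, scopes and fields are illustrative, not any manufacturer’s API:

```python
from datetime import datetime, timezone

# Hypothetical grant records for one guest stay: each names a resource,
# the scopes allowed, and an expiry (5pm on the day of departure).
CHECKOUT = datetime(2022, 12, 14, 17, 0, tzinfo=timezone.utc)

grants = [
    {"resource": "thermostat",      "scopes": {"read", "set_temperature"}, "expires": CHECKOUT},
    {"resource": "front_door_lock", "scopes": {"lock", "unlock"},          "expires": CHECKOUT},
    {"resource": "doorbell_video",  "scopes": {"view_live"},               "expires": CHECKOUT},
    {"resource": "bathroom_scale",  "scopes": {"weigh_self"},              "expires": CHECKOUT},
    # Note what is *not* granted: turning the heat off entirely, reading
    # other people's weight history, indoor cameras, and so on.
]

def is_allowed(resource: str, scope: str, at: datetime) -> bool:
    """Return True if some unexpired grant covers this resource and scope."""
    return any(
        g["resource"] == resource and scope in g["scopes"] and at <= g["expires"]
        for g in grants
    )

now = datetime(2022, 12, 13, 9, 0, tzinfo=timezone.utc)
assert is_allowed("thermostat", "set_temperature", now)        # guest can adjust the heat
assert not is_allowed("thermostat", "power_off", now)          # but not turn it off
assert not is_allowed("front_door_lock", "unlock",
                      datetime(2022, 12, 14, 18, 0, tzinfo=timezone.utc))  # locked out after checkout
```

The difficulty today is not expressing a table like this – it’s that there is no single, open place where it can be expressed and enforced across all of those devices.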

The state of the market for this example is that individual manufacturers are driving the shape of social data sharing. This means Amazon, Google, Apple, Samsung, Mattel, Philips, Fitbit and so on provide the front end, and therefore determine how users can share the data that comes off devices, how much data, what user provisioning looks like, and the granularity of the control surface.

But this is insufficient, and people will not be able to have seamless experiences with devices, data and other humans unless selective disclosure, delegation and user control come together. I want to share with this person but not that person; my friend can see this but my boss can’t; I need to give temporary access to my mom’s medical record to my partner because I’m traveling; I want my running buddies to see the data from my smart socks; my kid can control the Roomba; and no one can see the video from inside my house but me. All of these interactions happen in a multiparty, heterogeneous environment, so it’s a great place for an open standard. And so, for me, an important privacy-enhancing technology is something like the User-Managed Access protocol, or UMA. It’s designed for exactly this kind of messy sharing scenario – the kind that resembles how we actually live.
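For orientation, here is a heavily simplified sketch of the UMA 2.0 grant flow from the requesting client’s point of view – the endpoints, tokens and payloads are placeholders, and policy setup, claims gathering and error handling are all omitted:

```python
import re
import requests

# Hypothetical endpoints and identifiers, for illustration only.
RESOURCE_URL = "https://home.example/api/lock/front-door"
TOKEN_ENDPOINT = "https://as.example/token"   # the owner's chosen authorization server
UMA_GRANT = "urn:ietf:params:oauth:grant-type:uma-ticket"
GUEST_ID_TOKEN = "eyJ..."                     # placeholder proof of who the guest is

# 1. The guest's client tries the resource with no access token.
resp = requests.post(RESOURCE_URL, json={"action": "unlock"})
assert resp.status_code == 401

# 2. The resource server answers with a permission ticket pointing at the owner's
#    authorization server, carried in the WWW-Authenticate header,
#    e.g. UMA as_uri="https://as.example", ticket="abc123".
challenge = resp.headers["WWW-Authenticate"]
ticket = re.search(r'ticket="([^"]+)"', challenge).group(1)

# 3. The client exchanges the ticket (plus claims about the guest) for a
#    requesting party token (RPT); the owner's sharing policies are evaluated here.
token_resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": UMA_GRANT,
    "ticket": ticket,
    "claim_token": GUEST_ID_TOKEN,
    "claim_token_format": "http://openid.net/specs/openid-connect-core-1_0.html#IDToken",
})
rpt = token_resp.json()["access_token"]

# 4. The client retries with the RPT; access lasts only as long as the owner's
#    policy (e.g. "until 5pm on checkout day") allows.
resp = requests.post(RESOURCE_URL, json={"action": "unlock"},
                     headers={"Authorization": f"Bearer {rpt}"})
```

The structural point is that the sharing policies live at an authorization server the resource owner chooses, not inside each manufacturer’s app, so one set of rules can span the lock, the thermostat, and the scale.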

And that’s the point I want to leave you on: The social life of humans has evolved over the last 100,000 years. Our electronic social lives have evolved over the last 60 years. During that period we’ve shoehorned centuries of social expectations, norms and values into the digital world, and a lot of the time it’s a bad fit. In order to shape our information-society lives to look like our natural social lives, we need better technologies, and the business cases to ensure they are used. We need to raise the importance of positive privacy-enhancing technologies to the same level as the negative ones. Those negative cases are vital: Pseudonymity and anonymity are critical to the functioning of society and democracy. The human body is becoming more and more machine-readable, and so bodily privacy relies upon redaction and de-identification. Data collected about children is an enormous issue, and children deserve the strongest protective technology we can devise. But it’s only one side of the coin – privacy is not secrecy; it is control. The identity industry has a critical asset in the form of advanced sharing architectures that resemble the way we fluidly control boundaries in our family, social, and work lives. These architectures of permission and selective sharing are the privacy-enhancing technologies the modern digital world very much needs.

 

