A person can be identified just as much by their face as by their fingerprints. This is immensely important for the nascent surveillance society.

Few things represent the age of social media better than posting a selfie. We share these ubiquitous self-portraits with such urgency you’d think we’d cease to exist if we stopped producing them at a rapid clip. Think about taking a trip to a gorgeous location. If you exercise “selfie-control” and don’t post a picture of yourself at a place like the beach, did the exquisite voyage really happen?

Selfies have become a cultural obsession for many reasons. They’re fun. They’re easy to share. They’re self-referential. And they’re mostly harmless. There is a hitch, though: every time we disseminate a selfie, we make it possible for a new crop of surveillance technologies to track us.

Until recently, concern over non-governmental use of facial recognition technologies was largely theoretical. Only a few companies could create name-face databases large enough to identify significant portions of the population by sight, and these companies had little motivation to exploit the technology widely in privacy-invasive ways. Name-face databases are too valuable for companies to share freely with others, privacy regulators are generally suspicious of the technology, and the terms of use on most websites prohibit the kind of automated scraping necessary to build a name-face database large enough to set off most privacy alarms.

Unfortunately, things are changing. The tech industry is signaling that it’s poised to introduce new facial recognition products while hastily brushing aside an important concern over personal identification. In addition to downplaying the important role biometrics play in modern data security schemes, industry is assuming away, if not explicitly denying, how valuable obscurity is to our day-to-day lives.

This problem is a root cause of why nine public interest groups recently walked away from multistakeholder talks over a voluntary code of conduct that would place restrictions on facial recognition software in the United States. The talks have been under way since 2014 and are part of a larger push to implement more uniform, omnibus privacy protections in the United States, akin to the broad protections offered by European countries.

The sticking point for privacy advocates is that tech companies and lobbyists are not in favor of a general rule (subject to exceptions) requiring companies to get consent before people’s faces are scanned and linked to an identifying name. This expectation isn’t new. Back in 2012, regulators released a report on best practices that discussed a hypothetical app that could use facial recognition to identify strangers. It recommended that such an app identify only people who had chosen to use the service.

According to Alvaro Bedoya, Executive Director of the Center on Privacy & Technology at Georgetown Law School and a regular participant in the talks, economic self-interest is motivating industry to take an uncompromising stance.

“I think a lot of companies see an upside in using facial recognition to serve targeted ads at people based on their age, gender, and ethnicity. Retailers are also using it to identify VIPs, known shoplifters, and other undesirables – like ‘known litigious individuals,’” he said. “They have a financial interest in keeping facial recognition in an unregulated, law-free zone … I think that these financial interests were behind industry resistance in the talks.”

In short, the surveillance battles being waged in the United States demonstrate how a free-market orientation can encourage an unhealthy appetite for exploitative digital technologies.

So far, only a few U.S. states require disclosure and consent before companies can collect and use biometrics such as facial identifiers. The rules for facial recognition in Europe are more robust. Yet given the shifting sands of modern privacy law, facial recognition remains a bit of a grey area under both legal systems.

It’s important, therefore, for the public to have a clear sense of how to assess the claims in the version of the code that ultimately gets drafted. As we see it, one question should be prioritized. Does the code carefully address the problem of diminished obscurity – the personal and social repercussions of dramatically reducing the effort and expense required to determine who someone is based on how he or she looks? If not, it isn’t oriented toward protecting the public good and should be treated accordingly.

The tech industry will be tempted to sidestep the issue of obscurity. We imagine their case for permissive and widespread use of facial recognition will rely on the fact that your name and face are the most public things about you. In the U.S. and Europe, most people show their faces whenever they go out in public. Sure, there are exceptions: burkas, ski masks, and children’s play masks. But those aren't the norm. 

And when talking with others in public, people regularly say both first and last names. Of course, this doesn’t always happen. Sometimes you can chat without ever naming the person you’re talking with. At other times, nicknames will do. But, still, unless the situation is unusual, nobody will bat an eye if you say, “Hi John!” or “Hello Jane!”

So, on the surface, the two main units of analysis for facial recognition technology – names and faces – don’t seem to be private at all, especially when compared with Social Security numbers, which people carefully guard. And, let’s be honest, folks don’t broadcast these highly personal features only in face-to-face settings. Plenty of people set up public online profiles that do the same thing. There’s LinkedIn, company directories, and so many other ways to show the world what a person looks like and what name he or she goes by.

Since faces are unique (“significantly altering a face to make it unrecognizable is difficult”) and names are distinctive, why do many people seem unconcerned about their public dissemination? The answer is simple. The norms governing our attitudes toward the name-face connection developed during periods when it was hard to identify most strangers. Human beings have limited memories and limited exposure to others. Indeed, we’ve come to rely on the fact that we can basically hide in plain sight in public, protected by zones of obscurity. As a result, we’ve had little reason to worry that our presence will be translated into information that can be stored long-term, quickly recalled, and probingly analyzed.

Ubiquitous and unrestrained facial recognition technologies wouldn’t just alter this longstanding presumption. They would shatter it entirely. In this brave new world, we’d need to presume we’re being identified everywhere. As a result, two undesirable temptations would take over. We could sadly admit defeat and acquiesce to losing control of our faces and names. Or we could be pushed to pursue aggressive – possibly paranoid – risk-management strategies.

In order for industry to make a persuasive case and minimize pro-privacy backlash, we further suspect it will conflate two different things: your face and the faceprint that facial recognition technologies use. Your face is not scalable. But your faceprint is: a machine can read it. Indeed, once a face is converted to data points and made machine-readable, it ceases to be a public-facing part of ourselves that we voluntarily expose to others. It becomes a resource that others control.

It’s important to differentiate face from faceprint because our faceprints are similar to two things that have high privacy value: passwords and beacons. 

We’re increasingly using data about our faces to authenticate our identities to our smartphones and user accounts. That’s reason enough to be skeptical of widespread deployment of facial recognition technologies and the proliferation of name-face databases. Like passwords, faceprints can be compromised. They’re a data security risk.

But our faceprints, like fingerprints that are constantly on display, can also act like a beacon that leads watchers right to us – a permanent trail of breadcrumbs that won’t wash away in the rain. This power can alter the bedrock conventions for relating to others in public. Often enough, we currently don’t remember the faces of people we sit next to in restaurants, on planes, and elsewhere. This gives us a degree of freedom to move to and fro, content that judgments about us remain fleeting and ephemeral, and that we retain significant power to shape what those around us know about our personal lives. To give but one example, once parishioners start attending church because they’re worried about facial recognition outing their absences, we’ve really got to question just who is benefiting from these technologies and how.

It’s reasonable to be skeptical about how industry will proceed with facial recognition technology. But pressure from the public, advocates, and lawmakers might force industry to confront the myth that showing your face in public is the same thing as being easily identifiable everywhere you go. People’s passwords and tracking beacons aren’t fair game to collect and deploy, and our faceprints deserve similar treatment.

About the authors:

Evan Selinger is a Professor of Philosophy at Rochester Institute of Technology, where he is also the Head of Research Communications, Community & Ethics at the Center for Media, Arts, Games, Interaction, and Creativity (MAGIC). His book with Brett Frischmann, Being Human in the 21st Century, is under contract with Cambridge University Press.

Woodrow Hartzog is an Associate Professor at Samford University’s Cumberland School of Law. He is also an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. His book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, is under contract with Harvard University Press.

Illustration: Jan Buchczik