How to Use Facial Recognition Technology Responsibly and Ethically

December 15, 2020

Contributor: Manasi Sakpal

Shift the narrative around facial recognition technology by implementing a strong ethical framework.

Facial recognition technology is used daily by many to access their mobile phones, but acceptance of facial recognition doesn’t always extend beyond personal use. Many jurisdictions have put this technology “on hold,” as it raises complex ethical dilemmas.

“There is a strong negative sentiment against the use of face recognition technology. It is being seen as an invasion of privacy and a step toward mass surveillance,” says Frank Buytendijk, Distinguished VP Analyst, Gartner. 

Which matters more in streets and public buildings: safety or privacy? The appropriate use of facial recognition technology depends on the prevailing culture, ethics, legislation and practices.

Read more: How to Prevent AI Dangers With Ethical AI

Currently, there are no widely used or accepted regulations governing facial recognition, which means data and analytics leaders need to turn to digital ethics to use facial recognition technology responsibly. 

Battle issues of bias and false positives by making facial recognition more reliable

Facial recognition is far from perfect. Training bias means that facial recognition technology isn’t always equally accurate for all faces. For example, a system trained mostly on images of one demographic group may match faces of other races or genders less accurately. This inaccuracy leads to people being misidentified.

Additionally, facial recognition systems readily misinterpret nuanced facial expressions. For example, an expression that conveys a polite greeting in one culture may indicate confirmation or agreement in another.

Consider the reliability of facial recognition technology before deciding to make it operational. Take the time to develop sufficient countermeasures or verification procedures to battle the issues of bias and false positives.
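As an illustrative sketch only (not a Gartner-prescribed procedure), a verification workflow might pair a per-group accuracy audit with a confidence threshold that routes uncertain matches to human review rather than acting on them automatically. All names, scores and the threshold below are hypothetical.

```python
# Hypothetical sketch: audit false-positive rates per demographic group and
# route low-confidence matches to human review instead of acting on them.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed operating point; tune per deployment


@dataclass
class Match:
    subject_id: str
    group: str         # demographic label, used only for auditing
    confidence: float  # similarity score from a (hypothetical) matcher
    correct: bool      # ground truth from a labeled evaluation set


def false_positive_rate_by_group(matches):
    """Per-group share of automatically accepted matches that were wrong."""
    rates = {}
    for group in {m.group for m in matches}:
        accepted = [m for m in matches
                    if m.group == group and m.confidence >= CONFIDENCE_THRESHOLD]
        if accepted:
            wrong = sum(1 for m in accepted if not m.correct)
            rates[group] = wrong / len(accepted)
    return rates


def route(match):
    """Only high-confidence matches proceed without a human in the loop."""
    return "auto_accept" if match.confidence >= CONFIDENCE_THRESHOLD else "human_review"


evaluation = [
    Match("a1", "group_x", 0.97, True),
    Match("a2", "group_x", 0.96, False),
    Match("a3", "group_y", 0.99, True),
    Match("a4", "group_y", 0.90, False),  # below threshold: sent to review
]
print(false_positive_rate_by_group(evaluation))  # unequal rates signal training bias
print(route(evaluation[3]))
```

If the audit shows materially unequal error rates across groups, that is a signal to delay operational rollout, not merely to adjust the threshold.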

Read more: 3 Ways to Embrace Proactive Data Ethics

Establish proportional use of facial recognition by evaluating less invasive technologies

Proportionality is a very important ethical concept. In a technological context, it means that an organization should use technology powerful enough to solve a particular problem, but not much more powerful. It’s important to understand why an activity is being undertaken and question the accompanying technological deployment and subsequent data creation and usage.

Consider whether the end goal can be achieved with a less invasive technology. “Finding the appropriate use and focusing on gathering only necessary data is key to striking the right balance,” says Buytendijk. 

For example, a retail outlet may want camera surveillance to deter shoplifting. Facial recognition could serve that goal, but it would be an invasion of privacy. In this scenario, a standard video-recording security camera is sufficient.

Restrict use of facial recognition data by establishing purpose boundaries

Data should preferably be processed for specific, deliberate, predefined purposes. Ethical issues often arise when data use crosses the originally stated purpose boundaries, also known as the “lineage of intent.”

For instance, facial recognition used at a parking lot to open the barriers and speed the entry and exit of vehicles could also be used by car dealers to generate business leads. That would be problematic, because users agreed to share their facial data to facilitate parking, not to drive business for car retailers.

For any data collected via facial recognition technology, it’s critical that data and analytics leaders explicitly determine and document its lineage of intent and restrict its use to only that predefined purpose.
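One way to make that restriction operational, sketched here as a minimal illustration with hypothetical names and purposes, is to bind each record to the purposes documented at collection time and refuse any access outside that lineage of intent.

```python
# Hypothetical sketch: tie facial recognition data to its documented
# "lineage of intent" and reject any use outside the predefined purposes.
class PurposeViolation(Exception):
    """Raised when data is requested for an undeclared purpose."""


class FacialDataRecord:
    def __init__(self, subject_id, embedding, allowed_purposes):
        self.subject_id = subject_id
        self.embedding = embedding
        # Documented lineage of intent, fixed at collection time.
        self.allowed_purposes = frozenset(allowed_purposes)

    def access(self, purpose):
        """Release the data only for a predeclared purpose."""
        if purpose not in self.allowed_purposes:
            raise PurposeViolation(
                f"'{purpose}' is outside the documented lineage of intent")
        return self.embedding


record = FacialDataRecord("driver-42", [0.1, 0.2], {"parking_access"})
record.access("parking_access")       # permitted: the stated purpose
try:
    record.access("marketing_leads")  # purpose creep: rejected
except PurposeViolation as err:
    print(err)
```

A technical gate like this does not replace governance, but it turns the documented purpose into something the system enforces rather than a statement in a policy document.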

Respond to jurisdictional differences by expanding the rights of people identified in images

The ownership of facial recognition data is a point of contention in many jurisdictions, where its collection is often perceived as an invasion of privacy. Data and analytics leaders need to ask these important questions: “Who owns the image of your face?” “Who owns the image of the expressions you make in public?” “Are emotions expressed in public part of the public domain?” “Does the party that creates, measures and holds the data own that data?”

On the one hand, facial expressions that are made in a public place are potentially available for everyone present to see, so they’re not completely private. On the other hand, facial expressions are often made subconsciously, and they are transient. They are simply not meant to be systematically captured, stored and analyzed.

Data and analytics leaders need to work with their legal teams to understand the intellectual property rights relevant to facial recognition images and analysis. Treat facial recognition data not from the perspective of the organization’s rights, but rather from the perspective of the rights of the people portrayed. 
