Think About Digital Ethics Within Continually Evolving Boundaries

When new technologies lift constraints, the question shifts from what you “can” do to what you “should” do.

The emerging discipline of digital ethics will probably always lack a set of universal rules, but the speed of technological advances and an uneasy public give business leaders no choice but to start defining their own positions in this area.

We’re close to the point of ‘algorithmic policing’ where machines protect us from ourselves with cars that won’t speed, and phones that won’t work while driving. But do we want our employer or insurer to know everything about us? Or should we be allowed the privacy to eat ‘badly’ and, to some extent, browse the Web unmonitored?

During the Gartner Business Intelligence & Analytics Summit, I explained that technology can realize both the dreams and the nightmares of mankind: the same facial recognition that could thwart a terrorist attack, keep your home secure, or keep your children safe could also be used to profile and discriminate in a police state.

Most people take the position that technology itself is not a moral agent. Yet relinquishing control to machines raises questions of accountability when amoral drones take the decision to kill, or an autonomous car crashes.

Gartner analyst Frank Buytendijk explains the importance of digital ethics during the Gartner Business Intelligence & Analytics Summit.

Draft rules around insurance for driverless cars are likely to state that whoever puts the car in motion – whether inside the vehicle or remotely – is ultimately responsible. How do you program a car to decide whether to swerve and kill a cat to avoid hitting a child? Is the programmer responsible for the loss of life, or the car manufacturer who employed them, the car’s owner, or even the passengers?

Once technology is released, the consequences assume a life of their own. Once an algorithm successfully replaces a human task, it can rapidly replace that task worldwide, for better or worse. To look at facial recognition again – racial profiling by one policeman is a problem, but as a global rule in a policing machine it’s terrifying.

Automated rules are repeatable, scalable, and efficient, but the concern is that they can dramatically amplify mistakes. At the same time, public mistrust in data security, and in how this information is used behind closed doors, is a fundamental problem for most organizations involved.


Organizations must establish digital ethics because people will judge them through a moral lens. Yet this is like wading in moral quicksand as unforeseen issues emerge and reactive regulation struggles to keep up. Society isn’t yet ready for the possible outcome that, at some point, smart machines themselves will become responsible for their actions – but it’s the task of digital ethics to start the discussion.

Gartner clients can read more in the report Digital Ethics, or How to Not Mess Up With Technology by Frank Buytendijk.
