The emerging discipline of digital ethics will probably always lack a set of universal rules, but the speed of technological advances, and an uneasy public, give business leaders no choice but to start defining their own positions in this area.
We’re close to the point of ‘algorithmic policing’, where machines protect us from ourselves with cars that won’t speed and phones that won’t work while driving. But do we want our employer or insurer to know everything about us? Or should we be allowed the privacy to eat ‘badly’ and, to some extent, browse the Web unmonitored?
During the Gartner Business Intelligence & Analytics Summit, I explained that technology can realize both the dreams and nightmares of mankind: the same facial recognition that could thwart a terrorist attack, keep your home secure or keep your children safe could also be used to profile and discriminate in a police state.
Most people take the position that technology itself is not a moral agent. Yet relinquishing control to machines raises questions of accountability when amoral drones take the decision to kill, or an autonomous car crashes.
Draft rules around insurance for driverless cars are likely to state that whoever puts the car in motion – whether inside the vehicle or remotely – is ultimately responsible. But how do you program a car to decide whether to swerve and kill a cat to avoid hitting a child? Is the programmer responsible for the loss of life, or is it the car manufacturer who employed them, the car’s owner, or even the passengers?
Once technology is released, the consequences assume a life of their own. Once an algorithm successfully replaces a human task, it can rapidly replace that task worldwide, for better or worse. To look at facial recognition again – racial profiling by one policeman is a problem, but as a global rule in a policing machine it’s terrifying.
Automated rules are repeatable, scalable and efficient, but the concern is that they can dramatically amplify mistakes. At the same time, public mistrust of data security, and of how this information is used behind closed doors, is a fundamental problem for most organizations involved.
Organizations must establish digital ethics because people will judge them through a moral lens. Yet this is like wading in moral quicksand as unforeseen issues emerge and reactive regulation struggles to keep up. Society isn’t yet ready for the possible outcome that, at some point, smart machines themselves will become responsible for their actions – but it’s the task of digital ethics to start the discussion.