Digital business creates conflicts of interest and disruption that are causing business leaders to think more about ethical concerns. However, applying abstract ideas to the real world can be difficult.
Frank Buytendijk, research vice president and fellow at Gartner, has researched more than 60 real-world case studies that show that the unintended ethical consequences of digital business are a challenge to every industry.
Ethical considerations should be built-in, not bolted-on
Many of these dilemmas are universal. Examples can be found across industries and in various parts of the world, although different cultures respond to them differently. However, there are a few guidelines that are useful in most situations:
- Use the golden rule. Ask yourself how you would like to be treated as a human being, citizen or customer.
- Embrace the positive. There are always unintended consequences in using new technology. Embrace the positive uses and block the undesirable ones.
- Exercise discipline and self-restraint. The successful use of new technology usually comes from these two behaviors rather than from pushing the limits.
Buytendijk shares four common ethical dilemmas and the potential implications for each.
Dilemma: Privacy or profit
A public transport organization is introducing electronic cards to enable access to a wide network of transport and convenient cashless payment options. These schemes are popular, increase margins and improve the efficiency of the transport network, and they also generate a large quantity of customer data. Should a public transportation organization sell this data to third parties?
Selling data is a common practice for many companies, and the proceeds could be used to reduce fares or improve services. If, however, it wasn’t made clear to users in advance that this would happen, expect a serious backlash. Data should be used for the stated purpose (travel). If that purpose changes, users should be consulted.
Dilemma: Future options vs. appropriate use
Smart TVs routinely track viewing behavior and send it to the manufacturer to personalize advertising on the TV’s home screen. There have been instances of TVs also sending sensitive data from private files on USB storage devices attached to the set. Is this acceptable?
In the worst case, this is a legal breach that will incur heavy fines as well as reputational damage. The tracking of personal data on external devices was an unintended consequence of the software’s design, stemming from a failure during the design stages to weigh privacy concerns against the desire for future device features. Ethical considerations should be built-in, not bolted-on.
Dilemma: Get it out vs. get it right
Many have experienced digital assistants like Siri, Cortana or Google Now. This technology, like any, can be used in an immoral way. While you wouldn’t blame a car manufacturer for a drunk driver, you might blame a smart machine manufacturer if its machine was instrumental in a bank robbery.
It’s hard to develop a set of fixed ethical rules that prevent all possible misuse, and regulators cannot keep pace with the speed of technological advancement. While pre-testing is vital, it’s also important to build in features that enable constant monitoring of smart machines for unintended consequences.
Dilemma: Needs of the many vs. needs of the few
Digital assistants simply provide information, but more advanced machines make huge decisions. Automatic trading software (crash the market but make my owner rich?); military drones (how much ‘collateral damage’ is too much?); and self-driving cars (kill the grandmother to avoid the child?) all make decisions that can wreck or end lives. Who bears the responsibility? The programmer? The manufacturer? The operator?
Everyone involved bears some responsibility. The duty is to test incredibly thoroughly and to monitor constantly for unintended outcomes. With an operating system, nothing too bad happens if the first release is buggy; a patch can follow. But when software controls life-and-death situations, there is no room for error, and the reputational cost of a failure could ruin a company.