The ethics of artificial intelligence: On professional, personal, and global responsibility

By Rob High

As engineers and technologists, we should always be mindful that our work has consequences. It is fundamental to the design of a product that its uses and potential consequences be understood, even before we create it. It’s as basic as not putting sharp edges on a child’s toy.

The ethics of artificial intelligence (AI) demand the same attention. Since AI is relatively new to all of us, we have to be especially diligent; there is not a lot of history demonstrating the different ways AI tools can or should be used.

AI opens up whole new fields of potential value, such as helping doctors improve their cancer treatment decisions. But with that value comes uncertainty about the unexpected consequences it could introduce. Because AI is still emerging in our society, we need to be intelligent about how we create and enable it, how we control it, and what we demand of it. This is why the ethics of artificial intelligence are so important.

AI: An extension of the human mind, not a replacement

It’s critical to remember exactly what artificial intelligence is. Like all useful tools in our modern society, AI is an amplification of human strength, in this case strength of the mind. Even so, we commonly mistake AI for an attempt to replicate that mind. Much of our ability to think and to reason will take a long time to recreate in a machine, and, more importantly, it is worth asking why we would want to in the first place. What economic value is there in replicating the human mind in a machine?

For that reason, it would be a mistake to look to AI for a replica of the human mind. I prefer that we call it augmented intelligence, specifically to emphasize that cognitive computing is about augmenting and amplifying our own human cognition. It can help us make better decisions, see alternative perspectives, and break through our biases, all with the purpose of making us better at what we do. That responsibility must be shared between the developers of AI, who build tools that serve that purpose, and the consumers of AI, who demand solutions that generate that kind of result. AI, in short, should never be described as a substitute for the human mind.

The governance of artificial intelligence

When it comes to any powerful advancement, the aim is to keep it out of the hands of those who would abuse it while ensuring that those who foster constructive, positive uses are at the helm. Garry Kasparov, AI scholar, humanitarian and former World Chess Champion, made a valid point at a recent AI event about advanced technology aiding wrongdoers: we cannot keep rogue nations from exploiting artificial intelligence to the disadvantage of their citizens, but we can do everything in our power to keep it from becoming a temptation.

The governance of AI and its utility to society rests on three constituents: providers of technology, consumers of technology and, to some extent, governments.

  1. Providers of technology have a responsibility to build in a way that encourages positive use and discourages abuse. We need to do this while measuring ourselves along the way, so we can ensure that what we’re doing sets a precedent of positive, ethical use. The technology we create should also be transparent, conveying how much confidence it has in its findings.
  2. Technology consumers have the responsibility to demand products that produce a beneficial effect. We, as consumers, must reject technologies that are destructive. It is easy to blame major phone manufacturers for creating and nurturing our dependence on smartphones, but we also have a responsibility to put our phones down and to demand advancements that make our phones safer, such as settings that automatically disable communication apps while driving.
  3. To some extent, governments have a role in the governance of AI through regulatory practices. This certainly doesn’t mean that one country’s government can regulate what’s happening in another, but each country can exert a degree of influence over how these new technologies are treated within its borders.

The ethics of artificial intelligence

These artificial intelligence technologies are performing some type of reasoning. Most examples of AI on the market today use deductive reasoning; that is, they either perform basic recognition tasks or derive answers from previously recorded knowledge. As we move deeper into inductive reasoning, we are at the dawn of a new world in which machines will produce genuinely new information. Rather than just asking simple questions like, “How tall is Mount Everest?” we can start asking questions such as, “Is there a correlation between company growth and the length of customer contact?” In both cases, the inferences have to be grounded in the elements of the question itself, and we need to know how much confidence there is in the answer.
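To make that contrast concrete, here is a minimal sketch of the second, inductive kind of question: inferring a previously unrecorded relationship directly from a set of observations. The dataset, field names, and numbers below are invented purely for illustration; no real product computes it exactly this way.

    from math import sqrt

    # Hypothetical observations: one row per customer account, pairing
    # annual growth (%) with average customer-contact length (minutes).
    observations = [
        (12.0, 34), (8.5, 21), (15.2, 48), (3.1, 10),
        (9.8, 30), (6.4, 18), (14.0, 41), (4.7, 15),
    ]

    def pearson(pairs):
        """Pearson correlation coefficient between the two columns of pairs."""
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs)
        sx = sqrt(sum((x - mx) ** 2 for x, _ in pairs))
        sy = sqrt(sum((y - my) ** 2 for _, y in pairs))
        return cov / (sx * sy)

    print(f"growth vs. contact length: r = {pearson(observations):.2f}")

A result near +1 or -1 suggests a strong relationship; a result near 0 suggests none. The point is that the output is new information, not a fact retrieved from a record.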

In the Everest example, when the algorithm looks up how tall Everest is, it will probably check multiple sources and will give you an answer, the level of confidence in that result, and the source of the information, which could be as simple as identifying the document it came from. If the algorithm finds identical answers across multiple reputable sources, the confidence score will be high. But if the information conflicts, with few sources citing the same height, the confidence score will be low. This will come into play more frequently as AI is increasingly used for inductive reasoning. By referencing the source of its knowledge, the system enables people both to test the veracity of its conclusion and to adapt that conclusion to their own understanding of the problem.
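That scoring idea can be sketched in a few lines. The sources, heights, and reliability weights below are assumptions made for illustration, and the rule used, the share of total source weight behind the winning answer, is just one plausible way to turn agreement into a confidence score.

    from collections import Counter

    # Hypothetical lookups of "How tall is Mount Everest?" across sources.
    # Heights are in metres; the reliability weights are invented.
    findings = [
        ("encyclopedia", 8849, 0.90),
        ("gov_survey",   8849, 0.95),
        ("travel_blog",  8848, 0.40),
    ]

    def answer_with_confidence(findings):
        """Return (answer, confidence, supporting sources).

        Confidence is the share of total source weight that agrees on
        the winning answer, so unanimity among reputable sources scores
        high and conflicting reports score low.
        """
        weights = Counter()
        sources = {}
        for source, value, reliability in findings:
            weights[value] += reliability
            sources.setdefault(value, []).append(source)
        best, best_weight = weights.most_common(1)[0]
        return best, best_weight / sum(weights.values()), sources[best]

    value, confidence, cited = answer_with_confidence(findings)
    print(f"{value} m (confidence {confidence:.2f}; sources: {', '.join(cited)})")

Returning the supporting sources alongside the number is what lets a person check the answer rather than simply trust it.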

For instance, when Watson identifies an interesting treatment for a doctor’s consideration and cites the clinical papers relevant to that treatment, those papers may provide some rationale for why the treatment is relevant, and they will also likely identify its potential side effects. That information helps doctors decide whether to leverage the treatment, and when they decide it is the right one, it also tells them what else they may need to do to mitigate any potential adverse effects.
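One way to picture this is as a data shape in which the evidence travels with the answer. The structure and values below are hypothetical, not Watson’s actual output format; the point is only that a recommendation, its confidence, and its cited rationale and side effects arrive together.

    from dataclasses import dataclass, field

    @dataclass
    class Citation:
        title: str
        rationale: str          # why the paper supports the treatment
        side_effects: list      # risks the paper identifies

    @dataclass
    class Recommendation:
        treatment: str
        confidence: float
        evidence: list = field(default_factory=list)

    # Invented example: a doctor can inspect the rationale to test the
    # conclusion, and plan around the side effects before acting on it.
    rec = Recommendation(
        treatment="hypothetical therapy X",
        confidence=0.72,
        evidence=[Citation(
            title="Example clinical paper",
            rationale="observed response in a comparable patient cohort",
            side_effects=["nausea", "fatigue"],
        )],
    )
    for c in rec.evidence:
        print(f"{rec.treatment}: {c.title} -> side effects: {', '.join(c.side_effects)}")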

With attention, diligence and thoughtful consideration, I am confident that we will identify and resolve any unexpected consequences of the technologies we create. The challenges behind the ethics of artificial intelligence will drive a new wave of innovation, one that refines the utility these tools have for us and, ultimately, for the world.

Written By

Rob High

IBM Fellow, VP, CTO Watson, IBM Academy of Technology

Rob High is the Vice President and Chief Technology Officer for the IBM Watson and Cloud Platform portfolio. High leads the technical strategy for Watson's cognitive reasoning, cognitive experiences, and alignment to analytics, content, media, edge computing, application programming,…
