CHAPTER 7

Appendix

AI is a popular topic in the mainstream media, so many people know the
objectors’ talking points. Here are common objections you may encounter
and ways to address them.

The “black box” objection

Many people have the impression that AI models are a “black box” that
mysteriously generates results. In the AI world this concern is called the
interpretability problem. People raise this objection when they lack good
frameworks for evaluating AI models.

Understand model interpretability

In traditional software applications we can explain exactly why every
decision is made by following the logic. These types of models are
deterministic—every action has a specific cause. Lawyers and regulators
like deterministic systems because they make it easy to interpret what
happened. Traditional software models are highly interpretable.

AI models, conversely, are probabilistic. We don’t have all of the information
necessary to assign the specific reasons why a decision was made, so we
have to assess a probability. AI models are thus less interpretable than
traditional software models.

Probabilistic doesn’t mean unknowable. In other words, statements like
“we have no idea why AI makes decisions” are simply false. For example,
rolling two dice is a probabilistic process. We can’t explain exactly why a 7
came up on a particular throw. We do know, however, that 7 occurs more
often than 12, and that 6.2 never occurs.
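
To make the dice example concrete, here is a minimal Python sketch (my
illustration, not part of the original argument) that enumerates all 36
outcomes and shows why 7 is the most common sum:

    from collections import Counter

    # Enumerate all 36 equally likely outcomes of rolling two six-sided dice.
    sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

    for total in range(2, 13):
        print(f"sum={total:2d}  ways={sums[total]}  probability={sums[total] / 36:.3f}")

    # A 7 can occur 6 ways (probability 1/6), a 12 only 1 way (1/36),
    # and a non-integer sum such as 6.2 can never occur.

We can’t predict any single throw, but we can describe the system’s
behavior precisely.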

AI is more like the real world

You can’t evaluate a complex probabilistic model the same way you can
evaluate a traditional, deterministic software application. Probabilistic
models have to be evaluated the way we evaluate complex decisions made
by organizations.

For example, consider airline security. Marilyn Hartman, also known as the
“serial stowaway,” has sneaked onto more than 20 commercial airline flights.(22)
Often without a passport or ticket, she has managed to get through airport
security and board flights.

How is this possible? What decisions lead TSA and gate agents to let her
pass? Hartman refuses to reveal her methods, and cameras capture only
parts of her escapades, so we have few clear answers about how she does it.

Like systems in the airline industry, AI systems are extremely complex. We
don’t know exactly why every event occurs, but we have tools for evaluating
the system’s decisions.

The black-box objection most frequently arises in highly regulated industries
like banking. After you identify the sticking points, addressing them might
entail educating legal advisors on AI or demonstrating the AI system’s
usefulness by training a simpler model on the same dataset.

Educate your audience about AI

As you now know, AI isn’t a newfangled, untested tool. Researchers have
been studying neural networks since the 1950s. Once your audience
understands the magnitude of AI’s potential, they’ll be more willing to do
the work of getting approval to use it.

Describe to your audience what you’re creating, how you’re building the
training data, and what happens with the results. Have your team show the
results of your test datasets. Listen, explain, and build trust.
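
As a sketch of what showing test results might look like, here is a minimal
Python example (the dataset and metrics are stand-ins chosen only for
illustration) that holds out a test set and reports results an audience can
inspect:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    # Hold out a test set the model never sees during training.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    predictions = model.predict(X_test)

    # Report numbers your audience can question and verify.
    print("test accuracy:", accuracy_score(y_test, predictions))
    print("confusion matrix:")
    print(confusion_matrix(y_test, predictions))

Concrete, verifiable numbers like these go a long way toward building trust.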

Interpret with simpler models

A demonstration is often an effective way to show the potential of your
AI solution. In many cases you can train a traditional machine learning
model (random forests are a common choice) on the same dataset. Such a
model can, for instance, identify the inputs that matter most for making an
output prediction. The results probably won’t be as good as your neural
network’s, but the simpler model can help regulators understand how the
AI version will work.
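
Here is a minimal Python sketch of that approach using scikit-learn; the
dataset is a hypothetical stand-in for your own training data:

    from sklearn.datasets import load_breast_cancer  # stand-in for your dataset
    from sklearn.ensemble import RandomForestClassifier

    # Train a simpler, more interpretable model on the same data the
    # neural network uses.
    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # Rank the inputs by how much they influence the model's predictions.
    ranked = sorted(
        zip(data.feature_names, model.feature_importances_),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

A ranking like this gives a regulator a concrete, familiar handle on what
drives the model’s decisions.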

Social and economic fallout

AI will change our society in ways we cannot even imagine. Understandably,
people are concerned about inevitable downsides such as job losses, dangerous
AI applications, and algorithms that capture undesired human biases.

Nobody knows the ultimate human costs of AI, and the experts debate the
issue. They even debate about debating. Andrew Ng argues that fearing AI is
like worrying about overpopulation on Mars.(23) Eliezer Yudkowsky claims this
type of complacency poses enormous risks for humanity’s survival.(24)

Both Ng and Yudkowsky make good arguments and know far more about AI
than I do, but all they have are opinions. Since the real experts can’t agree,
I don’t feel compelled to take a position on the societal impact of AI. I
simply acknowledge these concerns and agree that they deserve serious
attention.

Every significant technological advance in history has had a human cost, and
even if we haven’t always overcome the costs, we have learned to deal
with them.

Let’s Future-Proof Your Business.