The media writes scary stories about everything that can go wrong with AI. A spreadsheet error creates $6B in losses for JPMorgan Chase. Content creators sue YouTube, alleging AI racial profiling. Law firms raise concerns about regulatory risk from AI’s “black box.” AI is going to take our jobs or kill us!

Entrusting your business decisions to AI does indeed create new risks, and you need new policies and procedures to mitigate them. But these sensational news headlines lead companies to take ineffective approaches that result in unnecessary delays and frustration. After helping some of the world’s largest companies develop model governance policies, I’ve learned a better way to manage the complexities of AI. In this issue of FeedForward, you’ll explore some of my top recommendations.

The path to frustration starts with good intentions

Let’s say that one day you realize your company needs to create or upgrade its model governance policy. You start by looking up “AI risks.” After scanning a few alarming headlines, you think, “We need new policies now to mitigate these risks!” In a search for best practices, you find model cards, presentations, and articles about AI models. These leave you even more confused. It all feels overwhelming, and you don’t know where to begin. So you call a meeting.

Everyone in the meeting agrees on the need for new policies. A teammate creates a model-governance wiki page, and you start jotting down issues and open questions. You decide to form a model governance working group, and you schedule a weekly meeting.

After a few weeks, frustration starts to set in. You’re getting no closer to answering the questions that arise at each meeting:

  • Should our validation team be centralized or decentralized?
  • Should our model governance policy apply to nonproduction (end-user computing) models?
  • What’s the role of the existing risk team?
  • Do we have to revalidate models every time they’re retrained?
  • Does the policy cover just the models? Or does it cover all relevant production software, such as pipelines?

The list grows like the sand pile at the bottom of an hourglass. Attendance at your weekly meeting begins to fall, and you’re no closer to having a model governance policy.

Sound familiar? I found myself in this exact situation when I started creating model governance policies four years ago. My mistake was focusing on the policy instead of the people who will execute it.

Don’t start with a comprehensive policy

I learned a key lesson the hard way: starting with the policy only creates delays and frustration. Here are three reasons why this approach stalls:

  • You’ll try to predict everything. Research and infrastructure options are changing too fast for good predictions. You don’t even know how you’ll deploy AI. Building a policy to cover the unknown is impossible.
  • You’ll prematurely seek an end state. Your policy will require continuous updates, and your first version will be inadequate regardless of how many meetings you have. You need to constrain the scope of the first version and get people using it.
  • You’ll avoid the real challenge—adoption. The hardest part is getting everyone to use and improve the policy. You need to educate people about machine learning and the novel risks it creates. People will need to commit to the process. A complex, long policy creates adoption barriers.

Start with the three P’s of model governance

The companies that successfully and quickly get their model governance programs on track take a different approach. They focus on getting answers to three critical questions, which address what I call the three P’s of model governance:

  • People: What role will your existing governance teams play?
  • Process: What new processes can mitigate model risk?
  • Priority: What models will go through the process first?

Let’s talk about each question.

What role will your existing governance teams play?

The first question is the hardest and most time-consuming; on some projects I’ve spent an entire month just learning how the legacy risk management processes work. To answer it, start with people: identify the teams that will play a critical governance role.

Your company already has policies and procedures to manage technology risk. Existing teams maintain those policies and provide oversight. Your first task is determining the role those teams will play in model governance. One of the following two scenarios likely applies to your situation.

Scenario 1: Your company has a model risk management practice

Companies in highly regulated industries like financial services and healthcare already have formalized risk management practices and dedicated teams to oversee them. For example, insurance companies have formal processes for reviewing actuarial models because errors in forecasting premiums and payouts can have devastating financial effects. Banks have layers of risk management for regulatory purposes.

Your legacy risk management teams likely don’t understand how modern machine learning models are built and deployed. They were trained on traditional rule-based models and might not fully grasp the complexity of deep learning models and emerging machine learning operations (MLOps) infrastructure. They might try to shoehorn emerging machine learning practices into their existing processes and worldview. It won’t work.

Scenario 2: Your company has a software governance practice

Although you might not have a model risk management practice, you almost certainly have a quality assurance (QA) team responsible for ensuring that best practices are followed when you deploy software.

Your QA team, and probably your company, will think of models like any other software library. They will assume that existing software lifecycle management policies kick in after the data scientists do their magical data-sciency stuff and produce a model. They will try to isolate the model libraries as an independent function and look for unit and integration tests, just as they would for any other library.

This approach won’t work because a machine learning model is vastly more powerful, and riskier, than a traditional software library: its behavior is determined by its training data, not just its code. The business risks cannot be isolated through traditional software testing techniques.
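
To make the gap concrete, here’s a minimal sketch in Python. The loan-approval scenario, threshold, and numbers are entirely hypothetical; the point is that a green test suite says nothing about whether a model is still right for today’s data.

```python
# Toy stand-in for a trained model: a decision rule derived from
# historical data. Scenario and threshold are hypothetical.

def approve_loan(annual_income: float) -> bool:
    """'Model': approve applicants above a threshold learned from old data."""
    return annual_income > 50_000

def test_approve_loan():
    # Classic QA-style unit test: the code runs and behaves correctly
    # on the cases we wrote down.
    assert approve_loan(60_000) is True
    assert approve_loan(40_000) is False

test_approve_loan()  # Passes today, and will keep passing forever.

# The model's correctness depends on the data, not the code. If incomes
# drift (inflation, a new market segment), this same passing test suite
# now blesses a model that approves the wrong applicants. That risk
# lives outside anything a unit or integration test can see.
```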

Your challenge: Helping your governance teams understand their job

Regardless of your scenario, your job is the same: identify the teams that will play a role. Get their help creating your model governance policy.

These teams will want to remain relevant and be part of model governance, but they won’t know what to do. They also won’t have resources to dedicate to the effort. Worse still, they will initially be intimidated by AI because of the sensational headlines they have read. If you let their initial resistance take root, organizational inertia will set in, and nothing will happen. Help these teams overcome this inertia and get comfortable with their changing role. These strategies have worked for me:

  • Identify their current mandate within model governance. For example, a model validation team might be responsible only for ensuring the model works as intended before it is used in business decisions. This mandate constrains the scope of their new responsibilities. They might need to update their processes to account for deep learning models, but they don’t need to handle production challenges such as model monitoring.
  • Bridge the terminology differences between them and data scientists. If their existing policies use the term inputs, use this term instead of features. Clearly define terms like holdout sets and use examples.
  • Get an experienced person to help drive the changes temporarily. Often the biggest challenge for governance teams is capacity: they tend to be maxed out just keeping the existing operation running. To address the lack of people, consider lending the team a senior data scientist.

After you overcome the initial resistance, these governance teams will become great partners.

What new processes can mitigate model risk?

Many AI leaders are afraid to tackle governance processes, perhaps assuming that colleagues will balk at new assignments. I’ve discovered that an effective way to address this change anxiety is to develop a draft process.

A draft process lays out the key steps for getting a model approved. Creating one will significantly accelerate your model governance program and bring further clarity to roles.

After you define the basic roles, start creating draft processes and discussing them with stakeholders. For example, you can start by outlining a few simple steps for getting a new model validated. Here’s an example of a draft process:

  1. The enterprise model risk team publishes a model validation playbook.
  2. Before deploying a new model, the line of business (LOB) stands up an independent model validation team.
  3. The LOB team creates a model validation plan based on the playbook.
  4. The LOB team executes the plan while the model risk team provides oversight.

This simple workflow clarifies the division of responsibility between the LOB that will use the model and the enterprise model risk team.

The model risk team develops a playbook that describes the company’s model validation process. Before an LOB can deploy a new model, it must first stand up an independent model validation team. This model validation team creates and executes a model validation plan based on the playbook. The model risk team provides oversight.
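
If it helps to make the LOB team’s work concrete, here’s a minimal sketch of a validation plan expressed as Python. Every name, check, and threshold below is a hypothetical illustration, not part of any real playbook. The shape is the point: each playbook requirement becomes an explicit, reviewable check.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ValidationCheck:
    name: str
    passed: Callable[[], bool]  # returns True when the check passes

def execute_plan(checks: list[ValidationCheck]) -> bool:
    """Run each check and print a record the model risk team can review."""
    all_ok = True
    for check in checks:
        ok = check.passed()
        print(f"{'PASS' if ok else 'FAIL'}: {check.name}")
        all_ok = all_ok and ok
    return all_ok

# Toy evaluation results standing in for a real model's metrics.
holdout_auc = 0.81       # performance on data the model never saw
segment_auc_gap = 0.03   # largest performance gap across customer segments

plan = [
    ValidationCheck("Holdout AUC meets the 0.75 minimum", lambda: holdout_auc >= 0.75),
    ValidationCheck("Performance is stable across segments", lambda: segment_auc_gap < 0.05),
]

if execute_plan(plan):
    print("Plan passed: submit results for model risk team sign-off.")
```

None of this needs to be automated on day one; a checklist on a wiki page serves the same purpose. What matters is that the plan is written down, independent of the data science team, and auditable.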

Getting consensus on this process will accelerate your model risk program by clarifying next steps. This example also reveals the need for an enterprise model risk team.

What models will go through the process first?

I cannot stress enough the importance of identifying the first models that will go through the process. I’ve seen companies waste a year or more developing mitigation strategies that have no relevance to what the data science teams are actually doing.

Put aside the sensational “what if” scenarios and focus on your priorities—the actual risks from the first models you will deploy.

In my experience, data science teams are conservative when deploying new models that could affect the business. They start by deploying the model in the lowest-risk scenarios, or they include human review.

If you need to reduce the company’s anxiety about model governance, consider starting with initial models that have low business risk. For example, a lead-scoring model doesn’t create existential risks to the company. The biggest risk in this scenario is that the sales team won’t end up using the model.

Focusing on priorities will constrain the scope of your first model governance policy. Start with specific risks and add complexity over time.

Start the model governance process and then hand it off

Let’s be honest—model governance isn’t the most interesting problem you could be working on. It can become a huge time sink and a distraction from your bigger problems. (Some clients assign it to me simply because they don’t want to deal with it.)

But this is a job that needs to get done. And as an AI leader, you are the best person to move the company in the right direction. Once again, you need to be the AI translator for your company.

Most people in your company don’t understand AI. They have a hard time imagining a different future. If you start with the three P’s, you will create a safe space for people to ask questions and learn. Get the right people involved, draft a process, and set priorities. When other teams see a clear path forward, they will take ownership. Soon enough, you’ll be on to more interesting challenges.

Kevin Dewalt
Chief Executive Officer & co-founder
