The media writes scary stories about everything that can go wrong with AI. A spreadsheet error contributes to $6 billion in losses at JPMorgan Chase. Content creators sue YouTube, alleging AI racial profiling. Law firms raise concerns about regulatory risk from AI’s “black box.” AI is going to take our jobs or kill us!
Entrusting your business decisions to AI does indeed create new risks, and you need new policies and procedures to mitigate them. But these sensational news headlines lead companies to take ineffective approaches that result in unnecessary delays and frustration. After helping some of the world’s largest companies develop model governance policies, I’ve learned a better way to manage the complexities of AI. In this issue of FeedForward, you’ll explore some of my top recommendations.
Let’s say that one day you realize your company needs to create or upgrade its model governance policy. You start by looking up “AI risks.” After gleaning a few alarming headlines, you think, “We need new policies now to mitigate these risks!” In a search for best practices, you find model cards, presentations, and articles about AI models. These leave you even more confused. Overwhelmed and unsure where to begin, you call a meeting.
Everyone in the meeting agrees on the need for new policies. A teammate creates a model-governance wiki page, and you start jotting down issues and open questions. You decide to form a model governance working group, and you schedule a weekly meeting.
After a few weeks, frustration starts to set in. Each meeting raises more questions than it answers, and the list of open issues grows like the sand pile at the bottom of an hourglass. Attendance at your weekly meeting begins to fall, and you’re no closer to having a model governance policy.
Sound familiar? I found myself in this exact situation when I started creating model governance policies four years ago. My mistake was focusing on the policy instead of the people who will execute it.
I learned a key lesson the hard way: starting with the policy only creates delays and frustration. A policy drafted in the abstract has no owners, no process to anchor it, and no real models to test it against, so the effort stalls.
The companies that successfully and quickly get their model governance programs on track take a different approach. They focus on getting answers to three critical questions. These questions address what I call the three P’s of model governance:

- People: Who will play a role in model governance?
- Process: What steps will a model go through to get approved?
- Priorities: Which models and risks will you address first?
Let’s talk about each question.
The first question is the hardest and most time-consuming to answer. Start with the people: identify the teams that will play a critical governance role. On some projects, I’ve spent an entire month just learning how the legacy risk management processes work.
Your company already has policies and procedures to manage technology risk, and existing teams maintain those policies and provide oversight. Your first task is determining the role those teams will play in model governance. One of two scenarios likely applies to your situation.
Companies in highly regulated industries like financial services and healthcare already have formalized risk management practices and dedicated teams to oversee them. For example, insurance companies have formal processes for reviewing actuarial models because errors in forecasting premiums and payouts can have devastating financial effects. Banks have layers of risk management for regulatory purposes.
Your legacy risk management teams likely don’t understand how modern machine learning models are built and deployed. They were trained on traditional rule-based models and might not fully grasp the complexity of deep learning models and emerging machine learning operations (MLOps) infrastructure. They might try to shoehorn emerging machine learning practices into their existing processes and worldview, and it won’t work.
Although you might not have a model risk management practice, you almost certainly have a quality assurance (QA) team responsible for ensuring that best practices are followed when you deploy software.
Your QA team, and probably your company, will think of models like any other software library. They will assume that existing software lifecycle management policies can be followed once the data scientists do their magical data-sciency stuff and produce a model. They will try to isolate the model as an independent component and look for unit and integration tests, just as they would for any other software library.
This approach won’t work because a machine learning model is fundamentally different from a traditional software library: its behavior is learned from data rather than specified in code, so the business risks cannot be isolated through traditional software testing techniques.
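To make the contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the apply_discount and score_lead functions, the data, the 0.85 threshold); it simply illustrates the gap. The deterministic test proves correctness with a single assert, while the model test can only check statistical behavior against thresholds that someone in your governance process has to choose.

```python
import numpy as np

# Hypothetical stand-ins for illustration only.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

def score_lead(features: np.ndarray) -> np.ndarray:
    # Placeholder "model"; in reality this would be a trained classifier.
    return (features[:, 0] > 0.5).astype(int)

def test_discount_calculation():
    # Traditional library code is deterministic: one assert proves correctness.
    assert apply_discount(price=100.0, rate=0.25) == 75.0

def test_model_quality_on_holdout():
    # A model has no single "right answer" to assert. Its business risk lives
    # in aggregate behavior on unseen data, so validation checks statistical
    # properties against thresholds that a governance process must define.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 4))            # stand-in for holdout features
    y = (X[:, 0] > 0.5).astype(int)      # stand-in for holdout labels
    accuracy = (score_lead(X) == y).mean()
    assert accuracy >= 0.85              # who sets this threshold? governance.
```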
Regardless of your scenario, your job is the same: identify the teams that will play a role. Get their help creating your model governance policy.
These teams will want to remain relevant and be part of model governance, but they won’t know what to do. They also won’t have resources to dedicate to the effort. Worse still, they will initially be intimidated by AI because of the sensational headlines they have read. If you let their initial resistance take root, organizational inertia will set in, and nothing will happen. Your job is to help these teams overcome this inertia and get comfortable with their changing role.
After you overcome the initial resistance, these governance teams will become great partners.
Many AI leaders are afraid to address governance processes. Perhaps they assume that colleagues will balk at new assignments. I’ve discovered that an effective way to address change anxiety is to develop a draft process.
A draft process lays out the key steps for getting a model approved. Creating one will significantly accelerate your model governance program and bring further clarity to roles.
After you define the basic roles, start creating draft processes and discussing them with stakeholders. For example, you can start by outlining a few simple steps for getting a new model validated. Here’s an example of a draft process:

1. The enterprise model risk team publishes a model validation playbook.
2. Before deploying a new model, the line of business stands up an independent model validation team.
3. The validation team creates and executes a validation plan based on the playbook.
4. The model risk team reviews the results and provides oversight.
This simple workflow clarifies the division of responsibility between the line of business (LOB) that will use the model and the enterprise model risk team.
The model risk team develops a playbook that describes the company’s model validation process. Before an LOB can deploy a new model, it must first stand up an independent model validation team. This model validation team creates and executes a model validation plan based on the playbook. The model risk team provides oversight.
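If it helps to see the hand-offs explicitly, here is a minimal sketch that encodes the draft process above as a sign-off checklist. The step names mirror the workflow; the class names and structure are hypothetical, not a prescribed tool.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    owner: str          # the team responsible for signing off
    done: bool = False

@dataclass
class ModelValidation:
    model_name: str
    steps: list = field(default_factory=lambda: [
        Step("Publish model validation playbook", owner="model risk team"),
        Step("Stand up independent validation team", owner="line of business"),
        Step("Create validation plan from playbook", owner="validation team"),
        Step("Execute plan and record results", owner="validation team"),
        Step("Review results and provide oversight", owner="model risk team"),
    ])

    def approved(self) -> bool:
        # The model ships only when every step has been signed off.
        return all(step.done for step in self.steps)

# Usage: track your first model through the process.
validation = ModelValidation("lead_scoring_v1")
print(validation.approved())  # False until every owner signs off
```

Even a toy representation like this forces the conversation that matters: every step has exactly one owner.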
Getting consensus on this process will accelerate your model risk program by clarifying next steps. Working through the example also reveals the need for an enterprise model risk team.
I cannot stress enough the importance of identifying the first models that will go through the process. I’ve seen companies waste a year or more developing mitigation strategies that have no relevance to what the data science teams are actually doing.
Put aside the sensational “what if” scenarios and focus on your priorities—the actual risks from the first models you will deploy.
In my experience, data science teams are conservative when they deploy new models that have a potential business impact. They start by deploying the model in the lowest-risk scenarios, or they include human review.
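One common form of that human review is a confidence gate: the model’s prediction is applied automatically only when the model is confident, and everything else is routed to a person. Here is a minimal sketch, assuming a scikit-learn-style classifier with a predict_proba method; the function name and the 0.9 threshold are hypothetical choices, not a standard.

```python
def route_prediction(model, features, confidence_threshold=0.9):
    """Apply high-confidence predictions automatically; queue the rest for review."""
    proba = model.predict_proba([features])[0]   # class probabilities
    label = proba.argmax()
    if proba[label] >= confidence_threshold:
        return {"decision": int(label), "source": "model"}
    # Low confidence: a person makes the call. Logging these cases also
    # gives the governance team a record of how often the model defers.
    return {"decision": None, "source": "human_review_queue"}
```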
If you need to reduce the company’s anxiety about model governance, consider starting with initial models that have low business risk. For example, a lead-scoring model doesn’t create existential risks to the company. The biggest risk in this scenario is that the sales team won’t end up using the model.
Focusing on priorities will constrain the scope of your first model governance policy. Start with specific risks and add complexity over time.
Let’s be honest—model governance isn’t the most interesting problem you could be working on. It can become a huge time sink and a distraction from your bigger problems. (Some clients assign it to me simply because they don’t want to deal with it.)
But this is a job that needs to get done. And as an AI leader, you are the best person to move the company in the right direction. Once again, you need to be the AI translator for your company.
Most people in your company don’t understand AI. They have a hard time imagining a different future. If you start with the three P’s, you will create a safe space for people to ask questions and learn. Get the right people involved, draft up a process, and set priorities. When other teams see a clear path forward, they will take ownership. Soon enough, you’ll be on to more interesting challenges.