Model Governance? Sounds boring. Why do I need that?
Well, in short, it’s about managing risk. Specifically model risk – the risk of adverse consequences resulting from reliance on a model that does not adequately represent that which is being modeled or that is misused or misinterpreted.
That covers a fair amount of ground. If we break it down into the different kinds of risk that can occur at various points in the modeling process, we arrive at scenarios like the following.
The Model is Wrong
All models are wrong, but some are useful. – George E. P. Box
This quote is generally meant to communicate that no model can perfectly predict every aspect of the future; there are too many variables and unknowns. That is why we build a model in the first place: it is a simplified view of the thing we are trying to understand. That said, there are varying degrees of wrongness. By the very nature of being a model, it won't be 100% accurate, but we still need to ensure that we are modeling relationships, calculations, and behaviors as intended. Even simple mathematical mistakes in calculations can lead to wildly inaccurate results once projections are extrapolated far enough.
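To make that concrete, here is a hypothetical illustration (the numbers are assumed, not from this post): a growth assumption that is off by just half a percentage point produces a material error once compounded over a 30-year projection.

```python
def project(balance, annual_rate, years):
    """Project a balance forward with simple annual compounding."""
    for _ in range(years):
        balance *= 1 + annual_rate
    return balance

# Intended assumption: 5.0% annual growth on a 1,000,000 starting balance.
correct = project(1_000_000, 0.050, 30)

# Suppose a formula bug quietly inflates the rate to 5.5%.
buggy = project(1_000_000, 0.055, 30)

print(f"correct: {correct:,.0f}")
print(f"buggy:   {buggy:,.0f}")
print(f"error:   {buggy / correct - 1:.1%}")  # roughly a 15% overstatement
```

A discrepancy that is invisible in any single year becomes a double-digit error by the end of the projection, which is exactly why small calculation mistakes matter.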
The Wrong Model
The model is right, but used to answer the wrong question. Or stated another way, we’ve chosen the wrong model for the job.
This is the risk of using the model for a purpose other than its intended use. For example, we could have the best model ever created to model future stock prices. But if someone comes along and tries to use that model to predict, for example, next week’s weather, it probably won’t give very accurate or useful results. Does a bull market mean a higher chance of precipitation?
That’s a rather extreme example, but we could definitely get to more subtle cases. Consider a life valuation model intended to calculate expected capital requirement risk for an in-force block of business. What would happen if someone picked that up and tried to use it to price new business in the future? Would it give a useful result for that type of question? How would such a model handle future sales of new policies? Would it recognize the initial capital outlays required for sales and various management actions and controls that could increase or limit sales? Possibly, if the model was also designed to handle that situation. But chances are that a valuation model wouldn’t be inherently set up for that purpose.
The model is right, used for the right purpose, but not run properly. This is commonly referred to as Garbage In, Garbage Out.
If we have the right model and are using it for the right purpose, are we giving it all the correct inputs to do its intended job? Are we starting with current in-force data files? Have our assumptions been updated to current best estimates? Did we update the economic scenarios to reflect current conditions and current future outlooks? Complex actuarial models can have numerous tables, files, and assumption values that need regular updating to ensure the model gives the intended answer.
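One simple mitigation is to fail fast when inputs look stale. The sketch below is purely illustrative (the function, field names, and thresholds are assumptions, not part of any particular modeling system): it checks input vintages against the valuation date before a run proceeds.

```python
from datetime import date

def check_inputs(inforce_as_of, assumptions_version, scenario_as_of,
                 valuation_date, approved_assumption_versions):
    """Return a list of problems found; an empty list means inputs look current."""
    problems = []
    if inforce_as_of < valuation_date:
        problems.append(f"in-force data dated {inforce_as_of}; "
                        f"valuation date is {valuation_date}")
    if assumptions_version not in approved_assumption_versions:
        problems.append(f"assumption set {assumptions_version!r} is not approved")
    if scenario_as_of != valuation_date:
        problems.append("economic scenarios were not regenerated for the valuation date")
    return problems

# Example run with a stale data file and an outdated assumption set.
issues = check_inputs(
    inforce_as_of=date(2024, 9, 30),
    assumptions_version="2023-best-estimate",
    scenario_as_of=date(2024, 12, 31),
    valuation_date=date(2024, 12, 31),
    approved_assumption_versions={"2024-best-estimate"},
)
for issue in issues:
    print("INPUT CHECK FAILED:", issue)
```

Automated checks like these do not replace judgment, but they catch the "forgot to refresh the file" class of garbage-in errors before results are produced.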
The model is right, used for the right purpose, run correctly, but the results are misunderstood.
This is all about understanding the results that come out of a model and what they really mean. Even if everything about the model is done right, it is still quite easy for someone else to look at the results and completely misinterpret them. This risk arises after everything has been locked down, checked, and validated: we have reached the point where we are ready to distribute results, and someone who was not involved in the modeling process uses them to reach invalid conclusions.
An example would be running a valuation model that includes various conservative assumptions, providing the results to someone, and having them read that conservative projection as a best estimate of future results. They might conclude that the future looks rather bleak, when the data were meant only to show the downside risk, not any of the upside opportunity inherent in the projections.
That brings us to the need for model governance. In complicated models and modeling systems, there are many potential points of failure, some of which will be immediately apparent and some of which will not. Model governance is about reducing the risk of those failures and related errors. But it's more than that, too: it's also about knowing where those risks can occur. Because let's face it, knowing is half the battle.
By implementing a robust model governance framework, we can develop a greater understanding of where the risks are, and from there develop appropriate processes and controls to reduce those risks and ensure that our models provide maximum value for the immense time and effort we put into them.
This is the first in a series of posts where we’ll be discussing model governance in depth. We’ll dive into topics around some of the specifics of implementing a robust Model Governance process, including: