Why We Model With Instrument Level Data
One of the conversations we often have with prospects is about using instrument level data to model interest rate risk instead of aggregate data. This is an important distinction between models, and one we feel is a strong selling point for Asset Management Group. Making that case in a sales conversation is admittedly difficult: we are clearly biased toward our way of doing it. But the fact is, we chose to base our model on instruments rather than aggregates even though aggregate level data is far easier to work with from a modeling perspective. Yes, we are biased, but we have been biased since the design phase. So why do we feel so strongly about this?
Years ago, most of the models measuring interest rate risk (especially for community banks) used aggregate level data. In fact, this data often came directly from call reports. Over the last decade, though, the trend has been for more and more models to use instrument level data, importing the details of individual loans, investments, deposits, borrowings, and general ledger accounts. There are many reasons for this, but generally, banks have simply become more sophisticated and more complex, and using broad categories with average characteristics no longer translates to accurate results.
Our insistence on using instrument level data, even though it increases our processing effort significantly, is based on a few major factors:
- Using aggregated categories does not allow for accurate modeling of cash flows, repricing, prepayments, or floors and ceilings. The issue of floors is especially important now. When using aggregate data, you have to try to separate the loans with floors through rate codes, apply average floors, and then assume how that category will "generally" behave when it reprices. This will always be far less accurate than using loan level data, where you know exactly which loans have floors, at what level, based on which index, and at what spread over that index. You can accurately measure how much margin pressure there will be due to indexed rates not piercing floors. The same is true for every repricing event: you know exact levels instead of average levels.
- Using aggregated categories does not allow for accurate forecasting of liquidity, for many of the same reasons. You must assume, on average, when a category of assets or funding will mature instead of knowing exactly which instruments mature in which time bucket. Because of this, AMG is able to include with its model a stress-tested liquidity forecast that helps banks meet regulators' liquidity guidelines.
- Using instrument level data allows for much more in-depth reporting. AMG is able to generate reports that analyze pricing deal by deal, as well as production reports by branch.
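The floor effect described above can be sketched in a few lines. The loans, balances, spreads, and floor levels below are invented for illustration; the point is that repricing each loan against its own floor and repricing one aggregate against an average floor diverge precisely when rates leave some floors binding and others not:

```python
# Hypothetical three-loan portfolio (all numbers invented for illustration).
# Each tuple is (balance, spread over index, floor rate).
loans = [
    (1_000_000, 0.025, 0.045),
    (1_000_000, 0.030, 0.060),
    (1_000_000, 0.020, 0.040),
]

def portfolio_yield_instrument_level(index_rate):
    """Reprice each loan individually: rate = max(index + spread, floor)."""
    total = sum(balance for balance, _, _ in loans)
    interest = sum(balance * max(index_rate + spread, floor)
                   for balance, spread, floor in loans)
    return interest / total

def portfolio_yield_aggregate(index_rate):
    """Model the category as one position with average spread and average floor."""
    n = len(loans)
    avg_spread = sum(spread for _, spread, _ in loans) / n
    avg_floor = sum(floor for _, _, floor in loans) / n
    return max(index_rate + avg_spread, avg_floor)

# At a 2.5% index, some floors bind and some do not, so the aggregate
# version understates the portfolio yield.
for idx in (0.010, 0.025, 0.040):
    print(f"index {idx:.1%}: instrument {portfolio_yield_instrument_level(idx):.3%}"
          f" vs aggregate {portfolio_yield_aggregate(idx):.3%}")
```

At very low rates every floor binds and at high rates none do, so the two methods agree at the extremes; the error shows up in the middle of the path, which is exactly where margin pressure forecasts matter.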
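The liquidity point works the same way. Here is a minimal sketch (instruments invented for illustration) of bucketing maturing balances instrument by instrument versus assuming one average maturity for the whole category:

```python
from collections import defaultdict

# Hypothetical funding category (all numbers invented for illustration).
# Each tuple is (balance, months to maturity).
instruments = [
    (500_000, 1),
    (250_000, 2),
    (250_000, 12),
]

# Instrument level: each balance lands in its actual maturity bucket.
buckets = defaultdict(float)
for balance, months in instruments:
    buckets[months] += balance

# Aggregate: the whole category is assumed to mature at its weighted
# average maturity, hiding the near-term cash need.
total = sum(balance for balance, _ in instruments)
avg_months = round(sum(balance * months for balance, months in instruments) / total)

print(dict(buckets))        # exact buckets: 750k matures within two months
print({avg_months: total})  # aggregate view: everything "matures" in month 4
```

The aggregate view pushes the entire category out to month 4, while the instrument level view shows most of the cash need arriving in the first two months, which is the difference a stress-tested liquidity forecast has to capture.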
In summary, this level of detail translates to a much more accurate measure of risk. Modeling already relies on several sets of assumptions; introducing unnecessary ones is simply not worth the time and effort saved. Another set of assumptions to defend is the last thing bankers need. In addition, some regulators are asking tough questions about the accuracy of models that use aggregate data and insisting that banks adopt models that match their level of complexity.
Some banks claim they are not that complex and, since aggregate data models are often slightly cheaper, see no need to spend additional dollars. My first question would be: are you sure your balance sheet is really that simple? If you have amortizing loans that can be prepaid and you have non-maturing deposits, your balance sheet is fairly complex, with a tremendous amount of optionality that must be accurately modeled. More importantly, if you are spending money on this anyway, why pay for a solution that might be giving you the wrong results?