Paul Calem, Assistant Vice President for Retail Risk Analysis at the Federal Reserve Bank of Philadelphia, and Wei Chen, Head of Global Banking Risk Product Management at SAS, discussed effective stress testing bank models.
WEI CHEN: In the past few years, what changes have you seen in regulatory risk and stress testing best practices? What challenges are banks facing now?
PAUL CALEM: I am offering only my own, non-official views. Scenario selection has been a challenge at institutions, but I think it has improved over time. Recently released guidance helps with this, and a public document put out in 2013 describing the horizontal view of practices at institutions has also helped.
In tailoring the scenario to the institution’s risk profile, there has been a tendency to start with the Fed scenario and then adapt or modify it one way or another to fit the bank. Starting from the bottom up has been more of a challenge. Banks have to think about what special circumstances are not adequately addressed through the supervisory scenario. They also have to think about what kinds of scenarios can be conceived that would severely impact the bank under stressful circumstances. We really need this kind of introspection because the next crisis will probably come from a non-standard type of scenario that may be specific to a few banks in terms of the kinds of risks they’re undertaking. There’s a danger in always looking backward and choosing scenarios from historical experience. You have to be proactive, and in many cases, it’s up to individual banks to do that because they have a better idea of the new exposures and risks that will make each of them vulnerable and which scenarios they will need to conceptualize.
The guidance that was released recommends that banks be careful about employing offsetting assumptions – that is, offsetting some stress and losses with mitigating assumptions on the revenue side. Banks have to make sure that, when it's all put together, they've not only stress tested individual portfolios but have also sufficiently stressed the results overall. In the past, there were some problems with that at some banks. Again, I don't think it's very common anymore, so that's something that has improved over time. And, of course, the knowledge that's been gained through modeling experience has really enhanced the quality of the models and the data. Hopefully, the supervisory side has been constructive in pointing out problems and remedying them.
How do you see banks managing their commitment to their organizational and regulatory policy and procedures? What is a best practice?
From what I’ve heard – and this is mostly in retail – it’s been working fairly well over the past year. The response to the supervisory findings and recommendations has been very good. The banks have been staying in touch with regulators throughout the year in terms of what they’re doing to remedy the issues. There’s been good communication and good feedback on both sides. So, overall, it’s been positive.
In the longer term, I think the best practice is to build a very strong, effective internal validation function; that will save everyone time and effort and make the whole regulatory process easier. Banks also have to be transparent about their assumptions and show clear documentation. It's tough for examiners, as well as for internal operations within banks, to follow thousands of pages of documentation, jumping from one piece to another to cycle through it. Clear documentation, with perhaps some of the details provided in appendices, can really help the regulatory relationship as well as the internal process of approving the models.
There’s some confusion about how to justify judgmental overlays. The first step is to catalog the overlays: which are top-of-the-house, which are model-specific and which are assumptions rather than overlays? What issues are outside of the models that you have overlays for? What are some of the other issues that you don’t have overlays for? If you document these things, catalog them and organize them, that makes discussions with regulators a lot easier.
Sensitivity analysis is also helpful. If you can demonstrate through sensitivity analysis or benchmarking that the model results are robust, you can more easily make the case with examiners that you have a reliable process.
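As a toy illustration of the kind of sensitivity check described above – perturb the stress inputs and see how much the projected losses move – here is a minimal sketch. The loss function, the input drivers and the perturbation sizes are all invented for illustration; they do not represent any actual supervisory or bank model.

```python
# Toy sensitivity analysis: perturb each stress input by +/-10% and
# measure how much the projected loss rate moves. Narrow spreads
# support a claim that the model results are robust.

def projected_loss(unemployment_rate, hpi_decline):
    """Hypothetical loss-rate model: losses rise with unemployment
    and with the severity of the house-price decline."""
    return 0.02 + 0.004 * unemployment_rate + 0.03 * hpi_decline

# Baseline severe scenario (made-up values).
baseline = projected_loss(unemployment_rate=10.0, hpi_decline=0.25)

# Perturb each driver separately and record the resulting loss rates.
results = {}
for shock in (-0.10, 0.10):
    results[f"unemployment{shock:+.0%}"] = projected_loss(10.0 * (1 + shock), 0.25)
    results[f"hpi{shock:+.0%}"] = projected_loss(10.0, 0.25 * (1 + shock))

spread = max(results.values()) - min(results.values())
print(f"baseline loss rate: {baseline:.4f}")
print(f"range under +/-10% input shocks: {spread:.4f}")
```

In practice the same loop would run over the bank's actual models and scenario variables, and the resulting ranges would be documented alongside the benchmarking evidence.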
“You have to think about what special circumstances are not adequately addressed through the supervisory scenario.”
You mentioned that the overall process of regulatory stress testing is getting better and working well. Is that statement applicable to the DFAST banks as well as the CCAR banks?
I think it’s applicable to both, but the bar will probably be higher for the more complex banks with better data. Supervisors have to match their expectations to the complexity and systemic risk associated with the bank. Again, I’m only speaking from my own observations. I don’t know what the official view is, but I think it’s safe to say that the DFAST banks cannot be held to the same high standard as the more systemically important banks, which typically have more extensive data.
When banks first start out, there’s more of an understanding that they’re participating in a learning process. However, as they proceed, they are expected to close the gaps and remediate the MRAs (Matters Requiring Attention) – that applies to the DFAST banks as well as the CCAR banks.
I do not have first-hand knowledge of DFAST bank models, but the models tend to be less sophisticated, which is understandable, and we shouldn’t expect the same level of detail. They’re still expected to be adequately sensitive to a downturn and to give credible results under stress, and the banks still need to come up with their own tailored scenarios.
When looking at the regulatory process, particularly stress testing, how do you think technology is helping banks address these challenges, especially in aggregating information?
Aggregation is an interesting process. That’s where you put all of the data together from all of the different work streams. What really distinguishes CCAR from previous versions of capital regulation is its complexity. In some respects, it is more complex than the Basel Advanced Approach, which is itself very complex. When you aggregate Basel risk-weighted assets, you at least have the formulas to do it. In CCAR, you also have the revenue side coming in, and there are many accounting aspects to it, which the Basel II Advanced Approach tries to avoid. Basel II also at least attempts to be more of an economic loss concept. So CCAR can be more difficult because of the accounting issues.
I don’t think it’s possible to entirely automate the aggregation because of the overlays, and that’s true of the supervisory aggregation process as well as the bank aggregation process. We’re going to have overlays for risks that are outside of our models, whether it has to do with credit payment shock, credit risk or something else, and they’re going to vary from year to year depending on the emerging risks. That said, you can automate a big part of the aggregation and that will mitigate a lot of the model risk that comes from having a manual bean-counting process of adding things up.
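A minimal sketch of the point above – automate the mechanical summation while keeping the judgmental overlays explicit as a named catalog rather than baked silently into model output. All of the line items and figures below are invented for illustration.

```python
# Sketch of automated aggregation with explicit, catalogued overlays.
# Model-driven losses and revenues are summed mechanically; overlays
# for risks outside the models stay visible as named line items
# (negative = additional loss), so they can be reviewed year to year.

portfolio_losses = {"mortgage": 1200.0, "cards": 800.0, "commercial": 1500.0}
revenue_items = {"net_interest_income": 2500.0, "fee_income": 600.0}
overlays = {"payment_shock_overlay": -150.0, "emerging_risk_overlay": -100.0}

total_losses = sum(portfolio_losses.values())
total_revenue = sum(revenue_items.values())
total_overlays = sum(overlays.values())

net_impact = total_revenue - total_losses + total_overlays
print(f"losses: {total_losses}, revenue: {total_revenue}, overlays: {total_overlays}")
print(f"net impact: {net_impact}")
```

Automating the summation this way removes the manual bean-counting step, while the overlay catalog remains the part that requires judgment and annual review.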
“What really distinguishes CCAR from previous versions of capital regulation is its complexity.”
What do you consider to be the biggest priorities for banks in the next five years when it comes to managing regulatory risk in stress testing?
One big priority is for banks to strengthen their internal review functions and to adopt best practices for their own sake, not just to please the regulators. In some ways, some banks are ahead of us in the sophistication of their models. That’s good. They have to be prepared as they break new ground in modeling, which creates the challenge of proving a new concept. They should take on that challenge because it will lead to advances in risk management. They might have to do more work in terms of marketing an innovation or explaining it to the examiners, but it’s worth it. Innovations can give them a more dynamic and cross-sectional view of risk. They just need strong internal review and documentation, with scenarios that are specific to emerging risks; that will enable them to explain and justify the innovations to regulators and get over the hurdle of those approvals. It is not desirable for stress testing to become a stale process in which both the supervisors and the banks are complacent with the models as they are.
Dr. Wei Chen leads the global management of banking risk products at SAS Institute Inc., where he is responsible for market, credit, liquidity and enterprise risk as well as advanced asset liability management. Dr. Chen has over 15 years of experience in risk analytics and technology in banking and insurance. He frequently interacts with practitioners and academic researchers and is active in risk methodology and technology research.
Dr. Chen is an Associate Editor of the Journal of Risk Model Validation, has published papers in financial risk journals and conference proceedings, and is regularly invited to present to a number of professional and academic risk communities. He serves as adjunct faculty in the financial mathematics program at North Carolina State University and as an elected vice chair for practice of the financial services section of the Institute for Operations Research and the Management Sciences (INFORMS). He is also a director of the GARP Raleigh Chapter.
Dr. Chen holds a Ph.D. from the University of Iowa, where his primary research topic was econometric models of fixed income securities and credit risk. Dr. Chen holds the Financial Risk Manager (FRM) designation from the Global Association of Risk Professionals (GARP).
Paul Calem is assistant vice president and chief of the Retail Risk Analysis section in the Supervision, Regulation & Credit Department at the Federal Reserve Bank of Philadelphia. Previously, he was a senior economist in the Division of Banking Supervision and Regulation at the Federal Reserve Board.
Calem transitioned to banking supervision after several years in mortgage market modeling and analytics in the private sector, including positions at LoanPerformance and Freddie Mac, and prior positions in the Division of Research and Statistics at the Federal Reserve Board and the Research Department of the Federal Reserve Bank of Philadelphia.
He has a Ph.D. and an M.A. in economics from Brown University and a B.A. in mathematics from Duke University. His current responsibilities include implementing annual supervisory stress testing of large banks’ retail portfolios, quantitative support of bank examinations and policy analysis and research.