The EBA has published a comprehensive follow-up report on the use of machine learning (ML) in IRB models, drawing on feedback collected during a consultation on the subject. The report finds that financial institutions are adopting ML techniques in a targeted manner, primarily in the risk differentiation phase of IRB model development. The complexity of ML, however, raises challenges around statistical soundness, the skills required, and the interpretability of outcomes.
The report examines how ML techniques are used across IRB models, with risk differentiation as the predominant area of application. Institutions use ML mainly to refine the predictive power of their models, with more limited application in LGD and EAD modeling. ML is also deployed in the validation phase, often through challenger models built to assess the robustness of the primary model.
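The report does not prescribe how a challenger model comparison should be carried out. As an illustrative sketch only, one common approach is to score both the primary and the challenger model on the same validation sample and compare their discriminatory power, for instance via the AUC. All scores and labels below are hypothetical toy values, not taken from the report.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    # Count pairs where a defaulter scores above a non-defaulter;
    # ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation sample: 1 = default observed, 0 = no default.
labels = [1, 0, 1, 0]
primary_scores = [0.9, 0.2, 0.8, 0.4]     # scores from the primary model
challenger_scores = [0.7, 0.6, 0.5, 0.4]  # scores from the ML challenger

primary_auc = auc(primary_scores, labels)        # 1.0 on this toy sample
challenger_auc = auc(challenger_scores, labels)  # 0.75 on this toy sample
```

A persistent gap in favor of the challenger on out-of-sample data would flag a weakness in the primary model's risk differentiation; a gap in the other direction supports the robustness of the primary model.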
Complexity is a recurring theme in the discussion of ML techniques, particularly around statistical issues. Overfitting, where a model fits the training data so closely that it performs poorly on new data, remains a central concern. Institutions counter it through out-of-sample testing and cross-validation, alongside methodologies that help preserve the economic coherence of the model.
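The report names the techniques but not their mechanics. As a minimal sketch, assuming entirely synthetic data and deliberately simple models, cross-validation exposes overfitting by contrasting a model's perfect in-sample fit with its out-of-sample accuracy: a memorizing model (here a 1-nearest-neighbor rule) scores 100% on the data it was trained on but falls back toward chance on held-out folds.

```python
import random

random.seed(0)

# Hypothetical portfolio: one noise feature, a ~30% default rate,
# and no real relationship between the two.
data = [(random.gauss(0, 1), random.random() < 0.3) for _ in range(200)]

def knn1_predict(train, x):
    # 1-nearest-neighbor: memorizes the training sample (prone to overfit).
    return min(train, key=lambda p: abs(p[0] - x))[1]

def majority_predict(train, x):
    # Simple benchmark: always predict the majority class.
    return sum(y for _, y in train) > len(train) / 2

def train_accuracy(model, data):
    # In-sample accuracy: scored on the same data the model "trained" on.
    return sum(model(data, x) == y for x, y in data) / len(data)

def kfold_accuracy(model, data, k=5):
    # k-fold cross-validation: average accuracy on held-out folds only.
    fold = len(data) // k
    accs = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        accs.append(sum(model(train, x) == y for x, y in test) / len(test))
    return sum(accs) / k

# The memorizing model is perfect in-sample but degrades out-of-sample,
# while the trivial benchmark is stable across both views.
gap = train_accuracy(knn1_predict, data) - kfold_accuracy(knn1_predict, data)
```

A large gap between in-sample and cross-validated accuracy is exactly the overfitting signal the report describes; real IRB validation would of course use proper rating models and discriminatory-power metrics rather than this toy setup.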
On human skills, the report emphasizes the demand for specialized expertise: proficiency in ML methodologies, an understanding of their statistical subtleties, and the ability to steer them through model validation. Institutions report difficulty in balancing the performance gains of ML against the need for a comprehensible and interpretable decision-making process. To address this, the report highlights the importance of transparent documentation and the use of interpretability tools.
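The report does not endorse any particular interpretability tool. One widely used, model-agnostic example, shown here as a sketch on fully synthetic data, is permutation importance: shuffle one feature at a time and measure how much predictive accuracy drops, revealing which inputs the model actually relies on.

```python
import random

random.seed(1)

# Hypothetical data: feature 0 drives the default flag, feature 1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(300)]
y = [x[0] > 0 for x in X]

def model(x):
    # Stand-in for a fitted model: thresholds the informative feature.
    return x[0] > 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, j):
    # Shuffle column j and report the resulting drop in accuracy.
    base = accuracy(X, y)
    col = [x[j] for x in X]
    random.shuffle(col)
    Xp = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    return base - accuracy(Xp, y)

# Shuffling the informative feature hurts; shuffling the noise feature does not.
noise_importance = permutation_importance(X, y, 1)
signal_importance = permutation_importance(X, y, 0)
```

Reporting such importances alongside the model documentation is one concrete way to make an ML-based rating process explainable to validators and supervisors.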
Finally, the report discusses the interaction of ML techniques with other legal frameworks, particularly the GDPR and the AI Act, and underscores the need for clarifications to reduce legal uncertainty and unintended consequences arising from the AI Act.