The Bank of England has published a new Staff Working Paper (No. 1,038) titled "Deep learning model fragility and implications for financial stability and regulation", which examines the growing use of deep learning (AI) models in financial markets and the concerns surrounding the trustworthiness of their results.
The paper examines the stability of predictions and explanations across deep learning models that differ only through subtle changes to model settings, and compares them with traditional, interpretable "glass-box" models. The authors found that deep learning models "produce similar predictions but different explanations, even when the differences in model architecture are due to arbitrary factors like random seeds". In contrast, interpretable models maintain stable explanations and predictions.
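To make this phenomenon concrete, the following minimal Python sketch (not the paper's actual experiment) trains two small neural networks that differ only in their random seed and compares both their predictions and their feature importances. Permutation importance is used here as a simple stand-in for the explanation methods studied in the paper, and the dataset and parameter choices are purely illustrative assumptions.

```python
# Minimal sketch: two otherwise identical neural networks, differing only in
# the random seed, may agree on predictions while their "explanations"
# (here: permutation importances) diverge. All settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a credit-default style dataset.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

preds, importances = [], []
for seed in (1, 2):  # the random seed is the only difference between models
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
    preds.append(model.predict(X_test))
    # Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    importances.append(result.importances_mean)

agreement = (preds[0] == preds[1]).mean()
corr = np.corrcoef(importances[0], importances[1])[0, 1]
print(f"Prediction agreement between seeds: {agreement:.1%}")
print(f"Correlation of feature importances: {corr:.2f}")
```

Comparing the two outputs in this way mirrors the paper's basic question: if retraining under an arbitrary change leaves predictions largely unchanged but shifts which features an explanation method highlights, the explanations cannot be relied upon on their own.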
The paper underlines the importance of robustness and stability in deep learning models, which are often used for internal and consumer-facing decisions in finance. A lack of robustness could create morally hazardous incentives and cause problems for financial institutions' clients, such as loan applicants who might be rejected for arbitrary reasons. Furthermore, deep learning models must be explainable, and institutions must be aware of model weaknesses when deploying such technology, e.g. in credit default analysis or in estimating insurers' expected liability payments. Another key conclusion is that users of deep learning models must learn to interpret model outputs correctly.
Finally, the authors conclude that financial market regulators, in their function as supervisory bodies, need to investigate how a model works and ensure that it is consistent with fundamentals.
