In the first two parts of our blog series “Artificial Intelligence in the Financial Industry”, we discussed the use of artificial intelligence (AI) and machine learning (ML) in securities trading and asset management. AI can, however, also be used in bank risk management. The law requires banks to establish an appropriate and effective risk management system to ensure their risk-bearing capacity on an ongoing basis. This essentially involves identifying market, credit, insolvency, or fraud risks — for instance in connection with trading decisions or lending — and minimizing them. Here, AI and ML can help identify new patterns and thereby contribute to risk mitigation. Financial regulators do not approve individual algorithms; rather, they examine the individual processes on a risk-oriented and ad hoc basis, in their specific application in each individual case. BaFin has, however, defined overarching principles for the use of AI, which financial institutions must take into account.
Management Remains Responsible for Artificial Intelligence and Its Deployment
Regardless of how sophisticated an AI system is, management remains ultimately responsible for its use. Among other things, this means that management must have an adequate technical understanding. If algorithm-based decisions are made, risk management must also be adapted to these circumstances. This means, among other things, that the probability of damage arising from incorrect decisions by the algorithm is analyzed and the results are documented; the same applies to the extent of the potential damage. Furthermore, an overarching framework is to be set up that specifically addresses the algorithm-based decision-making processes and takes their interdependence into account. If applications are sourced from external providers, management is also responsible for ensuring that effective outsourcing management is established.
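The two quantities named above — probability of occurrence and extent of damage — combine into an expected loss per scenario. As a purely illustrative sketch (the scenarios, probabilities, and damage amounts below are invented; a real assessment would come from the bank's own analysis), such a documented risk analysis might look like this:

```python
# Hypothetical sketch: quantifying algorithm-decision risk as expected loss.
# All figures are invented for illustration, not regulatory guidance.

from dataclasses import dataclass

@dataclass
class DecisionRisk:
    scenario: str        # description of the faulty algorithm decision
    probability: float   # estimated probability of occurrence per year
    damage: float        # estimated extent of damage in EUR if it occurs

    @property
    def expected_loss(self) -> float:
        """Expected annual loss = probability of damage x extent of damage."""
        return self.probability * self.damage

risks = [
    DecisionRisk("credit model wrongly approves high-risk loans", 0.02, 5_000_000),
    DecisionRisk("fraud filter misses a new fraud pattern", 0.05, 1_000_000),
]

# Document both the inputs and the result, as the analysis must be documented.
for r in risks:
    print(f"{r.scenario}: expected loss {r.expected_loss:,.0f} EUR/year")

total = sum(r.expected_loss for r in risks)
print(f"Total expected loss: {total:,.0f} EUR/year")
```

The point of the sketch is only that both factors — probability and extent — enter the analysis separately and are recorded alongside the result.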
Bias Must Be Avoided and Legal Requirements Must Be Met
When using AI, the systematic distortion of results (bias) must be avoided; business decisions must not rest on biased output. This also mitigates the risk of reputational damage, for example if individual customer groups are disadvantaged by a biased model. Financial institutions are therefore required to use data of sufficient quality and quantity, and must develop a data strategy in the development phase that ensures the permanent provision of such data. Current data protection regulations must be observed throughout. To ensure that the algorithms and models can be reviewed both internally and externally, financial institutions are subject to a documentation obligation.
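One common way to make the disadvantage of individual customer groups measurable is to compare a model's approval rates across groups. The following is a minimal sketch, assuming invented group labels and sample decisions, using the widely known "four-fifths" disparate-impact ratio (not a method prescribed by BaFin):

```python
# Hypothetical bias check: compare approval rates across customer groups.
# Group labels and decisions below are invented sample data.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) tuples -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest approval rate divided by the highest; values well below
    0.8 are a common warning sign of systematic disadvantage."""
    return min(rates.values()) / max(rates.values())

sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 50 + [("B", False)] * 50

rates = approval_rates(sample)      # A: 0.8, B: 0.5
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))       # ratio 0.62 -> flag for review
```

A check of this kind, run periodically on production decisions and documented, is one way to satisfy both the bias-avoidance and the documentation requirements at the same time.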
Attorney Dr. Konrad Uhink