Leading international legal news service Law360 has interviewed Osborne Clarke's head of AI John Buyers about the European Union's Artificial Intelligence Act.

The article looks at how the draft legislation could prevent finance firms from using automated systems to assess the credit needs of customers when they cannot explain how the decision-making process works.

The article can be accessed by those with a subscription via this link. John's long-form comments are included below, and you can find more information on how AI and Big Tech can potentially transform financial services in this Insight.

The EU Regulation on Artificial Intelligence (the Artificial Intelligence Act), currently in draft form, poses a threat to the continued use of deep machine learning-based "black box" systems by banks and insurers, because such systems are generally unable to meet the transparency standards that the Act will require, according to John Buyers, partner and head of AI at Osborne Clarke.

The draft Act will require high-risk AI systems to keep records, or logs, of their operation so that decisions can be traced.

"Such a record-keeping obligation doesn't really work for black box AI. Deep black box neural networks are essentially impenetrable – even the data scientists who create and train such models don't really understand how or why they reach a particular decision in a given set of circumstances.

“Quite how this conundrum will be solved is unclear. There are moves in the industry to create so-called 'explainer' AIs that sit over the core decision-making AI, but rather than providing clarity, that approach tends to introduce even more complexity, as the explainer system will itself be subject to the same logging-by-design compliance requirement.

“Replacing a black box with conventional technology, or with a simpler, less opaque classification-based system such as a 'decision tree', might seem a solution, but such systems have proven less effective and lack the power of a deep neural network.

“Deep neural networks that utilise machine learning – 'black boxes' – represent a step change in computing capability."
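
By way of illustration (our sketch, not from the article), the contrast can be seen in code: a 'decision tree' of the kind Buyers mentions can print its entire decision logic as human-readable rules, something no equivalent tool offers for a deep neural network. All feature names and data below are invented for the example.

```python
# Illustrative sketch only: a toy credit model whose reasoning can be printed
# as human-readable rules, unlike a deep "black box" network.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=0)

# Invented applicant features: income, existing debt (both in EUR thousands),
# and years in employment.
X = rng.uniform(low=[10, 0, 0], high=[150, 80, 30], size=(500, 3))
# Invented labelling rule standing in for historical lending decisions.
y = ((X[:, 0] - X[:, 1] > 20) & (X[:, 2] > 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model's full decision-making process renders as plain if/then rules,
# exactly the kind of record a transparency obligation asks for.
print(export_text(tree, feature_names=["income", "debt", "years_employed"]))
```

The trade-off Buyers describes is visible here too: capping the tree at depth three keeps it explainable, but also limits the patterns it can capture compared with a deep network.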

EU regulators needed to strike the right balance between permitting black box machine learning systems and requiring appropriate levels of transparency, but so far were finding this difficult.

Firms may turn away from using particularly opaque or complex black box AI models within the EU because of the strict regulatory requirements of the draft AI Act.

U.K. firms with operations in the European Union will be caught by the AI Act. If they fail to comply with the regulation, they could be fined up to €30 million or 6% of worldwide annual turnover, whichever is higher.

“If financial institutions are to continue using black box AI in high-risk areas, they will have to show the customer that they have taken steps to mitigate risk, even if they cannot conclusively demonstrate how the system works. Amongst other steps, they will need to have empirically tested the system to ensure it is consistently reliable.

“Part of this risk mitigation will be to undertake an AI impact assessment, in much the same manner as a Data Protection Impact Assessment is required under the GDPR. Indeed, in the latest compromise text issued by MEPs of the European Parliament on 9 January this year, it is clear that this requirement will be enshrined in the legislation. The new draft text calls for businesses to undertake what are referred to as 'fundamental rights impact assessments' in relation to their AI use cases."
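
The article does not spell out what empirically testing a system for consistent reliability would involve; one plausible approach, sketched below purely as an assumption, is to evaluate a model repeatedly on independent folds of data and treat low variance in its scores as evidence of consistency.

```python
# Hypothetical sketch of "empirical consistency" testing: score a model on
# several independent folds and check the spread of its accuracy, not just
# the average. Data here is synthetic; a real test would use production-like
# held-out records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0)

scores = cross_val_score(model, X, y, cv=10)  # ten independent evaluations
print(f"mean accuracy {scores.mean():.3f}, spread (std) {scores.std():.3f}")

# An illustrative acceptance criterion; the 0.05 tolerance is arbitrary.
if scores.std() > 0.05:
    print("Warning: performance varies too much across folds to call reliable.")
```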

U.K. financial institutions should already have started evaluating all their artificial intelligence systems and determining whether their usage is responsible – whether under future EU rules, if they operate in the bloc, or under U.K. rules – according to Buyers.

Firms should be examining their algorithmic models and underlying training data sets to check for bias in their systems, and should ensure that humans understand, and can override, decisions made by the system.
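
Again as an illustration rather than anything prescribed by Buyers, a first-pass bias check on a system's outputs might compare outcome rates across a protected attribute; the 'four-fifths' ratio used below is a common screening heuristic, not a legal standard, and the column names and data are invented.

```python
# Hypothetical sketch of a first-pass bias check: compare approval rates
# across a protected attribute. Column names and data are invented.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   1,   1,   0,   0,   1,   0 ],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:  # the "four-fifths" screening heuristic, not a legal standard
    print("Potential bias: route these decisions to a human for review.")
```

A check like this only flags disparities; the human understanding and ability to override that Buyers calls for still has to sit on top of it.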

There needs to be enough transparency to ensure that users on the receiving end of AI decision-making understand the relevant decision and the reasons for it, and are able to challenge it, according to Buyers.

“We’ve seen financial institutions across the world employ data privacy officers. As a next step they will need chief AI officers, with responsibility for data science, compliance and governance."
