
Belgian legislative proposal to provide more transparency on the use of algorithms by the government

Published on 10th May 2021

Belgium seeks to address concerns about transparency in government use of artificial intelligence.


Artificial Intelligence (AI) systems open up new opportunities for organisations, in both the private and public sectors, to improve their services and develop new products. Big data analytics can lead to a better understanding of user behaviours and preferences, provide new information on economic activity, and reveal trends and correlations within and across large data sets that would otherwise remain unknown. The value of such AI systems therefore lies not only in their ability to process the data elements of large data sets, but also in their capacity to spot and correlate patterns in the data and learn from them.
Unsurprisingly, the use of AI systems also raises several concerns. Some worry that such systems risk reinforcing and increasing inequality between individuals and groups in society, while AI systems can also be perceived as "black boxes": depending on how a system makes use of an individual's data, it can be difficult for users to understand how it arrived at a particular output, or which input factors, or combination of input factors, contributed to the decision-making process.

Broadening freedom of information rights

An often-advocated solution to these concerns is to take an ethical approach and push for more transparency. In Belgium, there is likewise growing concern about transparency where government agencies use AI systems. In light of this, a legislative proposal was recently introduced to amend the law on access to government information (freedom of information). The proposal emphasises that governments can only reap the benefits of AI systems if they make transparency a top priority.

Under the proposal, public authorities would be obliged to disclose the main algorithmic rules online, in particular when they are used for individual decisions. Furthermore, where administrative documents include individual decisions generated partly or entirely by algorithms, citizens would have the right to receive more detailed information, in easy-to-understand language, covering:

  • the extent to which, and the manner in which, the algorithmic processing contributed to the decision-making process;
  • the data processed and its sources;
  • which input factors, or combination of input factors, contributed to the decision-making process; and
  • the operations carried out by the processing.

Finally, public authorities would be required to conduct and disclose an impact assessment in accordance with article 35 of the General Data Protection Regulation (GDPR). These safeguards should confer additional benefits, such as increased transparency and trust in AI systems. It remains to be seen whether the proposal can solve the transparency problem and, of course, whether the law will be adopted in the form proposed.

Belgium's proposal in comparison with EU initiatives

Belgium is certainly not the frontrunner in this area. The European Commission has already taken several initiatives towards establishing a legal framework for AI:

  • In April 2019, the High-Level Expert Group on Artificial Intelligence (HLEG) presented its ethics guidelines for trustworthy AI. For this group, trustworthiness means maximising the benefits of AI systems while preventing and minimising their risks. Among other things, the HLEG identified the need to ensure the ability to challenge and seek effective redress against decisions made by AI systems and by the humans operating them. Accordingly, it is important that the entity responsible for a decision can be easily identified and that the decision-making processes are explainable. In addition, AI systems' processes must be transparent: the capabilities and purpose of AI systems should be openly communicated, and decisions must be explained to those directly and indirectly affected, since a decision involving AI systems could not otherwise be properly challenged. The extent to which decisions must be explained, however, is a case-by-case assessment, depending on the context and the severity of the consequences.
  • In April 2021, the European Commission published its (long-awaited) proposal for a regulatory framework for AI to foster trustworthy AI in Europe. The proposed AI regulation bans a limited number of applications. It also sets out strict mandatory requirements for specified "high-risk" AI systems, covering the quality of the data sets used, technical documentation and record keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. High-risk systems will be required to undergo appropriate conformance testing by national AI certification bodies to verify compliance with the EU's new regime, which will entail disclosure of algorithms and data sets to those bodies. Further details of the proposed EU regime on AI are available in our Insight. The high-risk categories focus on systems that create a potential for harm to health and safety, or an adverse impact on fundamental rights. For lower-risk AI systems, the proposal does not impose similarly detailed requirements, but codes of conduct are envisaged and transparency rules will still apply. Users will therefore have to be notified about (i) AI systems intended to interact with natural persons (such as chatbots), (ii) emotion recognition or biometric categorisation systems, and (iii) certain machine-generated content that resembles authentic persons or objects, such as images or videos ("deep fakes"). The proposed AI regulation, and more specifically its legal requirements for AI systems, are the result of two years of preparatory work and derive from the HLEG's ethics guidelines.

The UK's approach

In the UK, there is currently no plan to legislate on AI (and the EU proposals will not apply). However, there is considerable regulatory interest in these issues. For example, the Competition and Markets Authority is conducting research and has issued a public call for evidence on the potential harm to competition and consumers that could be caused by the deliberate or unintended misuse of algorithms. Transparency is also a focus for the Information Commissioner's Office (the UK's data privacy authority), which has issued guidance, developed jointly with the Alan Turing Institute, on "Explaining decisions made with AI".

Developments in other European nations

Other European countries have issued advice and suggestions on regulating AI, but have not yet gone as far as proposing concrete legislation.

In November 2020, the Spanish data protection supervisory authority (AEPD) published a note on the use of new technologies (including AI) by public administrations. One key element of the note was that public authorities are prohibited from relying on legitimate interest as a legal basis for processing personal data.

In Germany, the use of AI by public authorities is currently not regulated by the Freedom of Information Acts. In 2018, the federal and state authorities for data protection and freedom of information published a position paper calling on the federal and state legislators to oblige public bodies to use algorithms and AI procedures in a more transparent and responsible manner, and suggested corresponding transparency regulations. However, no specific legislative plans have been made public.

The regulation of AI has also been a matter for the courts, as in the Netherlands in February 2020. In its ruling in the SyRI case (NJCM v. the Netherlands), the District Court of The Hague took decisive action on privacy rights, concluding that System Risk Indication (SyRI), a legal instrument that the Dutch government uses to detect fraud in areas such as benefits, allowances and taxes, violates article 8 of the European Convention on Human Rights (ECHR). The ruling shows that AI systems need to embed effective safeguards against privacy and human rights intrusions.

In France, the 2020 Finance Bill created a three-year pilot programme that allows the French tax and customs administrations to collect and exploit data made public on online platforms (such as social media platforms). Following the position taken by the French data protection authority (the CNIL), which recognised that the purpose of detecting and addressing tax fraud was legitimate but that the programme required strong guarantees, the Finance Bill provides that the data collected should be adequate, relevant and limited to what is strictly necessary, and that its use should be proportionate to the intended purpose.

Osborne Clarke comment

Belgium's legislative proposal will require significant investment of resources in vetting, regulating and investigating the use of AI. One major question is whether the proposal is harmonised with the proposed EU legislation. It is becoming increasingly clear that the ubiquity of AI has led Belgian and European legislators to recognise that it is a phenomenon to be addressed with an eye to the fundamental rights of citizens. The proposal is an important signal of Belgium's commitment to becoming more proactive in regulating and legislating around digital technologies.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
