Artificial intelligence

AI is a board issue: perspectives from Belgium

Published on 13th Dec 2023

How should boards of directors approach the risks and opportunities of AI?


Artificial intelligence (AI) is a focus of rapidly developing activity in all sectors of the economy and at all points in the supply chain. The accessibility of advanced generative AI tools presents a significant opportunity for businesses to enhance efficiency and productivity across sectors. As boardrooms explore ways to leverage these opportunities, they need to be aware of potential risks.

Businesses looking to seize AI opportunities will also need to manage legal risks. They will need to understand what AI is and how it works, as well as its pitfalls and the current legal framework, in order to implement effective AI-based solutions. In essence, the added value of AI for businesses lies in simplifying tasks that are difficult for humans but can be translated into mathematical principles that a computer can execute easily.

Risks and opportunities

AI can support the operations of a company in a number of ways. It can automate business processes and make them more efficient. It can increase analytical capacity and enable more informed decision-making. AI systems can go beyond mere automation of existing human tasks to perform functions which cannot be performed by humans.

Therefore, AI offers opportunities for competitive advantage and modernisation. However, AI also carries material potential risks.

Whereas IT-related issues used to be the responsibility of the company's IT department and possibly of the director in charge of these issues, recent advances in AI and its implications are such that the board of directors must now take an interest in these issues.

Strategy and decision-making

Businesses need to be aware that the landscape of AI is evolving rapidly, and the current AI systems represent only a fraction of the potential capabilities expected in the near future. Understanding the trajectory of AI advancement is crucial for businesses looking to integrate or develop AI systems.

Directors need to understand and assess the impact of AI on the company's strategy and decision-making. The board of directors should discuss with management how AI can be implemented and used to define and meet the company's strategic objectives and what changes need to be made to the company's business model to take advantage of the opportunities offered by AI; in particular, to gain a competitive advantage over companies in the same sector and to reduce costs.

The board of directors should also consider how AI can be used to help it in its decision-making. AI can increase the quality of information available to the board of directors on which board resolutions will be taken. AI can indeed be used to generate information on potential new competitors, produce economic predictions on the launch or sales of specific products and support directors in identifying certain business risks.

Risks and compliance

However, the use of AI may entail operational, compliance, financial, confidentiality and reputational risks. AI systems that rely heavily on data are not always reliable: at best, they deliver a statistically probable result, they are prone to "hallucinations" – such as inventing historical events, books or quotes – and they can pose a risk of bias or inaccuracy in the underlying data.

Hence, AI systems can produce nonsensical answers or results. This can lead to reputational damage or liability for an AI's incorrect outputs, with little to no control over why and how an AI model produced a specific outcome. In addition, AI models will often be offered "as a service" by cloud service providers under licensing terms. This inevitably carries risks, as businesses have little control over the training data fed to the AI model.

Strong reliance on data also creates the risk of disputes over data protection rights, intellectual property rights as well as privacy and cybersecurity breaches.

AI also raises legal compliance issues that need to be addressed by the board of directors. The board of directors must understand and stay informed of legal and regulatory developments relating to AI and ensure that the company complies with its obligations in this area. 

Audit and risk management

Ethical considerations in AI development and implementation are critical for businesses across sectors. Establishing oversight bodies, implementing policies and conducting audits, regardless of regulatory obligations, not only fulfil ethical responsibilities but also significantly contribute to fostering trust among customers and stakeholders.

For businesses that are integrating AI into their operations or products, creating clear policies on the use of AI becomes crucial. This involves understanding the nature of the business, its customer base, and the potential impact of AI on products, services or platforms. Maintaining alignment between the business's approach to ethics and that of its AI suppliers is also vital, often becoming a part of the procurement due diligence process.

As AI becomes more prevalent in various industries, businesses will need to recognise their responsibility in shaping the ethical framework governing its use. This framework should consider not only internal operations but also how AI might influence customer-facing aspects, affecting trust, reputation and broader ethical considerations.

Cybersecurity breaches

AI's heavy reliance on data can also lead to the risk of cybersecurity breaches. Cybersecurity involves protecting computers, servers, mobile devices, electronic systems, networks and data against malicious attacks from cybercriminals. In view of the increasing number of threats to cybersecurity and their potential impact on companies, the EU recently adopted an updated directive on the security of Network and Information Systems (NIS), known as the NIS 2 Directive.

It is critical for all companies, whether or not they are subject to the NIS 2 Directive, to put cybersecurity measures and policies in place in order to be sufficiently armed against threats and cybersecurity breaches.

These measures and policies should address, among other things, incident notification and response, the impact of incidents on the continuity of the company's activities, data security and retention, access control to the business's software and systems, monitoring of employees, and cybersecurity awareness through training for employees and management.

Clauses on security standards, data access and liability should be included in contracts with employees and service providers. Companies should also consider subscribing to appropriate cybersecurity insurance, including sufficient coverage for incident notification, response and remediation, losses and associated (legal) costs.

Regulatory compliance

The board of directors, in consultation with the management, should implement guidelines, policies and internal audits on how to approach and use AI tools to benefit from this technology while mitigating the potential risks and ensuring that the use of AI complies with legal and regulatory obligations.

There are currently no laws or regulations in Belgium that apply specifically to AI, but several existing laws and regulations could apply to aspects of AI development and use (for example, data and consumer protection laws). As regards the EU regulatory framework for AI, the European Commission has submitted two proposals: the EU AI Act and the EU AI Liability Directive, the latter aiming to facilitate damage claims against businesses that do not comply with the former. On 8 December 2023, a political compromise was reached, so that the EU AI Act should be finalised in the coming weeks or months.

The need for the board of directors to take an interest in AI and technology is in line with the general trend, also seen in the area of cybersecurity, where the board of directors can no longer discharge its liability by treating the matter as a strictly technical issue.

Replacement of directors?

While AI may contribute to the decision-making of the board of directors, it is currently not intended to replace human directors. There are today some human skills that technology cannot replicate, such as negotiation, managing a team and social interactions.

In 2014, however, Deep Knowledge Ventures (DKV), a venture capital firm based in Hong Kong, appointed a robot called Vital (Validating Investment Tool for Advancing Life Sciences) as a member of its board of directors. Since then, its vote has been taken into account when decisions are made about new investments. DKV's use of Vital was motivated by a large number of failed investments in the biotechnology sector and the desire to avoid investing in companies likely to fail. From a legal point of view, Vital has not been formally appointed as a director, so its influence is purely factual and, moreover, limited to the context of investment decisions.

The informal appointment of Vital as a director of DKV runs contrary to the approach that the EU wishes to promote: namely, AI used by humans and not substituted for them. Furthermore, under Belgian law, it would be impossible to formally appoint an AI as a director, because the involvement of a natural person (as a director in his or her own name or as the permanent representative of a "legal person" director) is required.

In addition, certain provisions of the Belgian Code on Companies and Associations are difficult to apply to a director who is neither a natural person nor a legal person. It would be difficult to reconcile an algorithmic director with the principle of collegiality – which involves taking decisions on behalf of the board and not on behalf of each director – and the deliberation it requires. In addition, the liability regime of directors cannot be applied to an algorithmic director because the concepts of fault or intent are difficult to transpose to an algorithm.

If AI cannot (yet) replace a human director or representative, companies should consider using AI as part of the board of directors' analysis of risks, opportunities and strategic orientations.

Directors' AI-use checklist

Board oversight is indeed critical to ensure that AI is used in a reasonable manner and in line with the company's strategic objectives and applicable laws and regulations. In this context, directors should consider the following:

  • Receive regular updates on AI technology, its use by competitors and how it impacts the market segment in which the company operates.
  • Consider the potential opportunities and risks AI presents, including risks associated with algorithm and data bias.
  • Put in place AI-related guidelines, policies and audits in order to allow board supervision, avoid misuse of AI by the company's employees and management, and mitigate potential risks.
  • Establish clear guidelines regarding data that can or cannot be inputted into generative AI systems, bearing in mind confidentiality, personal data, trade secrets, etc.
  • Communicate with and train employees on the use of AI, taking into account the business's approach to ethical issues, reputation and risk exposure.
  • Consider the impact of the use of AI on supply chains, customer contracts, terms and conditions, intellectual property, trade secrets and personal data.
  • Consider the impact of AI on the workforce and develop reskilling programmes or adapt traditional recruitment profiles to this disruptive technology.
  • Be regularly updated on AI and cybersecurity-related legal and regulatory developments (for example, through training) and ensure that controls are in place to support compliance with any relevant laws and regulations.

Osborne Clarke comment

AI is attractive. It can increase employee and company performance and have an impact on the company's strategy. However, AI is also flawed and may entail risks for companies. Its growing use by companies must, therefore, be monitored by the board of directors.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
