
Artificial intelligence: How can businesses seize AI opportunities while managing the legal risks?

Published on 10th Oct 2023

Understanding both how AI works – its opportunities and pitfalls – and the current legal framework is critical for businesses implementing artificial intelligence solutions


If data is the new gold of our economy, AI is the new El Dorado. Like the mythical city, AI's magnetic aura seems to attract everyone, with businesses from every sector eyeing its latest developments and numerous authors spilling large quantities of (digital) ink in (online) publications. This article aims to unpack some of the questions raised by AI, with a practical angle and a focus on guidelines for a safe approach to implementing AI in businesses.

Understanding AI

With the emergence of ChatGPT, AI became a hot topic. However, even with AI in the spotlight and everyone talking about it, very few really grasp its core concepts.

AI is a subset of computer science in which systems use mathematical algorithms, data and increased computing power to approximate human tasks such as "learning", "making decisions" and eventually "performing tasks" (with the assistance of a robot).

It is important to understand that, while the basic concepts of AI were developed a while ago, it is only in recent years that AI systems have started to flourish, thanks to the combination of massively available training data and improved hardware and data-processing capacities.

AI systems can now process huge amounts of data at record speed, reading and analysing more data in a matter of seconds than any human could in a lifetime. To be trained and start producing meaningful outputs, AI systems need troves of data, which they typically gather through web-scraping techniques, crawling the internet for structured and unstructured data such as text, images and videos. AI models then break down each piece of training data into (sometimes billions of) small tokens and, via statistical analysis, establish patterns and probabilistic relationships between those tokens, on the basis of the chosen algorithms and models. The bigger the learning base, the more individual parameters or settings the model uses to calculate the output it provides.
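To make the idea of tokens and probabilistic relationships more tangible, the short Python sketch below is a deliberately simplified illustration: it splits a toy corpus into word-level tokens and counts which token tends to follow which (a so-called bigram model). Real AI models use subword tokenisers and neural networks with billions of parameters, so this is an analogy rather than a description of any actual system.

```python
from collections import Counter, defaultdict

# Toy training corpus (real systems ingest terabytes of scraped text).
corpus = "the cat sat on the mat . the cat slept on the sofa ."

# 1. Tokenise: a naive whitespace split stands in for a real subword tokeniser.
tokens = corpus.split()

# 2. Count how often each token follows another (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

# 3. Turn the counts into probabilities: the "settings" learned from the data.
def next_token_probabilities(token):
    counts = following[token]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# The model does not "understand" cats; it only predicts what tends to come next.
print(next_token_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}
```

Scaled up to billions of parameters and far more sophisticated statistics, the same principle applies: the output is a prediction derived from patterns in the training data.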

AI models neither retain their training data nor refer back to it: they merely provide a result based on mathematical and statistical calculation and try to predict the right output when confronted with an identical or similar question or task. As a more recent development in the science behind AI, machine-learning algorithms, when faced with new training data, will even self-adjust their settings, creating iterations between the training data and the (intermediate) outputs the system generates in order to improve the accuracy of its "predictions". Highly popular AI systems these days are foundation models: large machine-learning systems that can be applied to tasks such as image recognition, translation and text generation. Within that category, examples of generative AI models include ChatGPT, Midjourney and Bard.
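The "self-adjustment" of a machine-learning model's settings described above can be pictured as a feedback loop: the model compares its prediction with the training data and nudges its internal parameters to reduce the error. The hypothetical Python sketch below illustrates that loop with a single parameter fitted by gradient descent; a real AI system does the same thing across billions of parameters.

```python
# Toy "model": predict y from x using a single adjustable parameter (the weight).
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying pattern: y = 2x
weight = 0.0             # the model's initial "setting"
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in training_data:
        y_pred = weight * x                  # the model's current prediction
        error = y_pred - y_true              # how far off the prediction is
        weight -= learning_rate * error * x  # self-adjust to reduce the error

print(round(weight, 3))  # converges towards 2.0: the pattern, not the data itself
```

Note that after training, the individual data points are not stored anywhere in the model; only the adjusted parameter remains, which is why an AI model predicts rather than retrieves.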

Therefore, an AI system does not produce outputs or results based on (human-like) reflection: it does not understand the words (such as "cat images") or data it processes and analyses in the way humans do. However, because of human beings' anthropomorphic tendencies and the sometimes frightening accuracy of AI models, our collective imagination runs wild, picturing humanity facing a fascinating digital intelligence able to transcend its mechanistic origin.

This sense of wonder only increased with the wide availability of and public access to the latest versions of generative AI systems, such as ChatGPT and Midjourney. Thanks to the interface of these models, any individual without a technical background can be in direct contact with the outputs created by AI models, without filter or buffer. Chatbots like Snapchat's MyAI are released to a public largely composed of teenagers, who may mistakenly consider it a "digital friend" while interacting with it on a daily basis.

Opportunities of AI

From the business angle, the diminishing development costs and the scalability of AI systems mean that they can be built faster and made available to a wider range of users, thereby giving startups or established companies the often much-needed edge to distinguish themselves from competitors.

The opportunities that AI models bring include obvious time savings for businesses processing or analysing data or creating texts or images based on prompts. AI models can benefit a wide range of sectors, from education (helping to write syllabuses) to software programming (helping to write code) and health analysis (helping to analyse X-ray images to identify potential tumours).

An AI system can add value to existing data sets or improve internal processing activities. It can help with recruitment, target prospects, predict market trends or detect risks of fraud. Generative AI models can also support businesses on a daily basis by translating texts (DeepL), creating suitable responses or articles based on any instruction given to them (Bard and ChatGPT) or even designing realistic images (Midjourney), all with a fair degree of accuracy. Even the legal world is affected, with automated text generation and increased text-mining capacities. Commenting on the most recent release of the famous Microsoft operating system, the press argues that AI will soon attend online meetings or summarise documents and presentations instead of humans…

Risks for business

The adverse effect lies in the human tendency to delegate decision-making to AI systems rather than merely using them as a tool. Taking AI outputs at face value, or fully integrating an AI system without clear boundaries, can be dangerous given that AI systems can, at best, only deliver a statistically probable result. This can lead to representational harms, where bias is reinforced. Think, for instance, of the Midjourney-generated pictures of Barbies from around the world, in which the Barbie of South Sudan was depicted holding a rifle.

A chatbot's answers can also lack empathy – a notion which is non-existent for an AI model – or reflect systemic racism, antisemitism or misogyny. Indeed, discriminatory or faulty data input (such as data that is not representative enough) can result in a discriminatory or biased output.

AI models are also prone to so-called hallucinations, such as inventing historical events, books or quotes. A striking example: when tasked with summarising an article or a judicial decision, ChatGPT may "invent" (wrongfully predict) the facts and the decision with sufficient plausibility that humans could easily mistake the output for the truth. You may have heard of the US lawyer who took at face value a list of precedents (case law) that ChatGPT had invented and submitted it to a US court.

Less dramatic outcomes include nonsensical answers or results, caused by the AI system's lack of "comprehension" of the context of the question or task it is asked to perform.

All this means that AI systems cannot always be relied upon. Businesses should be aware of these issues and proceed with caution. A further point of attention is the challenge of explaining how AI systems reach their conclusions or produce their outputs. Some AI systems create, train and adjust their own model on the basis of sometimes trillions of individual settings, creating a "black box" effect that makes it impossible for humans to grasp and follow the path an AI model takes (the complex algorithmic and mathematical analyses it performs and the settings it puts in place) to arrive at an answer to a particular question or an output for a specific task.

These pitfalls of AI systems could lead to disastrous outcomes for businesses: damaged reputations or liability for AI's wrong results, all with little to no control over why and how an AI model produced a specific outcome.

Specific AI legislation – Belgian and EU legal framework

In Belgium, there are currently no rules, laws or guidelines specifically applicable to AI. However, this does not mean that no thought is given to it.

The Belgian government launched the AI4Belgium project in March 2019, which aims to enable Belgian businesses to seize the opportunities offered by AI and to position Belgium in the EU AI landscape. More recently, it issued a national convergence plan for the development of AI.

Other European countries lead the way in trying to understand, integrate and disseminate AI systems but, in general, all European Union Member States are currently waiting for an AI regulation to be enacted. In the meantime, some regulators have started flexing their muscles and some have even issued injunctions against specific AI systems, such as the Italian Garante investigating ChatGPT out of concerns about the way in which it uses personal data to train itself.

The European Commission has submitted two proposals for specific AI legislation: the EU AI Act and the EU AI Liability Directive, the latter aiming to facilitate damage claims against businesses that do not comply with the former.

The EU AI Act is an EU Regulation that approaches AI models in much the same way as the existing framework of product safety regulations: it defines a set of essential requirements, with the possibility of further developing rules and specifications to be adopted as technical standards. Many critics regret the lack of focus on fundamental human rights, which are cast into the shadows, but other regulations already in force catch AI systems and can be applied to mitigate some of the risks for privacy, non-discrimination or freedom of speech.

The text of the EU AI Regulation could be largely settled this coming autumn.

Risk-based approach

In its current state, the EU AI Act takes a risk-based approach and sorts AI models into three different categories.

  • Low-risk AI models will only have to meet transparency and disclosure requirements. AI suppliers will have to disclose when people are presented with AI-generated content (such as realistic deepfakes), when people interact with an AI model or when an AI system is monitoring their reactions.
  • High-risk AI models will be heavily regulated and will have to undergo a specific conformity procedure, which entails specific requirements such as data governance, extensive technical documentation and record-keeping, transparency, human oversight, accuracy and security. These AI systems will have to be certified and registered. The scope of AI systems that will fall within this category is not yet clear. It is expected to apply to AI models used by or incorporated into products sold by businesses in sectors covered by existing health and safety EU harmonisation legislation. This includes, for instance, machinery, toys, recreational craft and watercraft, lifts, radio equipment, medical devices, civil aviation, motor vehicles, rail systems and cableways, marine equipment, appliances burning gaseous fuels, personal protective equipment, pressure equipment and equipment for use in explosive atmospheres. The status of so-called general-purpose AI systems remains unclear at this stage.
  • Unacceptable-risk AI models cover specific AI uses, such as real-time and "post" remote biometric identification (such as facial recognition) in public spaces, general-purpose social scoring of natural persons by public authorities (not unlike the AI system currently used in China, which allocates a social credit score to its citizens), as well as manipulative, exploitative and social-control practices. These uses are deemed unacceptable and will probably be banned outright in the EU.

The EU AI Regulation also includes the principle of compliance by design. Indeed, retrofitting compliance is extremely complex – if not impossible – when building AI models.

Non-compliance could lead to fines of up to six per cent of a business's worldwide turnover.

The likely deadline to comply with the upcoming obligations is 2026, though this is subject to confirmation.

Following a pattern also seen in other regions of the world, the United Kingdom has decided to approach AI differently, issuing only soft law in the form of a White Paper proposing five high-level principles to be taken into account by existing UK (sector-related) regulators: (i) safety, security and robustness; (ii) appropriate transparency and "explainability"; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

Mitigating risks for businesses

Audits and assessments

Because of the upcoming legislation and its complex and multiple requirements, it is prudent for a business to audit the AI it will develop, use or supply in order to understand into which category its AI system will fall.

Due to their size and scale, and in particular the massive amounts of data AI systems process at record speed, AI models require specific processors and hardware. As a consequence, rather than being built from scratch or copied locally, AI models will often be offered "as a service" by cloud service providers under licensing terms. This inevitably creates risks, as businesses have little control over the training data fed to the AI model.

Also, as businesses will often be the ones incorporating AI models into products, they can be held liable for damage caused to customers by the AI model in those products under the EU rules as currently drafted.

Therefore, as of now, conducting assessments of how a business implements or uses third-party AI systems will help it (i) define clear internal guidelines and (ii) identify where existing contracts with AI suppliers may need to be amended or future ones negotiated.

Confidentiality risks

When businesses use AI models, an important risk is the disclosure of confidential information. This will mostly arise when using public versions of AI models, such as the free versions of DeepL, ChatGPT and many others. Indeed, most of the time, these versions retain the input provided by the user, which the AI supplier may use to further adapt or improve the AI system. Whether this input is fed directly back into the AI system as additional training data is, most of the time, unknown.

As a result, putting confidential data into public versions of AI systems could compromise business secrets, trade secrets or confidential client information. Another unwanted consequence is the possible breach of confidentiality obligations in business contracts.

Therefore, businesses should first and foremost refrain from using public versions of AI models, take out business licences instead and ensure that those licences contain appropriate confidentiality provisions. In general, carefully reading contracts with AI suppliers is highly recommended, in order to understand how, or whether, data submitted to the AI model will be used further by the supplier. Whenever possible, it is important to negotiate contracts and licences with AI suppliers so as to protect the business's interests as far as possible.

Businesses should have clear internal guidelines on how to approach and use AI tools, train their staff and educate them about the risks, and develop acceptable use policies.

Intellectual property risks

Intellectual property risks can arise both regarding input and output data. This is an area where private litigation (either from individuals or businesses) is on the rise.

First of all, regarding training data, web-scraped data can include copyright-protected content for which the appropriate licences have not been obtained.

In the EU, there is an exception to copyright for text and data mining (web-scraping) conducted for any purpose, unless the rightsholders have expressly opted out. Most website terms and conditions include these opt-outs. It is expected that the EU AI Act will tackle this issue by including additional provisions. Currently, the EU AI Act includes a requirement to disclose a summary of the use of copyrighted material as input data.

As a consequence, in contracts with AI suppliers, confirmation should be sought that all necessary licences to use the training data have been obtained.

The protection and ownership of output data raise other, sometimes more difficult, questions. In the EU, the general consensus is that AI-generated content cannot be protected by copyright, in the absence of personal and creative choices. But this view is sometimes challenged, given that the individual giving instructions to the AI system expends time and effort and may also ultimately make selections and choices among the outputs produced by the machine. The patent landscape for AI-related inventions is also a minefield. While an AI system generally cannot qualify as an inventor, there is currently an increasing trend of filing AI-related patent applications, and many practical questions arise accordingly.

That being said, content or inventions that are created with the assistance or support of an AI system can be protected in their own right. In such cases, the ownership will be assigned to the creator or possibly the employer or contracting party, on the basis of traditional rules.

As a general note, to safeguard the intellectual assets of a company, it is highly recommended to ensure that the AI system is merely used as a tool and does not reduce the human's role in the creative or inventive process.

Moreover, there is a risk that output data infringes the intellectual property rights of a third party if the training data is not properly curated. It is recommended to secure, from the AI supplier, an indemnity against liability arising from output data infringing third-party rights.

Data protection risks

Another substantial issue is the potential for individual information to be contained in training data. In view of the web-scraping techniques used by the makers of AI models and systems, there is a high possibility that input data includes personal data, which is subject to the strict rules and principles of the GDPR (General Data Protection Regulation). Some EU data protection authorities have been active in querying the GDPR compliance of freely available AI systems: the Italian authority temporarily suspended the use of ChatGPT. 

To prevent GDPR fines, businesses should verify how training data is obtained and should, insofar as possible, either verify that they or their AI suppliers have a lawful legal basis to process personal data (and, in general, that they comply with the GDPR) or ensure that the training data has been properly anonymised.

Cybersecurity risks   

AI models should be treated similarly to all IT systems and software regarding cybersecurity risks. This should be managed by the IT team as an operational risk, rather than a legal risk – although strict security obligations or legal warranties could be included in contracts where businesses make use of an external IT team.

Osborne Clarke comment  

Conducting AI audits as soon as possible will help businesses prevent – or at least understand – the risks they face when using AI tools. This will help map the impact of AI systems on supply chains, workforce and customers. Risk considerations will depend on the AI model and how the business uses it.

In any event, developing a governance approach and conducting impact assessments should be prioritised, taking into account the business's approach to ethical issues, reputation and risk exposures. After these assessments, clear internal guidelines should be defined. These must clearly state that AI models should merely be used as a tool by employees or consultants, with the human having a predominant role in the creative or inventive process.   

Last but not least, when concluding contracts with AI suppliers, businesses should keep in mind the type of AI (and risks) involved, and include several provisions to limit their own liability or increase their supplier's liability in case of damage. Non-exhaustively, businesses should seek:

  • Appropriate confidentiality obligations (this will most likely not be possible for public versions of AI models and business licences should therefore be taken);
  • Confirmation that all necessary licences to use the training data have been obtained;
  • Confirmation that training data is representative enough, to avoid biases;
  • Confirmation that training data has been anonymised or that processing of personal data is compliant with the GDPR for the purposes of training AI models;
  • An indemnity against liability arising from output data infringing third-party rights;
  • Strict security obligations in terms of cybersecurity.

In the next instalment of this series of articles, we will look into the wider implications of artificial intelligence for in-house lawyers, such as corporate strategy, risks and liability, workforce management and other issues.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
