
Generative AI-powered chatbots: how to guard your business against consumer protection risks in the UK and the EU

Published on 25th Sep 2023

What do businesses deploying AI chatbots need to do to ensure consumer protection compliance?


Streamlined support, personable responses and an enhanced consumer experience: it is easy to see why generative AI-powered chatbots have captured the attention of businesses throughout the world.

However, enabling a generative AI large language model (LLM)-powered bot to interact with consumers raises a number of nuanced considerations and significant risks from a consumer protection law perspective. With the potential for significant penalties in the EU (and, increasingly, the UK), these are risks that should not be overlooked.

In many cases, an "AI-powered chatbot" used by traders in a consumer context will be based on a simple decision tree – in other words, the chatbot will be configured to "respond" to certain keywords with pre-determined or "canned" messages. In some cases, users will only be able to choose between a selection of suggested "inputs" rather than writing their own. The risks associated with such "decision tree" chatbots tend to be fairly contained.

In contrast, businesses are increasingly considering deploying chatbots built on a generative AI LLM – that is, a deep neural network that learns context and meaning from training data and provides generated "outputs" that appear to respond specifically to any query from the user. Some LLMs offered to business customers can be trained on selected materials that are detailed and specific to the business or sector concerned, configured to respond in a certain style, and then deployed to interact with consumers in a much more flexible and natural-feeling way than "decision tree" chatbots.

The deployment of generative AI-powered chatbots raises a number of nuanced and complex legal issues, including intellectual property, data protection and equality issues. Often overlooked is the fact that – as consumer-facing tools – such chatbots must comply with consumer protection legislation. The UK's Competition and Markets Authority (CMA) has recently published its first report on how consumer protection law should apply to foundation models, which, as the name suggests, are often used as the "foundation" for generative AI systems. This suggests that the CMA will expect businesses to give proper consideration to consumer protection legislation – and may enforce against businesses that do not. In the EU, the AI Act is firmly focused on the consumer protection agenda, with the European Commission considering specific bans on certain types of harm to consumers as part of its fitness check.

Key considerations under consumer protection law

Unfair commercial practices regimes in the EU and in the UK apply to the whole lifetime of a consumer-trader transaction. So, if you use a chatbot to interact with customers (or indeed potential customers), you need to ensure that those interactions do not fall foul of the law.

Core principles

When it comes to chatbots, there are several core principles from a consumer protection perspective to keep in mind.

These include certain commercial practices that are banned outright in the EU and in the UK, for instance, "bait advertising" – that is, inviting consumers to purchase products where the trader has no reasonable grounds to believe that they will be able to supply such products (or equivalent products).

Omitting material information is also prohibited in certain circumstances, including where the omission means the average consumer takes a transactional decision that they would not otherwise take.

There is a general prohibition on "unfair" commercial practices: where the practice contravenes the requirements of professional diligence and results (or is likely to result) in a material distortion of the economic behaviour of the average consumer in relation to a product.

Generative AI issues

In the case of generative AI-powered chatbots, there are additional issues that may arise in a consumer protection context.

Because of the nature of the models that power generative AI chatbots, there is an inherent risk that they could generate answers that are inaccurate or – at times – completely invented (referred to as "hallucinations"). These hallucinations are highly problematic in the context of a consumer journey, since consumers may be misled into making purchasing or other transactional decisions they would not otherwise have made.

Generative AI-powered chatbots can also inadvertently create "dark patterns" – online features that push or manipulate consumers into making choices that are not in their best interests.

While the technology is constantly evolving, a generative AI-powered chatbot is unlikely to be appropriate for use in certain regulatory contexts. For instance, significant consequences could arise if a bot were used to provide consumers with information about the ingredients of a food product, or in other contexts where there is zero tolerance for error and/or potentially dangerous consequences from inaccuracy or omission.

Guardrails

When a business deploys a chatbot to interact with customers, that business will be on the hook for those interactions when it comes to consumer protection law. It is important for businesses to put in place appropriate guardrails that minimise the potential for the bot to mislead consumers or otherwise treat them unfairly. The CMA refers to this process as "grounding".

Seven steps to take to minimise the risks

Do your homework

The first step for sensible bot deployment is to carry out a comprehensive, use-case-specific due diligence exercise when selecting an appropriate generative AI model to power your bot. As part of this, you should identify whether your chatbot is likely to interact with any consumers who might be considered "vulnerable" and therefore entitled to special consideration under consumer protection law.

This exercise should involve understanding the functionality and tools available to shield your business from consumer protection related risks. For instance, many providers of generative AI models will offer controls to "dial down" hallucination and enable businesses deploying AI-powered chatbots to configure the model to avoid certain "high risk" topics or to mandate specific answers to certain questions. The CMA's recent report makes clear that it expects businesses to do everything possible to minimise harm to consumers, and therefore to use such tools to the fullest extent available.
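By way of illustration, the sketch below shows one way such controls can be layered in front of a model: "high risk" topics are intercepted and answered with mandated wording before the model is ever consulted. The topic list and the call_model function are hypothetical placeholders, not any particular provider's API.

# Illustrative sketch only: a guardrail layer that answers "high risk"
# topics with mandated wording before the model is consulted.
# call_model is a hypothetical placeholder, not a real provider API.

HIGH_RISK_TOPICS = {
    "allergen": "For allergen and ingredient information, please check "
                "the product label or contact our support team directly.",
    "refund": "Our refund policy is set out at example.com/refunds. "
              "A member of our team can confirm your eligibility.",
}

def call_model(user_message: str) -> str:
    return "model-generated reply"  # placeholder for the provider's API

def guarded_reply(user_message: str) -> str:
    lowered = user_message.lower()
    for keyword, mandated_answer in HIGH_RISK_TOPICS.items():
        if keyword in lowered:
            # The model is never consulted on this topic.
            return mandated_answer
    return call_model(user_message)

print(guarded_reply("Does this product contain any allergens?"))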

Engaging legal experts before expending money and resources to create your bot will highlight critical risks and controls at an early stage.

Maximise controls over hallucinations

Some cloud-based chatbot models include the ability to focus a chatbot's outputs on specific, ringfenced content and to constrain answers to those materials. Such content could be product information, ingredients or parts lists, safety information, corporate policies or anything else relevant to the interaction with consumers.

This can operate to minimise the risk of inaccurate or misleading hallucinations. From a compliance perspective, this type of functionality is an important area to explore with potential providers, particularly if your business is subject to sector-specific regulation that must be complied with in all instances.
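A minimal sketch of this kind of grounding is below, assuming a naive keyword retriever and a hypothetical call_model placeholder; real deployments would typically use embedding-based retrieval against a vetted document store.

# Illustrative sketch of "grounding": answers are drawn only from a
# ringfenced set of approved materials, and the bot declines rather
# than improvising when nothing relevant is found.

APPROVED_DOCS = {
    "returns": "Items can be returned within 30 days with proof of purchase.",
    "delivery": "Standard delivery takes 3 to 5 working days.",
}

def retrieve(query: str) -> list[str]:
    # Naive keyword lookup; a real system would use embedding search.
    return [text for key, text in APPROVED_DOCS.items() if key in query.lower()]

def call_model(prompt: str) -> str:
    return "model-generated reply"  # hypothetical placeholder

def grounded_reply(query: str) -> str:
    passages = retrieve(query)
    if not passages:
        return "I can't answer that. Would you like to speak to our team?"
    prompt = ("Answer ONLY from the passages below; if they do not contain "
              "the answer, say so.\n\n" + "\n".join(passages)
              + "\n\nQuestion: " + query)
    return call_model(prompt)

print(grounded_reply("What is your returns policy?"))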

Call a bot a bot

It can be tempting for businesses to present a chatbot as a human assistant, including by giving the bot a human name or describing it in human terms. While there are sensible and risk-appropriate ways to appeal to your customer base, it is crucial that the presentation and configuration of your bot does not mislead consumers into believing they are interacting with a human. The name and any description should be reviewed with this in mind.

Be transparent

While you cannot rely on a disclaimer to avoid your consumer protection responsibilities altogether, a clearly worded and conspicuous disclaimer goes a long way towards making consumers aware that the bot is an AI (not a human) and of the risk of hallucinations. It therefore reduces the risk of consumers being misled or alleging unfair treatment. Indeed, such transparency is a key feature of the emergent EU Artificial Intelligence Act regime.

The language and the presentation of a disclaimer should take into account your specific business context and the way in which your customer base is likely to perceive its interactions with the bot.
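For instance, a deployment might ensure the disclosure is the very first message a consumer sees, as in this minimal sketch; the wording is illustrative only and should be tailored to your own context.

# Minimal sketch: every conversation opens with a conspicuous AI
# disclosure before any generated content. Wording is illustrative.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. Answers are "
    "generated automatically and may occasionally be inaccurate, so "
    "please verify important information before relying on it."
)

def open_conversation() -> list[str]:
    # The disclosure leads the conversation rather than sitting in a footer.
    return [AI_DISCLOSURE, "Hi! How can I help you today?"]

for message in open_conversation():
    print(message)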

Avoid pressure

It is important that your generative AI-powered chatbot is not aggressive and does not unfairly influence or pressure consumers, for instance by using emotive or "guilt" language. Using language that emotionally pressures consumers to buy products is a dark pattern known as "social steering", and such practices can amount to unlawful aggressive practices under consumer protection legislation.

Take advantage of controls offered by your bot provider to "blacklist" emotionally charged wording and to avoid accusations of pressure tactics. It goes without saying that you must make certain that your bot cannot engage in even more extreme interactions that would be considered harmful, toxic or otherwise offensive to the user, even outside the immediate consumer journey use case.
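As a simple illustration, an output filter along the following lines could screen generated replies against a blocklist before they reach the consumer; the phrases shown are invented examples, and a real list should be built with legal and compliance input.

# Illustrative output filter: replies containing pressure or "guilt"
# language are replaced with a neutral fallback before being shown.
import re

PRESSURE_PHRASES = [
    r"last chance",
    r"you('| wi)ll regret",
    r"don'?t miss out",
    r"everyone else is buying",
]

NEUTRAL_FALLBACK = "Would you like more information about this product?"

def filter_output(reply: str) -> str:
    for pattern in PRESSURE_PHRASES:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return NEUTRAL_FALLBACK
    return reply

print(filter_output("Buy now - it's your last chance!"))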

Prevent pestering

When creating and training your chatbot, consider placing restrictions on how it communicates with consumers. Persistent messages to consumers may amount to "nagging" – a dark pattern that can be unlawful under unfair commercial practices regimes in both the EU and the UK. Bear in mind that you will need to balance these restrictions with functionality so as not to render the chatbot unhelpful.

Limit the use of chatbot pop-ups and tab flashes in order not to constantly interfere with the consumer's experience of your webpage.
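One way to implement this kind of restraint is a simple per-session throttle, as sketched below; the thresholds are illustrative assumptions rather than regulatory guidance.

# Sketch of an anti-"nagging" throttle: at most one unsolicited prompt
# per session, and none within the first 30 seconds on the page.
import time

class ProactivePromptThrottle:
    MIN_DWELL_SECONDS = 30       # illustrative threshold
    MAX_PROMPTS_PER_SESSION = 1  # illustrative threshold

    def __init__(self) -> None:
        self.session_start = time.monotonic()
        self.prompts_sent = 0

    def may_prompt(self) -> bool:
        dwell = time.monotonic() - self.session_start
        return (dwell >= self.MIN_DWELL_SECONDS
                and self.prompts_sent < self.MAX_PROMPTS_PER_SESSION)

    def record_prompt(self) -> None:
        self.prompts_sent += 1

throttle = ProactivePromptThrottle()
print(throttle.may_prompt())  # False: the visitor has only just arrived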

Consider monitoring chatbot interactions to identify where your bot may be dealing inadequately with consumers who may be vulnerable.
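As a starting point, such monitoring might flag transcripts for human review when they contain possible indicators of vulnerability, as in the sketch below; the indicator list is invented for illustration and any real list would need careful design with appropriate expertise.

# Sketch of transcript monitoring: conversations are flagged for human
# review when they contain indicators that the consumer may be
# vulnerable. The indicator list below is illustrative only.

VULNERABILITY_INDICATORS = [
    "i don't understand",
    "i'm confused",
    "my carer",
    "power of attorney",
]

def flag_for_review(transcript: list[str]) -> bool:
    text = " ".join(transcript).lower()
    return any(indicator in text for indicator in VULNERABILITY_INDICATORS)

print(flag_for_review(["Hi", "Sorry, I'm confused by these options."]))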

Emergency brake

Generative AI technology is still evolving rapidly. Make sure you are regularly checking your chatbot's performance, and be prepared to let the bot dream go if you have evidence that it is causing issues for consumers or, worse, breaching consumer protection law.

Consider building in functionality to enable consumers to escalate a conversation to human support and/or to report a bot's response as problematic – particularly in the UK where the CMA has made it clear that it expects businesses to provide an effective form of redress for consumers who are harmed. 
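In practice this could be as simple as intercepting escalation requests before the bot responds, as in the sketch below; the trigger phrases and the open_support_ticket helper are assumptions for illustration, not a prescribed design.

# Sketch of an escalation path: asking for a human or reporting a reply
# ends automated handling. Triggers and helper are hypothetical.

ESCALATION_TRIGGERS = ("speak to a human", "talk to an agent", "report this")

def open_support_ticket(context: str) -> None:
    print(f"[ticket raised] {context}")  # stand-in for a real handoff

def handle_turn(user_message: str, bot_reply) -> str:
    if any(trigger in user_message.lower() for trigger in ESCALATION_TRIGGERS):
        open_support_ticket(user_message)
        return "Connecting you with a member of our team now."
    return bot_reply(user_message)

print(handle_turn("I'd like to speak to a human please", lambda m: "bot reply"))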

Horizon scanning: what's to come?

When it comes to AI, the regulatory and enforcement position is quickly evolving in many jurisdictions, and the EU is no exception. By way of just a taster, the EU's draft AI Act is currently being negotiated between its institutions. Among other proposals, it proposes introducing transparency obligations to ensure that people know that they are interacting with an AI (such as a chatbot) and with AI-generated content. These provisions will apply regardless of whether an AI system is considered high risk and subject to the full AI Act regulatory regime. In addition, new provisions are expected to be negotiated to regulate foundation models which form the basis of generative AI tools.

Closer to consumer protection, the European Commission is also in the process of examining whether some of the core consumer protection law mentioned in this article is "fit for purpose" for a digital age. A recent targeted call for stakeholder evidence indicates that the Commission is focused on AI systems, touching not only on the use of chatbots but also AI systems that deploy "subliminal techniques" or dark patterns for commercial purposes.

From a UK perspective, in line with the UK government's AI white paper (which calls on regulators to assess their existing regulatory powers), the CMA has just published a review of UK consumer protection and competition law in order to form an "early view" on the likely consumer protection implications of the deployment of AI foundation models. This gives some indication of how the CMA anticipates that consumer law will apply to foundation models. More materials are expected from the CMA on how consumer protection will apply to AI in due course.

At the same time, the UK Digital Markets, Competition and Consumers Bill has been making its way through the UK legislative process. Among a slew of other changes (including changes proposed by a recently launched consultation), the bill proposes significantly sharper teeth for the CMA when it comes to the enforcement of consumer protection law, including powers to issue administrative penalties of up to 10% of a trader's global group turnover. If it passes as intended, those enforcement powers will fundamentally shift the risk profile when it comes to the consumer protection concerns summarised in this article.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
