Competition, antitrust and trade

CMA trains crosshairs on pricing algorithms and AI agents in the UK

Published on 10th April 2026

The regulator is ramping up scrutiny of algorithmic pricing, third-party data tools and consumer-facing AI agents

At a glance

  • The CMA identifies four categories of algorithmic and AI collusion risk and signals expanded enforcement capabilities in this area.

  • New guidance confirms businesses bear full responsibility for AI agents' consumer-facing conduct.

  • The penalties for failures in this context are severe: fines of up to 10% of worldwide turnover and the possibility of having to pay damages.

The UK's Competition and Markets Authority (CMA) has signalled a marked intensification of its scrutiny of businesses' use of algorithms, AI and third-party data tools in the context both of competition law and consumer protection. The development follows the publication of the CMA's latest thinking on algorithmic collusion risk, a live investigation into hotel industry data sharing, and new guidance on the use of AI agents in consumer-facing operations.

Algorithmic pricing and collusion

Algorithms and AI are increasingly shaping a wide range of decisions and market outcomes. In a blog published on 4 March, the CMA set out its current thinking on the competition law risks arising from algorithmic pricing and signalled a significant step-up in its enforcement capabilities.

Those capabilities may be expanded further in the near future: the CMA could acquire the power to request demonstrations of algorithms or changes to how information is displayed.

The technology can generate significant benefits but also carries important risks, including the potential for new forms of collusion. The CMA has identified four categories of algorithmic collusion risk of which businesses should be aware.

Classic collusion via algorithm

Rival businesses may explicitly agree to collude and then use algorithms to implement, monitor, and enforce the agreement. The CMA has previously pursued a case in which two online sellers agreed not to undercut each other on a leading online retail platform, giving effect to that agreement through pricing software. As with traditional forms of collusion, this conduct is unambiguously illegal.

Hub-and-spoke arrangements

Competing businesses may use the same algorithm or data hub to exchange competitively sensitive information indirectly, even unintentionally. This can occur, for example, where businesses delegate pricing decisions to the algorithm or receive recommendations based on co-mingled data.

UK law recognises that indirect information exchange through third parties, such as algorithm providers, can constitute illegal, anti-competitive conduct. 

Predictable algorithm behaviour

Businesses may use algorithms that react to market events, including following price leadership and punishing deviations. This could result in collusive outcomes without any human communication or agreement. Unilaterally following a rival's price is more likely to be acceptable under competition law; however, a group of companies using the same algorithm to set prices or other conditions of sale is more likely to be in breach of competition law. This type of arrangement can facilitate the exchange of confidential information such as price plans, market strategies, stock levels or spare capacity.

Autonomous AI systems

It is possible that advanced AI systems given the objective to maximise profits may learn to reach coordinated outcomes even without human intent to collude. 

Hotel chains: a live investigation

These concerns are not merely theoretical. The CMA launched an investigation in March into suspected sharing of competitively sensitive information among competing hotel chains – Hilton, IHG Hotels and Marriott – through the hotel data analytics tool STR, owned by CoStar, with all four businesses under investigation.

The CMA has not yet reached a view as to whether there is sufficient evidence of an infringement for it to issue a statement of objections, and no assumption should be made at this stage that the law has been infringed.  Nevertheless, the case is a clear signal that the CMA will act where it considers that third-party data platforms serve as conduits for the indirect exchange of competitively sensitive pricing information.

AI agents and consumer law

Alongside its competition law focus, the CMA has published new guidance on how businesses can deploy AI agents in compliance with consumer law. The regulator's core message is that consumer law requires businesses to treat their customers fairly, regardless of whether they interact with a person or an AI agent. The CMA states that a business, trader or organisation is responsible for what an AI agent does in the same way as it is responsible for what an employee does, even if someone else designed or provides the AI agent on its behalf.

Transparency and disclosure

On transparency, businesses should consider the need to inform customers that they are interacting with an AI agent, according to the CMA. Consumer law requires businesses to provide consumers with the information they need to make informed decisions and not to mislead them.

If the fact that a consumer is dealing with AI rather than a person might affect their decisions, businesses should disclose this. Equally, the regulator warns that businesses should not overstate either AI's role in providing a service or its capabilities.

Training and testing 

The guidance identifies the need for businesses to train and test AI agents before deployment, considering the intended functionality and scope of an AI agent and evaluating the potential impact on customers. The guidance states that AI agents must be designed to respect customers' statutory rights and the terms of contracts, avoid misleading customers and properly obtain any necessary consents required by consumer law.

The CMA also notes that businesses should conduct appropriate testing as part of the AI training, including a performance evaluation. They should also assess the adequacy of any training conducted by third parties.  

Ongoing monitoring

Businesses are also advised to check regularly how well an AI agent is performing, verifying that it is delivering the right results, behaving as intended and complying with consumer law. They should also ensure that appropriate human oversight is in place to prevent models from misinterpreting data and "hallucinating".

Swift remediation 

Where an AI agent is not performing as expected and this results in non-compliant or potentially non-compliant outcomes, the CMA says businesses should act quickly to address the problem. This is particularly important where an AI agent interacts with large numbers of people or vulnerable customers.

The guidance further illustrates these principles by reference to practical scenarios, including the use of AI agents in marketing campaigns, processing refund requests, responding to customer service queries and providing services.

The CMA's central message is simply that customers, whether they interact with a human or an AI agent, are entitled to be protected by consumer protection laws. 

Osborne Clarke comment

The investigation into hotel chains demonstrates that the use of third-party data analytics and pricing tools is a live enforcement priority, not a theoretical concern. Businesses using shared pricing algorithms or data platforms will want to review whether those arrangements could facilitate the indirect exchange of competitively sensitive information with competitors, even unintentionally.

Contractual protections, audit rights and, where appropriate, technical audits of input data and algorithmic methodology will also need to be in place. The risk extends beyond explicit agreements: the CMA has made clear that hub-and-spoke information flows and predictable algorithmic behaviour are firmly within its sights.

On the consumer side, the CMA's guidance on agentic AI reinforces a principle that businesses should internalise: they own the conduct of AI agents just as they own the conduct of their employees.

Businesses deploying consumer-facing AI should map their current deployments, establish formal governance frameworks covering training, testing, monitoring and incident response, and ensure that third-party supply chain arrangements include adequate contractual protections.

The consequences of non-compliance with consumer protection law are potentially significant. Businesses in breach may face enforcement action by the authorities, including the CMA, which is now empowered to impose penalties on businesses directly under the Digital Markets, Competition and Consumers Act 2024, without having to go through the courts. Businesses can potentially be fined up to 10% of worldwide turnover and may also be required to compensate affected consumers. 

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
