Physical AI and strict liability: What is the impact of the EU Product Liability Directive?
Published on 9th March 2026
Artificial intelligence (AI) has become an everyday commodity in software solutions and, with technical progress, is now moving into the physical world as a component integrated into physical products. Such AI-supported products can perceive the physical world around them through images, videos, text, speech or sensors and interact with it by converting those inputs into machine-executable actions. This concept of physical AI is the driving force behind products such as autonomous vehicles, robots, machinery and medical devices. Companies moving into the physical AI world must consider not only regulatory compliance requirements but also the enhanced liability exposure that comes with these AI use cases, in particular under the revised EU Product Liability Directive.
The withdrawn proposal for an AI Liability Directive
In 2022, the European Commission proposed addressing certain risks related to the use of AI through specific rules on fault-based liability for AI systems. The proposed AI Liability Directive (AILD) was intended to harmonise fault-based liability and introduce mechanisms such as a rebuttable presumption of causality and enhanced access to evidence, thereby reducing the burden of proof for harmed individuals.
In February 2025, however, the Commission withdrew the AILD as no agreement could be reached with the EU legislative bodies. Consequently, the existing rules on fault-based liability apply to damage caused by AI: in particular, the burden of proving the damage suffered, a breach of duty, causation between that breach and the damage, and the degree of fault lies principally with the claimant.
Strict liability following the revised PLD
Despite the inability to reach a political agreement on fault-based liability for AI, the EU has agreed to revise the product liability framework by adopting the new EU Product Liability Directive (Directive (EU) 2024/2853) (PLD), which contains explicit references to AI. The PLD must be transposed into member state law by 9 December 2026. To implement the PLD into German law, the German federal government presented a draft law on the modernisation of product liability law in December 2025, which the German Parliament discussed on 4 March 2026.
Physical AI under the PLD
The PLD provides for rules on strict liability if an individual suffers damages caused by a defective product. The definition of products under the PLD that may trigger such strict liability encompasses not only physical products but also software, including AI systems within the meaning of the EU AI Act, provided such products are placed on the market or put into service after 9 December 2026. As a consequence, the strict liability concept of the PLD may apply to harm caused by software-based AI systems as well as physical AI. There is a notable exception: free and open-source software developed or supplied outside the course of a commercial activity is not covered by the PLD. This exception is, however, narrowly construed: where open-source components are integrated into a commercial product or where the provision of open-source software is otherwise part of a commercial activity, the exception does not apply. Given the widespread use of open-source AI models and components in commercial AI products, manufacturers should carefully assess whether their use of open-source elements falls within or outside the scope of this exception.
If a defective product causes harm to life, body or health, or damage to other goods or data – with the exception of the defective product itself and goods or data used for professional purposes – the harmed individual can claim compensation from the manufacturer (or potentially from other actors in the supply chain, such as importers, authorised representatives or fulfilment service providers) without having to prove any fault on the part of the manufacturer. Notably, the exclusion of damage to goods or data used for professional purposes may significantly limit the scope of PLD claims in business-to-business contexts; however, where employees are harmed by physical AI, they may still claim compensation under the PLD.
The PLD’s recitals clarify that providers of AI systems should be considered manufacturers in the AI context. However, under article 25 of the AI Act, a mere deployer of high-risk AI systems may in certain cases become a provider and may then also qualify as a manufacturer under the PLD. Key triggers for such reclassification include placing one's own name or trademark on a high-risk AI system, making a substantial modification to a high-risk AI system, or modifying the intended purpose of an AI system in a manner that renders it high risk.
What does this mean in practice for physical AI? If AI-based machinery manoeuvres unexpectedly and harms an employee, an AI-based robot in a coffee shop pours hot coffee over a customer's laptop, an autonomous delivery truck damages parked cars or an AI-augmented medical device injures a patient – in each case because of the integrated AI – the strict liability framework applies.
Principles of the strict liability concept for physical AI
To claim damages against the manufacturer or, as the case may be, another actor in the supply chain, the injured individual will be required to prove the defectiveness of the physical AI, the damage suffered and the causal link between defectiveness and damage. The PLD provides noteworthy rules on establishing both defectiveness and causation.
Establishing defectiveness
Liability under the PLD arises if the damage was caused by a defective product. In the context of physical AI, however, the cause of damage can be wide-ranging: inaccurate training data, insufficient testing and validation, or insufficient monitoring and controls on the part of the provider – but also inaccurate configuration by the user, disregard of safety instructions, or use in an unapproved environment.
For a claim under the PLD, the defectiveness of the physical AI must be established as the cause of the damage. Physical AI is defective if it does not offer the safety required by, or to be expected under, German or EU law. Safety requirements may derive in particular from the EU AI Act as well as from the Cyber Resilience Act. Additionally, aspects such as the reasonably foreseeable use of the physical AI or its capability to learn or acquire new functionalities after being placed on the market or put into service can also be relevant.
The manufacturer’s failure to provide updates or patches necessary for the safety of the product can also give rise to liability under the PLD, provided the manufacturer retains a certain level of control over the product. This is consistent with other EU legal frameworks, such as the consumer protection provisions for digital content and digital services, which require the manufacturer to maintain a product’s quality and safety by providing updates, and the Cyber Resilience Act, which imposes an obligation on the manufacturer to remediate known product vulnerabilities.
For physical AI – which may more easily qualify as a high-risk AI system under the EU AI Act – compliance with applicable regulatory requirements is therefore not merely a compliance exercise but also a necessary step to reduce product liability exposure. Conformity assessments under these safety requirements will be key to any defence strategy.
Because it can be challenging for the harmed individual to prove that a product does not comply with the safety requirements, the PLD provides for an evidence disclosure rule. The claimant need only present facts and evidence sufficient to support the plausibility of the alleged defectiveness, whereupon the manufacturer is required to disclose the evidence at its disposal relating to the alleged defectiveness. The evidence disclosure obligation is, however, reciprocal: the manufacturer may also request evidence from the claimant upon demonstrating the need for evidence at the claimant's disposal.
The PLD also provides for a presumption of defectiveness under certain circumstances. For instance, defectiveness shall be presumed where the manufacturer fails to comply with its obligation to disclose evidence as described above.
Establishing causation
Under the PLD, a causal link between the defectiveness of the product and the damage is presumed, in particular, where it has been established that the product is defective and that the damage caused is of a kind typically consistent with that defect.
Implications for physical AI
Manufacturers must take the strict liability framework of the PLD into account when pursuing new business opportunities relating to physical AI. The risk exposure for physical AI differs markedly from that for software AI: damage caused by software AI is less likely to directly harm life, body or health or other goods, although certain use cases – such as AI in safety-critical applications – may be exceptions. Strict liability claims under the PLD are therefore less likely to arise in the context of software AI.
Given the technical complexity and often significant opacity of AI systems, damage caused by physical AI can be a significant risk factor. To address this risk, manufacturers should take particular care to ensure compliance with the applicable product safety regulations, such as the AI Act and the Cyber Resilience Act. Any risk-based approach must account not only for potential consequences under those product safety rules but also for the implications from a strict liability perspective.
Comprehensive documentation of the implementation of product safety requirements can be essential for a defence under the laws implementing the PLD, as the manufacturer will not be held liable where the defect results from the product's compliance with statutory requirements or where the manufacturer can prove that, according to the objective state of scientific and technical knowledge at the relevant time, the defect could not have been detected. At the same time, any other actors in the supply chain who may face strict liability claims under the PLD must ensure that they have access to all third-party information required to exonerate themselves in proceedings.
Osborne Clarke comment
In summary, the PLD introduces a significant strict liability regime that is particularly relevant for manufacturers of physical AI. Companies developing physical AI should consider revisiting any risk-based approach taken for AI Act compliance, conduct thorough product safety assessments under the AI Act and the Cyber Resilience Act, and maintain comprehensive documentation of compliance measures to support any potential defence strategies.
Authors: Julia Kaufmann, Lina Böcker, Florian Eisenmenger