General purpose AI systems and foundational models: future obligations for commercialisation in Spain

Published on 31st May 2023

These obligations will require substantial investments in regulatory compliance


The recent launch of general-purpose AI systems (systems that can adapt to multiple uses, including uses for which they were not designed), which are seeing massive market adoption, has significantly influenced the future European Regulation on Artificial Intelligence (the "Regulation"). While the Council had already introduced the concept of general-purpose AI systems in December 2022, it is now the European Parliament that is examining these systems and foundational models in depth, proposing new obligations for their placement on the market.

The Council's proposal establishes that general-purpose AI systems that can be used as high-risk AI systems or integrated into such systems will be subject to specific requirements. To determine whether an AI system should be classified as high-risk or not, it is important to note that the Commission's proposal includes a list of critical areas and use cases that would entail such classification, although additional requirements are expected to be included in the final text.

Among the requirements applicable to the aforementioned general-purpose AI systems, the following stand out: (i) the adoption of a risk management system, (ii) compliance with training data set management standards, (iii) document retention - including system activity logs or records - and (iv) ensuring the system is explainable and secure and can be subject to human oversight.

Providers of these systems will need to comply with these requirements and will also be obliged to implement a quality management system, retaining relevant documentation for at least 10 years. A fundamental obligation will be to subject the AI system to conformity assessment, making it essential for independent experts to provide assistance, not only technically but also legally, regarding the impact these systems may have on security, fundamental rights, or the environment, among other aspects. Additionally, providers must inform and cooperate with the competent national authorities to ensure ongoing compliance with the Regulation.

The above-mentioned requirements and obligations would not apply if the provider explicitly excludes all high-risk uses in the instructions it prepares for its AI system. This exclusion would only be permissible if the provider has sufficient grounds to believe that the system will not be misused once commercialised. Some in the industry have criticised this approach, arguing that it is not always possible to exhaustively exclude every potential misuse by a third party, even where the provider has applied an appropriate level of diligence. The European legislator is therefore likely to address this issue further during the final approval phase of the Regulation proposal.

For its part, the European Parliament has worked on a series of amendments introducing additional obligations and expressly regulating foundational models. The amendments focus on delimiting responsibilities throughout the commercialisation chain of these systems (e.g., when they are modified by users who subsequently market the modified version), as well as on obligations to conduct impact assessments, measure the energy consumption of the systems, disclose training on works protected by intellectual property rights, and establish a public registry of these systems.

In Spain, the State Secretariat for Digitalisation and Artificial Intelligence has just published a draft Royal Decree that will regulate the sandbox environment in which AI system providers can self-assess their compliance with the draft Regulation. The experience gained in the sandbox could influence the future obligations of these providers.

In conclusion, one of the key points of the Regulation proposal so far is the obligation for providers of general-purpose AI systems or foundational models to analyse the potential use of their products as high-risk AI systems or the potential integration of their products into such systems. The result of this analysis would determine whether the future Regulation applies to the system to be commercialised or not.

During this final legislative phase, negotiations between the Commission, Parliament, and Council will be crucial in shaping the content of these obligations, which will have a fundamental impact on the limits of liability for providers of general-purpose AI systems and foundational models.

Interested in hearing more from Osborne Clarke?

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.

Interested in hearing more from Osborne Clarke?