Artificial intelligence (AI) is already embedded in our day-to-day activities, from chatbots to internet searches. In computer science, AI is a broad term, referring to any human-like intelligence exhibited by a computer, robot, or other machine, such as visual perception, speech recognition, and language translation (and not at all like the sci-fi film references of days gone by).
The most challenging legal questions are raised by the categories of AI that involve deep learning and machine learning; that is, systems that use data to learn and, applying what has been learnt, make informed decisions.
As AI becomes an increasingly important asset to many companies, what intellectual property protection is available to stop copying or exploitation by third parties and competitors? The law is at present lagging behind the development of the technology, with no clear act or provision to point to, and further guidance is awaited from lawmakers.
The government has published its response to the call for views on the implications that AI might have for IP policy, which was conducted last year, with proposed action points to work towards to clarify and improve protection and the scope for innovation. While these are explored and implemented, it is important to consider IP rights in this technology from as many angles as possible in order to formulate optimal strategies.
Can copyright help?
The Copyright, Designs and Patents Act 1988 (CDPA) provides automatic copyright protection for computer programs. AI, in its most simplistic form, is just a collection of code written in a computer language that provides a specialised set of instructions to make a computer produce a specific output. If this exact code is copied, the copyright is infringed.
However, copyright has limitations for all computer software (not just AI); in particular, the idea behind a computer program is not protected. It may be possible to replicate a program's functionality while side-stepping copyright infringement by re-writing the code from scratch. Algorithms (being sets of rules that define a sequence of operations) are particularly vulnerable to extraction and non-infringing copying. In an AI context, machines relying on particular algorithms may be vulnerable.
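To illustrate the idea/expression point, consider two independently written implementations of the same algorithm (here, a simple insertion sort — a deliberately trivial stand-in for any algorithm). They share no lines of code, yet behave identically; copyright could protect each author's expression, but not the underlying method. This sketch is purely illustrative and not drawn from any real dispute.

```python
# Two independently written implementations of the same algorithm
# (insertion sort). They share the underlying idea but none of the
# expression -- illustrating why re-implementation from scratch can
# side-step copyright in the code itself.

def sort_a(items):
    # Builds a new sorted list, inserting each item at the right place.
    result = []
    for item in items:
        pos = 0
        while pos < len(result) and result[pos] < item:
            pos += 1
        result.insert(pos, item)
    return result

def sort_b(values):
    # Sorts in place by swapping each element backwards into position.
    out = list(values)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out
```

Both functions produce identical results on any input, yet neither copies the other's code — the "idea" (the sorting method) has been extracted and re-expressed.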
Another issue relates to the consideration of copyright laws in the context of machine learning. The value in such a system can be the application of a series of weights and biases across successive artificial neuron layers to influence an output, in what is ultimately a mathematical process of refinement. The result is in effect a learned behaviour. The models, however, are inherently hard to protect, not least because they are usually dynamic, evolving continually as further data is input and learnt from.
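The "weights and biases across successive layers" idea can be sketched in a few lines. The parameters below are invented for illustration — in a real system, these learned numbers (not the generic code applying them) are where the commercial value sits, which is precisely why they are hard to protect with copyright over the code alone.

```python
# Minimal sketch of a two-layer network: learned weights and biases
# transform an input into an output. All numbers are illustrative.

def relu(xs):
    # Standard activation: negative values are clipped to zero.
    return [max(0.0, v) for v in xs]

def layer(inputs, weights, biases):
    # Each output neuron is a weighted sum of the inputs plus a bias.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hypothetical learned parameters -- the commercially valuable part.
W1 = [[0.5, -0.2], [0.3, 0.8]]
b1 = [0.1, -0.1]
W2 = [[1.0, -1.0]]
b2 = [0.05]

x = [0.6, 0.4]
hidden = relu(layer(x, W1, b1))
output = layer(hidden, W2, b2)
```

The code itself is generic; swap in a different set of weights and the "learned behaviour" changes entirely, which is why a model's value lies in its parameters rather than its program text.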
Despite these issues, the government believes copyright provides adequate protection for software, allowing developers (and those creating AI software) to benefit financially from their work. In view of this, it does not plan to do further work on how copyright protects software, or its licensing. Even so, relying on copyright protection in isolation may not be the best strategy.
A secondary value of copyright protection lies in the data sets used for machine learning. For example, a machine learned to produce a new painting in Rembrandt's style after being trained on data from 346 of Rembrandt's works. Stopping third parties from using copyright works as training data may limit their ability to copy a machine learning model.
The UK Intellectual Property Office (UKIPO) considered this issue more closely in its call for views. There is an argument that the risk of infringing copyright or other rights stops people from making full use of AI, and that AI should be able to use copyright material more easily, which would support increased adoption of AI technology. This could favour a specific copyright exception to allow text and data mining (TDM) by AI. There is also the argument that licensing models evolve with technology, and that AI developers should license copyright works to ensure that right holders are remunerated when their works are used by AI.
In response to the call for views, the government has decided (among other actions) to review the ways in which copyright owners license their works for use with AI, and to consult on measures to make this easier. This includes improving licensing and copyright exceptions in order to support innovation and research. It remains to be seen whether this means change and what kind of practical implications this will bring.
What about patent protection?
In the UK, the starting position is that computer programs and mathematical methods are excluded from protection by way of patents. At first glance this would appear to pose a significant road block for all types of AI protection. However, the exclusion does not apply when a computer program has a "technical effect". These permissible "computer implemented inventions" with a "technical effect" involve something more than the execution of a program according to the normal running of computer software on a hardware platform.
They require some additional effect such as the control of a technical process, especially if it is outside of the computer system in the physical world. It could include a technical effect to enhance the internal functioning of the computer itself. For example, claims to a mathematical method for image processing have been allowed.
The reasoning was based on the idea that the method enhanced the quality of a digital image so the process of enhancing the image gave the claims the necessary technical character – the technical output was the enhanced image. However, an invention relating to how an AI system works, for example concerning the mathematical processes driving the operation of the software (sometimes termed a "core AI" invention), would face a challenge – without a clear technical effect to point to – on its road through the examination process.
The challenge with patenting AI systems directed at solving a particular problem is ensuring that the protection obtained is not limited to that specific practical application when, at the time of drafting the patent, all potential applications (especially the most lucrative ones) are not yet known or have not been conceived.
As a result of the European Patent Office's (EPO) permissive approach to computer implemented inventions (in contrast to the US Patent and Trademark Office), the leading AI research companies have filed vast numbers of AI-related patents at the EPO in what has been described as an AI patent boom.
Yet, in this fast moving technical area, there is still much uncertainty about what can be patented and whether it will result in a valid patent if challenged in court. In response to a recent call for views by the UKIPO, the Chartered Institute of Patent Attorneys, among other respondents, called for this issue to be reviewed on the basis that AI inventions generally make a technical contribution and hence should be patentable based on current law and methodology.
In response, the government has concluded that the conditions for the granting of AI patents in the UK are generally fit for purpose, but it has said that it aims to publish enhanced UKIPO guidelines on exclusions for AI inventions. It also wants to engage AI-interested sectors, including small and medium-sized enterprises, and the patent attorney profession to enhance understanding of UK patent exclusion practice and AI inventions.
The UKIPO will review its patent practice in preparation for the guidelines and establish any difference in outcome for AI patent applications filed at the UKIPO and the EPO. If the guidelines are clear and facilitate the protection of core AI, inventors will be more likely to disclose their inventions, which should drive AI development forward. Otherwise, they are likely to prefer to keep their technical advances as trade secrets. This is all in the context of the government's general desire to foster AI in the UK and, to that end, an economic study will be commissioned to better understand the role of the patent system in incentivising investment into AI and to determine if further intervention is necessary.
What appears unlikely is that the substantive legal provisions governing patent law will be changed; any change could cause an imbalance in the patent system and potentially lead to further uncertainty where some inventions are treated differently from others if they have an AI component. Or it may mean that patenting AI just becomes more difficult – which is unlikely to incentivise AI innovation and would seem to be at odds with current government policy.
One significant question is whether the patent protection currently available is adequate for AI. The answer will depend on the individual aspects of an AI invention that would need to be assessed. Patent protection could prove a strong deterrent where it is in place. And a strong and comprehensive patent portfolio can be licensed on reasonable terms to allow further access and drive innovation and investment in the AI space forward.
Another consideration in relying on patent protection is the rapid pace of development in this fast moving field. The patenting process can be lengthy, and if the relevant technology becomes obsolete quickly, then the time and money spent seeking patent protection may be a wasted investment.
A well thought through IP strategy is key. Getting advice early on as to what features could be patentable, or whether the AI system or features of the system are better suited to other forms of IP protection, will enable a business to maximise the protections available.
Could trade secrets protection apply?
Technical elements of AI such as algorithms can potentially be hidden behind a user interface and as such there is scope for trade secret protection. For example, Google's search algorithm is a famously held trade secret of enormous value. UK courts have robust procedures to maintain the confidentiality of information in legal proceedings, should they be necessary, making protection of this type a useful tool.
The EU Trade Secrets Directive ((EU) 2016/943) was enacted to ensure a harmonised regime across Member States and has now been implemented by France, Germany, Belgium, Spain, Italy and Sweden. The Directive was also implemented into UK law in 2018 and this implementation remains in force following the end of the transition period.
Information will be considered a trade secret if it:
- is secret – in the sense that it is not (as a body or in the precise configuration and assembly of its components) generally known among, or readily accessible to, persons within the circles that normally deal with this kind of information;
- has commercial value because it is secret; and
- has been subject to reasonable steps (under the circumstances) to keep it secret by the person lawfully in control of the information.
AI presents unique challenges in terms of trade secret protection. The use of open source (publicly available) code can be useful as a springboard for AI initiatives, considering the vast amount of coding that is required in today's technology. Using this base code (for elements such as application programming interfaces and operating systems) frees up developers to focus on the more challenging and groundbreaking aspects of what they aim to create.
However, use of open source code is usually governed by a licence. Some licence terms are highly permissive (for example, the MIT licence imposes very few restrictions). The GNU General Public License requires any software incorporating the original code to be made available under the same or equivalent open source licence terms, effectively prohibiting proprietary software as an end goal. If trade secret protection is sought, a company must monitor the open source licences governing what it is building into its systems or else jeopardise any claim to confidentiality.
As part of the call for views, the government has considered trade secret protections for AI, but has no plans to make urgent changes. It did note the importance of trade secret protection for technologies such as AI and that it will monitor future developments.
Rights in data
In machine learning, the machine finds patterns and features in large amounts of data and can then use that patterning to process new data or problems. As data drives this field, it brings a whole new value and requirement for its protection. One big issue is that there is no clear corresponding "data right" per se to rely upon; it is an established general principle that there is no property right in information itself. Literary copyright would likely apply to the documents comprised within a dataset, or to the exact representation of any summary (depending on whether sufficient skill, judgment or labour has been expended to meet the test for originality). The ideas or information expressed within the dataset are not protected by copyright.
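The "finds patterns in data, then applies them to new data" step can be sketched with a deliberately trivial example — a nearest-centroid classifier trained on invented data (the labels and numbers below are hypothetical). The training data shapes the model's behaviour, which is why rights over the data itself matter so much.

```python
# Toy illustration of learning a pattern from labelled training data
# and applying it to a new input: a nearest-centroid classifier.
# All data points and labels are invented for this sketch.

def centroid(points):
    # The "learned pattern" for a class: the average of its examples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(point, centroids):
    # A new input is assigned to the class with the closest centroid.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# "Training data": two labelled clusters of 2-D points.
training = {
    "cats": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dogs": [(4.0, 4.1), (3.8, 4.3), (4.2, 3.9)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((1.0, 1.1), centroids))  # falls near the 'cats' cluster
```

Here the code is generic and trivially re-writable; everything the model "knows" comes from the training data, so controlling access to that data controls the model that can be built from it.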
The government is to review the ways in which copyright owners license their works for use with AI, and consult on measures to make this easier. It wants to better understand the merits of TDM exceptions in this area, including the approaches taken in other countries. The government states it remains committed to ensuring that a fair balance is struck between the needs of copyright owners and users which includes improving licensing or copyright exceptions in order to support innovation and research.
Database rights (that is, rights in compilations of data) may also assist and arise in the training data, both as a right in copyright under the CDPA and also as a "sui generis" database right under the Copyright and Rights in Databases Regulations 1997. These are not straightforward rights. In this context, the rights provided by the CDPA are limited to protecting the way in which the database is structured or categorised, not the contents themselves. Also, establishing a sui generis database right requires evidence that there has been a substantial investment in obtaining, verifying or presenting the contents, assessed independently of the resources used in creating the data, and can amount to a broader protection. There could be particular value in data if experts have been involved in its categorisation and that has led to the creation of a database. A database right is infringed if a person extracts or re-utilises all or a substantial part of the contents of a protected database without the consent of the owner. Where the data is to be licensed for use in training an AI system, careful thought is needed to ensure that the licensing provisions allow the data owner to maintain control.
Another consideration is the way AI developers manipulate data (via cleansing, aggregating and so on) and whether there is still value in the arrangement of information in a database. In unsupervised learning, machines can "learn" from unstructured (that is, uncategorised) data, meaning a database format is not always necessary as a way to input information. Should this be the case, other rights may be the more relevant protections to consider. In particular, the use of database rights alongside trade secrets provisions is one key way to monetise and protect data, provided appropriate licensing arrangements are put in place.
The power of a brand name
If an AI offering is to be public facing, naming the system could be key to building trust and brand identity. Seeking trade mark protection could also assist in preventing third parties from capitalising on goodwill. It is advisable to consider pre-existing registered rights when deciding on a brand name. Clearance searches (searching trade mark registers for identical or similar names) can help identify conflicts that could be an obstacle to a proposed product name.
There are also limits on what can be registered: trade marks that are descriptive and/or non-distinctive may be refused. Applications for trade marks should be filed as early as possible to secure a filing date.
In its response to the call for views, the government recognised the complexities in the relationship between trade marks and AI technologies. However, the consensus from the respondents was that AI is not yet developed enough to impact the core legal concepts within trade mark law. The government concluded current trade mark legislation appears fit for purpose with no changes planned.
The government wants to make the UK a global centre for AI and data-driven innovation. Its mission is to increase uptake of AI for the benefit of everyone, which includes ensuring AI technology works for the people and making sure the UK has the best environment for developing and using AI. The government has proposed numerous next steps to work toward this aim, which could lead to changes in the application of legal protections.
There is not one clear IP right that provides complete protection for an AI system. The crucial point is to consider potential IP protections in the round and to formulate flexible strategies based on the best options available for a particular system. Alongside this, there is the need to monitor legal developments in this field closely and respond to forthcoming changes with a dynamic approach.
Please contact Will James, Hannah McCarthy or your usual Osborne Clarke contact to discuss how we can assist you.
For further exploration of the issues raised in this article and other AI-related legal points, the second edition of "Artificial Intelligence: The Practical Legal Issues" has now been published, authored by John Buyers, who leads Osborne Clarke's international Artificial Intelligence and Machine Learning group.