Sport looks to artificial intelligence to deliver competitive edge
Published on 25th Apr 2022
As teams and coaches turn to AI data to enhance success, what are the legal challenges and what is on the regulatory horizon?
In sport, where everything is about competitive edge, technology and data have become vital tools to aid teams in their wider pursuit of "marginal gains". Augmenting analytic capabilities through the use of machine learning models and artificial intelligence (AI) is just one method being employed by coaches to enhance success. However, while new technology and data analytics present exciting opportunities, they also pose legal issues and risks.
A couple of practical examples of how AI is being used by sports teams are:
- Squad rotation and selection. It is now commonplace for professional athletes to be fitted with wearable technology during training and competitions. These devices record data, such as heart rate, speed and distance covered, which can then be used to gauge athletes’ fitness levels or the need to rest a player. AI models can also be fed additional data, such as age, previous injuries and genetic predispositions, in order to analyse key fitness-related information and to predict when a player may be close to injury, based on their number of hours of exertion.
- Tactical decision-making. Basketball's Houston Rockets, led by general manager and analytics pioneer Daryl Morey, used statistical analysis to show that dunks and three-point shots returned the best points-per-shot yield, while mid-range two-point shots returned the worst. This information led to an increased focus on higher-yield shots and arguably turned the Rockets from an average team in the league into championship contenders, advancing to the playoffs nine times out of a possible twelve.
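The points-per-shot analysis described above boils down to a simple expected-value calculation. A minimal sketch (the make-probabilities below are illustrative, not the Rockets' actual figures):

```python
# Expected points per shot attempt = points if made x probability of making it.
# The probabilities here are hypothetical, chosen only to illustrate the logic.
shot_profile = {
    # shot type: (points if made, assumed make probability)
    "dunk/layup":    (2, 0.65),
    "mid-range two": (2, 0.40),
    "three-pointer": (3, 0.36),
}

def expected_points(points, make_prob):
    """Expected points per attempt for a shot type."""
    return points * make_prob

for shot, (pts, prob) in shot_profile.items():
    print(f"{shot}: {expected_points(pts, prob):.2f} expected points per attempt")
```

On these assumed numbers, shots at the rim and three-pointers comfortably out-yield the mid-range two, which is the core of the tactical shift the article describes.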
As is the case with any emerging technology, there are legal issues concerning the deployment of AI in sport, including around managing and licensing data effectively, the types of data used, data protection impact assessments (DPIAs), AI black-box transparency, and the protection of intellectual property in AI models and in the contracts of employment or service agreements of data scientists and analysts.
Managing and licensing data
Before AI can be used to enhance a team's performance, AI models must first be created using training datasets. Broadly speaking, the more high-quality training data a model is given, the more accurate its outputs will be.
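The intuition that more data yields more accurate models can be seen in even the simplest case. A toy sketch, estimating a player's average sprint speed from noisy wearable readings (all numbers are made up):

```python
import random
import statistics

random.seed(42)
TRUE_SPEED = 8.0  # hypothetical "ground truth" in metres per second

def estimate_from(n_samples):
    """Average of n noisy sensor readings."""
    readings = [random.gauss(TRUE_SPEED, 1.0) for _ in range(n_samples)]
    return statistics.mean(readings)

def mean_abs_error(n_samples, trials=200):
    """Average estimation error across repeated experiments."""
    return statistics.mean(
        abs(estimate_from(n_samples) - TRUE_SPEED) for _ in range(trials)
    )

small, large = mean_abs_error(5), mean_abs_error(500)
print(f"error with   5 samples: {small:.3f}")
print(f"error with 500 samples: {large:.3f}")
```

The estimate built from 500 readings lands far closer to the true value than the one built from 5, which is the statistical reason training datasets (and the rights to keep using them) matter so much commercially.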
But who holds the rights to these datasets? While the concept of data ownership is itself controversial (contrary to popular views, there are no current sui generis rights of ownership in data per se), it is still vitally important that all commercial rights in such data are managed and licensed appropriately. If the data is gathered via wearable technology, claims could be made by the third-party tech providers (on the basis that it forms part of their proprietary technology solution), the individual players (on the basis that it is personal data), sports teams (under their player arrangements) or even the league itself. Therefore, a clear set of licensing arrangements is necessary to ensure data can be utilised for the desired purpose without restrictions from any other party.
Concerns also arise when players transfer away from an organisation. In such scenarios, sports teams would likely lose the individual player's consent to use any data that has been gathered about them. Without proper protection, this could force teams to go through the painstaking process of unwinding AI models that have been built up over years.
Types of data
It is highly likely that the data that is input into a sports team's AI model will consist of some form of personal data. For example, a player's data may become health-related data if it refers to specific injuries they have suffered. Alternatively, biometric data relating to a player's gait – how they are running and moving – may be gathered and analysed.
In both situations, various obligations under the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 will fall on the relevant data controller and any processor. Sporting organisations and analysts must understand these obligations before processing any personal data.
AI black-box transparency
Machine learning and AI technology can create issues regarding transparency due to the “black box” issue of not knowing why a particular output has been generated.
These technologies are set up to self-adjust in response to each piece of data passed through them. They use a maths-based process that is fundamentally different to human logic and reasoning and is, therefore, not simple to explain.
Organisations using AI systems should seek, as far as possible, to make sure that the systems that they use are sufficiently transparent in terms of usage to allow their operation to be explained.
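One practical way to meet this transparency expectation is to favour models whose outputs can be decomposed into per-input contributions. A minimal sketch (the weights and feature names below are hypothetical, hand-set for illustration, not a real injury-risk system):

```python
# An interpretable linear scoring model: rather than returning a bare score,
# it reports how much each input contributed, so the output can be explained.
weights = {                 # hypothetical, hand-set model weights
    "weekly_minutes": 0.004,
    "recent_sprints": 0.010,
    "prior_injuries": 0.150,
}

def injury_risk(features):
    """Return a risk score plus a per-feature breakdown of why."""
    contributions = {k: weights[k] * features[k] for k in weights}
    return sum(contributions.values()), contributions

score, why = injury_risk(
    {"weekly_minutes": 300, "recent_sprints": 40, "prior_injuries": 2}
)
print(f"risk score: {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{contribution:.2f}")
```

A deep neural network would not offer this kind of breakdown out of the box, which is precisely the "black box" trade-off the section describes: more expressive models often come at the cost of explainability.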
Intellectual property in AI models
As an AI model is trained on vast amounts of data, value is created both in the trained model itself, through the functional improvements it acquires, and in the outputs it generates after training.
From an intellectual property perspective, licensing agreements must ensure that any outputs are properly protected, as distinct from rights in data, per se. In turn, this will allow teams and analysts to use and develop their models without restriction.
Moreover, when drafting data-licensing agreements, care must be taken to ensure that any data gathered from players can expressly be used for AI purposes.
Data scientists in sport
Increasingly, professional teams across high-performance and elite sports now hire statistical analysts to gather and translate data into meaningful outputs. In turn, analysts can assist teams by setting up new AI models internally.
As with players, data scientists operate within a highly competitive and dynamic environment and may leave an organisation, taking AI knowledge and critical know-how with them. Without contractual protection, including appropriate IP assignment provisions in contracts of employment (or service agreements), issues may arise at this point as to who owns the AI model in question.
EU draft AI Regulation
After extensive consultation, the European Commission has unveiled its proposed regulatory regime for AI. The legislation envisages a full regulatory framework, including new EU and national bodies with strong enforcement powers, and will place heavy fines on businesses for non-compliance. The AI Regulation will apply across the EU to providers that develop AI systems, importers and distributors that place them on the market, and businesses that deploy and use them. In practice, a broad range of AI systems and businesses will be affected by the incoming Regulation, and this may well have an impact upon the way sporting organisations utilise their AI systems.
The European Data Strategy
The Commission’s proposal for an EU Data Act was presented on 23 February 2022 as part of the European Data Strategy and aims to realise a European single market for data. The proposed legislation is set to have a huge impact on the European data economy: data is the fuel for data-driven business models and the Commission intends to provide this fuel.
The proposed EU Data Act aims to establish a cross-sectoral governance framework for data access and use – whether by individuals, organisations or public authorities. It has the potential to fundamentally change the environment for data-driven business models in the EU.
Osborne Clarke comment
As sports teams strive to gain an edge over their competitors, AI systems are becoming increasingly commonplace as an effective and efficient way of harnessing data to improve chances of success.
As this trend continues, however, sports teams and organisations must take care to ensure that they are properly protected in terms of the data and IP which they use. In addition, they must be mindful of their obligations under data protection laws, as well as under the incoming AI Regulation.
This is the first in a series of articles looking at cyber transformation in the world of sport, examining the business risks and opportunities, and the legal and regulatory issues, raised by these rapidly emerging technologies and digitalisation.