Digitalisation

Will AI be a game changer for the sports industry?

Published on 2nd Aug 2022

Artificial intelligence is increasingly used in sport, but it is not yet clear how looming regulation will affect the sector

It is not always apparent how heavily some industries rely on artificial intelligence (AI), and usage of the technology varies from one economic sector to another. The sports industry is a good example. According to Allied Market Research, the global market for AI in sports was valued at USD 1.4 billion in 2020 and is projected to reach USD 19.2 billion by 2030.

To better understand the extent of AI in the sports industry, the scope of that industry should first be properly defined. The Cambridge Dictionary defines "industry" as "all the companies involved in a particular type of business". This means that it is not just athletes and clubs that make up the sports industry - it also includes marketing companies, sports medicine providers, companies in the player recruitment and scouting market, developers who create software to assist coaches and many, many more.

In each of the areas mentioned above, there are tasks for which AI currently offers the best available solution.

AI on the pitch, court and ring

During the games themselves, AI is most often used in the role of the assistant referee.

Referee support programs of all varieties, such as AI Referee (football) or Hawk-Eye (tennis), have permanently established themselves in the role of assistants. Their indications often determine the referee's verdict, as the referee no longer has to rely solely on the cameras and his or her own senses. Hawk-Eye - which is used to decide whether a ball has crossed the court line - is a good demonstration of the real help that technology gives to the world of sport.

A particularly interesting artificial assistant referee is goal-line technology (GLT), an AI introduced by international football governing body FIFA after the controversial England v Germany match at the 2010 World Cup. In the simplest terms, it is a program that confirms whether the entire ball has crossed the goal line. To make the correct assessment, GLT uses not only 14 strategically placed cameras but also sensors that monitor a magnetic field.
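The decision rule itself can be reduced to a simple geometric check. Below is a minimal sketch in Python, assuming the camera and sensor system has already triangulated the ball's position in metres; it is an illustration only, not FIFA's or any vendor's actual implementation.

```python
# Illustrative only: assumes the x-axis points into the goal and x = 0 is the plane of
# the goal line's goal-side edge, with ball positions already triangulated in metres.

BALL_RADIUS_M = 0.11  # a regulation football is roughly 22 cm in diameter

def whole_ball_over_line(ball_centre_x: float, radius: float = BALL_RADIUS_M) -> bool:
    """Return True only when the entire ball is beyond the goal-line plane."""
    # The trailing edge of the ball (centre minus radius) must have crossed x = 0.
    return ball_centre_x - radius > 0.0

if __name__ == "__main__":
    print(whole_ball_over_line(0.05))  # False: most of the ball is over, but not all of it
    print(whole_ball_over_line(0.12))  # True: the trailing edge is 1 cm past the line
```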

The GLT has been updated several times since 2010, but to this day it remains a permanent fixture on the pitch and almost a member of the refereeing team in its own right.

An artificial coach?

It is not only the programs assisting referees that keep a close eye on the athletes on the field. The most loyal fan of any athlete is the AI that tracks his or her every movement during competition and afterwards reports the exact results to both the athlete and the coach.

Software of this type is most often based on detecting and tracking movement, colours, or combinations of the two. Methods such as "bounding boxes" - which assign each player a rectangular region in the camera's view - allow accurate analysis of, for example, ball possession and ball handling in football or the percentage of successful dodges in boxing, as the sketch below illustrates.
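The following toy example shows, under simplifying assumptions, how bounding boxes could feed a possession estimate: a detector is assumed to have already produced labelled boxes for the ball and the players, and possession in each frame is credited to the team whose player is closest to the ball. It is illustrative only, not any vendor's tracking software.

```python
# Toy possession estimate from per-frame bounding boxes; all data here is invented.
from dataclasses import dataclass
from collections import Counter
from math import hypot

@dataclass
class Box:
    label: str      # "ball", "home" or "away"
    x: float        # top-left corner
    y: float
    w: float
    h: float

    @property
    def centre(self) -> tuple[float, float]:
        return (self.x + self.w / 2, self.y + self.h / 2)

def possession_share(frames: list[list[Box]]) -> dict[str, float]:
    """Estimate each team's share of possession from per-frame bounding boxes."""
    counts = Counter()
    for boxes in frames:
        ball = next((b for b in boxes if b.label == "ball"), None)
        players = [b for b in boxes if b.label != "ball"]
        if ball is None or not players:
            continue
        bx, by = ball.centre
        # Credit the frame to the player (and team) nearest the ball.
        nearest = min(players, key=lambda p: hypot(p.centre[0] - bx, p.centre[1] - by))
        counts[nearest.label] += 1
    total = sum(counts.values()) or 1
    return {team: n / total for team, n in counts.items()}

if __name__ == "__main__":
    frame = [Box("ball", 50, 50, 5, 5), Box("home", 48, 40, 10, 20), Box("away", 80, 60, 10, 20)]
    print(possession_share([frame]))  # {'home': 1.0}
```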

The market for AI that supports players and coaches in their development is thriving. The report "Player Tracking Market - Growth, Trends, COVID-19 Impact, and Forecasts (2021 - 2026)", published in January 2021, stated that in 2020 this part of the sports industry was worth USD 3.45 billion, with the prospect of growing to roughly USD 13.25 billion by 2026.

Marketing pitches

The sports industry is not just about the immediate vicinity of a pitch, ring, pool or hill; it is also about the players themselves and their clubs. The topic of marketing therefore cannot be overlooked - especially now that, thanks to AI, fans can receive a greeting from sports stars (such as footballer Lionel Messi).

Lay's (a brand of potato crisps), in collaboration with Synthesia (a web-based platform for creating videos with AI avatars) and the Paris Saint-Germain football club star, has created an app called "Messi Messages", which allows individuals to create funny videos by entering a friend's name and favourite football club. The program allows the creation of semi-personalised messages in 10 different languages across 20 different countries.

The basis of the program's AI is facial mapping, which allows the movement of the star's mouth to be adjusted so that it matches the synthesised voice and the desired message.

So far, Synthesia has loaded 6,000 different names in multiple variations into the app's database, giving fans more opportunities to play with the program's possibilities.

Break a leg

Athletes can only run, train and compete for as long as their body is fit enough. While an unexpected injury can effectively derail plans for an entire season, sports medicine is always ready to support athletes in their recovery. However, a proper diagnosis is always needed.

AI-based algorithms in diagnostic decision support systems (DDSS) are also worth mentioning.

In the US alone, an estimated 8.6 million sports-related injuries are reported to medical personnel annually. This volume of treatment requests presents enormous potential for systems such as DDSS - systems whose effectiveness in supporting diagnosticians has already been proven.

Applications such as DDSS, based on proprietary databases and increasingly on machine learning, can work brilliantly during the season, when the sports doctor is often working at other locations without access to specialised medical equipment. However, while diagnostic applications are part of the future of medicine, in some cases the need to use, for example, radiological diagnostics will be unavoidable.
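To give a flavour of the kind of matching such a system might perform, here is a deliberately simplified sketch with invented conditions and findings; real DDSS products rely on validated medical databases and, increasingly, trained models.

```python
# A toy sketch of DDSS-style matching with invented, non-medical-grade data: candidate
# conditions are ranked by how strongly the reported findings overlap with typical ones.

# Hypothetical knowledge base: condition -> typical reported findings.
KNOWLEDGE_BASE = {
    "ankle sprain": {"ankle pain", "swelling", "limited range of motion", "bruising"},
    "hamstring strain": {"thigh pain", "sudden sharp pain", "swelling", "weakness"},
    "concussion": {"headache", "dizziness", "nausea", "confusion"},
}

def rank_candidates(reported: set[str]) -> list[tuple[str, float]]:
    """Rank conditions by Jaccard overlap between reported and typical findings."""
    scores = []
    for condition, findings in KNOWLEDGE_BASE.items():
        overlap = len(reported & findings) / len(reported | findings)
        scores.append((condition, round(overlap, 2)))
    return sorted(scores, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    print(rank_candidates({"ankle pain", "swelling", "bruising"}))
    # [('ankle sprain', 0.75), ('hamstring strain', 0.17), ('concussion', 0.0)]
```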

IT crowd

But what if the injured party is the program itself? The medics of any AI are computer scientists and programmers, but nowadays they too have machine assistants.

A self-solving service desk is a type of program based on machine learning. It learns about a broad spectrum of problems and their solutions from data provided by the developer and by end users, so that when a new problem arises the AI can quickly propose an effective solution.

In addition, by tracking user behaviour, the same application can suggest changing certain habits related to the handling of IT resources, thus preventing further failures.

An application of this type works by comparing an existing database with the specific case presented by the user, which on its own would only cover a limited number of solutions to a number of problems limited by the size of the database. The machine-learning component - often the decisive factor in whether a given piece of software is described as AI - allows the system to learn new cases and combinations of cases. This makes it possible to prepare for successive types of failure without the user having to intervene in the underlying data. A simplified sketch of that mechanism follows.
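The sketch below is purely illustrative, with invented ticket data and names: it matches a new ticket against past problems by simple text similarity and "learns" by adding resolved tickets to its knowledge base. It is not a description of any particular product.

```python
# Purely illustrative: a tiny "self-solving service desk" that matches a new ticket to the
# most similar past problem by bag-of-words cosine similarity. All data here is invented.

from collections import Counter
from math import sqrt
from typing import Optional

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[token] * b[token] for token in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ServiceDesk:
    def __init__(self) -> None:
        # Each knowledge-base entry pairs a past problem description with its known fix.
        self.knowledge_base: list[tuple[str, str]] = []

    def learn(self, problem: str, fix: str) -> None:
        """Add a resolved ticket so similar future tickets can be answered automatically."""
        self.knowledge_base.append((problem, fix))

    def suggest(self, ticket: str) -> Optional[str]:
        """Return the fix attached to the most similar past problem, if any is known."""
        query = _vector(ticket)
        best = max(self.knowledge_base,
                   key=lambda entry: _cosine(query, _vector(entry[0])),
                   default=None)
        return best[1] if best else None

if __name__ == "__main__":
    desk = ServiceDesk()
    desk.learn("tracking camera feed keeps dropping frames", "restart the capture service")
    desk.learn("player app cannot log in", "reset the authentication token")
    print(desk.suggest("camera feed dropping frames during the match"))  # restart the capture service
```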

Is law regarding AI a good playmaker?

Depending on jurisdiction, the answer varies. The US and China have taken the path of economic freedom, avoiding more regulation so as not to hamper market development.

In the EU, the situation is different. While the EU clearly aspires to be a leader in the AI market, it does not shy away from regulating key aspects of its operation. A draft regulation is currently in the pipeline that addresses this type of software holistically. Of its provisions, two aspects in particular should be noted - the questions of legality and liability.

The legality of the use of AI will be determined on a four-level scale, ranked according to the risk the program presents to human rights. The categories, starting with the lowest risk, are: 1. minimal risk; 2. limited risk; 3. high risk; and 4. unacceptable risk. Assignment to the last of these four grades is expected to result in the AI in question being banned from the internal market, as the sketch below illustrates.
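Purely as an illustration of that four-tier logic - and simplifying the obligations the draft attaches to each tier - the scale could be modelled as follows.

```python
# Illustrative simplification of the draft's four-tier risk scale: only the unacceptable
# tier leads to an outright ban; the other tiers carry obligations not modelled here.
from enum import Enum

class AIRiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

def allowed_on_internal_market(tier: AIRiskTier) -> bool:
    """Only the unacceptable-risk tier results in a ban from the internal market."""
    return tier is not AIRiskTier.UNACCEPTABLE

if __name__ == "__main__":
    print(allowed_on_internal_market(AIRiskTier.HIGH))          # True (subject to obligations)
    print(allowed_on_internal_market(AIRiskTier.UNACCEPTABLE))  # False
```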

Liability for the activities of AI, on the other hand, is a topic that has long been debated in legal circles. The EU is proposing that those who deploy software of this type (the "deployer") should be held responsible for its activities. This means that the team deploying a DDSS medical application in a hospital, or the hospital itself, would be responsible for the application's mistakes, a specific FIFA unit would be responsible for GLT mistakes, and so on.

In view of this, does the new regulation present any economic threat to existing AI solutions already operating in the sports industry?

In light of the current wording of the draft regulation, it seems unlikely. According to the definitions in the proposed act, the risk scale that decides legality would place all the programs discussed above below the unacceptable-risk threshold. Some doubts may apply to AI assisting medical practitioners, but there the key issue is the impact of an AI diagnosis on patient behaviour and decisions, which shifts the question from whether such applications may be used to how they may be used.

On the other hand, the possibility of assigning liability for damage caused by the software - for example, damage to property - is intended to prevent impunity entirely and to encourage companies using AI to exercise reasonable caution. But what if the application was flawed from the beginning, owing to mistakes by the developer or producer?

The draft omits this topic for the time being. However, it is expected to be regulated by the directive on liability for defective products, or rather by its revised form.

Apart from that, it is important to realise that one of the most pressing problems associated with the use of AI has already been regulated: data processing.

Every program - including, of course, AI - consists of two parts: the textual realm, that is, the code containing the application's commands, and the non-textual realm (most often the database), which is everything the application uses to carry out those commands. Given the existence of machine learning and data mining, the handling of data is inherently linked to the risk of data leakage or malicious use.

As far as personal data is concerned, this area is covered mostly (but not only) by the General Data Protection Regulation (GDPR), which in its present form already provides a basis for liability and penalties for AI-related infringements. Non-personal data, on the other hand, has been dealt with relatively unproblematically by adopting, as a principle, its completely free movement within the internal market, as long as intellectual property rights and trade secrets remain intact.

Osborne Clarke comment

The use of AI in the wider sports industry is not a new phenomenon. Its integration into most aspects of this economic sector is undeniable. The opportunities created together with this implementation, both already exploited and still to be exploited, present huge potential for growth - and, of course, earnings.

As lawyers, do we expect the sports-related AI market to be turned upside down by the upcoming regulations? Considering the current draft AI regulation, we do not expect such an outcome. Instead, we expect further development within a more certain regulatory environment - once the regulation is in place, of course. However, given that AI is already being used, it also needs attention from a legal point of view now, and it is worth ensuring that appropriate policies and contracts are in place from the beginning. What does that mean in practice?

The issues surround the way AI is used: fundamental human rights must always be safeguarded, and the way the program responds to the questions it is asked should remain neutral, so that the final decision always rests with the individual.

However, it is important to remember that anything that relates to the draft AI regulation borders on a guessing game. While the draft has been in the legislative process for a long time, a lot can still change before it actually takes effect.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
