AI: Threats and opportunities in the Mobility and Infrastructure sector
Published on 14th Nov 2023
Opportunities abound, but alongside them come novel risks – including cybersecurity risk.
The opportunities are everywhere – sometimes incremental and sometimes game-changing. AI is increasingly used to optimise systems management. As physical infrastructure is increasingly overlaid with connected systems and digital twins, AI can turn potentially overwhelming raw data flows into actionable insights.
"Normal" operations can be modelled, and anomalies can be spotted and flagged up. Systems and processes can be adjusted to ensure maximum efficiency. Traffic networks can be monitored in real time. Predictive maintenance can reduce unplanned breakdowns and reduce interruptions in performance and availability. For critical infrastructure, AI can create enhanced visibility of operation, boosting safety and risk management in granular detail and in real time. Physical networks of all types can be checked for faults using image recognition.
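The kind of anomaly-spotting described above can be illustrated with a deliberately minimal sketch. Real systems model "normal" behaviour with learned models over many signals; here a single stream of sensor readings (the data and threshold are hypothetical) is screened with a robust z-score based on the median absolute deviation, so that one out-of-range spike does not mask itself by inflating the baseline.

```python
# Minimal sketch of anomaly flagging against a modelled "normal" baseline.
# Hypothetical data and threshold; real deployments use learned models
# over many signals, not a single robust z-score.
from statistics import median

def flag_anomalies(readings, threshold=3.5):
    """Return indices of readings whose robust z-score exceeds `threshold`.

    The robust z-score uses the median and the median absolute
    deviation (MAD), which are far less distorted by the anomaly
    itself than the mean and standard deviation would be.
    """
    med = median(readings)
    mad = median(abs(r - med) for r in readings)
    if mad == 0:
        return []  # no spread in the data: nothing to compare against
    return [i for i, r in enumerate(readings)
            if 0.6745 * abs(r - med) / mad > threshold]

# Hypothetical strain-gauge readings: a steady baseline, then one spike.
normal = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
spiked = normal + [25.0]
```

On this toy data, `flag_anomalies(spiked)` flags only the final spike, while the steady baseline passes clean – the same pattern, at vastly greater scale, that lets AI flag faults in physical networks in real time.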
Energy infrastructure deploys AI to tame the ever-increasing complexity of generation and to smooth grid balancing (we reported recently on AI in the energy sector). AI is also augmenting research into areas such as batteries by predicting the avenues of research most likely to succeed.
Regulation of AI
Where AI is used in safety systems for vehicles or infrastructure in the EU and EEA, it is likely to be subject to the "high risk" regulatory regime under the proposed Artificial Intelligence Act, currently being finalised. Compliance with this regime will be detailed, with strong enforcement and fining powers for AI authorities around the EU/EEA. The new regulatory regime will bite at many different levels of the supply chain. Horizon-scanning is recommended, in order to understand its impact and to plan for any cost and resource needed for compliance.
In the UK, there will not be a new regulatory regime, but existing sectoral and economic regulators are tooling up to understand how to use their existing jurisdiction and powers to deal with any adverse impact of AI. The absence of new law will certainly not mean an absence of regulatory activity.
Businesses that develop AI for infrastructure and safety-critical applications should be aligning their development and assurance processes with the likely future regulatory regimes. That is not straightforward, but it is a critical risk-mitigation step.
AI and liability
AI-enabled automation can mean that liability needs to be restructured, to fall in the most appropriate place. This is particularly true of autonomous vehicles, where widespread adoption will need to be built on a foundation of consumer/user trust. The UK's legislative programme announced in the King's Speech of 7 November included an Automated Vehicles Bill. This will implement recommendations from the Law Commission to create a new regulatory framework for self-driving vehicles. Amongst other things, this will place accountability on authorised organisations that develop and operate self-driving vehicles, as defined, and will remove liability for driving the vehicle from users.
AI and cybersecurity
An additional area of focus in an AI-enhanced world should be cybersecurity, particularly for the critical infrastructure that underpins this sector. AI, in essence, is another form of software and needs to be just as secure as any other software in terms of how it operates and the digital connections that it depends on. But it also raises additional risks.
The raw material for AI is data, and problems in the dataset used to train an AI system can flow through into problems in its output. The attack surface for AI systems therefore extends to the data used to train them, and this should not be overlooked. "Data poisoning" is a new type of vulnerability in which AI training data is tampered with. For example, a type of phishing email might be deliberately labelled as legitimate, enabling similar emails to pass through a spam filter powered by AI trained on the poisoned dataset.
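The poisoned spam filter scenario can be made concrete with a toy sketch. The classifier below is deliberately crude (a keyword-overlap test, not a real spam filter) and the training messages are invented, but it shows the mechanism: train once on correctly labelled examples, once on a set where an attacker has relabelled a phishing message as legitimate, and the same phishing-style email is then classified differently.

```python
# Toy illustration of data poisoning. The classifier and dataset are
# hypothetical: a crude keyword-overlap filter, trained twice - once on
# clean labels and once on a "poisoned" set where one phishing message
# has been deliberately relabelled as legitimate ("ham").

def train(examples):
    """Build per-label vocabularies from (text, label) pairs."""
    vocab = {"spam": set(), "ham": set()}
    for text, label in examples:
        vocab[label].update(text.lower().split())
    return vocab

def classify(vocab, text):
    """Label a message by which training vocabulary it overlaps more."""
    words = set(text.lower().split())
    spam_hits = len(words & vocab["spam"])
    ham_hits = len(words & vocab["ham"])
    return "spam" if spam_hits > ham_hits else "ham"

clean = [
    ("verify your account password urgently", "spam"),
    ("click this link to claim your prize", "spam"),
    ("minutes from the board meeting attached", "ham"),
    ("quarterly maintenance schedule update", "ham"),
]
# Poisoned copy: the attacker relabels the first phishing message.
poisoned = [(clean[0][0], "ham")] + clean[1:]

test_msg = "urgently verify your password"
```

Trained on the clean set, the filter catches `test_msg` as spam; trained on the poisoned set, the same message passes as legitimate. Real attacks are subtler, but the principle – corrupt the training data and the model's behaviour follows – is the same.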
Generative AI, as we have reported, is a particular threat due to its ability to generate code for malware, or to write convincing phishing emails in any language.
But just as AI can monitor for problems in physical systems, it can also monitor for unexpected or unusual activity in a digital network. AI is increasingly being used to defend against cybersecurity breaches, and to flag suspicious network activity rapidly. The AI-enabled arms race between the white hats and the black hats is well underway.
Regulatory requirements around information security and cybersecurity tend to be technology-neutral and so extend to AI whether or not originally considered by the legislator. Obligations in data protection laws and in network and information systems regulation require businesses to take "appropriate technical and organisational measures" to address security, taking into account the state of the art. Increasingly, obligations also extend to considering supply chain vulnerabilities.
AI certainly offers great opportunity for physical and digital infrastructure. But it is important also to understand how it generates new threats, particularly on the cybersecurity front. Harnessing its power means not only exploring how it could transform operations, but also how it might necessitate the adaptation of cybersecurity and information security compliance and protocols.