Intelligent transport vs artificial intelligence

Written on 31 Jul 2015

Intelligent Transport Systems (“ITS”) are already with us. Within organisations, management of vehicle fleets can be improved by the provision of on-line information and two-way communication between manager and driver. Public ITS can improve the efficiency and safety of road transport generally, through providing on-line information to drivers in their vehicles and by equipping the vehicle with computerised systems which assist the driver (e.g. car-following and lane-keeping). Electronic motorway tolling and congestion charging to influence driver choices are also options. Electronic systems to improve traffic control and enforcement of traffic regulations are rapidly being deployed – on French motorways, a speeding driver may encounter a message on the next overhead gantry giving the car’s number plate and warning of the infringement.

These more efficient transport systems will in principle reduce the number of vehicle-miles travelled and so reduce air and noise pollution on roads, as well as improving safety since, notoriously, most accidents happen through driver error.

But the “intelligence” in ITS is largely that of the human users: additional information, or an additional communication line, harnesses technology to provide more data for human decision makers, whether drivers, passengers or fleet managers. All of these have been deployed in railway systems for years, and more recently in bus networks. Even lane-following or similar applications tend to be “reactive” – sensing a predefined stimulus and activating a predetermined response – rather than “deliberative”, making active choices about how to respond to the situation, or learning from past experience how to deal with new experiences in the future.
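
The reactive/deliberative distinction drawn above can be illustrated in a few lines of code. This is a minimal sketch, not any real vehicle API: the function names, gain and costs are all invented for illustration. The reactive rule maps a predefined stimulus (lane drift) to a predetermined response; the deliberative controller evaluates candidate actions against a cost model and actively chooses between them.

```python
# Hedged sketch contrasting a "reactive" lane-keeping rule with a
# "deliberative" controller. All names, thresholds and costs are
# illustrative assumptions, not a real system.

def reactive_lane_keep(lateral_offset_m: float) -> float:
    """Predefined stimulus -> predetermined response: steer back in
    proportion to how far the car has drifted from lane centre."""
    GAIN = 0.5  # assumed proportional gain
    return -GAIN * lateral_offset_m  # steering correction

def deliberative_choice(options, cost):
    """Active choice: score each candidate action against a cost
    model and pick the best, rather than firing a fixed rule."""
    return min(options, key=cost)

# Usage: the reactive rule always answers the same way to the same
# drift; the deliberative controller's answer depends on its costs.
correction = reactive_lane_keep(0.4)  # drifted 0.4 m to the right
action = deliberative_choice(
    ["keep_lane", "change_lane", "slow_down"],
    cost=lambda a: {"keep_lane": 2.0, "change_lane": 5.0, "slow_down": 1.0}[a],
)
```

The point of the contrast is that the deliberative controller's behaviour changes when its cost model changes (for instance, as it learns from experience), whereas the reactive rule is fixed at design time.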

Artificially intelligent autonomous vehicles will be able to choose a route, monitor the options and traffic loads as well as both static and moving hazards, and decide a speed, lane and change of plan without reference to a driver. Very few existing autonomous vehicles are capable of all of these.

One wheel at a time?

Of necessity, the Mars Rover ‘Curiosity’ is fully autonomous, though obviously it operates under a very different set of constraints from earthbound transport. In particular, it has no need to consider other moving objects, let alone to prioritise between actions to avoid colliding with them. But fully autonomous trucks do already operate within the defined – and somewhat Mars-like – environment of large mines. Again within a defined environment, autonomous warehousing vehicles now being deployed are able to sense and decide how to avoid obstacles on a planned route from shelf pick to loading bay. Systems are available which comprise an overall control system and multiple vehicles, and which can optimise the allocation of tasks and devise optimum routes among changing obstacles.

The extension of such systems to defined outdoor environments such as yards, ports and airports is underway; moving further, to open ocean shipping operations (with a pilot taking control back on approaching crowded areas), might not be a major hurdle if the cost/benefit balance justified it. But once autonomous vehicles are in a less strictly controlled setting, additional legal issues arise. Inside a warehouse, a control system failure could lead to a vehicle hitting another or a stack, and potentially result in damage to itself and the building, but the damage is unlikely to go beyond that. “Shut down and stop” may be a legitimate “fail safe” option in those circumstances. When the vehicle is carrying a larger value of goods in a less predictable setting, however, the system needs to make a sophisticated assessment of the nature of the failure and its possible consequences – such as widespread environmental pollution – and shut down in a way designed to minimise that risk.

A machine which can objectively assess and decide without fear for its own safety may well make decisions more conducive to the greater good than a human whose priority is inevitably to preserve his or her own life. However, setting the weighting to be given to different factors in the decision process, and the legal consequences of prioritising one outcome – reduced pollution, say – over another – successful delivery of part of the cargo to the nearest accessible port – will need very careful consideration. Allowing an artificial intelligence (“AI”) to set those weightings for itself as it learns from experience (its own, and others’) of previous journeys may mean any analysis of liability by human stakeholders itself becomes artificial. Insurance on the basis that outcomes are classed as Acts of God could be the only commercial approach.
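
One way to picture the weighting problem described above is as a scoring exercise: each candidate fail-safe action is scored against weighted harms, and the least-harmful action chosen. The sketch below is purely illustrative – the outcome names, weights and harm estimates are all invented assumptions, and a real system's liability would turn precisely on who set those numbers and how.

```python
# Illustrative sketch of weighted outcome selection for a fail-safe
# decision. All weights, outcome names and harm estimates are
# assumptions made up for this example.

WEIGHTS = {"pollution": 10.0, "cargo_loss": 1.0, "delay": 0.1}  # assumed priorities

# Estimated severity (0.0-1.0) of each harm under each candidate action.
OUTCOMES = {
    "shut_down_in_place":     {"pollution": 0.8, "cargo_loss": 1.0, "delay": 1.0},
    "divert_to_nearest_port": {"pollution": 0.1, "cargo_loss": 0.4, "delay": 0.6},
}

def harm(outcome: dict) -> float:
    """Total weighted harm of one candidate action."""
    return sum(WEIGHTS[k] * v for k, v in outcome.items())

# Choose the action with the lowest weighted harm.
best = min(OUTCOMES, key=lambda name: harm(OUTCOMES[name]))
```

With these assumed numbers, diverting scores far lower than shutting down in place, because pollution dominates the weighting. Change the pollution weight and the answer flips – which is exactly why allowing an AI to adjust such weightings for itself makes the subsequent liability analysis so difficult.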

Questions also need to be decided as to how to classify the product of an artificial intelligence: should copyright arise at all in new software for optimising vehicle-vehicle interactions, for instance, and if so would it belong to the owner of the machine which devised it, or the business which produced that machine and equipped it with the abilities which led to the new code? And what datasets should AIs be entitled to access, in order to learn: all data arising from operation of a given manufacturer’s vehicles, for instance, wherever and by whomever owned? Or all data from all vehicles capable of collecting and sharing it?

Control systems providing some deliberative functions, such as route optimisation algorithms, are already in use with human-operated vehicles. The next step, driver assistance systems which can operate the vehicle autonomously in the relatively predictable environment of a long motorway section, is likely to arrive in the medium term, since reducing driver hours at the wheel will enable longer effective shifts if the driver can rest (or not even accompany the vehicle) during such sections. As an interim step, truck convoys operated solely by the driver of the lead vehicle have been road tested. As such solutions become more prevalent, the opportunity for driver errors will reduce, and transport will become both safer and more efficient in the use of resources of all kinds – including human operators. However, employers will have to undertake a new level of responsibility for maintaining drivers’ skill levels when drivers in fact control a vehicle for only a small fraction of their working time.
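
The route optimisation mentioned above is, at its core, a shortest-path problem. As a concrete illustration, here is Dijkstra's algorithm over a toy road graph; the junction names and travel times are invented for the example, not drawn from any real network.

```python
# Minimal sketch of route optimisation: Dijkstra's shortest-path
# algorithm over a toy road graph. Junction names and travel times
# (minutes) are invented for illustration.
import heapq

ROADS = {
    "depot": {"A": 4, "B": 2},
    "A":     {"B": 5, "dest": 10},
    "B":     {"A": 8, "dest": 7},
    "dest":  {},
}

def shortest_time(graph, start, goal):
    """Return the minimum travel time from start to goal, or None
    if the goal is unreachable."""
    queue = [(0, start)]       # (elapsed time, junction)
    best = {start: 0}          # best known time to each junction
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > best.get(node, float("inf")):
            continue           # stale queue entry; skip it
        for nxt, cost in graph[node].items():
            nt = t + cost
            if nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                heapq.heappush(queue, (nt, nxt))
    return None
```

Here depot → B → dest (9 minutes) beats depot → A → dest (14 minutes). Production systems extend this basic idea with live traffic data, so the edge costs change while the vehicle is en route – one of the “deliberative” capabilities the article distinguishes from fixed reactive rules.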

Much more significant advances still need to be made to have autonomous vehicles operating in the complex, unconstrained environment of city streets or small villages, with pedestrians, cyclists, dogs (or should that be foxes?), thrown or falling objects and a plethora of unpredictable signals – flashing shop signs, reflections from multiple fast-moving surfaces – all having to be processed, recognised and if necessary reacted to. But it will become reality soon. Interim stages such as designated routes where autonomous vehicles are permitted may give the technology the opportunity to improve stage by stage as the complexity of the operating environment increases. An offence of operating an autonomous vehicle outside the permitted environment may be needed, just as the law is playing catch-up with the operators of drone aircraft.

Eventually, appropriate outcomes for autonomous vehicles facing unpredictable and ethical quandaries will have to be assessed. If a child steps into its path, should an autonomous car continue and hit the child, swerve off a cliff (and kill its passengers), or risk injuring a woman pushing a pram on the opposite pavement? Such scenarios are rare in any driver’s experience, and so offer the least opportunity for any AI to learn in practice; but, like a human driver, an AI will still need to make some decision on facing such a situation for the first time. A human driver in an impossible position evokes pity amidst the blame; but a machine’s inhumanity precludes that avenue for forgiveness. Nevertheless, the law cannot impose a prohibitive level of liability in response to public outrage without foregoing the public interest in the undoubted benefits autonomous vehicles will bring. A new social balance will have to be forged.