Flying Cars
On whom does liability fall?
Whenever the future is represented in media, teleportation and flying machines are ubiquitous. While the transfer of matter from one point to another without traversing the physical space between them is still some way away, the 'flying car' is ripe to take off thanks to recent technological advancements.
"Mark my word: a combination airplane and motorcar is coming. You may smile, but it will come." - Henry Ford, 1940.
What are Flying Cars?
Flying cars, also known as roadable aircraft, are vastly different from what was portrayed in Back to the Future. All prototypes currently in development are electric vertical take-off and landing (VTOL) aircraft that need no runway - surprisingly similar to the 'Spinners' in Blade Runner.
Technology
The imminent roll-out of fifth-generation wireless systems (5G) and of Edge and Fog Computing by 2020 will revolutionise data transfer speeds and greatly assist the Internet of Things (IoT) - the term coined to describe how almost anything electric will soon be connected to the internet.
The expectation is that the IoT will use information collected from positioning systems on VTOLs to ensure that collisions and other accidents are avoided. The long-term idea is that VTOLs will be in constant communication with each other as well as with any airborne infrastructure. This is made possible by Edge and Fog computing, which can process the collected data in 'real time' rather than sending it to the Cloud - vital for the safety and success of VTOLs.
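To make this concrete, below is a minimal sketch of the kind of check an edge node might run locally on position broadcasts from nearby VTOLs. All names, units and thresholds here are hypothetical illustrations, not any real VTOL system or API.

```python
# Hypothetical sketch: an edge node checks broadcast positions from
# nearby VTOLs and flags predicted conflicts without a cloud round-trip.
import math
from dataclasses import dataclass

@dataclass
class VTOLState:
    ident: str
    x: float   # metres, local grid
    y: float
    vx: float  # metres/second
    vy: float

def conflict_within(a: VTOLState, b: VTOLState,
                    horizon_s: float = 30.0,
                    min_sep_m: float = 150.0) -> bool:
    """Predict whether two aircraft come within min_sep_m over horizon_s seconds."""
    for t in range(int(horizon_s) + 1):  # coarse one-second steps
        dx = (a.x + a.vx * t) - (b.x + b.vx * t)
        dy = (a.y + a.vy * t) - (b.y + b.vy * t)
        if math.hypot(dx, dy) < min_sep_m:
            return True
    return False

a = VTOLState("VT-001", 0, 0, 20, 0)
b = VTOLState("VT-002", 600, 0, -20, 0)
print(conflict_within(a, b))  # True: the pair closes head-on at 40 m/s
```

Because the check runs at the edge, a predicted conflict can be flagged within a single broadcast cycle rather than after a round trip to the cloud.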
Key players
According to technology news site The Verge, at least 19 companies are developing flying cars, many of which have already conducted their first test flight - highlighting the scramble to establish an early foothold in this market. Several key players have viable VTOLs and are chasing the first-mover advantage, including Kitty Hawk (backed by Google co-founder Larry Page), Airbus (Vahana) and Terrafugia (TF-X). These aircraft are expected to be ready for public use as early as 2026 - in good time for the 2028 Los Angeles Olympic Games. The U.S. and the Middle East (Dubai in particular) are leading the way in VTOL testing, and it is predicted that these will be among the first markets to introduce such aircraft, with regulators eager to engage on novel approaches to regulation.
It is highly unlikely that VTOLs will be rolled out to the masses for consumer control, due to the likelihood of human error - putting flying cars in consumers' hands would make falling metal machines a constant threat. Industry commentators predict that we are more likely to see unmanned VTOLs, both for safety reasons and to avoid the delay of training pilots. For the public to have confidence in these aircraft, the aviation industry must ensure that safety is paramount and that adequate laws are in place, in particular regarding liability.
Liability for Artificial Intelligence (AI)
As companies in the aviation industry continue to develop machine learning as the basis for autonomous systems, AI is expected to be the next big disruptor. AI is defined by Merriam-Webster as "the capability of a machine to imitate intelligent human behaviour": anything that mimics human intelligence and can take different actions in similar situations based on what it has 'learned'. As VTOLs and associated robots develop cognitive abilities that enable them to make decisions independent of their programmer, the question of legal responsibility and liability becomes paramount.
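To illustrate why 'learned' behaviour muddies responsibility, consider this toy sketch (assuming the scikit-learn library is available; all data is invented): the programmer never writes an explicit avoid/continue rule, yet the system produces one from training examples.

```python
# Toy illustration (hypothetical data): the programmer codes no explicit
# "avoid/continue" rule; the decision emerges from training examples.
from sklearn.tree import DecisionTreeClassifier

# Features: [distance_to_obstacle_m, closing_speed_mps]
training_inputs = [[120, 5], [30, 20], [200, 2], [15, 25], [80, 10]]
training_actions = ["continue", "avoid", "continue", "avoid", "avoid"]

model = DecisionTreeClassifier(random_state=0).fit(training_inputs, training_actions)

# In a novel situation, the learned model decides - not the programmer.
print(model.predict([[50, 12]]))  # e.g. ['avoid']
```

If such a learned decision later proves wrong, it is not obvious whether the fault traces to the programmer, the training data or the operator - which is precisely the liability gap discussed below.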
By way of example, imagine that a worker clocks off and orders a VTOL on a digital platform to take them home. Now imagine that during the journey the aircraft crashes and injures that passenger. Who would be at fault here: the platform provider? The aircraft manufacturer or owner? What if a third party developed the code for the AI system? This is where it gets interesting.
Civil Liability
Contract Law
In the example above, the first place the injured individual would look for recourse would be the contract for services entered into with the VTOL company. As there are clearly a number of possibilities as to who that counterparty might be, the industry will need to develop a model for determining this.
Once the contracting parties have been settled, attention turns to any limitation of liability provisions (likely to be included in the contract) and how far they are permitted under the applicable law. VTOL manufacturers could seek to limit their liability using these provisions in a number of ways, by: (i) excluding liability for certain types of loss, or (ii) putting a financial cap on liability for such losses. These are sensible ways of balancing risk between parties to a commercial contract.
However, this does not grant companies carte blanche to exclude all liability, and they should be aware that the law can render certain terms wholly ineffective: for example, it is not possible to exclude liability for death or personal injury resulting from negligence, and any attempt to do so would risk rendering the entire limitation of liability clause unenforceable. This would leave a company exposed to potentially unlimited liability. Attempts to exclude or limit liability for other implied terms, even where not outright unlawful, may also be struck down if deemed unreasonable. In the example above, the expectation that the aircraft be fit for purpose would almost certainly be considered reasonable.
Manufacturers of VTOLs could also seek to rely on contractual indemnities - promises to pay money on the happening of a specified event - as another way to limit their liability for aircraft accidents. However, companies should bear in mind that in many markets indemnities are routinely deemed void if they are contrary to public policy, and some indemnities, although not illegal, may be held invalid if found to be unfair. VTOL manufacturers should also look to flow liability down to third-party technology companies where those companies provided the AI or programming and are fully or partly responsible for an accident.
Tort Law (Negligence)
In addition to contract law, an injured party could look to other areas of law for recourse. Tort - the law of civil wrongs - and in particular negligence would be one avenue to pursue. To prove negligence in the UK (as in many other markets), the claimant must establish four elements: (i) a duty of care, (ii) breach of that duty, (iii) damage, and (iv) causation.
As VTOLs will be unmanned aircraft, there will be no pilot to blame in the event of an accident, so fault must be found to lie elsewhere. The duty of care in the example above would likely fall on the aircraft manufacturer, as the platform provider (if a separate company) would no doubt try to absolve itself of liability by claiming only to provide the platform through which the aircraft operates (as most ride-sharing platforms currently do).
Once the duty of care has been established, focus turns to the second and third limbs of the test: (ii) breach of that duty, and (iii) damage. These limbs are largely factual and not overly cumbersome to prove. For limb (iv), causation, the test is whether the kind of damage suffered was reasonably foreseeable by the defendant at the time of the breach of duty. As AI develops, 'learns' and acts in ways unimagined by its programmer, reasonable foreseeability may become impossible to prove. Under the law as it stands, if someone were injured by AI whose actions were not reasonably foreseeable, there would be no recourse. Given this difficulty, imposing strict liability (i.e. liability without having to prove negligence) on the manufacturer may be the only way to ensure accountability.
Criminal liability for AI
A VTOL itself can never be liable under criminal law, as only legal persons are subject to rights, responsibilities and legal liability. In the UK (as in most other countries), criminal liability requires two elements: (i) mens rea, i.e. the mental state or intention, and (ii) actus reus, i.e. the act or omission. The first is the stumbling block: a robot cannot possibly form the mens rea to commit a crime.
Gabriel Hallevy, writing for the Akron Intellectual Property Journal, recounts an incident from 1981 in which a 37-year-old Japanese employee of a motorcycle factory was killed by an AI robot working near him. The robot erroneously identified the employee as a threat to its mission and calculated that the most efficient way to eliminate this threat was to push him into an adjacent operating machine, killing him instantly. Who should be held liable for this killing?
Hallevy discusses three models for the possible imposition of criminal liability for the actions of AI entities:
- Perpetrator-via-another: if an innocent agent (e.g. a VTOL) is instructed by another person (e.g. a software designer) to commit an offence, the instructor is held criminally liable;
- Natural-probable-consequence: a programmer is held liable if they knew, or should have known, that a criminal offence was a natural, probable consequence of the use of their program; or
- Direct liability: both the mental and physical elements of an offence are attributed to the robot itself, with strict liability supplying the mental element given the difficulties noted above. This is the model most likely to succeed in apportioning liability to AI for VTOL accidents.
EU Legislation
The EU does not yet have specific legislation on robotics or AI; as it stands, robots are covered by a legislative patchwork of directives. The European Commission has recognised the need for legal certainty as to where liability should fall among the various market players in this area, and has announced that it will issue guidance by mid-2019 in light of this dearth of regulation.
In May 2017, the European Commission published a paper announcing a series of regulatory and policy initiatives in response to a European Parliament resolution on civil law rules on robotics. The proposed rules include establishing ethical standards for the development of AI and introducing an insurance scheme to cover liability for accidents involving AI, including driverless cars. If implemented, one can only assume that such a scheme will be extended to VTOLs further down the line. In April 2018, 25 European countries signed a declaration of cooperation on AI to address the social, economic, ethical and legal questions raised by this new technology, evidencing the interest of European policymakers in this area.
Conclusion
As Hallevy's example of the motorcycle-factory fatality nearly four decades ago highlights, liability for AI and the legal questions it raises are not novel issues and should have been on regulators' radar for some time. Apportioning liability will only become more difficult as AI continues to evolve and become more autonomous. One thing is certain: this technology is moving at a ferocious speed, and the law needs an injection of pace to keep up.
Tech Note
'Edge' refers to the computing infrastructure that sits close to the sources of data. It is a decentralised mode of data processing that is faster than sending information to the cloud. 'Fog' refers to the network connections between edge devices and the cloud: fog encompasses edge computing, as well as the network needed to get processed data to its final destination.
Both types of computing share the same core objectives:
- reduce the amount of data sent to the cloud;
- provide real-time data analysis (decrease network latency); and
- improve system response time.
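A minimal sketch of the pattern described in this note, with invented readings and thresholds: data is analysed on the edge device and only a compact summary is sent onward, serving all three objectives above.

```python
# Minimal edge-computing sketch (hypothetical sensor and thresholds):
# process raw readings locally and forward only a compact summary.
from statistics import mean

def process_at_edge(raw_readings, alert_threshold=100.0):
    """Analyse data locally; return only what the cloud needs."""
    alerts = [r for r in raw_readings if r > alert_threshold]  # real-time check
    summary = {
        "count": len(raw_readings),
        "mean": mean(raw_readings),
        "alerts": len(alerts),
    }
    return summary  # a few bytes instead of the full raw stream

# 10,000 raw samples stay on the edge device; only the summary is uplinked.
readings = [50.0 + (i % 120) for i in range(10_000)]
print(process_at_edge(readings))
```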
This article was written by Connor Shorten.