Smart products, artificial intelligence and manufacturers' liability
Businesses have to assess whether or not they are sufficiently protected against liability risks arising from smart products
'Smart products' are tangible objects characterised by: "increasing level of complexity and variety of ecosystems, actors and value chains; autonomy in decision making and actuating; generation, processing and reliance of big volumes of data; and openness to software extensions, updates and patches after the products have been put into circulation". From self-driving cars to smart factories incorporating machine-to-machine communication and smart supply chains, smart products are able to act autonomously, allowing for economical and flexible production of goods.
The increasing degree of autonomy facilitated by AI has many advantages but also gives rise to previously unknown risks. Autonomous and interconnected products are becoming increasingly hard to control and can make independent and sometimes unforeseeable decisions when interacting with their environment. The unforeseeability of AI is very much a feature rather than a bug.
Smart products and AI are likely to cause a paradigm shift in the rules of law and traditional liability regimes, which typically attach liability to a (legal or natural) person rather than to an autonomous system.
Current and future rules of law and the risks from autonomous products
It has been questioned whether the current liability regime adequately covers all aspects of smart products. While there seems to be a consensus that established legal principles are generally sufficient to address the risks currently posed by smart products, European (and German) legislators are carefully considering developments that may give rise to changes in the rules of law.
Smart products and AI under the current liability regime
The current European (and German) liability regime differentiates between contractual and non-contractual liability. While contractual liability typically relates to a warranty for defects in a product, non-contractual liability is governed by tort law in the form of product liability and manufacturer's liability.
For contractual warranty claims, smart products put the warranty laws to the test, as liability arises from defects in the product at the time of the passing of risk. Given that smart products are able to adapt to their environment through machine learning or over-the-air updates, implementing new features after the passing of risk, this may no longer be an appropriate stand-alone solution. If a smart product shows undesired (and unforeseen) behaviour after the passing of risk, it is an arduous task to prove that the product was already defective at the transfer of risk. Notwithstanding this, the parties can address these contractual risks to a certain extent. An issue remains, for example, for German manufacturers purchasing software or network services from U.S.-based vendors: the extensive limitations of liability permitted in U.S. jurisdictions are not mirrored in German law, which may result in a liability gap for German manufacturers with regard to supply chain recourse.
The same issue also arises in respect of non-contractual tort liability, for example under the German Product Liability Act. The relevant point in time for determining whether or not a product is defective is when it is put into circulation. For smart products that can self-learn or be altered by software updates adding new functions, this raises the question of whether the product was already defective when it was put into circulation.
Additionally, there are significant problems regarding the burden of proof, as smart products combine hardware and software with various interfaces and software that may update itself through learning. For the customer, this complexity results in a lack of transparency when trying to determine the root cause of an alleged defect. Moreover, it is questionable whether an unintended autonomous action of a smart product would qualify as a defect at all.
Addressing legal risks from smart products and AI: a look into the future
Market players are already reacting to the legal uncertainty and liability risks caused by smart products and AI. Some manufacturers, in an attempt to address the concerns of their customers, have promised special guarantees covering the autonomous product risk. It is unlikely that this approach will be universally followed in light of the unfathomable liabilities that may arise.
European and German legislators currently attach the responsibility arising from smart products and AI to the (natural or legal) person creating and controlling the respective risk, i.e. typically the user of a smart product as well as the manufacturer. For manufacturers and users alike, it is thus certainly worthwhile to be proactive in identifying and addressing risks by defining areas of responsibility in relation to defects.
With the increasing autonomy of smart products and AI, the duty to maintain product safety may evolve. For example, the duty to design a product so that it poses no unforeseeable risk may require the manufacturer of a smart product to limit the range of autonomy developed through machine learning to a socially acceptable level. If these principles are not borne in mind during the design and development of the product, it may be regarded as defective, and the manufacturer may be liable for an undesired function of the smart product. In addition, due to the increased connectivity of products, the product monitoring duty may require monitoring systems that collect, evaluate and respond efficiently to data obtained from the market.
While not imminent, both European and German legislators have considered the introduction of an independent 'e-person' status for smart products and AI systems. However, numerous questions and issues remain unanswered, for example whether there should be a general registration of autonomous/AI systems, whether insurance should be mandatory and mutual liability pools established where no insurance coverage is available, and where the ethical boundaries of using autonomous products and AI lie. At least for the time being, the introduction of e-person status for autonomous systems appears to remain a proverbial "dream of the future".
Summary and outlook
While the discussion of smart products and AI is certainly prominent and has been taken up by legislators in the EU and Germany, legislative action does not appear to be imminent. Rather, the current liability regime has been deemed sufficient to address the risks posed by smart products for the time being.
Both European and German legislators have already noted that a revision of the existing liability regimes may be required in the future to ensure their continued effectiveness. As noted by the EU Commission in relation to the Product Liability Directive: "2018 is not 1985. The EU and its rules on product safety have evolved, as have the economy and technologies. Many products available today have characteristics that were considered science fiction in the 1980s."
In the meantime, businesses will have to assess whether they are sufficiently protected against liability risks arising from smart products, be it as users or manufacturers. This includes not only adequate identification and assessment of the relevant risks, but also an appropriate response to manage and mitigate the respective exposure, for example by means of contractual arrangements with suppliers and/or customers or by taking out appropriate insurance coverage.
Key Take-away Points
- Smart products put the current liability regime to the test.
- European and German legislators recognise that the product liability laws may require a revision in respect of smart products and AI.
- Market participants should act to analyse and monitor their risk and take appropriate counter-measures to limit liability risks.