Generative AI: here to harm or help?
The development and adoption of AI systems have reached a number of significant milestones in recent years, the latest being in the field of Generative AI.
Powered by unprecedented access to training data and advances in cutting-edge large language models, Generative AI's practical applications for creating human-like content are broad and diverse: from the creative (songs, art, videos) to the commercial (bespoke agreements, efficient fact-finding). It has also democratised access to AI for many, with ChatGPT, the AI chatbot, attracting over one million users within five days of launch. The speed at which Generative AI has gained traction in both businesses and our daily lives has re-ignited the debate around the potential for harm associated with highly automated systems deployed at scale.
Bias and privacy concerns
Like other AI systems, Generative AI processes large amounts of data to create 'accurate' outputs, often learning from data patterns without human direction. The system is dependent on the quality of its training data and is susceptible to a range of biases, which can be introduced at any stage of the AI lifecycle, with particular vulnerabilities in the early stages of data collection, preparation and model building.
Bias in AI typically refers to divergent results across different population groups that may be affected by the AI system's outputs. It can be caused by a myriad of factors: flawed configuration, low-quality or inadequately representative training data, or even human bias in interpreting the results. While these risks are not new to AI, Generative AI's ready access to public data, broad applicability and wide availability all magnify the risk of bias, particularly as everyday users may not be able to discern fact from fiction.
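To make 'divergent results across different population groups' concrete, the sketch below illustrates one simple check sometimes used in bias audits: comparing the rate of favourable outcomes each group receives. This is a minimal illustration only; the data, group labels and the disparity measure shown are invented for the example, and real audits rely on richer metrics and real model outputs.

```python
# Illustrative sketch of a simple bias check: comparing the share of
# favourable outcomes across population groups (a rough proxy for
# "demographic parity"). All data below is hypothetical.

def selection_rates(outcomes, groups):
    """Return the share of favourable (1) outcomes per group."""
    rates = {}
    for group in set(groups):
        group_outcomes = [o for o, g in zip(outcomes, groups) if g == group]
        rates[group] = sum(group_outcomes) / len(group_outcomes)
    return rates

# Hypothetical model decisions (1 = favourable) and group labels.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(outcomes, groups)
disparity = max(rates.values()) - min(rates.values())

print(rates)                          # e.g. {'A': 0.75, 'B': 0.25}
print(f"disparity: {disparity:.2f}")  # a large gap warrants investigation
```

A gap of this size does not by itself prove unfairness, but it is the kind of divergent result that would prompt a closer look at the training data and model configuration.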
The unprecedented amount of data collected, stored and processed by these systems also raises a number of data privacy concerns, as illustrated by Italy's scrutiny of AI chatbots: its data privacy regulator banned Replika and, subsequently, ChatGPT. These investigations highlighted children's vulnerability, impressionability and potentially limited understanding that they are interacting with AI, prompting the regulator to expressly call out ChatGPT's lack of age verification as exposing minors "to absolutely unsuitable answers".
Regulatory landscape
As the use of Generative AI increases, regulation to address associated risks is more important than ever. The global legal landscape in this space is rapidly evolving and regulators and governments are working at pace to ensure they don't fall behind.
The EU's Artificial Intelligence Act (the 'AI Act'), expected to be finalised in the coming months, is the first comprehensive legal framework for AI. Legislation in China is expected to follow: the Cyberspace Administration of China has released for public comment a draft law that would require all new AI products developed in China to undergo a "security assessment".
Unlike the EU, the UK Government recently indicated in its AI white paper that, at this time, it would not propose AI-specific legislation or create an AI regulator. Instead, the UK plans to rely on individual sector regulators to apply five principles: safety, transparency, fairness, accountability and contestability. More recently, the ICO has updated its guidance on AI and data protection to suggest bias mitigation methods at different stages of the AI lifecycle, and has published a blog on the use of Generative AI.
While the landscape evolves, some existing laws, such as the GDPR, already play a role in regulating AI. For example, Article 22 gives individuals mechanisms to contest or question decisions based solely on automated processing that produce legal or similarly significant effects, providing a redress mechanism against potential harms caused by data biases in certain cases. Other laws relating to intellectual property, content moderation, human rights and product safety also play an important role in ensuring that unintended consequences associated with Generative AI can be avoided.
Organisations creating, developing or using rapidly advancing technologies such as Generative AI will need to carefully balance their approach to innovation and growth with the need to set up critical legal compliance and data governance guardrails.
This article first appeared on TechUK as part of its #AIWeek2023.