Business and Human R(AI)ghts: The Guiding Principles and Code of Conduct for Organisations Developing Advanced AI Systems
The Principles and Code incorporate international BHR standards and provide guidance on risks that might arise when developing AI systems, as well as steps to advance responsible AI stewardship
On 30 October 2023, shortly before the AI Safety Summit, the G7 released a set of International Guiding Principles for Organisations Developing Advanced AI Systems (the Principles) and an International Code of Conduct for Organisations Developing Advanced AI Systems based on those Principles (the Code).
The Principles and Code are directed at organisations developing advanced AI systems (AI systems). They explicitly build upon the OECD AI Principles, a set of high-level, values-based principles developed by the OECD Committee on Digital Economy Policy and adopted in the OECD's 2019 Recommendation on Artificial Intelligence, which are designed to foster the development of trustworthy AI.
The preambles to the Principles and Code expressly state that private sector activities should be in line with international business and human rights (BHR) standards: specifically, the UN Guiding Principles on Business and Human Rights (UNGPs) and the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct (OECD MNE Guidelines).
In this blog, we provide an overview of the Principles and Code against the backdrop of organisations' existing responsibilities to identify, prevent, mitigate and remedy adverse human rights impacts arising in the context of AI development.
What do the Principles and Code say?
There are eleven Principles, which are drawn from the five OECD AI Principles and focus on key themes around inclusive growth and sustainable development, transparency, security, safety, and accountability.
The Principles are:
- Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
- Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
- Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability.
- Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.
- Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures.
- Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
- Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
- Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
- Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
- Advance the development of and, where appropriate, adoption of international technical standards.
- Implement appropriate data input measures and protections for personal data and intellectual property.
The Code puts flesh on the bones of the Principles by setting out high-level practical recommendations that businesses can follow. This provides a much-needed steer for organisations that might be new to considering these types of issues, while remaining general enough to suit businesses of different sizes operating in different jurisdictions.
The Code and Principles address both the risks of human rights harms that organisations should avoid (i.e. what they should do to prevent those risks, in line with the principle of "do no harm") and actions geared towards the responsible stewardship of AI (i.e. what they should do to foster a safe, trustworthy environment).
What are the risks?
The Principles identify specific risks that may arise within a business's operations or value chains during the lifecycle of advanced AI systems, and the Code examines these in more detail. Examples include risks of models "self-replicating" or training other models (addressed in Principle 1), as well as security risks relating to information security and cyber/physical access (Principles 1, 6 and 10).
In addition to identifying risks that arise specifically in the context of business operations, the Principles and Code also identify risks that might have a wider societal impact. These include threats to democratic values and human rights, such as the facilitation of disinformation and harms to privacy, as well as risks to individuals and communities, such as the ways in which advanced AI systems or models can give rise to harmful bias and discrimination.
What are the responsible stewardship actions?
The Principles and Code contain a number of recommended actions designed to encourage inclusive growth, sustainable development and well-being as well as human-centred values and fairness.
For example, Principle 9 recommends that, in support of progress on the UN Sustainable Development Goals, organisations "prioritise the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education". The Code expands on this, explaining that it should include working with civil society and community groups to identify priority challenges and to develop innovative solutions to them.
While the risk-based elements of the Principles and Code are gleaned from the UNGPs and OECD MNE Guidelines, the responsible stewardship elements are derived from the OECD AI Principles. In addition to advocating risk-related approaches in respect of transparency, security and accountability, the OECD AI Principles are also geared towards the promotion of responsible stewardship and the development of trustworthy and human-centric AI systems.
Recap: What are organisations' existing responsibilities with respect to addressing human rights-related harms, and how do the Principles and Code build upon these?
As noted, the Code and Principles address both the risks of human rights harms that organisations should avoid and actions geared towards the responsible stewardship of AI.
In relation to the former, since the introduction of the UNGPs and OECD MNE Guidelines in 2011, all businesses – including those operating in the AI space – have been expected to respect human rights and to take steps to identify, mitigate and remediate human rights-related harms occurring in their businesses and supply chains, including by carrying out risk-based due diligence.
Whilst neither instrument created new legally enforceable obligations for businesses, those standards are increasingly embedded in legislation and regulation and are thus developing from "soft" into "hard" law. This is particularly so within the EU: see, for instance, our briefing on Navigating a Changing Legal Landscape in 2022, and our blog on how EU sustainability-focused regulation increasingly refers to the OECD MNE Guidelines.
The linkage of the Code to the UNGPs and OECD MNE Guidelines means that businesses can draw upon a now tried-and-tested risk-based approach to addressing potential adverse impacts arising from the development of AI systems – and, in so doing, businesses subject to current or future laws and regulations requiring such an approach will help to ensure that they are compliant.
What is a "risk-based" approach?
Under the OECD Due Diligence Guidance for Responsible Business Conduct (which is referenced in the OECD MNE Guidelines), a "risk-based approach" is one where businesses prioritise addressing the adverse impacts that are likely to be most significant, in terms of severity and likelihood of harm, and tailor their measures to those specific risks. This also means that any such measures must be "commensurate" – i.e. proportionate – to the severity and likelihood of that harm.
Certain risks will be more severe or likely depending on factors like the size of the business, the specific AI system being developed and the jurisdictions in which that business operates. By adopting this approach, businesses in the AI space can focus their resources on addressing the risks to human rights associated with AI development that matter the most, while also harnessing the beneficial potential of that technology.
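To make the prioritisation logic concrete, the sketch below (a hypothetical illustration in Python; the example risks, the 1 to 5 scoring scale and the simple severity-times-likelihood score are our assumptions, not part of the OECD Guidance) shows how adverse impacts might be ranked so that mitigation effort is directed at the most significant ones first.

```python
# Hypothetical sketch of risk-based prioritisation: rank adverse impacts by
# severity and likelihood, then address the most significant ones first.
# The risks listed and the 1-5 scoring scale are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    severity: int    # 1 (minor) to 5 (severe), reflecting scale, scope and irremediability of harm
    likelihood: int  # 1 (rare) to 5 (almost certain)

    @property
    def significance(self) -> int:
        # A simple severity x likelihood score; real assessments weigh these factors qualitatively.
        return self.severity * self.likelihood


risks = [
    Risk("Harmful bias in model outputs affecting access to essential services", 5, 4),
    Risk("Facilitation of disinformation at scale", 4, 3),
    Risk("Leakage of personal data from training datasets", 4, 2),
    Risk("Insider exfiltration of unreleased model weights", 3, 2),
]

# Prioritise the most significant impacts so that mitigation measures can be
# "commensurate" with the severity and likelihood of the harm.
for risk in sorted(risks, key=lambda r: r.significance, reverse=True):
    print(f"{risk.significance:>2}  {risk.description}")
```

In practice, factors such as the size of the business, the AI system concerned and the jurisdictions involved would feed into any such scoring, as noted above.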
The Principles and Code are helpful when identifying the specific human rights and environmental risks that arise in the course of AI development. The Code in particular explores certain examples set out in the Principles in greater depth.
Companies should also be alive to further updates to the Principles and Code: the preambles describe them as "living document(s)" that are to be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations.
Next steps and looking beyond the Hiroshima AI Process
In its Declaration of 30 October 2023, the G7 instructed relevant ministers to accelerate the process towards developing the Hiroshima AI Process Comprehensive Framework by the end of 2023 and to conduct multi-stakeholder outreach and consultation. The G7 also instructed ministers to develop a work plan by the end of 2023 for further advancing the Hiroshima AI Process, so we expect to see the shape of that process emerge in the near future. We will continue to monitor these developments; in the meantime, for key takeaways on the UK AI Safety Summit and Fringe, read our post here.
Further, AI regulation is developing around the globe – including in the EU, with the highly anticipated Artificial Intelligence Act.
The protection of fundamental rights is deeply embedded in the Artificial Intelligence Act proposal, which as a general rule follows a risk-based approach. In light of what is currently reported and anticipated, this would take different forms, including:
- the prohibition of certain AI practices deemed particularly harmful, e.g. because they exploit human vulnerabilities, manipulate people, are particularly intrusive in terms of privacy or involve social scoring leading to disproportionate or unjustified detrimental treatment;
- a broad set of requirements to regulate 'high-risk AI systems', which may include certain AI systems affecting employment and workers' rights or access to essential private and public services, or involving biometric identification, categorisation and emotion recognition (to the extent not prohibited), amongst others;
- provisions imposing such things as risk management systems and human oversight for high-risk AI systems, and rules to address risks of bias and discrimination;
- a requirement to carry out a prior fundamental rights impact assessment in certain cases;
- rules for 'general-purpose AI models', with more specific requirements for those deemed to pose systemic risks; and more.
Work is underway to finalise the landmark Artificial Intelligence Act, following a critical political agreement reached by the EU institutions on 8 December 2023. It remains to be seen exactly what the definitive text will contain. That said, we do not expect the foundations of the proposal to change.
For further information on the interface between ESG and AI, please see the Practical Law article, which features contributions from the Clifford Chance team.
For some thoughts and takeaways following the political agreement reached by the EU institutions on the Artificial Intelligence Act, please see here.