What businesses need to know (for now) about the Biden Executive Order on AI
On October 30, 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), marking a historic milestone in the regulation of artificial intelligence systems and technologies in the United States. In announcing the EO, White House Deputy Chief of Staff Bruce Reed issued a bold statement of intent: "President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust… It's the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks."
Given this strong messaging, many organizations are asking: "What does the EO mean for me?" Since the EO was announced, there has been considerable speculation about its implications. This article focuses on the key practical elements of the EO and what they mean for organizations in the U.S. and globally.
Overview of the Executive Order
The EO is very broadly framed and covers eight general areas:
- Ensuring the Safety and Security of AI Technology
- Promoting Innovation and Competition
- Supporting Workers
- Advancing Equity and Civil Rights
- Protecting Consumers, Patients, Passengers and Students
- Protecting Privacy
- Advancing Federal Government Use of AI
- Strengthening American Leadership Abroad
Entities that are expressly called out for regulation include, among others, critical infrastructure providers (e.g., certain energy companies), infrastructure-as-a-service providers, financial institutions, and synthetic nucleic acid sequence providers.
The EO's notable provisions include requirements for developers of certain AI systems to share safety test results and other critical information with the U.S. government; provisions relating to U.S. government assessments and implementation of guidelines for AI threats to critical infrastructure, as well as chemical, biological, radiological, and nuclear (CBRN) and cybersecurity risks; and the creation of standards for watermarking AI-generated content and establishing content provenance.
The EO references opportunities to engage in rulemaking, including, for example, rulemaking relating to the development of the Department of Energy's AI model evaluation tools and AI test beds; the evaluation of AI model capabilities to present CBRN threats; and the identification of AI and other STEM-related occupations across the U.S. economy for which there are insufficient U.S. workers. For each area, the EO tasks various government agencies with developing more specific guidelines and parameters, and it contemplates the establishment of new working groups, interagency councils, and a research coordination network.
On November 1, 2023, the Office of Management and Budget (OMB) released for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance provides direction to federal agencies across three pillars: Strengthening AI Governance; Advancing Responsible AI Innovation; and Managing Risks from the Use of AI. OMB is soliciting public comment on the document until December 5, 2023.
The EO was issued during a week of significant developments in the AI policy space, including the G7's announcement of the International Guiding Principles on Artificial Intelligence and the voluntary Code of Conduct for AI developers under the Hiroshima AI process. AI Safety Institutes were also announced in the U.S., UK and Canada following the AI Safety Summit hosted by the UK at Bletchley Park, which brought together 28 countries.
Executive Order Enforceability and Implementation
A key vulnerability of the EO is that it is only an executive order, which does not have the durability of legislation and could therefore be overturned if there is a change of administration following the 2024 election. On the other hand, bipartisan legislation has been introduced in the Senate by Senator Mark Warner (D-Va.) and Senator Jerry Moran (R-Kan.) that would codify certain elements of the EO, including by requiring federal agencies to incorporate the National Institute of Standards and Technology (NIST) AI Risk Management Framework into the management of the agencies' use of AI systems. U.S. Representative Ted Lieu is expected to introduce similar legislation in the U.S. House of Representatives.
Another consideration is that the effectiveness of the EO depends on significant inter-agency collaboration and investment over the coming years. The EO also contains no express requirement for developers to release details about training data or model weights. Finally, the EO sets different implementation timelines for its various actions, ranging from 90 to 365 days.
What Businesses Need to Know (For Now)
Although the EO generally directs government agencies as to their implementation, use, and management of AI, it will likely serve as a blueprint for how regulators will in turn seek to regulate private enterprises. This government action will also likely shape industry best practices for how AI is implemented, used, and managed. In light of these developments, organizations should consider taking several steps:
- Uses of AI and Third-Party Relationships with AI Providers. Organizations should conduct diligence on their existing and anticipated uses of AI, including any related third-party relationships. The EO requires agencies to track and report on AI use cases, and private enterprises should do the same. This detailed baseline understanding of systems and processes will be critical as regulatory requirements develop at pace. Regulators will likely expect organizations to have a clear understanding of how AI is being used in their environments – in both back-office and customer-facing use cases.
- Shared Understanding of Baseline Legal and Compliance Parameters. Organizations should inform their key stakeholders of the regulatory and compliance requirements to which the organizations are subject, including any frameworks and guardrails that apply specifically to AI. This shared fluency in applicable baseline parameters will enable faster organizational decision-making and adaptation as new AI regulations and requirements are developed and come into play.
- Risk Management and Governance. Organizations should consider defining and implementing, or refining, their AI strategy and risk management frameworks. For example, the EO calls for the designation of Chief AI Officers and Artificial Intelligence Governance Boards where appropriate for government agencies; private enterprises should consider doing something similar. The EO also refers to governance plans and specifically references the NIST AI Risk Management Framework. Governance frameworks should clearly identify potential high-risk use cases and establish a process for managing the associated risks.
- Call for Advocacy. Organizations should think deeply about their strategies for tech policy and advocacy. The EO introduces considerable opportunities for engagement with the government and the industry on multiple levels, such as the comment period for the OMB draft policy referenced above.
- Horizon Scanning. Organizations should create or reinforce effective systems for regulatory horizon scanning and ongoing stakeholder education. The launch of ChatGPT and other advanced generative AI technologies caught many by surprise, and sophisticated organizations should work to stay ahead of the curve.
We believe that the EO represents a significant step in the regulation of AI technologies. While its effectiveness hinges on multiple follow-on actions and close collaboration among agencies, the private sector, academia, and others, the EO reflects a concerted effort by the U.S. administration to advance a comprehensive framework for the safe, secure, and trustworthy use of AI. Organizations should take proactive steps to understand the implications of the EO and to ensure compliance with any requirements that apply to them.
Clifford Chance and Artificial Intelligence
Clifford Chance is following AI developments very closely and will be conducting subsequent seminars and publishing additional articles on new AI laws and regulations. If you are interested in receiving information from Clifford Chance on these topics, please reach out to your usual Clifford Chance contact or complete this preferences form.
Clifford Chance was the only law firm to participate as a partner in the recent AI Fringe, which brought together civil society, academic, commercial, advocacy and government stakeholders in London in October 2023 – all the sessions can be found here.
Clifford Chance has also recently published an insightful report on "Responsible AI in Practice", which examines public attitudes to many of the issues discussed in this article.