Biden administration releases wide-ranging executive order on AI
On Oct. 30, 2023, the Biden administration issued an executive order (EO) focused on the growing field of artificial intelligence. The administration is advancing a comprehensive and coordinated approach to the safe and responsible development and use of AI, and setting a marker for the world. The EO is both practical and aspirational, with varying degrees of immediate impact for businesses and their leadership teams.
Our team is pleased to offer this summary of the EO and related guidance, and to share key provisions and initial takeaways. In the coming weeks, we will dive deeper into critical topics covered by the new EO.
Executive order on AI: Background and policy
The Biden administration issued its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on Oct. 30, 2023. Concerns about the potential risks associated with AI have reverberated around the country and throughout Washington, and the White House aims to establish comprehensive regulatory principles for the AI industry. The order builds upon an array of prior government and industry guidance and actions, including the following materials published on ai.gov:
- The Blueprint for an AI Bill of Rights
- The NIST AI Risk Management Framework
- Voluntary commitments from several companies heavily involved in the AI industry
- The National AI R&D Strategic Plan
- The National AI Research Resource Roadmap
In parallel, the Office of Management and Budget (OMB) issued draft policy guidance on the federal government’s use of AI. OMB is accepting public comments on the draft until Dec. 5, 2023.
The EO is a clear signal from the White House that the U.S. intends to lead global AI policy and to ensure AI is imbued with Western values as it develops. The EO comes as major industry leaders, including OpenAI, Google and Microsoft, have called for federal or globally coordinated AI regulation, and Senate Majority Leader Chuck Schumer (D-NY) has launched a major effort to write legislation regulating AI. The European Union and China have already begun regulating AI.
Scope of the EO
The EO seeks to address AI’s broad applications and promote accountability and safety in AI development and deployment across various sectors of the economy. It sets out eight principles and priorities for AI governance:
- Ensure AI safety through robust, standardized evaluations, institutions and risk mitigation before deployment.
- Promote responsible innovation, competition and education while addressing IP concerns and preventing monopolies.
- Support American workers through education, job training and labor impact understanding.
- Advance equity and civil rights.
- Protect the interests of Americans using AI products in their daily lives.
- Safeguard privacy and civil liberties with lawful, secure data handling.
- Manage risks in federal AI use and enhance regulatory capacity for better results.
- Lead global progress, collaborate with international partners and develop an AI risk management framework.
Proposed reporting and rulemaking requirements
Importantly, the EO establishes a number of industry reporting requirements and sets deadlines, ranging from 45 to 365 days, for federal agencies to implement various directives and promulgate regulations, with stakeholder engagement playing a crucial role. The EO invokes the Defense Production Act to mandate certain reporting requirements.
Governmental stakeholders
Implementation and oversight of the EO and associated AI policies will be led by the White House AI Council, composed of representatives from executive branch agencies and departments. The council will be chaired by the assistant to the president and deputy chief of staff for policy (currently Bruce Reed) and will include the secretaries, or their designees, of most cabinet agencies, as well as the Director of National Intelligence and the directors of the National Science Foundation, OMB and the Office of Science and Technology Policy, among others.
Following the pattern established with the CHIPS Act, the Department of Commerce and the National Institute of Standards and Technology (NIST) will take the lead in coordinating and implementing the EO, reflecting the administration’s “all-of-government” approach.
Key takeaways
Potential legislative actions
While Congress continues to develop AI legislation, the EO primarily focuses on federal agency programs, AI procurement requirements, national security and potential rulemaking.
Current focus on federal government use of AI and critical AI risks
It is important to note that, despite the significant fanfare and publicity surrounding this EO, the order cannot independently create new laws or regulations. Instead, the EO provides guidance to federal agencies, which will issue related regulations. The EO also includes directives pertaining to federal agency programs, criteria for AI systems acquired by the federal government, obligations related to national security and critical infrastructure, and the initiation of potential regulatory processes for supervised entities.
This alert provides a comprehensive overview of the recent Executive Order on AI and its broad implications. It is the first installment in a short series that will delve deeper into key takeaways and deliver industry-specific insights for businesses and their leadership teams, shedding light on the extensive impact of the EO.
For more information on the EO, please contact Adrian Snead, Matt Lapin, Maxwell Herath or any member of Porter Wright’s Government & Regulatory Affairs Practice Group.