Executive Order (EO) on Artificial Intelligence (AI)
Introduction
On 30 October 2023, President Biden issued an Executive Order (EO) on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI). The EO highlights the US government’s sense of urgency to establish common guardrails and rules for developing AI technology safely, given the general lack of competence, collaboration, and effectiveness in the US Congress.
Applicability
The EO applies to all US government executive branch agencies and the contractors that support them. The EO is not a law - it is a directive to the government. Nevertheless, the EO will have a significant impact on the private sector, especially contractors or anyone wishing to do business with the government. Moreover, by invoking the Defense Production Act, businesses that are developing AI solutions considered to be “dual-use foundational models” (Sec. 3(k) must adhere to the EO.
The President and senior members of Congress agree that comprehensive AI legislation is the ultimate goal. Sen. Charles Schumer (D-NY) leads a bi-partisan group of lawmakers who are expected to propose AI legislation in 2024.
Principles
The EO sets forth the following eight (8) principles:
Ensuring the Safety and Security of AI Technology
Promoting Innovation and Competition
Supporting Workers
Advancing Equity and Civil Rights
Protecting Consumers, Patients, Passengers, and Students
Protecting Privacy
Advancing Federal Government Use of AI
Strengthening American Leadership Abroad
Details
Ensuring the Safety and Security of AI Technology
The first principle is significant, which is evidenced by the amount of text dedicated to it in the EO. It focuses on developing guidelines, standards, and best practices to ultimately achieve a consensus on industry standards for creating “safe, secure, and trustworthy AI systems.”
For example, for generative AI, agencies are directed to partner with the National Institute of Standards and Technology (NIST) to develop a companion resource to the NIST AI Risk Management Framework (NIST AI 100-1). While voluntary, this national guidance provides clarity as to how federal agencies will likely address risk with regard to AI and large language models (LLMs).
To achieve “safe and reliable AI”, the President invokes the Defense Production Act - a Title 50 war power focused on national defense - to empower the Secretary of Commerce to require companies developing “dual-use foundational models” (Sec. 3(k)) to provide the results of Red Team cyber tests.
Additionally, companies that provide infrastructure as a service (IaaS) to foreign entities that develop AI models will have reporting requirements to the government. The safety and security principle also addresses AI used in cybersecurity for critical infrastructure and risks related to chemical, biological, radiological, and nuclear threats.
Finally, the first principle seeks to reduce risks posed by synthetic content. Synthetic content includes “information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.” In short, the government seeks the ability to clearly identify and label synthetic content to establish its authenticity and provenance when it is produced by the government or on its behalf. Think watermarks or labels that clearly indicate a text or image was created in whole or in part by AI.
Promoting Innovation and Competition
Several government agencies are directed to collaborate on the following:
Attract AI Talent to the United States. In short, the Departments of State and Homeland Security will expand the J-1 and F-1 visa programs to attract and retain the best external AI talent in the world.
Promote innovation, specifically through public-private partnerships. Various agencies are tasked to create pilot programs, clarify intellectual property (IP) issues (especially for copyrights), develop training programs for IP risk, and advance responsible AI applications for healthcare patients and workers, veterans health, climate change and clean energy, and scientific research.
Promote competition, specifically via the Department of Commerce, to promote fair competition in the AI marketplace and to ensure that consumers and workers are protected from AI-related harms. The order specifically calls out the semiconductor industry and small businesses that are commercializing AI.
Supporting Workers
The government seeks to address AI-related workforce disruptions, focus on how AI can advance worker well-being, and foster an AI-competent workforce.
Advancing Equity and Civil Rights
Agencies must focus on leveraging AI to improve the criminal justice system, educate law enforcement on issues like privacy and cybersecurity, protect civil rights under government benefits programs, and combat discrimination.
Protecting Consumers, Patients, Passengers, and Students
Various agencies - including the Departments of Homeland Security, Health and Human Services, Transportation, Federal Communications Commission - are directed to pursue the safe, responsible use of AI in the healthcare industry, transportation sector, public education, and in national communications networks.
Protecting Privacy
The EO calls out privacy specifically, acknowledging the threats AI poses to the unintentional or malicious use and disclosure of personally identifiable information (PII).
The Director of the Office of Management and Budget (OMB) will identify what commercially available information (CAI) that contains PII is collected by federal agencies, and then identify what standards and procedures can be improved regarding the collection and processing of such PII.
Federal privacy impact assessment (PIA) processes will be reviewed for effectiveness.
The government will create guidelines for agencies to evaluate the efficacy of privacy enhancing technologies (PETs) such as differential-privacy.
Advancing Federal Government Use of AI
The government will review and provide guidance on how AI technologies can be used and coordinated across Federal agencies. The Director of OMB will convene an interagency council to coordinate the use and development of AI at the Federal level.
OMB will issue guidance to all Federal agencies, including a requirement to appoint a Chief Artificial Intelligence Officer and to create Artificial Intelligence Governance Boards. OMB will promulgate other AI-related guidance in the coming months, including an annual summarizing how the government is using AI.
The EO also focuses on developing AI talent with the US federal government. Competition for talented AI workers may be a significant challenge for the government, as the private sector will likely always be a more lucrative and attractive route for top performers.
Strengthening American Leadership Abroad
The US will pursue bilateral, multilateral, and multi-stakeholder fora to advance a common understanding of AI-related guidance that enhances international collaboration while supporting US policies.
The Secretaries of State and Commerce will lead a coordinated effort with US allies, partners, and standards development organizations to pursue consensus on AI-related standards, cooperation, coordination, and information sharing.
In the coming months, look for an AI Global Development Playbook and a Global AI Research Agenda.
Finally, the White House will establish an Artificial Intelligence Council. The AI Council will coordinate across the federal sector to synchronize the formulation, development, communication, and industry engagements related to implementing AI-related policies.
Key Take-Aways
If you are doing or plan to do business with the US government and your goods or services leverage AI, then elements of the EO will impact you. Specifically, expect to conduct red team assessments of your AI-driven products and services, and share the results with the government.
If the red team assessment of your AI-driven product identifies flaws and vulnerabilities, like harmful or discriminatory outputs, unforeseen or undesirable system behaviors, limitations, or potential risks from misuse of the system, these results will likely become publicly available.
The EO will likely accelerate the adoption of common standards for AI, just as federal government adoption of cloud-based services drove standardization in cloud privacy and cybersecurity safeguards that benefit the larger public.
One of the greatest challenges will be finding and hiring the skilled personnel to support effective AI adoption within the government. The pool of experts is still relatively small, demand for experts remains high, and the private sector always moves with greater alacrity and higher compensation.
Certain industries should monitor the implementation of this EO closely; specifically, businesses involved in national security, law enforcement, healthcare, transportation, human resources and labor, and education.
Next Steps You Should Consider
If you sell products or services that leverage AI-related technologies, if you plan to contract with the US government, or if you operate in impacted sectors like healthcare, law enforcement, healthcare, or education, you should consider creating an AI governance strategy.
An AI governance strategy provides a structured framework for how artificial intelligence technologies are managed, used, and monitored across your organization. A robust AI governance framework addresses multiple considerations, from technical and ethical implications, to legal compliance and social impacts.
Key Components of AI Governance
Ethical and Responsible AI Use
Ethical Guidelines: Establish ethical principles that guide AI development and use.
Bias and Fairness: Implement mechanisms to identify and mitigate biases in AI models.
Legal Compliance
Data Protection: Ensure adherence to global data protection regulations like the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), etc.
Regulatory Compliance: Align AI systems with relevant industry-specific regulations.
Data Management
Data Quality: Ensure data used for training and operating AI models is accurate and reliable.
Data Privacy: Protect personal data and privacy throughout the AI/data lifecycle.
Technical Robustness
Reliability: Ensure AI systems perform reliably under varied conditions.
Security: Safeguard AI systems against unauthorized access, data breaches, and attacks.
Transparency and Accountability
Explainability: Ensure AI models and decision-making processes are interpretable and transparent.
Auditability: Implement mechanisms to facilitate external and internal audits of AI systems.
Stakeholder Engagement
Inclusion: Involve diverse stakeholders in AI development and decision-making processes.
Communication: Maintain open channels for feedback and grievances related to AI applications.
Social Impact
Social Well-being: Consider the broader societal impacts and contributions of AI.
Sustainable AI: Ensure AI practices align with sustainability goals and responsible resource use.
Implementation Strategies for AI Governance
Establish an AI Governance Committee
Ensure the committee is composed of members from various functions (e.g., legal, ethical, product, IT, human resources, etc.) to oversee AI initiatives.
Creating Policies and Guidelines
Developing a handbook or policy document that delineates the principles and practices of AI use within the organization.
Ensure that Documentation is Transparent
Maintaining comprehensive documentation of AI systems, models, training data, and decision-making processes.
Regular Red Team Assessments, Auditing, and Monitoring
Implement regular red team assessments, as well as independent audits and reviews of AI systems to document adherence to your governance policies and regulatory requirements.
Continuous Training and Development
Providing ongoing education and training to employees regarding responsible AI use and ethical considerations.
External Collaboration
Engage with external bodies, regulatory authorities, and industry peers to stay abreast of best practices and emerging trends in AI governance.
Establish Feedback Mechanisms
Incorporate feedback loops to continuously improve AI governance policies and practices based on learnings and evolving standards.
Conclusion
President Biden’s executive order on artificial intelligence signals the urgency of defining sensible guardrails and guidance to ensure AI is applied in an ethical, productive manner. Concerns regarding AI’s potentially negative impacts on data privacy, cybersecurity, workers, and vulnerable communities are serious, but AI can also be applied to solutions.
If your business leverages AI in its products or services, works with the government, or operates in an industry impacted by the AO, you should consider developing an AI governance strategy.
Implementing robust governance is crucial not only for compliance and risk mitigation, but also for earning trust from customers, employees, and stakeholders by demonstrating a commitment to ethical and responsible AI deployment.
If you need assistance building and maturing your governance, risk, and compliance (GRC) capabilities, consider a complimentary consultation with 1GDPA. We can help you create policies, procedures, and guidelines that ensure the responsible, ethical, and legal use of AI across your business offerings.