Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 75191-75226 [2023-24283]

Federal Register / Vol. 88, No. 210 / Wednesday, November 1, 2023 / Presidential Documents
Title 3—The President
Executive Order 14110 of October 30, 2023
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:
Section 1. Purpose. Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society. My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.
In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change. They are the reasons we will succeed again in this moment. We are more than capable of harnessing AI for justice, security, and opportunity for all.
Sec. 2. Policy and Principles. It is the policy of my Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities. When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:
(a) Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use. It also requires addressing AI systems' most pressing security risks—including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers—while navigating AI's opacity and complexity. Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies.
Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not. These actions will provide a vital foundation for an approach that addresses AI's risks without unduly reducing its benefits.
(b) Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology's potential to solve some of society's most difficult challenges. This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators. Across the Federal Government, my Administration will support programs to provide Americans the skills they need for the age of AI and attract the world's AI talent to our shores—not just to study, but to stay—so that the companies and technologies of the future are made in America. The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation. Doing so requires stopping unlawful collusion and addressing risks from dominant firms' use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and it requires supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs.
(c) The responsible development and use of AI require a commitment to supporting American workers. As AI creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities. My Administration will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates. In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions. The critical next steps in AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers' lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation.
(d) Artificial Intelligence policies must be consistent with my Administration's dedication to advancing equity and civil rights. My Administration cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.
My Administration will build on the important steps that have already been taken—such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government)—in seeking to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation. It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government. Only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all.
(e) The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected. Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change. The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights. At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.
(f) Americans' privacy and civil liberties must be protected as AI continues advancing. Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people's identities, locations, habits, and desires. Artificial Intelligence's capabilities in these areas can increase the risk that personal data could be exploited and exposed. To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks. Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks—including the chilling of First Amendment rights—that result from the improper collection and use of people's data.
(g) It is important to manage the risks from the Federal Government's own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation's greatest asset.
My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines—including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields—and ease AI professionals' path into the Federal Government to help harness and govern AI. The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.
(h) The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change. This leadership is not measured solely by the technological advancements our country makes. Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly—and building and promoting those safeguards with the rest of the world. My Administration will engage with international allies and partners in developing a framework to manage AI's risks, unlock AI's potential for good, and promote common approaches to shared challenges. The Federal Government will seek to promote responsible AI safety and security principles and actions with other nations, including our competitors, while leading key global conversations and collaborations to ensure that AI benefits the whole world, rather than exacerbating inequities, threatening human rights, and causing other harms.
Sec. 3. Definitions. For purposes of this order:
(a) The term "agency" means each agency described in 44 U.S.C. 3502(1), except for the independent regulatory agencies described in 44 U.S.C. 3502(5).
(b) The term "artificial intelligence" or "AI" has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
(c) The term "AI model" means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
(d) The term "AI red-teaming" means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated "red teams" that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.
(e) The term "AI system" means any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
(f) The term "commercially available information" means any information or data about an individual or group of individuals, including an individual's or group of individuals' device or location, that is made available or obtainable and sold, leased, or licensed to the general public or to governmental or non-governmental entities.
(g) The term "crime forecasting" means the use of analytical techniques to attempt to predict future crimes or crime-related information. It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.
(h) The term "critical and emerging technologies" means those technologies listed in the February 2022 Critical and Emerging Technologies List Update issued by the National Science and Technology Council (NSTC), as amended by subsequent updates to the list issued by the NSTC.
(i) The term "critical infrastructure" has the meaning set forth in section 1016(e) of the USA PATRIOT Act of 2001, 42 U.S.C. 5195c(e).
(j) The term "differential-privacy guarantee" means protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities.
(k) The term "dual-use foundation model" means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.
(l) The term "Federal law enforcement agency" has the meaning set forth in section 21(a) of Executive Order 14074 of May 25, 2022 (Advancing Effective, Accountable Policing and Criminal Justice Practices To Enhance Public Trust and Public Safety).
(m) The term "floating-point operation" means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
(n) The term "foreign person" has the meaning set forth in section 5(c) of Executive Order 13984 of January 19, 2021 (Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities).
(o) The terms "foreign reseller" and "foreign reseller of United States Infrastructure as a Service Products" mean a foreign person who has established an Infrastructure as a Service Account to provide Infrastructure as a Service Products subsequently, in whole or in part, to a third party.
(p) The term "generative AI" means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
(q) The terms "Infrastructure as a Service Product," "United States Infrastructure as a Service Product," "United States Infrastructure as a Service Provider," and "Infrastructure as a Service Account" each have the respective meanings given to those terms in section 5 of Executive Order 13984.
(r) The term "integer operation" means any mathematical operation or assignment involving only integers, or whole numbers expressed without a decimal point.
(s) The term "Intelligence Community" has the meaning given to that term in section 3.5(h) of Executive Order 12333 of December 4, 1981 (United States Intelligence Activities), as amended.
(t) The term "machine learning" means a set of techniques that can be used to train AI algorithms to improve performance at a task based on data.
(u) The term "model weight" means a numerical parameter within an AI model that helps determine the model's outputs in response to inputs.
(v) The term "national security system" has the meaning set forth in 44 U.S.C. 3552(b)(6).
(w) The term "omics" means biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system.
(x) The term "Open RAN" means the Open Radio Access Network approach to telecommunications-network standardization adopted by the O–RAN Alliance, Third Generation Partnership Project, or any similar set of published open standards for multi-vendor network equipment interoperability.
(y) The term "personally identifiable information" has the meaning set forth in Office of Management and Budget (OMB) Circular No. A–130.
(z) The term "privacy-enhancing technology" means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality. These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools. This is also sometimes referred to as "privacy-preserving technology."
(aa) The term "privacy impact assessment" has the meaning set forth in OMB Circular No. A–130.
(bb) The term "Sector Risk Management Agency" has the meaning set forth in 6 U.S.C. 650(23).
(cc) The term "self-healing network" means a telecommunications network that automatically diagnoses and addresses network issues to permit self-restoration.
(dd) The term "synthetic biology" means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics. Synthetic nucleic acids are a type of biomolecule redesigned through synthetic-biology methods.
(ee) The term "synthetic content" means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.
(ff) The term "testbed" means a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including AI and PETs, to help evaluate the functionality, usability, and performance of those tools or technologies.
(gg) The term "watermarking" means the act of embedding information, which is typically difficult to remove, into outputs created by AI—including into outputs such as photos, videos, audio clips, or text—for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.
Sec. 4. Ensuring the Safety and Security of AI Technology.
4.1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security. (a) Within 270 days of the date of this order, to help ensure the development of safe, secure, and trustworthy AI systems, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall:
(i) Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:
(A) developing a companion resource to the AI Risk Management Framework, NIST AI 100–1, for generative AI;
(B) developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and
(C) launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.
(ii) Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. These efforts shall include:
(A) coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and
(B) in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order.
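As a purely illustrative aside on the structured red-team testing described in subsection 4.1(a)(ii) and defined in section 3(d), the minimal sketch below (Python) runs a library of adversarial prompts against a model interface and records the responses a reviewer flags; the function names, prompt source, and classification step are assumptions of this sketch, not an API defined by this order or by NIST guidance.

    # Minimal, illustrative red-team harness sketch (hypothetical interfaces).
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Finding:
        prompt: str
        response: str
        category: str  # e.g., "harmful output" or "unforeseen behavior"

    def red_team_run(prompts: List[str],
                     query_model: Callable[[str], str],
                     classify_response: Callable[[str], Optional[str]]) -> List[Finding]:
        """Send each adversarial prompt to the model and keep flagged responses."""
        findings: List[Finding] = []
        for prompt in prompts:
            response = query_model(prompt)
            label = classify_response(response)  # returns a category label, or None if acceptable
            if label is not None:
                findings.append(Finding(prompt, response, label))
        return findings

In practice such harnesses pair automated scoring with human review, since the order's definition emphasizes adversarial methods carried out in collaboration with developers.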
(b) Within 270 days of the date of this order, to understand and mitigate AI security risks, the Secretary of Energy, in coordination with the heads of other Sector Risk Management Agencies (SRMAs) as the Secretary of Energy may deem appropriate, shall develop and, to the extent permitted by law and available appropriations, implement a plan for developing the Department of Energy's AI model evaluation tools and AI testbeds. The Secretary shall undertake this work using existing solutions where possible, and shall develop these tools and AI testbeds to be capable of assessing near-term extrapolations of AI systems' capabilities. At a minimum, the Secretary shall develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards. The Secretary shall do this work solely for the purposes of guarding against these threats, and shall also develop model guardrails that reduce such risks. The Secretary shall, as appropriate, consult with private AI laboratories, academia, civil society, and third-party evaluators, and shall use existing solutions.
4.2. Ensuring Safe and Reliable AI. (a) Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:
(i) Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:
(A) any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;
(B) the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and
(C) the results of any developed dual-use foundation model's performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.
Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and
(ii) Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.
(b) The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section. Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:
(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and
(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
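For a rough sense of scale of the interim thresholds in subsection 4.2(b), the back-of-the-envelope sketch below (Python) applies the commonly used heuristic that dense transformer training costs roughly 6 floating-point operations per parameter per training token; the heuristic, the example parameter and token counts, and the per-accelerator throughput figure are illustrative assumptions, not values taken from this order.

    # Illustrative arithmetic for the interim reporting thresholds in subsection 4.2(b).
    # The 6 * parameters * tokens rule of thumb and the sample numbers are assumptions;
    # the order itself specifies only the thresholds.
    MODEL_THRESHOLD_OPS = 1e26          # 10^26 operations (models generally)
    BIO_THRESHOLD_OPS = 1e23            # 10^23 operations (primarily biological sequence data)
    CLUSTER_THRESHOLD_OPS_PER_S = 1e20  # 10^20 operations per second (co-located cluster)

    def training_ops(parameters: float, tokens: float) -> float:
        """Rough training cost using the common ~6 * N * D heuristic."""
        return 6.0 * parameters * tokens

    # Example: a hypothetical 500-billion-parameter model trained on 15 trillion tokens.
    ops = training_ops(500e9, 15e12)      # about 4.5e25 operations
    print(ops >= MODEL_THRESHOLD_OPS)     # False: below the 10^26 reporting line

    # Example: accelerators needed, at an assumed 1e15 operations/second each, before a
    # cluster's theoretical maximum capacity reaches the 10^20 operations/second line.
    print(CLUSTER_THRESHOLD_OPS_PER_S / 1e15)  # 100000.0

Under these assumed figures, only substantially larger training runs or clusters would cross the interim reporting lines, which is consistent with the thresholds targeting the most capable systems.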
(c) Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:
(i) Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a "training run"). Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.
(ii) Include a requirement in the regulations proposed pursuant to subsection 4.2(c)(i) of this section that United States IaaS Providers prohibit any foreign reseller of their United States IaaS Product from providing those products unless such foreign reseller submits to the United States IaaS Provider a report, which the United States IaaS Provider must provide to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run described in subsection 4.2(c)(i) of this section. Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary.
(iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.
(d) Within 180 days of the date of this order, pursuant to the finding set forth in subsection 4.2(c) of this section, the Secretary of Commerce shall propose regulations that require United States IaaS Providers to ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller. These regulations shall, at a minimum:
(i) Set forth the minimum standards that a United States IaaS Provider must require of foreign resellers of its United States IaaS Products to verify the identity of a foreign person who opens an account or maintains an existing account with a foreign reseller, including:
(A) the types of documentation and procedures that foreign resellers of United States IaaS Products must require to verify the identity of any foreign person acting as a lessee or sub-lessee of these products or services;
(B) records that foreign resellers of United States IaaS Products must securely maintain regarding a foreign person that obtains an account, including information establishing:
(1) the identity of such foreign person, including name and address;
(2) the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier);
(3) the electronic mail address and telephonic contact information used to verify a foreign person's identity; and
(4) the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person's ownership of such an account; and
(C) methods that foreign resellers of United States IaaS Products must implement to limit all third-party access to the information described in this subsection, except insofar as such access is otherwise consistent with this order and allowed under applicable law;
(ii) Take into consideration the types of accounts maintained by foreign resellers of United States IaaS Products, methods of opening an account, and types of identifying information available to accomplish the objectives of identifying foreign malicious cyber actors using any such products and avoiding the imposition of an undue burden on such resellers; and
(iii) Provide that the Secretary of Commerce, in accordance with such standards and procedures as the Secretary may delineate and in consultation with the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence, may exempt a United States IaaS Provider with respect to any specific foreign reseller of their United States IaaS Products, or with respect to any specific type of account or lessee, from the requirements of any regulation issued pursuant to this subsection. Such standards and procedures may include a finding by the Secretary that such foreign reseller, account, or lessee complies with security best practices to otherwise deter abuse of United States IaaS Products.
(e) The Secretary of Commerce is hereby authorized to take such actions, including the promulgation of rules and regulations, and to employ all powers granted to the President by the International Emergency Economic Powers Act, 50 U.S.C. 1701 et seq., as may be necessary to carry out the purposes of subsections 4.2(c) and (d) of this section. Such actions may include a requirement that United States IaaS Providers require foreign resellers of United States IaaS Products to provide United States IaaS Providers verifications relative to those subsections.
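To make the record-keeping items in subsection 4.2(d)(i)(B) concrete, the minimal sketch below (Python) models the categories of information a foreign reseller would maintain for each account holder; the class and field names are hypothetical, chosen only to mirror the enumerated items, and are not a schema prescribed by this order or by any implementing regulation.

    # Illustrative record structure mirroring subsection 4.2(d)(i)(B); names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ForeignAccountRecord:
        # (1) identity of the foreign person
        name: str
        address: str
        # (2) means and source of payment
        financial_institution: str
        payment_identifier: str        # e.g., account, card, or wallet identifier
        # (3) contact information used to verify identity
        email: str
        phone: str
        # (4) access and administration history supporting ongoing verification
        access_ip_addresses: List[str] = field(default_factory=list)
        access_timestamps: List[str] = field(default_factory=list)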
4.3. Managing AI in Critical Infrastructure and in Cybersecurity. (a) To ensure the protection of critical infrastructure, the following actions shall be taken:
(i) Within 90 days of the date of this order, and at least annually thereafter, the head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities. Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments.
(ii) Within 150 days of the date of this order, the Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
(iii) Within 180 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of Commerce and with SRMAs and other regulators as determined by the Secretary of Homeland Security, shall incorporate as appropriate the AI Risk Management Framework, NIST AI 100–1, as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators.
(iv) Within 240 days of the completion of the guidelines described in subsection 4.3(a)(iii) of this section, the Assistant to the President for National Security Affairs and the Director of OMB, in consultation with the Secretary of Homeland Security, shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action. Independent regulatory agencies are encouraged, as they deem appropriate, to consider whether to mandate guidance through regulatory action in their areas of authority and responsibility.
(v) The Secretary of Homeland Security shall establish an Artificial Intelligence Safety and Security Board as an advisory committee pursuant to section 871 of the Homeland Security Act of 2002 (Public Law 107–296). The Advisory Committee shall include AI experts from the private sector, academia, and government, as appropriate, and provide to the Secretary of Homeland Security and the Federal Government's critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.
(b) To capitalize on AI's potential to improve United States cyber defenses:
(i) The Secretary of Defense shall carry out the actions described in subsections 4.3(b)(ii) and (iii) of this section for national security systems, and the Secretary of Homeland Security shall carry out these actions for non-national security systems.
Each shall do so in consultation with the heads of other relevant agencies as the Secretary of Defense and the Secretary of Homeland Security may deem appropriate.
(ii) As set forth in subsection 4.3(b)(i) of this section, within 180 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall, consistent with applicable law, each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks.
(iii) As set forth in subsection 4.3(b)(i) of this section, within 270 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall each provide a report to the Assistant to the President for National Security Affairs on the results of actions taken pursuant to the plans and operational pilot projects required by subsection 4.3(b)(ii) of this section, including a description of any vulnerabilities found and fixed through the development and deployment of AI capabilities and any lessons learned on how to identify, develop, test, evaluate, and deploy AI capabilities effectively for cyber defense.
4.4. Reducing Risks at the Intersection of AI and CBRN Threats. (a) To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats—with a particular focus on biological weapons—the following actions shall be taken:
(i) Within 180 days of the date of this order, the Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of the Office of Science and Technology Policy (OSTP), shall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats, including, as appropriate, the results of work conducted under section 8(b) of this order. The Secretary of Homeland Security shall:
(A) consult with experts in AI and CBRN issues from the Department of Energy, private AI laboratories, academia, and third-party model evaluators, as appropriate, to evaluate AI model capabilities to present CBRN threats—for the sole purpose of guarding against those threats—as well as options for minimizing the risks of AI model misuse to generate or exacerbate those threats; and
(B) submit a report to the President that describes the progress of these efforts, including an assessment of the types of AI models that may present CBRN risks to the United States, and that makes recommendations for regulating or overseeing the training, deployment, publication, or use of these models, including requirements for safety evaluations and guardrails for mitigating potential threats to national security.
(ii) Within 120 days of the date of this order, the Secretary of Defense, in consultation with the Assistant to the President for National Security Affairs and the Director of OSTP, shall enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct—and submit to the Secretary of Defense, the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, the Director of OSTP, and the Chair of the Chief Data Officer Council—a study that:
(A) assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;
(B) considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies, that the United States Government hosts, generates, funds the creation of, or otherwise owns, for the training of generative AI models, and makes recommendations on how to mitigate the risks related to the use of these data and datasets;
(C) assesses the ways in which AI applied to biology can be used to reduce biosecurity risks, including recommendations on opportunities to coordinate data and high-performance computing resources; and
(D) considers additional concerns and opportunities at the intersection of AI and synthetic biology that the Secretary of Defense deems appropriate.
(b) To reduce the risk of misuse of synthetic nucleic acids, which could be substantially increased by AI's capabilities in this area, and improve biosecurity measures for the nucleic acid synthesis industry, the following actions shall be taken:
(i) Within 180 days of the date of this order, the Director of OSTP, in consultation with the Secretary of State, the Secretary of Defense, the Attorney General, the Secretary of Commerce, the Secretary of Health and Human Services (HHS), the Secretary of Energy, the Secretary of Homeland Security, the Director of National Intelligence, and the heads of other relevant agencies as the Director of OSTP may deem appropriate, shall establish a framework, incorporating, as appropriate, existing United States Government guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives. As part of this framework, the Director of OSTP shall:
(A) establish criteria and mechanisms for ongoing identification of biological sequences that could be used in a manner that would pose a risk to the national security of the United States; and
(B) determine standardized methodologies and tools for conducting and verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in subsection 4.4(b)(i)(A) of this section, and processes for the reporting of concerning activity to enforcement entities.
(ii) Within 180 days of the date of this order, the Secretary of Commerce, acting through the Director of NIST, in coordination with the Director of OSTP, and in consultation with the Secretary of State, the Secretary of HHS, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall initiate an effort to engage with industry and relevant stakeholders, informed by the framework developed under subsection 4.4(b)(i) of this section, to develop and refine for possible use by synthetic nucleic acid sequence providers:
(A) specifications for effective nucleic acid synthesis procurement screening;
(B) best practices, including security and access controls, for managing sequence-of-concern databases to support such screening;
(C) technical implementation guides for effective screening; and
(D) conformity-assessment best practices and mechanisms.
(iii) Within 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i) of this section, all agencies that fund life-sciences research shall, as appropriate and consistent with applicable law, establish that, as a requirement of funding, synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework, such as through an attestation from the provider or manufacturer. The Assistant to the President for National Security Affairs and the Director of OSTP shall coordinate the process of reviewing such funding requirements to facilitate consistency in implementation of the framework across funding agencies.
(iv) In order to facilitate effective implementation of the measures described in subsections 4.4(b)(i)–(iii) of this section, the Secretary of Homeland Security, in consultation with the heads of other relevant agencies as the Secretary of Homeland Security may deem appropriate, shall:
(A) within 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i) of this section, develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening, including the systems developed in accordance with subsections 4.4(b)(i)–(ii) of this section and implemented by providers of synthetic nucleic acid sequences; and
(B) following development of the framework pursuant to subsection 4.4(b)(iv)(A) of this section, submit an annual report to the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, and the Director of OSTP on any results of the activities conducted pursuant to subsection 4.4(b)(iv)(A) of this section, including recommendations, if any, on how to strengthen nucleic acid synthesis procurement screening, including customer screening systems.
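As a deliberately simplified illustration of the kind of procurement screening contemplated in subsection 4.4(b), the sketch below (Python) checks an ordered sequence against a hypothetical list of sequences of concern using exact substring matching; every name and entry here is a placeholder assumed for this sketch, and operational screening systems instead rely on curated databases, homology search, and customer due diligence.

    # Highly simplified procurement-screening sketch; placeholder data only.
    SEQUENCES_OF_CONCERN = {
        "placeholder-entry-1": "ACGTACGTACGT",
        "placeholder-entry-2": "TTGACATTGACA",
    }

    def screen_order(ordered_sequence: str) -> list:
        """Return the labels of any listed sequences found verbatim in an order."""
        ordered_sequence = ordered_sequence.upper()
        return [label for label, seq in SEQUENCES_OF_CONCERN.items()
                if seq in ordered_sequence]

A flagged order would then feed the reporting and stress-testing processes described in subsections 4.4(b)(i)(B) and 4.4(b)(iv).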
4.5. Reducing the Risks Posed by Synthetic Content. To foster capabilities for identifying and labeling synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf:
(a) Within 240 days of the date of this order, the Secretary of Commerce, in consultation with the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:
(i) authenticating content and tracking its provenance;
(ii) labeling synthetic content, such as using watermarking;
(iii) detecting synthetic content;
(iv) preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual);
(v) testing software used for the above purposes; and
(vi) auditing and maintaining synthetic content.
(b) Within 180 days of submitting the report required under subsection 4.5(a) of this section, and updated periodically thereafter, the Secretary of Commerce, in coordination with the Director of OMB, shall develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures. The guidance shall include measures for the purposes listed in subsection 4.5(a) of this section.
(c) Within 180 days of the development of the guidance required under subsection 4.5(b) of this section, and updated periodically thereafter, the Director of OMB, in consultation with the Secretary of State; the Secretary of Defense; the Attorney General; the Secretary of Commerce, acting through the Director of NIST; the Secretary of Homeland Security; the Director of National Intelligence; and the heads of other agencies that the Director of OMB deems appropriate, shall—for the purpose of strengthening public confidence in the integrity of official United States Government digital content—issue guidance to agencies for labeling and authenticating such content that they produce or publish.
(d) The Federal Acquisition Regulatory Council shall, as appropriate and consistent with applicable law, consider amending the Federal Acquisition Regulation to take into account the guidance established under subsection 4.5 of this section.
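One minimal way to think about the content authentication goal in subsection 4.5(c), sketched below in Python with the standard library, is to publish a keyed digest alongside each piece of official content so that a holder of the verification key can confirm the content has not been altered; the key handling and function names are assumptions of this sketch, not the guidance the Director of OMB is directed to issue, which could equally rely on public-key signatures or provenance metadata standards.

    # Minimal content-authentication sketch for the goals described in subsection 4.5(c).
    # An HMAC tag stands in for a real provenance or digital-signature scheme.
    import hashlib
    import hmac

    def label_content(content: bytes, key: bytes) -> str:
        """Produce a hex tag that can be published alongside official content."""
        return hmac.new(key, content, hashlib.sha256).hexdigest()

    def verify_content(content: bytes, key: bytes, published_tag: str) -> bool:
        """Check that content matches the tag issued when it was published."""
        expected = hmac.new(key, content, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, published_tag)

    # Example: tag a document at publication time, then verify a downloaded copy.
    key = b"example-shared-verification-key"   # illustrative only
    tag = label_content(b"official press release text", key)
    print(verify_content(b"official press release text", key, tag))  # True

A tampered copy would fail verification, which is the property that labeling and authenticating official content is meant to provide.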
4.6. Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights. When the weights for a dual-use foundation model are widely available—such as when they are publicly posted on the internet—there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:
(a) solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:
(i) risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models' safeguards;
(ii) benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and
(iii) potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and
(b) based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.
4.7. Promoting Safe Release and Preventing the Malicious Use of Federal Data for AI Training. To improve public data access and manage security risks, and consistent with the objectives of the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115–435) to expand public access to Federal data assets in a machine-readable format while also taking into account security considerations, including the risk that information in an individual data asset in isolation does not pose a security risk but, when combined with other available information, may pose such a risk:
(a) within 270 days of the date of this order, the Chief Data Officer Council, in consultation with the Secretary of Defense, the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the Director of National Intelligence, shall develop initial guidelines for performing security reviews, including reviews to identify and manage the potential security risks of releasing Federal data that could aid in the development of CBRN weapons as well as the development of autonomous offensive cyber capabilities, while also providing public access to Federal Government data in line with the goals stated in the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115–435); and
(b) within 180 days of the development of the initial guidelines required by subsection 4.7(a) of this section, agencies shall conduct a security review of all data assets in the comprehensive data inventory required under 44 U.S.C. 3511(a)(1) and (2)(B) and shall take steps, as appropriate and consistent with applicable law, to address the highest-priority potential security risks that releasing that data could raise with respect to CBRN weapons, such as the ways in which that data could be used to train AI systems.
4.8. Directing the Development of a National Security Memorandum. To develop a coordinated executive branch approach to managing AI's security risks, the Assistant to the President for National Security Affairs and the Assistant to the President and Deputy Chief of Staff for Policy shall oversee an interagency process with the purpose of, within 270 days of the date of this order, developing and submitting a proposed National Security Memorandum on AI to the President. The memorandum shall address the governance of AI used as a component of a national security system or for military and intelligence purposes. The memorandum shall take into account current efforts to govern the development and use of AI for national security systems. The memorandum shall outline actions for the Department of Defense, the Department of State, other relevant agencies, and the Intelligence Community to address the national security risks and potential benefits posed by AI. In particular, the memorandum shall:
(a) provide guidance to the Department of Defense, other relevant agencies, and the Intelligence Community on the continued adoption of AI capabilities to advance the United States national security mission, including through directing specific AI assurance and risk-management practices for national security uses of AI that may affect the rights or safety of United States persons and, in appropriate contexts, non-United States persons; and
(b) direct continued actions, as appropriate and consistent with applicable law, to address the potential use of AI systems by adversaries and other foreign actors in ways that threaten the capabilities or objectives of the Department of Defense or the Intelligence Community, or that otherwise pose risks to the security of the United States or its allies and partners.
Sec. 5. Promoting Innovation and Competition.
5.1. Attracting AI Talent to the United States. (a) Within 90 days of the date of this order, to attract and retain talent in AI and other critical and emerging technologies in the United States economy, the Secretary of State and the Secretary of Homeland Security shall take appropriate steps to:
(i) streamline processing times of visa petitions and applications, including by ensuring timely availability of visa appointments, for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies; and
(ii) facilitate continued availability of visa appointments in sufficient volume for applicants with expertise in AI or other critical and emerging technologies.
(b) Within 120 days of the date of this order, the Secretary of State shall: (i) consider initiating a rulemaking to establish new criteria to designate countries and skills on the Department of State’s Exchange Visitor Skills List as it relates to the 2-year foreign residence requirement for certain J–1 nonimmigrants, including those skills that are critical to the United States; (ii) consider publishing updates to the 2009 Revised Exchange Visitor Skills List (74 FR 20108); and (iii) consider implementing a domestic visa renewal program under 22 CFR 41.111(b) to facilitate the ability of qualified applicants, including highly skilled talent in AI and critical and emerging technologies, to continue their work in the United States without unnecessary interruption. (c) Within 180 days of the date of this order, the Secretary of State shall: (i) consider initiating a rulemaking to expand the categories of nonimmigrants who qualify for the domestic visa renewal program covered under 22 CFR 41.111(b) to include academic J–1 research scholars and F–1 students in science, technology, engineering, and mathematics (STEM); and (ii) establish, to the extent permitted by law and available appropriations, a program to identify and attract top talent in AI and other critical and emerging technologies at universities, research institutions, and the private sector overseas, and to establish and increase connections with that talent to educate them on opportunities and resources for research and employment in the United States, including overseas educational components to inform top STEM talent of nonimmigrant and immigrant visa options and potential expedited adjudication of their visa petitions and applications. (d) Within 180 days of the date of this order, the Secretary of Homeland Security shall: (i) review and initiate any policy changes the Secretary determines necessary and appropriate to clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies, including O–1A and EB–1 noncitizens of extraordinary ability; EB–2 advanced-degree holders and noncitizens of exceptional ability; and startup founders in AI and other critical and emerging technologies using the International Entrepreneur Rule; and (ii) continue its rulemaking process to modernize the H–1B program and enhance its integrity and usage, including by experts in AI and other critical and emerging technologies, and consider initiating a rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident. (e) Within 45 days of the date of this order, for purposes of considering updates to the ‘‘Schedule A’’ list of occupations, 20 CFR 656.5, the Secretary of Labor shall publish a request for information (RFI) to solicit public input, including from industry and worker-advocate communities, identifying AI and other STEM-related occupations, as well as additional occupations across the economy, for which there is an insufficient number of ready, willing, able, and qualified United States workers.
(f) The Secretary of State and the Secretary of Homeland Security shall, consistent with applicable law and implementing regulations, use their discretionary authorities to support and attract foreign nationals with special skills in AI and other critical and emerging technologies seeking to work, study, or conduct research in the United States. (g) Within 120 days of the date of this order, the Secretary of Homeland Security, in consultation with the Secretary of State, the Secretary of Commerce, and the Director of OSTP, shall develop and publish informational resources to better attract and retain experts in AI and other critical and emerging technologies, including: (i) a clear and comprehensive guide for experts in AI and other critical and emerging technologies to understand their options for working in the United States, to be published in multiple relevant languages on AI.gov; and (ii) a public report with relevant data on applications, petitions, approvals, and other key indicators of how experts in AI and other critical and emerging technologies have utilized the immigration system through the end of Fiscal Year 2023. 5.2. Promoting Innovation. (a) To develop and strengthen public-private partnerships for advancing innovation, commercialization, and risk-mitigation methods for AI, and to help promote safe, responsible, fair, privacy-protecting, and trustworthy AI systems, the Director of NSF shall take the following steps: (i) Within 90 days of the date of this order, in coordination with the heads of agencies that the Director of NSF deems appropriate, launch a pilot program implementing the National AI Research Resource (NAIRR), consistent with past recommendations of the NAIRR Task Force. The program shall pursue the infrastructure, governance mechanisms, and user interfaces to pilot an initial integration of distributed computational, data, model, and training resources to be made available to the research community in support of AI-related research and development. The Director of NSF shall identify Federal and private sector computational, data, software, and training resources appropriate for inclusion in the NAIRR pilot program. To assist with such work, within 45 days of the date of this order, the heads of agencies whom the Director of NSF identifies for coordination pursuant to this subsection shall each submit to the Director of NSF a report identifying the agency resources that could be developed and integrated into such a pilot program. These reports shall include a description of such resources, including their current status and availability; their format, structure, or technical specifications; associated agency expertise that will be provided; and the benefits and risks associated with their inclusion in the NAIRR pilot program. The heads of independent regulatory agencies are encouraged to take similar steps, as they deem appropriate. (ii) Within 150 days of the date of this order, fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs. (iii) Within 540 days of the date of this order, establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order.
(b) Within 120 days of the date of this order, to support activities involving high-performance and data-intensive computing, the Secretary of Energy, in coordination with the Director of NSF, shall, in a manner consistent with applicable law and available appropriations, establish a pilot program to enhance existing successful training programs for scientists, with the goal of training 500 new researchers by 2025 capable of meeting the rising demand for AI talent. (c) To promote innovation and clarify issues related to AI and inventorship of patentable subject matter, the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office (USPTO Director) shall: (i) within 120 days of the date of this order, publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed; (ii) subsequently, within 270 days of the date of this order, issue additional guidance to USPTO patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include, as the USPTO Director deems necessary, updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies; and (iii) within 270 days of the date of this order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes later, consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training. (d) Within 180 days of the date of this order, to assist developers of AI in combatting AI-related IP risks, the Secretary of Homeland Security, acting through the Director of the National Intellectual Property Rights Coordination Center, and in consultation with the Attorney General, shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks.
Such a program shall: (i) include appropriate personnel dedicated to collecting and analyzing reports of AI-related IP theft, investigating such incidents with implications for national security, and, where appropriate and consistent with applicable law, pursuing related enforcement actions; (ii) implement a policy of sharing information and coordinating on such work, as appropriate and consistent with applicable law, with the Federal Bureau of Investigation; United States Customs and Border Protection; other agencies; State and local agencies; and appropriate international organizations, including through work-sharing agreements; (iii) develop guidance and other appropriate resources to assist private sector actors with mitigating the risks of AI-related IP theft; (iv) share information and best practices with AI developers and law enforcement personnel to identify incidents, inform stakeholders of current legal requirements, and evaluate AI systems for IP law violations, as well as develop mitigation strategies and resources; and (v) assist the Intellectual Property Enforcement Coordinator in updating the Intellectual Property Enforcement Coordinator Joint Strategic Plan on Intellectual Property Enforcement to address AI-related issues. (e) To advance responsible AI innovation by a wide range of healthcare technology developers that promotes the welfare of patients and workers in the healthcare sector, the Secretary of HHS shall identify and, as appropriate and consistent with applicable law and the activities directed in section 8 of this order, prioritize grantmaking and other awards, as well as undertake related efforts, to support responsible AI development and use, including: (i) collaborating with appropriate private sector actors through HHS programs that may support the advancement of AI-enabled tools that develop personalized immune-response profiles for patients, consistent with section 4 of this order; (ii) prioritizing the allocation of 2024 Leading Edge Acceleration Project cooperative agreement awards to initiatives that explore ways to improve healthcare-data quality to support the responsible development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and (iii) accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM–AHEAD) program and showcasing current AIM–AHEAD activities in underserved communities. (f) To advance the development of AI systems that improve the quality of veterans’ healthcare, and in order to support small businesses’ innovative capacity, the Secretary of Veterans Affairs shall: (i) within 365 days of the date of this order, host two 3-month nationwide AI Tech Sprint competitions; and (ii) as part of the AI Tech Sprint competitions and in collaboration with appropriate partners, provide participants access to technical assistance, mentorship opportunities, individualized expert feedback on products under development, potential contract opportunities, and other programming and resources.
(g) Within 180 days of the date of this order, to support the goal of strengthening our Nation’s resilience against climate change impacts and building an equitable clean energy economy for the future, the Secretary of Energy, in consultation with the Chair of the Federal Energy Regulatory Commission, the Director of OSTP, the Chair of the Council on Environmental Quality, the Assistant to the President and National Climate Advisor, and the heads of other relevant agencies as the Secretary of Energy may deem appropriate, shall: (i) issue a public report describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans; (ii) develop tools that facilitate building foundation models useful for basic and applied science, including models that streamline permitting and environmental reviews while improving environmental and social outcomes; (iii) collaborate, as appropriate, with private sector organizations and members of academia to support development of AI tools to mitigate climate change risks; (iv) take steps to expand partnerships with industry, academia, other agencies, and international allies and partners to utilize the Department of Energy’s computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security, including partnerships that increase community preparedness for climate-related risks, enable clean-energy deployment (including addressing delays in permitting reviews), and enhance grid reliability and resilience; and (v) establish an office to coordinate development of AI and other critical and emerging technologies across Department of Energy programs and the 17 National Laboratories. (h) Within 180 days of the date of this order, to understand AI’s implications for scientific research, the President’s Council of Advisors on Science and Technology shall submit to the President and make publicly available a report on the potential role of AI, especially given recent developments in AI, in research aimed at tackling major societal and global challenges. The report shall include a discussion of issues that may hinder the effective use of AI in research and practices needed to ensure that AI is used responsibly for research. 5.3. Promoting Competition. (a) The head of each agency developing policies and regulations related to AI shall use their authorities, as appropriate and consistent with applicable law, to promote competition in AI and related technologies, as well as in other markets. Such actions include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs. In particular, the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C.
41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI. (b) To promote competition and innovation in the semiconductor industry, recognizing that semiconductors power AI technologies and that their availability is critical to AI competition, the Secretary of Commerce shall, in implementing division A of Public Law 117–167, known as the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022, promote competition by: (i) implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem, including startups and small firms; (ii) implementing mentorship programs to increase interest and participation in the semiconductor industry, including from workers in underserved communities; (iii) increasing, where appropriate and to the extent permitted by law, the availability of resources to startups and small businesses, including: (A) funding for physical assets, such as specialty equipment or facilities, to which startups and small businesses may not otherwise have access; (B) datasets—potentially including test and performance data—collected, aggregated, or shared by CHIPS research and development programs; (C) workforce development programs; (D) design and process technology, as well as IP, as appropriate; and (E) other resources, including technical and intellectual property assistance, that could accelerate commercialization of new technologies by startups and small businesses, as appropriate; and (iv) considering the inclusion, to the maximum extent possible, and as consistent with applicable law, of competition-increasing measures in notices of funding availability for commercial research-and-development facilities focused on semiconductors, including measures that increase access to facility capacity for startups or small firms developing semiconductors used to power AI technologies. (c) To support small businesses innovating and commercializing AI, as well as in responsibly adopting and deploying AI, the Administrator of the Small Business Administration shall: (i) prioritize the allocation of Regional Innovation Cluster program funding for clusters that support planning activities related to the establishment of one or more Small Business AI Innovation and Commercialization Institutes that provide support, technical assistance, and other resources to small businesses seeking to innovate, commercialize, scale, or otherwise advance the development of AI; (ii) prioritize the allocation of up to $2 million in Growth Accelerator Fund Competition bonus prize funds for accelerators that support the incorporation or expansion of AI-related curricula, training, and technical assistance, or other AI-related resources within their programming; and (iii) assess the extent to which the eligibility criteria of existing programs, including the State Trade Expansion Program, Technical and Business Assistance funding, and capital-access programs—such as the 7(a) loan program, 504 loan program, and Small Business Investment Company (SBIC) program—support appropriate expenses by small businesses related to the adoption of AI and, if feasible and appropriate, revise eligibility criteria to improve support for these expenses.
(d) The Administrator of the Small Business Administration, in coordination with resource partners, shall conduct outreach regarding, and raise awareness of, opportunities for small businesses to use capital-access programs described in subsection 5.3(c) of this section for eligible AI-related purposes, and for eligible investment funds with AI-related expertise—particularly those seeking to serve or with experience serving underserved communities—to apply for an SBIC license. Sec. 6. Supporting Workers. (a) To advance the Government’s understanding of AI’s implications for workers, the following actions shall be taken within 180 days of the date of this order: (i) The Chairman of the Council of Economic Advisers shall prepare and submit a report to the President on the labor-market effects of AI. (ii) To evaluate necessary steps for the Federal Government to address AI-related workforce disruptions, the Secretary of Labor shall submit to the President a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements. The report shall, at a minimum: (A) assess how current or formerly operational Federal programs designed to assist workers facing job disruptions—including unemployment insurance and programs authorized by the Workforce Innovation and Opportunity Act (Public Law 113–128)—could be used to respond to possible future AI-related disruptions; and (B) identify options, including potential legislative measures, to strengthen or develop additional Federal support for workers displaced by AI and, in consultation with the Secretary of Commerce and the Secretary of Education, strengthen and expand education and training opportunities that provide individuals pathways to occupations related to AI. (b) To help ensure that AI deployed in the workplace advances employees’ well-being: (i) The Secretary of Labor shall, within 180 days of the date of this order and in consultation with other agencies and with outside entities, including labor unions and workers, as the Secretary of Labor deems appropriate, develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. The principles and best practices shall include specific steps for employers to take with regard to AI, and shall cover, at a minimum: (A) job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers; (B) labor standards and job quality, including issues related to the equity, protected-activity, compensation, health, and safety implications of AI in the workplace; and (C) implications for workers of employers’ AI-related collection and use of data about them, including transparency, engagement, management, and activity protected under worker-protection laws. (ii) After principles and best practices are developed pursuant to subsection (b)(i) of this section, the heads of agencies shall consider, in consultation with the Secretary of Labor, encouraging the adoption of these guidelines in their programs to the extent appropriate for each program and consistent with applicable law.
(iii) To support employees whose work is monitored or augmented by AI in being compensated appropriately for all of their work time, the Secretary of Labor shall issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked, as defined under the Fair Labor Standards Act of 1938, 29 U.S.C. 201 et seq., and other legal requirements. (c) To foster a diverse AI-ready workforce, the Director of NSF shall prioritize available resources to support AI-related education and AI-related workforce development through existing programs. The Director shall additionally consult with agencies, as appropriate, to identify further opportunities for agencies to allocate resources for those purposes. The actions by the Director shall use appropriate fellowship programs and awards for these purposes. Sec. 7. Advancing Equity and Civil Rights. 7.1. Strengthening AI and Civil Rights in the Criminal Justice System. (a) To address unlawful discrimination and other harms that may be exacerbated by AI, the Attorney General shall: (i) consistent with Executive Order 12250 of November 2, 1980 (Leadership and Coordination of Nondiscrimination Laws), Executive Order 14091, and 28 CFR 0.50–51, coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI; (ii) direct the Assistant Attorney General in charge of the Civil Rights Division to convene, within 90 days of the date of this order, a meeting of the heads of Federal civil rights offices—for which meeting the heads of civil rights offices within independent regulatory agencies will be encouraged to join—to discuss comprehensive use of their respective authorities and offices to: prevent and address discrimination in the use of automated systems, including algorithmic discrimination; increase coordination between the Department of Justice’s Civil Rights Division and Federal civil rights offices concerning issues related to AI and algorithmic discrimination; improve external stakeholder engagement to promote public awareness of potential discriminatory uses and effects of AI; and develop, as appropriate, additional training, technical assistance, guidance, or other resources; and (iii) consider providing, as appropriate and consistent with applicable law, guidance, technical assistance, and training to State, local, Tribal, and territorial investigators and prosecutors on best practices for investigating and prosecuting civil rights violations and discrimination related to automated systems, including AI.
(b) To promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all, with respect to the use of AI in the criminal justice system, the Attorney General shall, in consultation with the Secretary of Homeland Security and the Director of OSTP: (i) within 365 days of the date of this order, submit to the President a report that addresses the use of AI in the criminal justice system, including any use in: (A) sentencing; (B) parole, supervised release, and probation; (C) bail, pretrial release, and pretrial detention; (D) risk assessments, including pretrial, earned time, and early release or transfer to home-confinement determinations; (E) police surveillance; (F) crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density ‘‘hot spots’’; (G) prison-management tools; and (H) forensic analysis; (ii) within the report set forth in subsection 7.1(b)(i) of this section: (A) identify areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties; and (B) recommend best practices for law enforcement agencies, including safeguards and appropriate use limits for AI, to address the concerns set forth in section 13(e)(i) of Executive Order 14074 as well as the best practices and the guidelines set forth in section 13(e)(iii) of Executive Order 14074; and (iii) supplement the report set forth in subsection 7.1(b)(i) of this section as appropriate with recommendations to the President, including with respect to requests for necessary legislation. (c) To advance the presence of relevant technical experts and expertise (such as machine-learning engineers, software and infrastructure engineering, data privacy experts, data scientists, and user experience researchers) among law enforcement professionals: (i) The interagency working group created pursuant to section 3 of Executive Order 14074 shall, within 180 days of the date of this order, identify and share best practices for recruiting and hiring law enforcement professionals who have the technical skills mentioned in subsection 7.1(c) of this section, and for training law enforcement professionals about responsible application of AI. (ii) Within 270 days of the date of this order, the Attorney General shall, in consultation with the Secretary of Homeland Security, consider those best practices and the guidance developed under section 3(d) of Executive Order 14074 and, if necessary, develop additional general recommendations for State, local, Tribal, and territorial law enforcement agencies and criminal justice agencies seeking to recruit, hire, train, promote, and retain highly qualified and service-oriented officers and staff with relevant technical knowledge. In considering this guidance, the Attorney General shall consult with State, local, Tribal, and territorial law enforcement agencies, as appropriate.
(iii) Within 365 days of the date of this order, the Attorney General shall review the work conducted pursuant to section 2(b) of Executive Order 14074 and, if appropriate, reassess the existing capacity to investigate law enforcement deprivation of rights under color of law resulting from the use of AI, including through improving and increasing training of Federal law enforcement officers, their supervisors, and Federal prosecutors on how to investigate and prosecute cases related to AI involving the deprivation of rights under color of law pursuant to 18 U.S.C. 242. 7.2. Protecting Civil Rights Related to Government Benefits and Programs. (a) To advance equity and civil rights, consistent with the directives of Executive Order 14091, and in addition to complying with the guidance on Federal Government use of AI issued pursuant to section 10.1(b) of this order, agencies shall use their respective civil rights and civil liberties offices and authorities—as appropriate and consistent with applicable law—to prevent and address unlawful discrimination and other harms that result from uses of AI in Federal Government programs and benefits administration. This directive does not apply to agencies’ civil or criminal enforcement authorities. Agencies shall consider opportunities to ensure that their respective civil rights and civil liberties offices are appropriately consulted on agency decisions regarding the design, development, acquisition, and use of AI in Federal Government programs and benefits administration. To further these objectives, agencies shall also consider opportunities to increase coordination, communication, and engagement about AI as appropriate with community-based organizations; civil-rights and civil-liberties organizations; academic institutions; industry; State, local, Tribal, and territorial governments; and other stakeholders. (b) To promote equitable administration of public benefits: (i) The Secretary of HHS shall, within 180 days of the date of this order and in consultation with relevant agencies, publish a plan, informed by the guidance issued pursuant to section 10.1(b) of this order, addressing the use of automated or algorithmic systems in the implementation by States and localities of public benefits and services administered by the Secretary, such as to promote: assessment of access to benefits by qualified recipients; notice to recipients about the presence of such systems; regular evaluation to detect unjust denials; processes to retain appropriate levels of discretion of expert agency staff; processes to appeal denials to human reviewers; and analysis of whether algorithmic systems in use by benefit programs achieve equitable and just outcomes.
(ii) The Secretary of Agriculture shall, within 180 days of the date of this order and as informed by the guidance issued pursuant to section 10.1(b) of this order, issue guidance to State, local, Tribal, and territorial public-benefits administrators on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Secretary, to ensure that programs using those systems: (A) maximize program access for eligible recipients; (B) employ automated or algorithmic systems in a manner consistent with any requirements for using merit systems personnel in public-benefits programs; (C) identify instances in which reliance on automated or algorithmic systems would require notification by the State, local, Tribal, or territorial government to the Secretary; (D) identify instances when applicants and participants can appeal benefit determinations to a human reviewer for reconsideration and can receive other customer support from a human being; (E) enable auditing and, if necessary, remediation of the logic used to arrive at an individual decision or determination to facilitate the evaluation of appeals; and (F) enable the analysis of whether algorithmic systems in use by benefit programs achieve equitable outcomes. 7.3. Strengthening AI and Civil Rights in the Broader Economy. (a) Within 365 days of the date of this order, to prevent unlawful discrimination from AI used for hiring, the Secretary of Labor shall publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems. (b) To address discrimination and biases against protected groups in housing markets and consumer financial markets, the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau are encouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and: (i) evaluate their underwriting models for bias or disparities affecting protected groups; and (ii) evaluate automated collateral-valuation and appraisal processes in ways that minimize bias. (c) Within 180 days of the date of this order, to combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing and in other real estate-related transactions, the Secretary of Housing and Urban Development shall, and the Director of the Consumer Financial Protection Bureau is encouraged to, issue additional guidance: (i) addressing the use of tenant screening systems in ways that may violate the Fair Housing Act (Public Law 90–284), the Fair Credit Reporting Act (Public Law 91–508), or other relevant Federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes in violation of Federal law; and (ii) addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010 (title X of Public Law 111–203), or the Equal Credit Opportunity Act (Public Law 93–495) apply to the advertising of housing,
credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery, as well as on best practices to avoid violations of Federal law. (d) To help ensure that people with disabilities benefit from AI’s promise while being protected from its risks, including unequal treatment from the use of biometric data like gaze direction, eye tracking, gait analysis, and hand motions, the Architectural and Transportation Barriers Compliance Board is encouraged, as it deems appropriate, to solicit public participation and conduct community engagement; to issue technical assistance and recommendations on the risks and benefits of AI in using biometric data as an input; and to provide people with disabilities access to information and communication technology and transportation services. Sec. 8. Protecting Consumers, Patients, Passengers, and Students. (a) Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models. (b) To help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors: (i) Within 90 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an HHS AI Task Force that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks—possibly including regulatory action, as appropriate—on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health), and identify appropriate guidance and resources to promote that deployment, including in the following areas: (A) development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing—including quality measurement, performance improvement, program integrity, benefits administration, and patient experience—taking into account considerations such as appropriate human oversight of the application of AI-generated output; (B) long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users; (C) incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in
existing models, and helping to identify and mitigate discrimination and bias in current systems; (D) incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector; (E) development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings in the health and human services sector; (F) work to be done with State, local, Tribal, and territorial health and human services agencies to advance positive use cases and best practices for use of AI in local settings; and (G) identification of uses of AI to promote workplace efficiency and satisfaction in the health and human services sector, including reducing administrative burdens. (ii) Within 180 days of the date of this order, the Secretary of HHS shall direct HHS components, as the Secretary of HHS deems appropriate, to develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including, as appropriate, in the areas described in subsection (b)(i) of this section. This work shall include the development of AI assurance policy—to evaluate important aspects of the performance of AI-enabled healthcare tools—and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare-technology algorithmic system performance against real-world data. (iii) Within 180 days of the date of this order, the Secretary of HHS shall, in consultation with relevant agencies as the Secretary of HHS deems appropriate, consider appropriate actions to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by health and human services providers that receive Federal financial assistance, as well as how those laws relate to AI. Such actions may include: (A) convening and providing technical assistance to health and human services providers and payers about their obligations under Federal nondiscrimination and privacy laws as they relate to AI and the potential consequences of noncompliance; and (B) issuing guidance, or taking other action as appropriate, in response to any complaints or other reports of noncompliance with Federal nondiscrimination and privacy laws as they relate to AI.
(iv) Within 365 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an AI safety program that, in partnership with voluntary federally listed Patient Safety Organizations: (A) establishes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings as well as specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination, to patients, caregivers, or other parties; (B) analyzes captured data and generated evidence to develop, wherever appropriate, recommendations, best practices, or other informal guidelines aimed at avoiding these harms; and (C) disseminates those recommendations, best practices, or other informal guidance to appropriate stakeholders, including healthcare providers. (v) Within 365 days of the date of this order, the Secretary of HHS shall develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes. The strategy shall, at a minimum: (A) define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development; (B) identify areas where future rulemaking, guidance, or additional statutory authority may be necessary to implement such a regulatory system; (C) identify the existing budget, resources, personnel, and potential for new public/private partnerships necessary for such a regulatory system; and (D) consider risks identified by the actions undertaken to implement section 4 of this order. (c) To promote the safe and responsible development and use of AI in the transportation sector, in consultation with relevant agencies: (i) Within 30 days of the date of this order, the Secretary of Transportation shall direct the Nontraditional and Emerging Transportation Technology (NETT) Council to assess the need for information, technical assistance, and guidance regarding the use of AI in transportation. The Secretary of Transportation shall further direct the NETT Council, as part of any such efforts, to: (A) support existing and future initiatives to pilot transportation-related applications of AI, as they align with policy priorities articulated in the Department of Transportation’s (DOT) Innovation Principles, including, as appropriate, through technical assistance and connecting stakeholders; (B) evaluate the outcomes of such pilot programs in order to assess when DOT, or other Federal or State agencies, have sufficient information to take regulatory actions, as appropriate, and recommend appropriate actions when that information is available; and (C) establish a new DOT Cross-Modal Executive Working Group, which will consist of members from different divisions of DOT and coordinate applicable work among these divisions, to solicit and use relevant input from appropriate stakeholders. (ii) Within 90 days of the date of this order, the Secretary of Transportation shall direct appropriate Federal Advisory Committees of the DOT to provide advice on the safe and responsible use of AI in transportation.
The committees shall include the Advanced Aviation Advisory Committee, the Transforming Transportation Advisory Committee, and the Intelligent Transportation Systems Program Advisory Committee. (iii) Within 180 days of the date of this order, the Secretary of Transportation shall direct the Advanced Research Projects Agency-Infrastructure (ARPA–I) to explore the transportation-related opportunities and challenges of AI—including regarding software-defined AI enhancements impacting autonomous mobility ecosystems. The Secretary of Transportation shall further encourage ARPA–I to prioritize the allocation of grants to those opportunities, as appropriate. The work tasked to ARPA–I shall include soliciting input on these topics through a public consultation process, such as an RFI. (d) To help ensure the responsible development and deployment of AI in the education sector, the Secretary of Education shall, within 365 days of the date of this order, develop resources, policies, and guidance regarding AI. These resources shall address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate. They shall also include the development of an ‘‘AI toolkit’’ for education leaders implementing recommendations from the Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions, designing AI systems to enhance trust and safety and align with privacy-related laws and regulations in the educational context, and developing education-specific guardrails. (e) The Federal Communications Commission is encouraged to consider actions related to how AI will affect communications networks and consumers, including by: (i) examining the potential for AI to improve spectrum management, increase the efficiency of non-Federal spectrum usage, and expand opportunities for the sharing of non-Federal spectrum; (ii) coordinating with the National Telecommunications and Information Administration to create opportunities for sharing spectrum between Federal and non-Federal spectrum operations; (iii) providing support for efforts to improve network security, resiliency, and interoperability using next-generation technologies that incorporate AI, including self-healing networks, 6G, and Open RAN; and (iv) encouraging, including through rulemaking, efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI and to deploy AI technologies that better serve consumers by blocking unwanted robocalls and robotexts. Sec. 9. Protecting Privacy.
(a) To mitigate privacy risks potentially exacerbated by AI—including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals—the Director of OMB shall: (i) evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors, in appropriate agency inventory and reporting processes (other than when it is used for the purposes of national security); (ii) evaluate, in consultation with the Federal Privacy Council and the Interagency Council on Statistical Policy, agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information (other than when it is used for the purposes of national security) to inform potential guidance to agencies on ways to mitigate privacy and confidentiality risks from agencies’ activities related to CAI; (iii) within 180 days of the date of this order, in consultation with the Attorney General, the Assistant to the President for Economic Policy, and the Director of OSTP, issue an RFI to inform potential revisions to guidance to agencies on implementing the privacy provisions of the E-Government Act of 2002 (Public Law 107–347). The RFI shall seek feedback regarding how privacy impact assessments may be more effective at mitigating privacy risks, including those that are further exacerbated by AI; and (iv) take such steps as are necessary and appropriate, consistent with applicable law, to support and advance the near-term actions and long-term strategy identified through the RFI process, including issuing new or updated guidance or RFIs or consulting other agencies or the Federal Privacy Council. (b) Within 365 days of the date of this order, to better enable agencies to use PETs to safeguard Americans’ privacy from the potential threats exacerbated by AI, the Secretary of Commerce, acting through the Director of NIST, shall create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI. The guidelines shall, at a minimum, describe the significant factors that bear on differential-privacy safeguards and common risks to realizing differential privacy in practice. (c) To advance research, development, and implementation related to PETs: (i) Within 120 days of the date of this order, the Director of NSF, in collaboration with the Secretary of Energy, shall fund the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of PETs. The RCN shall serve to enable privacy researchers to share information, coordinate and collaborate in research, and develop standards for the privacy-research community. (ii) Within 240 days of the date of this order, the Director of NSF shall engage with agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations.
The Director of NSF shall, where feasible and appropriate, prioritize research—including efforts to translate research discoveries into practical applications—that encourage the adoption of leading-edge PETs solutions for agencies’ use, including through research engagement through the RCN described in subsection (c)(i) of this section. (iii) The Director of NSF shall use the results of the United States-United Kingdom PETs Prize Challenge to inform the approaches taken, and opportunities identified, for PETs research and adoption. Sec. 10. Advancing Federal Government Use of AI. 10.1. Providing Guidance for AI Management. (a) To coordinate the use of AI across the Federal Government, within 60 days of the date of this order and on an ongoing basis as necessary, the Director of OMB shall convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations, other than the use of AI in national security systems. The Director of OSTP shall serve as Vice Chair for the interagency council. The interagency council’s membership shall include, at minimum, the heads of the agencies identified in 31 U.S.C. 901(b), the Director of National Intelligence, and other agencies as identified by the Chair. Until agencies designate their permanent Chief AI Officers consistent with the guidance described in subsection 10.1(b) of this section, they shall be represented on the interagency council by an appropriate official at the Assistant Secretary level or equivalent, as determined by the head of each agency. (b) To provide guidance on Federal Government use of AI, within 150 days of the date of this order and updated periodically thereafter, the Director of OMB, in coordination with the Director of OSTP, and in consultation with the interagency council established in subsection 10.1(a) of this section, shall issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law: (i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI, and carrying out the responsibilities described in section 8(c) of Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), and section 4(b) of Executive Order 14091; (ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures; (iii) for the agencies identified in 31 U.S.C.
901(b), the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency; (iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI; (v) specific Federal Government uses of AI that are presumed by default to impact rights or safety; (vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes; (vii) requirements that agencies identified in 31 U.S.C. 901(b) develop AI strategies and pursue high-impact AI use cases; (viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding: (A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency; (B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI; (C) reasonable steps to watermark or otherwise label output from generative AI; (D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI; (E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings; (F) documentation and oversight of procured AI; (G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation; (H) provision of incentives for the continuous improvement of procured AI; and (I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and (ix) requirements for public reporting on compliance with this guidance. (c) To track agencies’ AI progress, within 60 days of the issuance of the guidance established in subsection 10.1(b) of this section and updated periodically thereafter, the Director of OMB shall develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI.
(c) To track agencies' AI progress, within 60 days of the issuance of the guidance established in subsection 10.1(b) of this section and updated periodically thereafter, the Director of OMB shall develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI. This method should draw on existing related efforts as appropriate and should address, as appropriate and consistent with applicable law, the practices, processes, and capabilities necessary for responsible AI adoption, training, and governance across, at a minimum, the areas of information technology infrastructure, data, workforce, leadership, and risk management.

(d) To assist agencies in implementing the guidance to be established in subsection 10.1(b) of this section:

(i) within 90 days of the issuance of the guidance, the Secretary of Commerce, acting through the Director of NIST, and in coordination with the Director of OMB and the Director of OSTP, shall develop guidelines, tools, and practices to support implementation of the minimum risk-management practices described in subsection 10.1(b)(iv) of this section; and

(ii) within 180 days of the issuance of the guidance, the Director of OMB shall develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance described in subsection 10.1(b) of this section and advance the other aims identified in section 7224(d)(1) of the Advancing American AI Act (Public Law 117–263, div. G, title LXXII, subtitle B).

(e) To improve transparency for agencies' use of AI, the Director of OMB shall, on an annual basis, issue instructions to agencies for the collection, reporting, and publication of agency AI use cases, pursuant to section 7225(a) of the Advancing American AI Act. Through these instructions, the Director shall, as appropriate, expand agencies' reporting on how they are managing risks from their AI use cases and update or replace the guidance originally established in section 5 of Executive Order 13960.

(f) To advance the responsible and secure use of generative AI in the Federal Government:

(i) As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI. Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans' rights. To protect Federal Government information, agencies are also encouraged to employ risk-management practices, such as training their staff on proper use, protection, dissemination, and disposition of Federal information; negotiating appropriate terms of service with vendors; implementing measures designed to ensure compliance with record-keeping, cybersecurity, confidentiality, privacy, and data protection requirements; and deploying other measures to prevent misuse of Federal Government information in generative AI.
(ii) Within 90 days of the date of this order, the Administrator of General Services, in coordination with the Director of OMB, and in consultation with the Federal Secure Cloud Advisory Committee and other relevant agencies as the Administrator of General Services may deem appropriate, shall develop and issue a framework for prioritizing critical and emerging technologies offerings in the Federal Risk and Authorization Management Program authorization process, starting with generative AI offerings that have the primary purpose of providing large language model-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators. This framework shall apply for no less than 2 years from the date of its issuance. Agency Chief Information Officers, Chief Information Security Officers, and authorizing officials are also encouraged to prioritize generative AI and other critical and emerging technologies in granting authorities for agency operation of information technology systems and any other applicable release or oversight processes, using continuous authorizations and approvals wherever feasible.

(iii) Within 180 days of the date of this order, the Director of the Office of Personnel Management (OPM), in coordination with the Director of OMB, shall develop guidance on the use of generative AI for work by the Federal workforce.

(g) Within 30 days of the date of this order, to increase agency investment in AI, the Technology Modernization Board shall consider, as it deems appropriate and consistent with applicable law, prioritizing funding for AI projects for the Technology Modernization Fund for a period of at least 1 year. Agencies are encouraged to submit to the Technology Modernization Fund project funding proposals that include AI—and particularly generative AI—in service of mission delivery.

(h) Within 180 days of the date of this order, to facilitate agencies' access to commercial AI capabilities, the Administrator of General Services, in coordination with the Director of OMB, and in collaboration with the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of the National Aeronautics and Space Administration, and the head of any other agency identified by the Administrator of General Services, shall take steps consistent with applicable law to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce. Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.

(i) The initial means, instructions, and guidance issued pursuant to subsections 10.1(a)–(h) of this section shall not apply to AI when it is used as a component of a national security system, which shall be addressed by the proposed National Security Memorandum described in subsection 4.8 of this order.

10.2. Increasing AI Talent in Government.
(a) Within 45 days of the date of this order, to plan a national surge in AI talent in the Federal Government, the Director of OSTP and the Director of OMB, in consultation with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Assistant to the President and Domestic Policy Advisor, and the Assistant to the President and Director of the Gender Policy Council, shall identify priority mission areas for increased Federal Government AI talent, the types of talent that are highest priority to recruit and develop to ensure adequate implementation of this order and use of relevant enforcement and regulatory authorities to address AI risks, and accelerated hiring pathways.

(b) Within 45 days of the date of this order, to coordinate rapid advances in the capacity of the Federal AI workforce, the Assistant to the President and Deputy Chief of Staff for Policy, in coordination with the Director of OSTP and the Director of OMB, and in consultation with the National Cyber Director, shall convene an AI and Technology Talent Task Force, which shall include the Director of OPM, the Director of the General Services Administration's Technology Transformation Services, a representative from the Chief Human Capital Officers Council, the Assistant to the President for Presidential Personnel, members of appropriate agency technology talent programs, a representative of the Chief Data Officer Council, and a representative of the interagency council convened under subsection 10.1(a) of this section. The Task Force's purpose shall be to accelerate and track the hiring of AI and AI-enabling talent across the Federal Government, including through the following actions:

(i) within 180 days of the date of this order, tracking and reporting progress to the President on increasing AI capacity across the Federal Government, including submitting to the President a report and recommendations for further increasing capacity;

(ii) identifying and circulating best practices for agencies to attract, hire, retain, train, and empower AI talent, including diversity, inclusion, and accessibility best practices, as well as to plan and budget adequately for AI workforce needs;

(iii) coordinating, in consultation with the Director of OPM, the use of fellowship programs and agency technology-talent programs and human-capital teams to build hiring capabilities, execute hires, and place AI talent to fill staffing gaps; and

(iv) convening a cross-agency forum for ongoing collaboration between AI professionals to share best practices and improve retention.

(c) Within 45 days of the date of this order, to advance existing Federal technology talent programs, the United States Digital Service, Presidential Innovation Fellowship, United States Digital Corps, OPM, and technology talent programs at agencies, with support from the AI and Technology Talent Task Force described in subsection 10.2(b) of this section, as appropriate and permitted by law, shall develop and begin to implement plans to support the rapid recruitment of individuals as part of a Federal Government-wide AI talent surge to accelerate the placement of key AI and AI-enabling talent in high-priority areas and to advance agencies' data and technology strategies.
(d) To meet the critical hiring need for qualified personnel to execute the initiatives in this order, and to improve Federal hiring practices for AI talent, the Director of OPM, in consultation with the Director of OMB, shall:

(i) within 60 days of the date of this order, conduct an evidence-based review on the need for hiring and workplace flexibility, including Federal Government-wide direct-hire authority for AI and related data-science and technical roles, and, where the Director of OPM finds such authority is appropriate, grant it; this review shall include the following job series at all General Schedule (GS) levels: IT Specialist (2210), Computer Scientist (1550), Computer Engineer (0854), and Program Analyst (0343) focused on AI, and any subsequently developed job series derived from these job series;

(ii) within 60 days of the date of this order, consider authorizing the use of excepted service appointments under 5 CFR 213.3102(i)(3) to address the need for hiring additional staff to implement directives of this order;

(iii) within 90 days of the date of this order, coordinate a pooled-hiring action informed by subject-matter experts and using skills-based assessments to support the recruitment of AI talent across agencies;

(iv) within 120 days of the date of this order, as appropriate and permitted by law, issue guidance for agency application of existing pay flexibilities or incentive pay programs for AI, AI-enabling, and other key technical positions to facilitate appropriate use of current pay incentives;

(v) within 180 days of the date of this order, establish guidance and policy on skills-based, Federal Government-wide hiring of AI, data, and technology talent in order to increase access to those with nontraditional academic backgrounds to Federal AI, data, and technology roles;

(vi) within 180 days of the date of this order, establish an interagency working group, staffed with both human-resources professionals and recruiting technical experts, to facilitate Federal Government-wide hiring of people with AI and other technical skills;

(vii) within 180 days of the date of this order, review existing Executive Core Qualifications (ECQs) for Senior Executive Service (SES) positions informed by data and AI literacy competencies and, within 365 days of the date of this order, implement new ECQs as appropriate in the SES assessment process;

(viii) within 180 days of the date of this order, complete a review of competencies for civil engineers (GS–0810 series) and, if applicable, other related occupations, and make recommendations for ensuring that adequate AI expertise and credentials in these occupations in the Federal Government reflect the increased use of AI in critical infrastructure; and

(ix) work with the Security, Suitability, and Credentialing Performance Accountability Council to assess mechanisms to streamline and accelerate personnel-vetting requirements, as appropriate, to support AI and fields related to other critical and emerging technologies.
(e) To expand the use of special authorities for AI hiring and retention, agencies shall use all appropriate hiring authorities, including Schedule A(r) excepted service hiring and direct-hire authority, as applicable and appropriate, to hire AI talent and AI-enabling talent rapidly. In addition to participating in OPM-led pooled hiring actions, agencies shall collaborate, where appropriate, on agency-led pooled hiring under the Competitive Service Act of 2015 (Public Law 114–137) and other shared hiring. Agencies shall also, where applicable, use existing incentives, pay-setting authorities, and other compensation flexibilities, similar to those used for cyber and information technology positions, for AI and data-science professionals, as well as plain-language job titles, to help recruit and retain these highly skilled professionals. Agencies shall ensure that AI and other related talent needs (such as technology governance and privacy) are reflected in strategic workforce planning and budget formulation.

(f) To facilitate the hiring of data scientists, the Chief Data Officer Council shall develop a position-description library for data scientists (job series 1560) and a hiring guide to support agencies in hiring data scientists.

(g) To help train the Federal workforce on AI issues, the head of each agency shall implement—or increase the availability and use of—AI training and familiarization programs for employees, managers, and leadership in technology as well as relevant policy, managerial, procurement, regulatory, ethical, governance, and legal fields. Such training programs should, for example, empower Federal employees, managers, and leaders to develop and maintain an operating knowledge of emerging AI technologies to assess opportunities to use these technologies to enhance the delivery of services to the public, and to mitigate risks associated with these technologies. Agencies that provide professional-development opportunities, grants, or funds for their staff should take appropriate steps to ensure that employees who do not serve in traditional technical roles, such as policy, managerial, procurement, or legal fields, are nonetheless eligible to receive funding for programs and courses that focus on AI, machine learning, data science, or other related subject areas.

(h) Within 180 days of the date of this order, to address gaps in AI talent for national defense, the Secretary of Defense shall submit a report to the President through the Assistant to the President for National Security Affairs that includes:

(i) recommendations to address challenges in the Department of Defense's ability to hire certain noncitizens, including at the Science and Technology Reinvention Laboratories;

(ii) recommendations to clarify and streamline processes for accessing classified information for certain noncitizens through Limited Access Authorization at Department of Defense laboratories;
(iii) recommendations for the appropriate use of enlistment authority under 10 U.S.C. 504(b)(2) for experts in AI and other critical and emerging technologies; and

(iv) recommendations for the Department of Defense and the Department of Homeland Security to work together to enhance the use of appropriate authorities for the retention of certain noncitizens of vital importance to national security by the Department of Defense and the Department of Homeland Security.

Sec. 11. Strengthening American Leadership Abroad. (a) To strengthen United States leadership of global efforts to unlock AI's potential and meet its challenges, the Secretary of State, in coordination with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Director of OSTP, and the heads of other relevant agencies as appropriate, shall:

(i) lead efforts outside of military and intelligence areas to expand engagements with international allies and partners in relevant bilateral, multilateral, and multi-stakeholder fora to advance those allies' and partners' understanding of existing and planned AI-related guidance and policies of the United States, as well as to enhance international collaboration; and

(ii) lead efforts to establish a strong international framework for managing the risks and harnessing the benefits of AI, including by encouraging international allies and partners to support voluntary commitments similar to those that United States companies have made in pursuit of these objectives and coordinating the activities directed by subsections (b), (c), (d), and (e) of this section, and to develop common regulatory and other accountability principles for foreign nations, including to manage the risk that AI systems pose.

(b) To advance responsible global technical standards for AI development and use outside of military and intelligence areas, the Secretary of Commerce, in coordination with the Secretary of State and the heads of other relevant agencies as appropriate, shall lead preparations for a coordinated effort with key international allies and partners and with standards development organizations, to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing. In particular, the Secretary of Commerce shall:

(i) within 270 days of the date of this order, establish a plan for global engagement on promoting and developing AI standards, with lines of effort that may include:

(A) AI nomenclature and terminology;
(B) best practices regarding data capture, processing, protection, privacy, confidentiality, handling, and analysis;
(C) trustworthiness, verification, and assurance of AI systems; and
(D) AI risk management;

(ii) within 180 days of the date the plan is established, submit a report to the President on priority actions taken pursuant to the plan; and

(iii) ensure that such efforts are guided by principles set out in the NIST AI Risk Management Framework and United States Government National Standards Strategy for Critical and Emerging Technology.
(c) Within 365 days of the date of this order, to promote safe, responsible, and rights-affirming development and deployment of AI abroad:

(i) The Secretary of State and the Administrator of the United States Agency for International Development, in coordination with the Secretary of Commerce, acting through the Director of NIST, shall publish an AI in Global Development Playbook that incorporates the AI Risk Management Framework's principles, guidelines, and best practices into the social, technical, economic, governance, human rights, and security conditions of contexts beyond United States borders. As part of this work, the Secretary of State and the Administrator of the United States Agency for International Development shall draw on lessons learned from programmatic uses of AI in global development.

(ii) The Secretary of State and the Administrator of the United States Agency for International Development, in collaboration with the Secretary of Energy and the Director of NSF, shall develop a Global AI Research Agenda to guide the objectives and implementation of AI-related research in contexts beyond United States borders. The Agenda shall:

(A) include principles, guidelines, priorities, and best practices aimed at ensuring the safe, responsible, beneficial, and sustainable global development and adoption of AI; and
(B) address AI's labor-market implications across international contexts, including by recommending risk mitigations.

(d) To address cross-border and global AI risks to critical infrastructure, the Secretary of Homeland Security, in coordination with the Secretary of State, and in consultation with the heads of other relevant agencies as the Secretary of Homeland Security deems appropriate, shall lead efforts with international allies and partners to enhance cooperation to prevent, respond to, and recover from potential critical infrastructure disruptions resulting from incorporation of AI into critical infrastructure systems or malicious use of AI.

(i) Within 270 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of State, shall develop a plan for multilateral engagements to encourage the adoption of the AI safety and security guidelines for use by critical infrastructure owners and operators developed in section 4.3(a) of this order.

(ii) Within 180 days of establishing the plan described in subsection (d)(i) of this section, the Secretary of Homeland Security shall submit a report to the President on priority actions to mitigate cross-border risks to critical United States infrastructure.

Sec. 12. Implementation. (a) There is established, within the Executive Office of the President, the White House Artificial Intelligence Council (White House AI Council). The function of the White House AI Council is to coordinate the activities of agencies across the Federal Government to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies, including policies set forth in this order.

(b) The Assistant to the President and Deputy Chief of Staff for Policy shall serve as Chair of the White House AI Council.
(c) In addition to the Chair, the White House AI Council shall consist of the following members, or their designees:

(i) the Secretary of State;
(ii) the Secretary of the Treasury;
(iii) the Secretary of Defense;
(iv) the Attorney General;
(v) the Secretary of Agriculture;
(vi) the Secretary of Commerce;
(vii) the Secretary of Labor;
(viii) the Secretary of HHS;
(ix) the Secretary of Housing and Urban Development;
(x) the Secretary of Transportation;
(xi) the Secretary of Energy;
(xii) the Secretary of Education;
(xiii) the Secretary of Veterans Affairs;
(xiv) the Secretary of Homeland Security;
(xv) the Administrator of the Small Business Administration;
(xvi) the Administrator of the United States Agency for International Development;
(xvii) the Director of National Intelligence;
(xviii) the Director of NSF;
(xix) the Director of OMB;
(xx) the Director of OSTP;
(xxi) the Assistant to the President for National Security Affairs;
(xxii) the Assistant to the President for Economic Policy;
(xxiii) the Assistant to the President and Domestic Policy Advisor;
(xxiv) the Assistant to the President and Chief of Staff to the Vice President;
(xxv) the Assistant to the President and Director of the Gender Policy Council;
(xxvi) the Chairman of the Council of Economic Advisers;
(xxvii) the National Cyber Director;
(xxviii) the Chairman of the Joint Chiefs of Staff; and
(xxix) the heads of such other agencies, independent regulatory agencies, and executive offices as the Chair may from time to time designate or invite to participate.

(d) The Chair may create and coordinate subgroups consisting of White House AI Council members or their designees, as appropriate.

Sec. 13. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

THE WHITE HOUSE, October 30, 2023.

[FR Doc. 2023–24283 Filed 10–31–23; 11:15 am]

Billing code 3395–F4–P

                    (e) The interests of Americans who increasingly 
                use, interact with, or purchase AI and AI-enabled 
                products in their daily lives must be protected. Use of 
                new technologies, such as AI, does not excuse 
                organizations from their legal obligations, and hard-
                won consumer protections are more important than ever 
                in moments of technological change. The Federal 
                Government will enforce existing consumer protection 
                laws and principles and enact appropriate safeguards 
                against fraud, unintended bias, discrimination, 
                infringements on privacy, and other harms from AI. Such 
                protections are

[[Page 75193]]

                especially important in critical fields like 
                healthcare, financial services, education, housing, 
                law, and transportation, where mistakes by or misuse of 
                AI could harm patients, cost consumers or small 
                businesses, or jeopardize safety or rights. At the same 
                time, my Administration will promote responsible uses 
                of AI that protect consumers, raise the quality of 
                goods and services, lower their prices, or expand 
                selection and availability.
                    (f) Americans' privacy and civil liberties must be 
                protected as AI continues advancing. Artificial 
                Intelligence is making it easier to extract, re-
                identify, link, infer, and act on sensitive information 
                about people's identities, locations, habits, and 
                desires. Artificial Intelligence's capabilities in 
                these areas can increase the risk that personal data 
                could be exploited and exposed. To combat this risk, 
                the Federal Government will ensure that the collection, 
                use, and retention of data is lawful, is secure, and 
                mitigates privacy and confidentiality risks. Agencies 
                shall use available policy and technical tools, 
                including privacy-enhancing technologies (PETs) where 
                appropriate, to protect privacy and to combat the 
                broader legal and societal risks--including the 
                chilling of First Amendment rights--that result from 
                the improper collection and use of people's data.
                    (g) It is important to manage the risks from the 
                Federal Government's own use of AI and increase its 
                internal capacity to regulate, govern, and support 
                responsible use of AI to deliver better results for 
                Americans. These efforts start with people, our 
                Nation's greatest asset. My Administration will take 
                steps to attract, retain, and develop public service-
                oriented AI professionals, including from underserved 
                communities, across disciplines--including technology, 
                policy, managerial, procurement, regulatory, ethical, 
                governance, and legal fields--and ease AI 
                professionals' path into the Federal Government to help 
                harness and govern AI. The Federal Government will work 
                to ensure that all members of its workforce receive 
                adequate training to understand the benefits, risks, 
                and limitations of AI for their job functions, and to 
                modernize Federal Government information technology 
                infrastructure, remove bureaucratic obstacles, and 
                ensure that safe and rights-respecting AI is adopted, 
                deployed, and used.
                    (h) The Federal Government should lead the way to 
                global societal, economic, and technological progress, 
                as the United States has in previous eras of disruptive 
                innovation and change. This leadership is not measured 
                solely by the technological advancements our country 
                makes. Effective leadership also means pioneering those 
                systems and safeguards needed to deploy technology 
                responsibly--and building and promoting those 
                safeguards with the rest of the world. My 
                Administration will engage with international allies 
                and partners in developing a framework to manage AI's 
                risks, unlock AI's potential for good, and promote 
                common approaches to shared challenges. The Federal 
                Government will seek to promote responsible AI safety 
                and security principles and actions with other nations, 
                including our competitors, while leading key global 
                conversations and collaborations to ensure that AI 
                benefits the whole world, rather than exacerbating 
                inequities, threatening human rights, and causing other 
                harms.

                Sec. 3. Definitions. For purposes of this order:

                    (a) The term ``agency'' means each agency described 
                in 44 U.S.C. 3502(1), except for the independent 
                regulatory agencies described in 44 U.S.C. 3502(5).
                    (b) The term ``artificial intelligence'' or ``AI'' 
                has the meaning set forth in 15 U.S.C. 9401(3): a 
                machine-based system that can, for a given set of 
                human-defined objectives, make predictions, 
                recommendations, or decisions influencing real or 
                virtual environments. Artificial intelligence systems 
                use machine- and human-based inputs to perceive real 
                and virtual environments; abstract such perceptions 
                into models through analysis in an automated manner; 
                and use model inference to formulate options for 
                information or action.
                    (c) The term ``AI model'' means a component of an 
                information system that implements AI technology and 
                uses computational, statistical, or machine-learning 
                techniques to produce outputs from a given set of 
                inputs.

[[Page 75194]]

                    (d) The term ``AI red-teaming'' means a structured 
                testing effort to find flaws and vulnerabilities in an 
                AI system, often in a controlled environment and in 
                collaboration with developers of AI. Artificial 
                Intelligence red-teaming is most often performed by 
                dedicated ``red teams'' that adopt adversarial methods 
                to identify flaws and vulnerabilities, such as harmful 
                or discriminatory outputs from an AI system, unforeseen 
                or undesirable system behaviors, limitations, or 
                potential risks associated with the misuse of the 
                system.
                    (e) The term ``AI system'' means any data system, 
                software, hardware, application, tool, or utility that 
                operates in whole or in part using AI.
                    (f) The term ``commercially available information'' 
                means any information or data about an individual or 
                group of individuals, including an individual's or 
                group of individuals' device or location, that is made 
                available or obtainable and sold, leased, or licensed 
                to the general public or to governmental or non-
                governmental entities.
                    (g) The term ``crime forecasting'' means the use of 
                analytical techniques to attempt to predict future 
                crimes or crime-related information. It can include 
                machine-generated predictions that use algorithms to 
                analyze large volumes of data, as well as other 
                forecasts that are generated without machines and based 
                on statistics, such as historical crime statistics.
                    (h) The term ``critical and emerging technologies'' 
                means those technologies listed in the February 2022 
                Critical and Emerging Technologies List Update issued 
                by the National Science and Technology Council (NSTC), 
                as amended by subsequent updates to the list issued by 
                the NSTC.
                    (i) The term ``critical infrastructure'' has the 
                meaning set forth in section 1016(e) of the USA PATRIOT 
                Act of 2001, 42 U.S.C. 5195c(e).
                    (j) The term ``differential-privacy guarantee'' 
                means protections that allow information about a group 
                to be shared while provably limiting the improper 
                access, use, or disclosure of personal information 
                about particular entities.
                    (k) The term ``dual-use foundation model'' means an 
                AI model that is trained on broad data; generally uses 
                self-supervision; contains at least tens of billions of 
                parameters; is applicable across a wide range of 
                contexts; and that exhibits, or could be easily 
                modified to exhibit, high levels of performance at 
                tasks that pose a serious risk to security, national 
                economic security, national public health or safety, or 
                any combination of those matters, such as by:

(i) substantially lowering the barrier of entry for non-experts to design, 
synthesize, acquire, or use chemical, biological, radiological, or nuclear 
(CBRN) weapons;

(ii) enabling powerful offensive cyber operations through automated 
vulnerability discovery and exploitation against a wide range of potential 
targets of cyber attacks; or

(iii) permitting the evasion of human control or oversight through means of 
deception or obfuscation.

                Models meet this definition even if they are provided 
                to end users with technical safeguards that attempt to 
                prevent users from taking advantage of the relevant 
                unsafe capabilities.

                    (l) The term ``Federal law enforcement agency'' has 
                the meaning set forth in section 21(a) of Executive 
                Order 14074 of May 25, 2022 (Advancing Effective, 
                Accountable Policing and Criminal Justice Practices To 
                Enhance Public Trust and Public Safety).
                    (m) The term ``floating-point operation'' means any 
                mathematical operation or assignment involving 
                floating-point numbers, which are a subset of the real 
                numbers typically represented on computers by an 
                integer of fixed precision scaled by an integer 
                exponent of a fixed base.
                    (n) The term ``foreign person'' has the meaning set 
                forth in section 5(c) of Executive Order 13984 of 
                January 19, 2021 (Taking Additional Steps

[[Page 75195]]

                To Address the National Emergency With Respect to 
                Significant Malicious Cyber-Enabled Activities).
                    (o) The terms ``foreign reseller'' and ``foreign 
                reseller of United States Infrastructure as a Service 
                Products'' mean a foreign person who has established an 
                Infrastructure as a Service Account to provide 
                Infrastructure as a Service Products subsequently, in 
                whole or in part, to a third party.
                    (p) The term ``generative AI'' means the class of 
                AI models that emulate the structure and 
                characteristics of input data in order to generate 
                derived synthetic content. This can include images, 
                videos, audio, text, and other digital content.
                    (q) The terms ``Infrastructure as a Service 
                Product,'' ``United States Infrastructure as a Service 
                Product,'' ``United States Infrastructure as a Service 
                Provider,'' and ``Infrastructure as a Service Account'' 
                each have the respective meanings given to those terms 
                in section 5 of Executive Order 13984.
                    (r) The term ``integer operation'' means any 
                mathematical operation or assignment involving only 
                integers, or whole numbers expressed without a decimal 
                point.
                    (s) The term ``Intelligence Community'' has the 
                meaning given to that term in section 3.5(h) of 
                Executive Order 12333 of December 4, 1981 (United 
                States Intelligence Activities), as amended.
                    (t) The term ``machine learning'' means a set of 
                techniques that can be used to train AI algorithms to 
                improve performance at a task based on data.
                    (u) The term ``model weight'' means a numerical 
                parameter within an AI model that helps determine the 
                model's outputs in response to inputs.
                    (v) The term ``national security system'' has the 
                meaning set forth in 44 U.S.C. 3552(b)(6).
                    (w) The term ``omics'' means biomolecules, 
                including nucleic acids, proteins, and metabolites, 
                that make up a cell or cellular system.
                    (x) The term ``Open RAN'' means the Open Radio 
                Access Network approach to telecommunications-network 
                standardization adopted by the O-RAN Alliance, Third 
                Generation Partnership Project, or any similar set of 
                published open standards for multi-vendor network 
                equipment interoperability.
                    (y) The term ``personally identifiable 
                information'' has the meaning set forth in Office of 
                Management and Budget (OMB) Circular No. A-130.
                    (z) The term ``privacy-enhancing technology'' means 
                any software or hardware solution, technical process, 
                technique, or other technological means of mitigating 
                privacy risks arising from data processing, including 
                by enhancing predictability, manageability, 
                disassociability, storage, security, and 
                confidentiality. These technological means may include 
                secure multiparty computation, homomorphic encryption, 
                zero-knowledge proofs, federated learning, secure 
                enclaves, differential privacy, and synthetic-data-
                generation tools. This is also sometimes referred to as 
                ``privacy-preserving technology.''
                    (aa) The term ``privacy impact assessment'' has the 
                meaning set forth in OMB Circular No. A-130.
                    (bb) The term ``Sector Risk Management Agency'' has 
                the meaning set forth in 6 U.S.C. 650(23).
                    (cc) The term ``self-healing network'' means a 
                telecommunications network that automatically diagnoses 
                and addresses network issues to permit self-
                restoration.
                    (dd) The term ``synthetic biology'' means a field 
                of science that involves redesigning organisms, or the 
                biomolecules of organisms, at the genetic level to give 
                them new characteristics. Synthetic nucleic acids are a 
                type of biomolecule redesigned through synthetic-
                biology methods.

[[Page 75196]]

                    (ee) The term ``synthetic content'' means 
                information, such as images, videos, audio clips, and 
                text, that has been significantly modified or generated 
                by algorithms, including by AI.
                    (ff) The term ``testbed'' means a facility or 
                mechanism equipped for conducting rigorous, 
                transparent, and replicable testing of tools and 
                technologies, including AI and PETs, to help evaluate 
                the functionality, usability, and performance of those 
                tools or technologies.
                    (gg) The term ``watermarking'' means the act of 
                embedding information, which is typically difficult to 
                remove, into outputs created by AI--including into 
                outputs such as photos, videos, audio clips, or text--
                for the purposes of verifying the authenticity of the 
                output or the identity or characteristics of its 
                provenance, modifications, or conveyance.

                Sec. 4. Ensuring the Safety and Security of AI 
                Technology.

                4.1. Developing Guidelines, Standards, and Best 
                Practices for AI Safety and Security. (a) Within 270 
                days of the date of this order, to help ensure the 
                development of safe, secure, and trustworthy AI 
                systems, the Secretary of Commerce, acting through the 
                Director of the National Institute of Standards and 
                Technology (NIST), in coordination with the Secretary 
                of Energy, the Secretary of Homeland Security, and the 
                heads of other relevant agencies as the Secretary of 
                Commerce may deem appropriate, shall:

(i) Establish guidelines and best practices, with the aim of promoting 
consensus industry standards, for developing and deploying safe, secure, 
and trustworthy AI systems, including:

  (A) developing a companion resource to the AI Risk Management Framework, 
NIST AI 100-1, for generative AI;

  (B) developing a companion resource to the Secure Software Development 
Framework to incorporate secure development practices for generative AI and 
for dual-use foundation models; and

  (C) launching an initiative to create guidance and benchmarks for 
evaluating and auditing AI capabilities, with a focus on capabilities 
through which AI could cause harm, such as in the areas of cybersecurity 
and biosecurity.

(ii) Establish appropriate guidelines (except for AI used as a component of 
a national security system), including appropriate procedures and 
processes, to enable developers of AI, especially of dual-use foundation 
models, to conduct AI red-teaming tests to enable deployment of safe, 
secure, and trustworthy systems. These efforts shall include:

  (A) coordinating or developing guidelines related to assessing and 
managing the safety, security, and trustworthiness of dual-use foundation 
models; and

  (B) in coordination with the Secretary of Energy and the Director of the 
National Science Foundation (NSF), developing and helping to ensure the 
availability of testing environments, such as testbeds, to support the 
development of safe, secure, and trustworthy AI technologies, as well as to 
support the design, development, and deployment of associated PETs, 
consistent with section 9(b) of this order.

                    (b) Within 270 days of the date of this order, to 
                understand and mitigate AI security risks, the 
                Secretary of Energy, in coordination with the heads of 
                other Sector Risk Management Agencies (SRMAs) as the 
                Secretary of Energy may deem appropriate, shall develop 
                and, to the extent permitted by law and available 
                appropriations, implement a plan for developing the 
                Department of Energy's AI model evaluation tools and AI 
                testbeds. The Secretary shall undertake this work using 
                existing solutions where possible, and shall develop 
                these tools and AI testbeds to be capable of assessing 
                near-term extrapolations of AI systems' capabilities. 
                At a minimum, the Secretary shall develop tools to 
                evaluate AI capabilities to generate outputs that may 
                represent nuclear, nonproliferation, biological, 
                chemical, critical infrastructure, and energy-security 
                threats or hazards. The Secretary shall do this work
                solely for the purposes of guarding
                against these threats, and shall also develop model 
                guardrails that reduce such risks. The Secretary shall, 
                as appropriate, consult with private AI laboratories, 
                academia, civil society, and third-party evaluators, 
                and shall use existing solutions.

                4.2. Ensuring Safe and Reliable AI. (a) Within 90 days 
                of the date of this order, to ensure and verify the 
                continuous availability of safe, reliable, and 
                effective AI in accordance with the Defense Production 
                Act, as amended, 50 U.S.C. 4501 et seq., including for 
                the national defense and the protection of critical 
                infrastructure, the Secretary of Commerce shall 
                require:

(i) Companies developing or demonstrating an intent to develop potential 
dual-use foundation models to provide the Federal Government, on an ongoing 
basis, with information, reports, or records regarding the following:

  (A) any ongoing or planned activities related to training, developing, or 
producing dual-use foundation models, including the physical and 
cybersecurity protections taken to assure the integrity of that training 
process against sophisticated threats;

  (B) the ownership and possession of the model weights of any dual-use 
foundation models, and the physical and cybersecurity measures taken to 
protect those model weights; and

  (C) the results of any developed dual-use foundation model's performance 
in relevant AI red-team testing based on guidance developed by NIST 
pursuant to subsection 4.1(a)(ii) of this section, and a description of any 
associated measures the company has taken to meet safety objectives, such 
as mitigations to improve performance on these red-team tests and 
strengthen overall model security. Prior to the development of guidance on 
red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of 
this section, this description shall include the results of any red-team 
testing that the company has conducted relating to lowering the barrier to 
entry for the development, acquisition, and use of biological weapons by 
non-state actors; the discovery of software vulnerabilities and development 
of associated exploits; the use of software or tools to influence real or 
virtual events; the possibility for self-replication or propagation; and 
associated measures to meet safety objectives; and

(ii) Companies, individuals, or other organizations or entities that 
acquire, develop, or possess a potential large-scale computing cluster to 
report any such acquisition, development, or possession, including the 
existence and location of these clusters and the amount of total computing 
power available in each cluster.

                    (b) The Secretary of Commerce, in consultation with 
                the Secretary of State, the Secretary of Defense, the 
                Secretary of Energy, and the Director of National 
                Intelligence, shall define, and thereafter update as 
                needed on a regular basis, the set of technical 
                conditions for models and computing clusters that would 
                be subject to the reporting requirements of subsection 
                4.2(a) of this section. Until such technical conditions 
                are defined, the Secretary shall require compliance 
                with these reporting requirements for:

(i) any model that was trained using a quantity of computing power greater 
than 10^26 integer or floating-point operations, or using primarily 
biological sequence data and using a quantity of computing power greater 
than 10^23 integer or floating-point operations; and

(ii) any computing cluster that has a set of machines physically co-located 
in a single datacenter, transitively connected by data center networking of 
over 100 Gbit/s, and having a theoretical maximum computing capacity of 
10^20 integer or floating-point operations per second for training AI.
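
As a rough, unofficial reading of the interim thresholds in subsections 4.2(b)(i) and (ii), the following Python sketch encodes them as simple checks. The class and function names are hypothetical, and treating the cluster capacity clause ("a theoretical maximum computing capacity of 10^20 ... operations per second") as a greater-than-or-equal test is an assumption, not an interpretation of the order.

```python
# Unofficial sketch of the interim reporting thresholds in subsection 4.2(b).
from dataclasses import dataclass

MODEL_OPS_THRESHOLD = 1e26       # total training operations, general models
BIO_MODEL_OPS_THRESHOLD = 1e23   # models trained primarily on biological sequence data
CLUSTER_OPS_PER_SECOND = 1e20    # theoretical maximum ops/sec for training AI
CLUSTER_NETWORK_GBITS = 100      # data center networking, Gbit/s

@dataclass
class ModelRun:
    training_ops: float
    primarily_biological_data: bool = False

@dataclass
class Cluster:
    max_ops_per_second: float
    network_gbit_per_second: float
    single_datacenter: bool

def model_reporting_required(run: ModelRun) -> bool:
    """Subsection 4.2(b)(i): interim threshold for model reporting."""
    threshold = (BIO_MODEL_OPS_THRESHOLD if run.primarily_biological_data
                 else MODEL_OPS_THRESHOLD)
    return run.training_ops > threshold

def cluster_reporting_required(cluster: Cluster) -> bool:
    """Subsection 4.2(b)(ii): interim threshold for computing-cluster reporting."""
    return (cluster.single_datacenter
            and cluster.network_gbit_per_second > CLUSTER_NETWORK_GBITS
            and cluster.max_ops_per_second >= CLUSTER_OPS_PER_SECOND)

# Example: a hypothetical run using 3e26 operations would trigger reporting.
assert model_reporting_required(ModelRun(training_ops=3e26))
```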

                    (c) Because I find that additional steps must be 
                taken to deal with the national emergency related to 
                significant malicious cyber-enabled activities declared 
                in Executive Order 13694 of April 1, 2015 (Blocking the 
                Property of Certain Persons Engaging in Significant 
                Malicious Cyber-Enabled Activities), as amended by 
                Executive Order 13757 of December 28, 2016 (Taking
                Additional Steps to Address the National Emergency With
                Respect to Significant Malicious Cyber-Enabled 
                Activities), and further amended by Executive Order 
                13984, to address the use of United States 
                Infrastructure as a Service (IaaS) Products by foreign 
                malicious cyber actors, including to impose additional 
                record-keeping obligations with respect to foreign 
                transactions and to assist in the investigation of 
                transactions involving foreign malicious cyber actors, 
                I hereby direct the Secretary of Commerce, within 90 
                days of the date of this order, to:

(i) Propose regulations that require United States IaaS Providers to submit 
a report to the Secretary of Commerce when a foreign person transacts with 
that United States IaaS Provider to train a large AI model with potential 
capabilities that could be used in malicious cyber-enabled activity (a 
``training run''). Such reports shall include, at a minimum, the identity 
of the foreign person and the existence of any training run of an AI model 
meeting the criteria set forth in this section, or other criteria defined 
by the Secretary in regulations, as well as any additional information 
identified by the Secretary.

(ii) Include a requirement in the regulations proposed pursuant to 
subsection 4.2(c)(i) of this section that United States IaaS Providers 
prohibit any foreign reseller of their United States IaaS Product from 
providing those products unless such foreign reseller submits to the United 
States IaaS Provider a report, which the United States IaaS Provider must 
provide to the Secretary of Commerce, detailing each instance in which a 
foreign person transacts with the foreign reseller to use the United States 
IaaS Product to conduct a training run described in subsection 4.2(c)(i) of 
this section. Such reports shall include, at a minimum, the information 
specified in subsection 4.2(c)(i) of this section as well as any additional 
information identified by the Secretary.

(iii) Determine the set of technical conditions for a large AI model to 
have potential capabilities that could be used in malicious cyber-enabled 
activity, and revise that determination as necessary and appropriate. Until 
the Secretary makes such a determination, a model shall be considered to 
have potential capabilities that could be used in malicious cyber-enabled 
activity if it requires a quantity of computing power greater than 10^26 
integer or floating-point operations and is trained on a computing cluster 
that has a set of machines physically co-located in a single datacenter, 
transitively connected by data center networking of over 100 Gbit/s, and 
having a theoretical maximum compute capacity of 10^20 integer or 
floating-point operations per second for training AI.
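
Read as a conjunction of the two conditions it lists, the interim criterion in subsection 4.2(c)(iii) can be sketched as below. This is an unofficial illustration with hypothetical parameter names, not a statement of the regulatory test.

```python
# Unofficial sketch of the interim criterion in subsection 4.2(c)(iii).
def presumed_cyber_capable(training_ops: float,
                           cluster_max_ops_per_second: float,
                           cluster_network_gbit_per_second: float,
                           single_datacenter: bool) -> bool:
    # Condition 1: more than 10^26 integer or floating-point training operations.
    exceeds_training_compute = training_ops > 1e26
    # Condition 2: trained on a qualifying cluster (single datacenter,
    # >100 Gbit/s networking, ~10^20 ops/s theoretical maximum for training AI).
    qualifying_cluster = (single_datacenter
                          and cluster_network_gbit_per_second > 100
                          and cluster_max_ops_per_second >= 1e20)
    return exceeds_training_compute and qualifying_cluster
```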

                    (d) Within 180 days of the date of this order, 
                pursuant to the finding set forth in subsection 4.2(c) 
                of this section, the Secretary of Commerce shall 
                propose regulations that require United States IaaS 
                Providers to ensure that foreign resellers of United 
                States IaaS Products verify the identity of any foreign 
                person that obtains an IaaS account (account) from the 
                foreign reseller. These regulations shall, at a 
                minimum:

(i) Set forth the minimum standards that a United States IaaS Provider must 
require of foreign resellers of its United States IaaS Products to verify 
the identity of a foreign person who opens an account or maintains an 
existing account with a foreign reseller, including:

  (A) the types of documentation and procedures that foreign resellers of 
United States IaaS Products must require to verify the identity of any 
foreign person acting as a lessee or sub-lessee of these products or 
services;

  (B) records that foreign resellers of United States IaaS Products must 
securely maintain regarding a foreign person that obtains an account, 
including information establishing:

(1) the identity of such foreign person, including name and address;

(2) the means and source of payment (including any associated financial 
institution and other identifiers such as credit card number, account 
number, customer identifier, transaction identifiers, or virtual currency 
wallet or wallet address identifier);

(3) the electronic mail address and telephonic contact information used to 
verify a foreign person's identity; and

(4) the Internet Protocol addresses used for access or administration and 
the date and time of each such access or administrative action related to 
ongoing verification of such foreign person's ownership of such an account; 
and

  (C) methods that foreign resellers of United States IaaS Products must 
implement to limit all third-party access to the information described in 
this subsection, except insofar as such access is otherwise consistent with 
this order and allowed under applicable law;

(ii) Take into consideration the types of accounts maintained by foreign 
resellers of United States IaaS Products, methods of opening an account, 
and types of identifying information available to accomplish the objectives 
of identifying foreign malicious cyber actors using any such products and 
avoiding the imposition of an undue burden on such resellers; and

(iii) Provide that the Secretary of Commerce, in accordance with such 
standards and procedures as the Secretary may delineate and in consultation 
with the Secretary of Defense, the Attorney General, the Secretary of 
Homeland Security, and the Director of National Intelligence, may exempt a 
United States IaaS Provider with respect to any specific foreign reseller 
of their United States IaaS Products, or with respect to any specific type 
of account or lessee, from the requirements of any regulation issued 
pursuant to this subsection. Such standards and procedures may include a 
finding by the Secretary that such foreign reseller, account, or lessee 
complies with security best practices to otherwise deter abuse of United 
States IaaS Products.

                    (e) The Secretary of Commerce is hereby authorized 
                to take such actions, including the promulgation of 
                rules and regulations, and to employ all powers granted 
                to the President by the International Emergency 
                Economic Powers Act, 50 U.S.C. 1701 et seq., as may be 
                necessary to carry out the purposes of subsections 
                4.2(c) and (d) of this section. Such actions may 
                include a requirement that United States IaaS Providers 
                require foreign resellers of United States IaaS 
                Products to provide United States IaaS Providers 
                verifications relative to those subsections.

                4.3. Managing AI in Critical Infrastructure and in 
                Cybersecurity. (a) To ensure the protection of critical 
                infrastructure, the following actions shall be taken:

(i) Within 90 days of the date of this order, and at least annually 
thereafter, the head of each agency with relevant regulatory authority over 
critical infrastructure and the heads of relevant SRMAs, in coordination 
with the Director of the Cybersecurity and Infrastructure Security Agency 
within the Department of Homeland Security for consideration of cross-
sector risks, shall evaluate and provide to the Secretary of Homeland 
Security an assessment of potential risks related to the use of AI in 
critical infrastructure sectors involved, including ways in which deploying 
AI may make critical infrastructure systems more vulnerable to critical 
failures, physical attacks, and cyber attacks, and shall consider ways to 
mitigate these vulnerabilities. Independent regulatory agencies are 
encouraged, as they deem appropriate, to contribute to sector-specific risk 
assessments.

(ii) Within 150 days of the date of this order, the Secretary of the 
Treasury shall issue a public report on best practices for financial 
institutions to manage AI-specific cybersecurity risks.

(iii) Within 180 days of the date of this order, the Secretary of Homeland 
Security, in coordination with the Secretary of Commerce and with SRMAs and 
other regulators as determined by the Secretary of Homeland Security, shall 
incorporate as appropriate the AI Risk Management Framework, NIST 
AI 100-1, as well as other appropriate security guidance, into relevant 
safety and security guidelines for use by critical infrastructure owners 
and operators.

(iv) Within 240 days of the completion of the guidelines described in 
subsection 4.3(a)(iii) of this section, the Assistant to the President for 
National Security Affairs and the Director of OMB, in consultation with the 
Secretary of Homeland Security, shall coordinate work by the heads of 
agencies with authority over critical infrastructure to develop and take 
steps for the Federal Government to mandate such guidelines, or appropriate 
portions thereof, through regulatory or other appropriate action. 
Independent regulatory agencies are encouraged, as they deem appropriate, 
to consider whether to mandate guidance through regulatory action in their 
areas of authority and responsibility.

(v) The Secretary of Homeland Security shall establish an Artificial 
Intelligence Safety and Security Board as an advisory committee pursuant to 
section 871 of the Homeland Security Act of 2002 (Public Law 107-296). The 
Advisory Committee shall include AI experts from the private sector, 
academia, and government, as appropriate, and provide to the Secretary of 
Homeland Security and the Federal Government's critical infrastructure 
community advice, information, or recommendations for improving security, 
resilience, and incident response related to AI usage in critical 
infrastructure.

                    (b) To capitalize on AI's potential to improve 
                United States cyber defenses:

(i) The Secretary of Defense shall carry out the actions described in 
subsections 4.3(b)(ii) and (iii) of this section for national security 
systems, and the Secretary of Homeland Security shall carry out these 
actions for non-national security systems. Each shall do so in consultation 
with the heads of other relevant agencies as the Secretary of Defense and 
the Secretary of Homeland Security may deem appropriate.

(ii) As set forth in subsection 4.3(b)(i) of this section, within 180 days 
of the date of this order, the Secretary of Defense and the Secretary of 
Homeland Security shall, consistent with applicable law, each develop plans 
for, conduct, and complete an operational pilot project to identify, 
develop, test, evaluate, and deploy AI capabilities, such as large-language 
models, to aid in the discovery and remediation of vulnerabilities in 
critical United States Government software, systems, and networks.

(iii) As set forth in subsection 4.3(b)(i) of this section, within 270 days 
of the date of this order, the Secretary of Defense and the Secretary of 
Homeland Security shall each provide a report to the Assistant to the 
President for National Security Affairs on the results of actions taken 
pursuant to the plans and operational pilot projects required by subsection 
4.3(b)(ii) of this section, including a description of any vulnerabilities 
found and fixed through the development and deployment of AI capabilities 
and any lessons learned on how to identify, develop, test, evaluate, and 
deploy AI capabilities effectively for cyber defense.

                4.4. Reducing Risks at the Intersection of AI and CBRN 
                Threats. (a) To better understand and mitigate the risk 
                of AI being misused to assist in the development or use 
                of CBRN threats--with a particular focus on biological 
                weapons--the following actions shall be taken:

(i) Within 180 days of the date of this order, the Secretary of Homeland 
Security, in consultation with the Secretary of Energy and the Director of 
the Office of Science and Technology Policy (OSTP), shall evaluate the 
potential for AI to be misused to enable the development or production of 
CBRN threats, while also considering the benefits and application of AI to 
counter these threats, including, as appropriate, the results of work 
conducted under section 8(b) of this order. The Secretary of Homeland 
Security shall:

  (A) consult with experts in AI and CBRN issues from the Department of 
Energy, private AI laboratories, academia, and third-party model 
evaluators, as appropriate, to evaluate AI model capabilities to present 
CBRN threats--for the sole purpose of guarding against those threats--as well as 
options for minimizing the risks of AI model misuse to generate or 
exacerbate those threats; and

  (B) submit a report to the President that describes the progress of these 
efforts, including an assessment of the types of AI models that may present 
CBRN risks to the United States, and that makes recommendations for 
regulating or overseeing the training, deployment, publication, or use of 
these models, including requirements for safety evaluations and guardrails 
for mitigating potential threats to national security.

(ii) Within 120 days of the date of this order, the Secretary of Defense, 
in consultation with the Assistant to the President for National Security 
Affairs and the Director of OSTP, shall enter into a contract with the 
National Academies of Sciences, Engineering, and Medicine to conduct--and 
submit to the Secretary of Defense, the Assistant to the President for 
National Security Affairs, the Director of the Office of Pandemic 
Preparedness and Response Policy, the Director of OSTP, and the Chair of 
the Chief Data Officer Council--a study that:

  (A) assesses the ways in which AI can increase biosecurity risks, 
including risks from generative AI models trained on biological data, and 
makes recommendations on how to mitigate these risks;

  (B) considers the national security implications of the use of data and 
datasets, especially those associated with pathogens and omics studies, 
that the United States Government hosts, generates, funds the creation of, 
or otherwise owns, for the training of generative AI models, and makes 
recommendations on how to mitigate the risks related to the use of these 
data and datasets;

  (C) assesses the ways in which AI applied to biology can be used to 
reduce biosecurity risks, including recommendations on opportunities to 
coordinate data and high-performance computing resources; and

  (D) considers additional concerns and opportunities at the intersection 
of AI and synthetic biology that the Secretary of Defense deems 
appropriate.

                    (b) To reduce the risk of misuse of synthetic 
                nucleic acids, which could be substantially increased 
                by AI's capabilities in this area, and improve 
                biosecurity measures for the nucleic acid synthesis 
                industry, the following actions shall be taken:

(i) Within 180 days of the date of this order, the Director of OSTP, in 
consultation with the Secretary of State, the Secretary of Defense, the 
Attorney General, the Secretary of Commerce, the Secretary of Health and 
Human Services (HHS), the Secretary of Energy, the Secretary of Homeland 
Security, the Director of National Intelligence, and the heads of other 
relevant agencies as the Director of OSTP may deem appropriate, shall 
establish a framework, incorporating, as appropriate, existing United 
States Government guidance, to encourage providers of synthetic nucleic 
acid sequences to implement comprehensive, scalable, and verifiable 
synthetic nucleic acid procurement screening mechanisms, including 
standards and recommended incentives. As part of this framework, the 
Director of OSTP shall:

  (A) establish criteria and mechanisms for ongoing identification of 
biological sequences that could be used in a manner that would pose a risk 
to the national security of the United States; and

  (B) determine standardized methodologies and tools for conducting and 
verifying the performance of sequence synthesis procurement screening, 
including customer screening approaches to support due diligence with 
respect to managing security risks posed by purchasers of biological 
sequences identified in subsection 4.4(b)(i)(A) of this section, and 
processes for the reporting of concerning activity to enforcement entities.

(ii) Within 180 days of the date of this order, the Secretary of Commerce, 
acting through the Director of NIST, in coordination with the Director 
of OSTP, and in consultation with the Secretary of State, the Secretary of 
HHS, and the heads of other relevant agencies as the Secretary of Commerce 
may deem appropriate, shall initiate an effort to engage with industry and 
relevant stakeholders, informed by the framework developed under subsection 
4.4(b)(i) of this section, to develop and refine for possible use by 
synthetic nucleic acid sequence providers:

  (A) specifications for effective nucleic acid synthesis procurement 
screening;

  (B) best practices, including security and access controls, for managing 
sequence-of-concern databases to support such screening;

  (C) technical implementation guides for effective screening; and

  (D) conformity-assessment best practices and mechanisms.

(iii) Within 180 days of the establishment of the framework pursuant to 
subsection 4.4(b)(i) of this section, all agencies that fund life-sciences 
research shall, as appropriate and consistent with applicable law, 
establish that, as a requirement of funding, synthetic nucleic acid 
procurement is conducted through providers or manufacturers that adhere to 
the framework, such as through an attestation from the provider or 
manufacturer. The Assistant to the President for National Security Affairs 
and the Director of OSTP shall coordinate the process of reviewing such 
funding requirements to facilitate consistency in implementation of the 
framework across funding agencies.

(iv) In order to facilitate effective implementation of the measures 
described in subsections 4.4(b)(i)-(iii) of this section, the Secretary of 
Homeland Security, in consultation with the heads of other relevant 
agencies as the Secretary of Homeland Security may deem appropriate, shall:

  (A) within 180 days of the establishment of the framework pursuant to 
subsection 4.4(b)(i) of this section, develop a framework to conduct 
structured evaluation and stress testing of nucleic acid synthesis 
procurement screening, including the systems developed in accordance with 
subsections 4.4(b)(i)-(ii) of this section and implemented by providers of 
synthetic nucleic acid sequences; and

  (B) following development of the framework pursuant to subsection 
4.4(b)(iv)(A) of this section, submit an annual report to the Assistant to 
the President for National Security Affairs, the Director of the Office of 
Pandemic Preparedness and Response Policy, and the Director of OSTP on any 
results of the activities conducted pursuant to subsection 4.4(b)(iv)(A) of 
this section, including recommendations, if any, on how to strengthen 
nucleic acid synthesis procurement screening, including customer screening 
systems.

                4.5. Reducing the Risks Posed by Synthetic Content. To 
                foster capabilities for identifying and labeling 
                synthetic content produced by AI systems, and to 
                establish the authenticity and provenance of digital 
                content, both synthetic and not synthetic, produced by 
                the Federal Government or on its behalf:

                    (a) Within 240 days of the date of this order, the 
                Secretary of Commerce, in consultation with the heads 
                of other relevant agencies as the Secretary of Commerce 
                may deem appropriate, shall submit a report to the 
                Director of OMB and the Assistant to the President for 
                National Security Affairs identifying the existing 
                standards, tools, methods, and practices, as well as 
                the potential development of further science-backed 
                standards and techniques, for:

(i) authenticating content and tracking its provenance;

(ii) labeling synthetic content, such as using watermarking;

(iii) detecting synthetic content;

(iv) preventing generative AI from producing child sexual abuse material or 
producing non-consensual intimate imagery of real individuals (to include 
intimate digital depictions of the body or body parts of an identifiable 
individual);

(v) testing software used for the above purposes; and

(vi) auditing and maintaining synthetic content.

                    (b) Within 180 days of submitting the report 
                required under subsection 4.5(a) of this section, and 
                updated periodically thereafter, the Secretary of 
                Commerce, in coordination with the Director of OMB, 
                shall develop guidance regarding the existing tools and 
                practices for digital content authentication and 
                synthetic content detection measures. The guidance 
                shall include measures for the purposes listed in 
                subsection 4.5(a) of this section.
                    (c) Within 180 days of the development of the 
                guidance required under subsection 4.5(b) of this 
                section, and updated periodically thereafter, the 
                Director of OMB, in consultation with the Secretary of 
                State; the Secretary of Defense; the Attorney General; 
                the Secretary of Commerce, acting through the Director 
                of NIST; the Secretary of Homeland Security; the 
                Director of National Intelligence; and the heads of 
                other agencies that the Director of OMB deems 
                appropriate, shall--for the purpose of strengthening 
                public confidence in the integrity of official United 
                States Government digital content--issue guidance to 
                agencies for labeling and authenticating such content 
                that they produce or publish.
                    (d) The Federal Acquisition Regulatory Council 
                shall, as appropriate and consistent with applicable 
                law, consider amending the Federal Acquisition 
                Regulation to take into account the guidance 
                established under subsection 4.5 of this section.

                4.6. Soliciting Input on Dual-Use Foundation Models 
                with Widely Available Model Weights. When the weights 
                for a dual-use foundation model are widely available--
                such as when they are publicly posted on the internet--
                there can be substantial benefits to innovation, but 
                also substantial security risks, such as the removal of 
                safeguards within the model. To address the risks and 
                potential benefits of dual-use foundation models with 
                widely available weights, within 270 days of the date 
                of this order, the Secretary of Commerce, acting 
                through the Assistant Secretary of Commerce for 
                Communications and Information, and in consultation 
                with the Secretary of State, shall:

                    (a) solicit input from the private sector, 
                academia, civil society, and other stakeholders through 
                a public consultation process on potential risks, 
                benefits, other implications, and appropriate policy 
                and regulatory approaches related to dual-use 
                foundation models for which the model weights are 
                widely available, including:

(i) risks associated with actors fine-tuning dual-use foundation models for 
which the model weights are widely available or removing those models' 
safeguards;

(ii) benefits to AI innovation and research, including research into AI 
safety and risk management, of dual-use foundation models for which the 
model weights are widely available; and

(iii) potential voluntary, regulatory, and international mechanisms to 
manage the risks and maximize the benefits of dual-use foundation models 
for which the model weights are widely available; and

                    (b) based on input from the process described in 
                subsection 4.6(a) of this section, and in consultation 
                with the heads of other relevant agencies as the 
                Secretary of Commerce deems appropriate, submit a 
                report to the President on the potential benefits, 
                risks, and implications of dual-use foundation models 
                for which the model weights are widely available, as 
                well as policy and regulatory recommendations 
                pertaining to those models.

                4.7. Promoting Safe Release and Preventing the 
                Malicious Use of Federal Data for AI Training. To 
                improve public data access and manage security risks, 
                and consistent with the objectives of the Open, Public, 
                Electronic, and Necessary Government Data Act (title II of Public
                Law 115-435) to expand public access to Federal data 
                assets in a machine-readable format while also taking 
                into account security considerations, including the 
                risk that information in an individual data asset in 
                isolation does not pose a security risk but, when 
                combined with other available information, may pose 
                such a risk:

                    (a) within 270 days of the date of this order, the 
                Chief Data Officer Council, in consultation with the 
                Secretary of Defense, the Secretary of Commerce, the 
                Secretary of Energy, the Secretary of Homeland 
                Security, and the Director of National Intelligence, 
                shall develop initial guidelines for performing 
                security reviews, including reviews to identify and 
                manage the potential security risks of releasing 
                Federal data that could aid in the development of CBRN 
                weapons as well as the development of autonomous 
                offensive cyber capabilities, while also providing 
                public access to Federal Government data in line with 
                the goals stated in the Open, Public, Electronic, and 
                Necessary Government Data Act (title II of Public Law 
                115-435); and
                    (b) within 180 days of the development of the 
                initial guidelines required by subsection 4.7(a) of 
                this section, agencies shall conduct a security review 
                of all data assets in the comprehensive data inventory 
                required under 44 U.S.C. 3511(a)(1) and (2)(B) and 
                shall take steps, as appropriate and consistent with 
                applicable law, to address the highest-priority 
                potential security risks that releasing that data could 
                raise with respect to CBRN weapons, such as the ways in 
                which that data could be used to train AI systems.

                4.8. Directing the Development of a National Security 
                Memorandum. To develop a coordinated executive branch 
                approach to managing AI's security risks, the Assistant 
                to the President for National Security Affairs and the 
                Assistant to the President and Deputy Chief of Staff 
                for Policy shall oversee an interagency process with 
                the purpose of, within 270 days of the date of this 
                order, developing and submitting a proposed National 
                Security Memorandum on AI to the President. The 
                memorandum shall address the governance of AI used as a 
                component of a national security system or for military 
                and intelligence purposes. The memorandum shall take 
                into account current efforts to govern the development 
                and use of AI for national security systems. The 
                memorandum shall outline actions for the Department of 
                Defense, the Department of State, other relevant 
                agencies, and the Intelligence Community to address the 
                national security risks and potential benefits posed by 
                AI. In particular, the memorandum shall:

                    (a) provide guidance to the Department of Defense, 
                other relevant agencies, and the Intelligence Community 
                on the continued adoption of AI capabilities to advance 
                the United States national security mission, including 
                through directing specific AI assurance and risk-
                management practices for national security uses of AI 
                that may affect the rights or safety of United States 
                persons and, in appropriate contexts, non-United States 
                persons; and
                    (b) direct continued actions, as appropriate and 
                consistent with applicable law, to address the 
                potential use of AI systems by adversaries and other 
                foreign actors in ways that threaten the capabilities 
                or objectives of the Department of Defense or the 
                Intelligence Community, or that otherwise pose risks to 
                the security of the United States or its allies and 
                partners.

                Sec. 5. Promoting Innovation and Competition.

                5.1. Attracting AI Talent to the United States. (a) 
                Within 90 days of the date of this order, to attract 
                and retain talent in AI and other critical and emerging 
                technologies in the United States economy, the 
                Secretary of State and the Secretary of Homeland 
                Security shall take appropriate steps to:

(i) streamline processing times of visa petitions and applications, 
including by ensuring timely availability of visa appointments, for 
noncitizens who seek to travel to the United States to work on, study, or 
conduct research in AI or other critical and emerging technologies; and

(ii) facilitate continued availability of visa appointments in sufficient 
volume for applicants with expertise in AI or other critical and emerging 
technologies.

                    (b) Within 120 days of the date of this order, the 
                Secretary of State shall:

(i) consider initiating a rulemaking to establish new criteria to designate 
countries and skills on the Department of State's Exchange Visitor Skills 
List as it relates to the 2-year foreign residence requirement for certain 
J-1 nonimmigrants, including those skills that are critical to the United 
States;

(ii) consider publishing updates to the 2009 Revised Exchange Visitor 
Skills List (74 FR 20108); and

(iii) consider implementing a domestic visa renewal program under 22 CFR 
41.111(b) to facilitate the ability of qualified applicants, including 
highly skilled talent in AI and critical and emerging technologies, to 
continue their work in the United States without unnecessary interruption.

                    (c) Within 180 days of the date of this order, the 
                Secretary of State shall:

(i) consider initiating a rulemaking to expand the categories of 
nonimmigrants who qualify for the domestic visa renewal program covered 
under 22 CFR 41.111(b) to include academic J-1 research scholars and F-1 
students in science, technology, engineering, and mathematics (STEM); and

(ii) establish, to the extent permitted by law and available 
appropriations, a program to identify and attract top talent in AI and 
other critical and emerging technologies at universities, research 
institutions, and the private sector overseas, and to establish and 
increase connections with that talent to educate them on opportunities and 
resources for research and employment in the United States, including 
overseas educational components to inform top STEM talent of nonimmigrant 
and immigrant visa options and potential expedited adjudication of their 
visa petitions and applications.

                    (d) Within 180 days of the date of this order, the 
                Secretary of Homeland Security shall:

(i) review and initiate any policy changes the Secretary determines 
necessary and appropriate to clarify and modernize immigration pathways for 
experts in AI and other critical and emerging technologies, including O-1A 
and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders 
and noncitizens of exceptional ability; and startup founders in AI and 
other critical and emerging technologies using the International 
Entrepreneur Rule; and

(ii) continue its rulemaking process to modernize the H-1B program and 
enhance its integrity and usage, including by experts in AI and other 
critical and emerging technologies, and consider initiating a rulemaking to 
enhance the process for noncitizens, including experts in AI and other 
critical and emerging technologies and their spouses, dependents, and 
children, to adjust their status to lawful permanent resident.

                    (e) Within 45 days of the date of this order, for 
                purposes of considering updates to the ``Schedule A'' 
                list of occupations, 20 CFR 656.5, the Secretary of 
                Labor shall publish a request for information (RFI) to 
                solicit public input, including from industry and 
                worker-advocate communities, identifying AI and other 
                STEM-related occupations, as well as additional 
                occupations across the economy, for which there is an 
                insufficient number of ready, willing, able, and 
                qualified United States workers.
                    (f) The Secretary of State and the Secretary of 
                Homeland Security shall, consistent with applicable law 
                and implementing regulations, use their discretionary 
                authorities to support and attract foreign nationals 
                with special skills in AI and other critical and 
                emerging technologies seeking to work, study, or 
                conduct research in the United States.

                    (g) Within 120 days of the date of this order, the 
                Secretary of Homeland Security, in consultation with 
                the Secretary of State, the Secretary of Commerce, and 
                the Director of OSTP, shall develop and publish 
                informational resources to better attract and retain 
                experts in AI and other critical and emerging 
                technologies, including:

(i) a clear and comprehensive guide for experts in AI and other critical 
and emerging technologies to understand their options for working in the 
United States, to be published in multiple relevant languages on AI.gov; 
and

(ii) a public report with relevant data on applications, petitions, 
approvals, and other key indicators of how experts in AI and other critical 
and emerging technologies have utilized the immigration system through the 
end of Fiscal Year 2023.

                5.2. Promoting Innovation. (a) To develop and 
                strengthen public-private partnerships for advancing 
                innovation, commercialization, and risk-mitigation 
                methods for AI, and to help promote safe, responsible, 
                fair, privacy-protecting, and trustworthy AI systems, 
                the Director of NSF shall take the following steps:

(i) Within 90 days of the date of this order, in coordination with the 
heads of agencies that the Director of NSF deems appropriate, launch a 
pilot program implementing the National AI Research Resource (NAIRR), 
consistent with past recommendations of the NAIRR Task Force. The program 
shall pursue the infrastructure, governance mechanisms, and user interfaces 
to pilot an initial integration of distributed computational, data, model, 
and training resources to be made available to the research community in 
support of AI-related research and development. The Director of NSF shall 
identify Federal and private sector computational, data, software, and 
training resources appropriate for inclusion in the NAIRR pilot program. To 
assist with such work, within 45 days of the date of this order, the heads 
of agencies whom the Director of NSF identifies for coordination pursuant 
to this subsection shall each submit to the Director of NSF a report 
identifying the agency resources that could be developed and integrated 
into such a pilot program. These reports shall include a description of 
such resources, including their current status and availability; their 
format, structure, or technical specifications; associated agency expertise 
that will be provided; and the benefits and risks associated with their 
inclusion in the NAIRR pilot program. The heads of independent regulatory 
agencies are encouraged to take similar steps, as they deem appropriate.

(ii) Within 150 days of the date of this order, fund and launch at least 
one NSF Regional Innovation Engine that prioritizes AI-related work, such 
as AI-related research, societal, or workforce needs.

(iii) Within 540 days of the date of this order, establish at least four 
new National AI Research Institutes, in addition to the 25 currently funded 
as of the date of this order.

                    (b) Within 120 days of the date of this order, to 
                support activities involving high-performance and data-
                intensive computing, the Secretary of Energy, in 
                coordination with the Director of NSF, shall, in a 
                manner consistent with applicable law and available 
                appropriations, establish a pilot program to enhance 
                existing successful training programs for scientists, 
                with the goal of training 500 new researchers by 2025 
                capable of meeting the rising demand for AI talent.
                    (c) To promote innovation and clarify issues 
                related to AI and inventorship of patentable subject 
                matter, the Under Secretary of Commerce for 
                Intellectual Property and Director of the United States 
                Patent and Trademark Office (USPTO Director) shall:

(i) within 120 days of the date of this order, publish guidance to USPTO 
patent examiners and applicants addressing inventorship and the use of AI, 
including generative AI, in the inventive process, including illustrative 
examples in which AI systems play different roles in inventive processes 
and how, in each example, inventorship issues ought to be analyzed;

(ii) subsequently, within 270 days of the date of this order, issue 
additional guidance to USPTO patent examiners and applicants to address 
other considerations at the intersection of AI and IP, which could include, 
as the USPTO Director deems necessary, updated guidance on patent 
eligibility to address innovation in AI and critical and emerging 
technologies; and

(iii) within 270 days of the date of this order or 180 days after the 
United States Copyright Office of the Library of Congress publishes its 
forthcoming AI study that will address copyright issues raised by AI, 
whichever comes later, consult with the Director of the United States 
Copyright Office and issue recommendations to the President on potential 
executive actions relating to copyright and AI. The recommendations shall 
address any copyright and related issues discussed in the United States 
Copyright Office's study, including the scope of protection for works 
produced using AI and the treatment of copyrighted works in AI training.

                    (d) Within 180 days of the date of this order, to 
                assist developers of AI in combatting AI-related IP 
                risks, the Secretary of Homeland Security, acting 
                through the Director of the National Intellectual 
                Property Rights Coordination Center, and in 
                consultation with the Attorney General, shall develop a 
                training, analysis, and evaluation program to mitigate 
                AI-related IP risks. Such a program shall:

(i) include appropriate personnel dedicated to collecting and analyzing 
reports of AI-related IP theft, investigating such incidents with 
implications for national security, and, where appropriate and consistent 
with applicable law, pursuing related enforcement actions;

(ii) implement a policy of sharing information and coordinating on such 
work, as appropriate and consistent with applicable law, with the Federal 
Bureau of Investigation; United States Customs and Border Protection; other 
agencies; State and local agencies; and appropriate international 
organizations, including through work-sharing agreements;

(iii) develop guidance and other appropriate resources to assist private 
sector actors with mitigating the risks of AI-related IP theft;

(iv) share information and best practices with AI developers and law 
enforcement personnel to identify incidents, inform stakeholders of current 
legal requirements, and evaluate AI systems for IP law violations, as well 
as develop mitigation strategies and resources; and

(v) assist the Intellectual Property Enforcement Coordinator in updating 
the Intellectual Property Enforcement Coordinator Joint Strategic Plan on 
Intellectual Property Enforcement to address AI-related issues.

                    (e) To advance responsible AI innovation by a wide 
                range of healthcare technology developers that promotes 
                the welfare of patients and workers in the healthcare 
                sector, the Secretary of HHS shall identify and, as 
                appropriate and consistent with applicable law and the 
                activities directed in section 8 of this order, 
                prioritize grantmaking and other awards, as well as 
                undertake related efforts, to support responsible AI 
                development and use, including:

(i) collaborating with appropriate private sector actors through HHS 
programs that may support the advancement of AI-enabled tools that develop 
personalized immune-response profiles for patients, consistent with section 
4 of this order;

(ii) prioritizing the allocation of 2024 Leading Edge Acceleration Project 
cooperative agreement awards to initiatives that explore ways to improve 
healthcare-data quality to support the responsible development of AI tools 
for clinical care, real-world-evidence programs, population health, public 
health, and related research; and

(iii) accelerating grants awarded through the National Institutes of Health 
Artificial Intelligence/Machine Learning Consortium to Advance Health 
Equity and Researcher Diversity (AIM-AHEAD) program and showcasing current 
AIM-AHEAD activities in underserved communities.

                    (f) To advance the development of AI systems that 
                improve the quality of veterans' healthcare, and in 
                order to support small businesses' innovative capacity, 
                the Secretary of Veterans Affairs shall:

(i) within 365 days of the date of this order, host two 3-month nationwide 
AI Tech Sprint competitions; and

(ii) as part of the AI Tech Sprint competitions and in collaboration with 
appropriate partners, provide participants access to technical assistance, 
mentorship opportunities, individualized expert feedback on products under 
development, potential contract opportunities, and other programming and 
resources.

                    (g) Within 180 days of the date of this order, to 
                support the goal of strengthening our Nation's 
                resilience against climate change impacts and building 
                an equitable clean energy economy for the future, the 
                Secretary of Energy, in consultation with the Chair of 
                the Federal Energy Regulatory Commission, the Director 
                of OSTP, the Chair of the Council on Environmental 
                Quality, the Assistant to the President and National 
                Climate Advisor, and the heads of other relevant 
                agencies as the Secretary of Energy may deem 
                appropriate, shall:

(i) issue a public report describing the potential for AI to improve 
planning, permitting, investment, and operations for electric grid 
infrastructure and to enable the provision of clean, affordable, reliable, 
resilient, and secure electric power to all Americans;

(ii) develop tools that facilitate building foundation models useful for 
basic and applied science, including models that streamline permitting and 
environmental reviews while improving environmental and social outcomes;

(iii) collaborate, as appropriate, with private sector organizations and 
members of academia to support development of AI tools to mitigate climate 
change risks;

(iv) take steps to expand partnerships with industry, academia, other 
agencies, and international allies and partners to utilize the Department 
of Energy's computing capabilities and AI testbeds to build foundation 
models that support new applications in science and energy, and for 
national security, including partnerships that increase community 
preparedness for climate-related risks, enable clean-energy deployment 
(including addressing delays in permitting reviews), and enhance grid 
reliability and resilience; and

(v) establish an office to coordinate development of AI and other critical 
and emerging technologies across Department of Energy programs and the 17 
National Laboratories.

                    (h) Within 180 days of the date of this order, to 
                understand AI's implications for scientific research, 
                the President's Council of Advisors on Science and 
                Technology shall submit to the President and make 
                publicly available a report on the potential role of 
                AI, especially given recent developments in AI, in 
                research aimed at tackling major societal and global 
                challenges. The report shall include a discussion of 
                issues that may hinder the effective use of AI in 
                research and practices needed to ensure that AI is used 
                responsibly for research.

                5.3. Promoting Competition. (a) The head of each agency 
                developing policies and regulations related to AI shall 
                use their authorities, as appropriate and consistent 
                with applicable law, to promote competition in AI and 
                related technologies, as well as in other markets. Such 
                actions include addressing risks arising from 
                concentrated control of key inputs, taking steps to 
                stop unlawful collusion and prevent dominant firms from 
                disadvantaging competitors, and working to provide new 
                opportunities for small businesses and entrepreneurs. In
                particular, the Federal Trade
                Commission is encouraged to consider, as it deems 
                appropriate, whether to exercise the Commission's 
                existing authorities, including its rulemaking 
                authority under the Federal Trade Commission Act, 15 
                U.S.C. 41 et seq., to ensure fair competition in the AI 
                marketplace and to ensure that consumers and workers 
                are protected from harms that may be enabled by the use 
                of AI.

                    (b) To promote competition and innovation in the 
                semiconductor industry, recognizing that semiconductors 
                power AI technologies and that their availability is 
                critical to AI competition, the Secretary of Commerce 
                shall, in implementing division A of Public Law 117-
                167, known as the Creating Helpful Incentives to 
                Produce Semiconductors (CHIPS) Act of 2022, promote 
                competition by:

(i) implementing a flexible membership structure for the National 
Semiconductor Technology Center that attracts all parts of the 
semiconductor and microelectronics ecosystem, including startups and small 
firms;

(ii) implementing mentorship programs to increase interest and 
participation in the semiconductor industry, including from workers in 
underserved communities;

(iii) increasing, where appropriate and to the extent permitted by law, the 
availability of resources to startups and small businesses, including:

  (A) funding for physical assets, such as specialty equipment or 
facilities, to which startups and small businesses may not otherwise have 
access;

  (B) datasets--potentially including test and performance data--collected, 
aggregated, or shared by CHIPS research and development programs;

  (C) workforce development programs;

  (D) design and process technology, as well as IP, as appropriate; and

  (E) other resources, including technical and intellectual property 
assistance, that could accelerate commercialization of new technologies by 
startups and small businesses, as appropriate; and

(iv) considering the inclusion, to the maximum extent possible, and as 
consistent with applicable law, of competition-increasing measures in 
notices of funding availability for commercial research-and-development 
facilities focused on semiconductors, including measures that increase 
access to facility capacity for startups or small firms developing 
semiconductors used to power AI technologies.

                    (c) To support small businesses innovating and 
                commercializing AI, as well as in responsibly adopting 
                and deploying AI, the Administrator of the Small 
                Business Administration shall:

(i) prioritize the allocation of Regional Innovation Cluster program 
funding for clusters that support planning activities related to the 
establishment of one or more Small Business AI Innovation and 
Commercialization Institutes that provide support, technical assistance, 
and other resources to small businesses seeking to innovate, commercialize, 
scale, or otherwise advance the development of AI;

(ii) prioritize the allocation of up to $2 million in Growth Accelerator 
Fund Competition bonus prize funds for accelerators that support the 
incorporation or expansion of AI-related curricula, training, and technical 
assistance, or other AI-related resources within their programming; and

(iii) assess the extent to which the eligibility criteria of existing 
programs, including the State Trade Expansion Program, Technical and 
Business Assistance funding, and capital-access programs--such as the 7(a) 
loan program, 504 loan program, and Small Business Investment Company 
(SBIC) program--support appropriate expenses by small businesses related to 
the adoption of AI and, if feasible and appropriate, revise eligibility 
criteria to improve support for these expenses.

                    (d) The Administrator of the Small Business 
                Administration, in coordination with resource partners, 
                shall conduct outreach regarding, and raise 
                awareness of, opportunities for small businesses to use 
                capital-access programs described in subsection 5.3(c) 
                of this section for eligible AI-related purposes, and 
                for eligible investment funds with AI-related 
                expertise--particularly those seeking to serve or with 
                experience serving underserved communities--to apply 
                for an SBIC license.

                Sec. 6. Supporting Workers. (a) To advance the 
                Government's understanding of AI's implications for 
                workers, the following actions shall be taken within 
                180 days of the date of this order:

(i) The Chairman of the Council of Economic Advisers shall prepare and 
submit a report to the President on the labor-market effects of AI.

(ii) To evaluate necessary steps for the Federal Government to address AI-
related workforce disruptions, the Secretary of Labor shall submit to the 
President a report analyzing the abilities of agencies to support workers 
displaced by the adoption of AI and other technological advancements. The 
report shall, at a minimum:

  (A) assess how current or formerly operational Federal programs designed 
to assist workers facing job disruptions--including unemployment insurance 
and programs authorized by the Workforce Innovation and Opportunity Act 
(Public Law 113-128)--could be used to respond to possible future AI-
related disruptions; and

  (B) identify options, including potential legislative measures, to 
strengthen or develop additional Federal support for workers displaced by 
AI and, in consultation with the Secretary of Commerce and the Secretary of 
Education, strengthen and expand education and training opportunities that 
provide individuals pathways to occupations related to AI.

                    (b) To help ensure that AI deployed in the 
                workplace advances employees' well-being:

(i) The Secretary of Labor shall, within 180 days of the date of this order 
and in consultation with other agencies and with outside entities, 
including labor unions and workers, as the Secretary of Labor deems 
appropriate, develop and publish principles and best practices for 
employers that could be used to mitigate AI's potential harms to employees' 
well-being and maximize its potential benefits. The principles and best 
practices shall include specific steps for employers to take with regard to 
AI, and shall cover, at a minimum:

  (A) job-displacement risks and career opportunities related to AI, 
including effects on job skills and evaluation of applicants and workers;

  (B) labor standards and job quality, including issues related to the 
equity, protected-activity, compensation, health, and safety implications 
of AI in the workplace; and

  (C) implications for workers of employers' AI-related collection and use 
of data about them, including transparency, engagement, management, and 
activity protected under worker-protection laws.

(ii) After principles and best practices are developed pursuant to 
subsection (b)(i) of this section, the heads of agencies shall consider, in 
consultation with the Secretary of Labor, encouraging the adoption of these 
guidelines in their programs to the extent appropriate for each program and 
consistent with applicable law.

(iii) To support employees whose work is monitored or augmented by AI in 
being compensated appropriately for all of their work time, the Secretary 
of Labor shall issue guidance to make clear that employers that deploy AI 
to monitor or augment employees' work must continue to comply with 
protections that ensure that workers are compensated for their hours 
worked, as defined under the Fair Labor Standards Act of 1938, 29 U.S.C. 
201 et seq., and other legal requirements.

                    (c) To foster a diverse AI-ready workforce, the 
                Director of NSF shall prioritize available resources to 
                support AI-related education and AI-related 
                workforce development through existing programs. The 
                Director shall additionally consult with agencies, as 
                appropriate, to identify further opportunities for 
                agencies to allocate resources for those purposes. In 
                taking these actions, the Director shall use 
                appropriate fellowship programs and awards.

                Sec. 7. Advancing Equity and Civil Rights.

                7.1. Strengthening AI and Civil Rights in the Criminal 
                Justice System. (a) To address unlawful discrimination 
                and other harms that may be exacerbated by AI, the 
                Attorney General shall:

(i) consistent with Executive Order 12250 of November 2, 1980 (Leadership 
and Coordination of Nondiscrimination Laws), Executive Order 14091, and 28 
CFR 0.50-51, coordinate with and support agencies in their implementation 
and enforcement of existing Federal laws to address civil rights and civil 
liberties violations and discrimination related to AI;

(ii) direct the Assistant Attorney General in charge of the Civil Rights 
Division to convene, within 90 days of the date of this order, a meeting of 
the heads of Federal civil rights offices--for which meeting the heads of 
civil rights offices within independent regulatory agencies will be 
encouraged to join--to discuss comprehensive use of their respective 
authorities and offices to: prevent and address discrimination in the use 
of automated systems, including algorithmic discrimination; increase 
coordination between the Department of Justice's Civil Rights Division and 
Federal civil rights offices concerning issues related to AI and 
algorithmic discrimination; improve external stakeholder engagement to 
promote public awareness of potential discriminatory uses and effects of 
AI; and develop, as appropriate, additional training, technical assistance, 
guidance, or other resources; and

(iii) consider providing, as appropriate and consistent with applicable 
law, guidance, technical assistance, and training to State, local, Tribal, 
and territorial investigators and prosecutors on best practices for 
investigating and prosecuting civil rights violations and discrimination 
related to automated systems, including AI.

                    (b) To promote the equitable treatment of 
                individuals and adhere to the Federal Government's 
                fundamental obligation to ensure fair and impartial 
                justice for all, with respect to the use of AI in the 
                criminal justice system, the Attorney General shall, in 
                consultation with the Secretary of Homeland Security 
                and the Director of OSTP:

(i) within 365 days of the date of this order, submit to the President a 
report that addresses the use of AI in the criminal justice system, 
including any use in:

  (A) sentencing;

  (B) parole, supervised release, and probation;

  (C) bail, pretrial release, and pretrial detention;

  (D) risk assessments, including pretrial, earned time, and early release 
or transfer to home-confinement determinations;

  (E) police surveillance;

  (F) crime forecasting and predictive policing, including the ingestion of 
historical crime data into AI systems to predict high-density ``hot 
spots'';

  (G) prison-management tools; and

  (H) forensic analysis;

(ii) within the report set forth in subsection 7.1(b)(i) of this section:

  (A) identify areas where AI can enhance law enforcement efficiency and 
accuracy, consistent with protections for privacy, civil rights, and civil 
liberties; and

  (B) recommend best practices for law enforcement agencies, including 
safeguards and appropriate use limits for AI, to address the concerns 
set forth in section 13(e)(i) of Executive Order 14074 as well as the best 
practices and the guidelines set forth in section 13(e)(iii) of Executive 
Order 14074; and

(iii) supplement the report set forth in subsection 7.1(b)(i) of this 
section as appropriate with recommendations to the President, including 
with respect to requests for necessary legislation.

                    (c) To advance the presence of relevant technical 
                experts and expertise (such as machine-learning 
                engineers, software and infrastructure engineers, data 
                privacy experts, data scientists, and user experience 
                researchers) among law enforcement professionals:

(i) The interagency working group created pursuant to section 3 of 
Executive Order 14074 shall, within 180 days of the date of this order, 
identify and share best practices for recruiting and hiring law enforcement 
professionals who have the technical skills mentioned in subsection 7.1(c) 
of this section, and for training law enforcement professionals about 
responsible application of AI.

(ii) Within 270 days of the date of this order, the Attorney General shall, 
in consultation with the Secretary of Homeland Security, consider those 
best practices and the guidance developed under section 3(d) of Executive 
Order 14074 and, if necessary, develop additional general recommendations 
for State, local, Tribal, and territorial law enforcement agencies and 
criminal justice agencies seeking to recruit, hire, train, promote, and 
retain highly qualified and service-oriented officers and staff with 
relevant technical knowledge. In considering this guidance, the Attorney 
General shall consult with State, local, Tribal, and territorial law 
enforcement agencies, as appropriate.

(iii) Within 365 days of the date of this order, the Attorney General shall 
review the work conducted pursuant to section 2(b) of Executive Order 14074 
and, if appropriate, reassess the existing capacity to investigate law 
enforcement deprivation of rights under color of law resulting from the use 
of AI, including through improving and increasing training of Federal law 
enforcement officers, their supervisors, and Federal prosecutors on how to 
investigate and prosecute cases related to AI involving the deprivation of 
rights under color of law pursuant to 18 U.S.C. 242.

                7.2. Protecting Civil Rights Related to Government 
                Benefits and Programs. (a) To advance equity and civil 
                rights, consistent with the directives of Executive 
                Order 14091, and in addition to complying with the 
                guidance on Federal Government use of AI issued 
                pursuant to section 10.1(b) of this order, agencies 
                shall use their respective civil rights and civil 
                liberties offices and authorities--as appropriate and 
                consistent with applicable law--to prevent and address 
                unlawful discrimination and other harms that result 
                from uses of AI in Federal Government programs and 
                benefits administration. This directive does not apply 
                to agencies' civil or criminal enforcement authorities. 
                Agencies shall consider opportunities to ensure that 
                their respective civil rights and civil liberties 
                offices are appropriately consulted on agency decisions 
                regarding the design, development, acquisition, and use 
                of AI in Federal Government programs and benefits 
                administration. To further these objectives, agencies 
                shall also consider opportunities to increase 
                coordination, communication, and engagement about AI as 
                appropriate with community-based organizations; civil-
                rights and civil-liberties organizations; academic 
                institutions; industry; State, local, Tribal, and 
                territorial governments; and other stakeholders.

                    (b) To promote equitable administration of public 
                benefits:

(i) The Secretary of HHS shall, within 180 days of the date of this order 
and in consultation with relevant agencies, publish a plan, informed by the 
guidance issued pursuant to section 10.1(b) of this order, addressing the 
use of automated or algorithmic systems in the implementation by States and 
localities of public benefits and services administered by the Secretary, 
such as to promote: assessment of access to benefits by qualified 
recipients; notice to recipients about the presence of such systems; 
regular 
evaluation to detect unjust denials; processes to retain appropriate levels 
of discretion of expert agency staff; processes to appeal denials to human 
reviewers; and analysis of whether algorithmic systems in use by benefit 
programs achieve equitable and just outcomes.

(ii) The Secretary of Agriculture shall, within 180 days of the date of 
this order and as informed by the guidance issued pursuant to section 
10.1(b) of this order, issue guidance to State, local, Tribal, and 
territorial public-benefits administrators on the use of automated or 
algorithmic systems in implementing benefits or in providing customer 
support for benefit programs administered by the Secretary, to ensure that 
programs using those systems:

  (A) maximize program access for eligible recipients;

  (B) employ automated or algorithmic systems in a manner consistent with 
any requirements for using merit systems personnel in public-benefits 
programs;

  (C) identify instances in which reliance on automated or algorithmic 
systems would require notification by the State, local, Tribal, or 
territorial government to the Secretary;

  (D) identify instances when applicants and participants can appeal 
benefit determinations to a human reviewer for reconsideration and can 
receive other customer support from a human being;

  (E) enable auditing and, if necessary, remediation of the logic used to 
arrive at an individual decision or determination to facilitate the 
evaluation of appeals; and

  (F) enable the analysis of whether algorithmic systems in use by benefit 
programs achieve equitable outcomes.

                7.3. Strengthening AI and Civil Rights in the Broader 
                Economy. (a) Within 365 days of the date of this order, 
                to prevent unlawful discrimination from AI used for 
                hiring, the Secretary of Labor shall publish guidance 
                for Federal contractors regarding nondiscrimination in 
                hiring involving AI and other technology-based hiring 
                systems.

                    (b) To address discrimination and biases against 
                protected groups in housing markets and consumer 
                financial markets, the Director of the Federal Housing 
                Finance Agency and the Director of the Consumer 
                Financial Protection Bureau are encouraged to consider 
                using their authorities, as they deem appropriate, to 
                require their respective regulated entities, where 
                possible, to use appropriate methodologies including AI 
                tools to ensure compliance with Federal law and:

(i) evaluate their underwriting models for bias or disparities affecting 
protected groups; and

(ii) evaluate automated collateral-valuation and appraisal processes in 
ways that minimize bias.

                    (c) Within 180 days of the date of this order, to 
                combat unlawful discrimination enabled by automated or 
                algorithmic tools used to make decisions about access 
                to housing and in other real estate-related 
                transactions, the Secretary of Housing and Urban 
                Development shall, and the Director of the Consumer 
                Financial Protection Bureau is encouraged to, issue 
                additional guidance:

(i) addressing the use of tenant screening systems in ways that may violate 
the Fair Housing Act (Public Law 90-284), the Fair Credit Reporting Act 
(Public Law 91-508), or other relevant Federal laws, including how the use 
of data, such as criminal records, eviction records, and credit 
information, can lead to discriminatory outcomes in violation of Federal 
law; and

(ii) addressing how the Fair Housing Act, the Consumer Financial Protection 
Act of 2010 (title X of Public Law 111-203), or the Equal Credit 
Opportunity Act (Public Law 93-495) apply to the advertising of housing, 
credit, and other real estate-related transactions through digital 
platforms, including those that use algorithms to facilitate advertising 
delivery, as well as on best practices to avoid violations of Federal law.

                    (d) To help ensure that people with disabilities 
                benefit from AI's promise while being protected from 
                its risks, including unequal treatment from the use of 
                biometric data like gaze direction, eye tracking, gait 
                analysis, and hand motions, the Architectural and 
                Transportation Barriers Compliance Board is encouraged, 
                as it deems appropriate, to solicit public 
                participation and conduct community engagement; to 
                issue technical assistance and recommendations on the 
                risks and benefits of AI in using biometric data as an 
                input; and to provide people with disabilities access 
                to information and communication technology and 
                transportation services.

                Sec. 8. Protecting Consumers, Patients, Passengers, and 
                Students. (a) Independent regulatory agencies are 
                encouraged, as they deem appropriate, to consider using 
                their full range of authorities to protect American 
                consumers from fraud, discrimination, and threats to 
                privacy and to address other risks that may arise from 
                the use of AI, including risks to financial stability, 
                and to consider rulemaking, as well as emphasizing or 
                clarifying where existing regulations and guidance 
                apply to AI, including clarifying the responsibility of 
                regulated entities to conduct due diligence on and 
                monitor any third-party AI services they use, and 
                emphasizing or clarifying requirements and expectations 
                related to the transparency of AI models and regulated 
                entities' ability to explain their use of AI models.

                    (b) To help ensure the safe, responsible deployment 
                and use of AI in the healthcare, public-health, and 
                human-services sectors:

(i) Within 90 days of the date of this order, the Secretary of HHS shall, 
in consultation with the Secretary of Defense and the Secretary of Veterans 
Affairs, establish an HHS AI Task Force that shall, within 365 days of its 
creation, develop a strategic plan that includes policies and frameworks--
possibly including regulatory action, as appropriate--on responsible 
deployment and use of AI and AI-enabled technologies in the health and 
human services sector (including research and discovery, drug and device 
safety, healthcare delivery and financing, and public health), and identify 
appropriate guidance and resources to promote that deployment, including in 
the following areas:

  (A) development, maintenance, and use of predictive and generative AI-
enabled technologies in healthcare delivery and financing--including 
quality measurement, performance improvement, program integrity, benefits 
administration, and patient experience--taking into account considerations 
such as appropriate human oversight of the application of AI-generated 
output;

  (B) long-term safety and real-world performance monitoring of AI-enabled 
technologies in the health and human services sector, including clinically 
relevant or significant modifications and performance across population 
groups, with a means to communicate product updates to regulators, 
developers, and users;

  (C) incorporation of equity principles in AI-enabled technologies used in 
the health and human services sector, using disaggregated data on affected 
populations and representative population data sets when developing new 
models, monitoring algorithmic performance against discrimination and bias 
in existing models, and helping to identify and mitigate discrimination and 
bias in current systems;

  (D) incorporation of safety, privacy, and security standards into the 
software-development lifecycle for protection of personally identifiable 
information, including measures to address AI-enhanced cybersecurity 
threats in the health and human services sector;

  (E) development, maintenance, and availability of documentation to help 
users determine appropriate and safe uses of AI in local settings in the 
health and human services sector;

  (F) work to be done with State, local, Tribal, and territorial health and 
human services agencies to advance positive use cases and best practices 
for use of AI in local settings; and

  (G) identification of uses of AI to promote workplace efficiency and 
satisfaction in the health and human services sector, including reducing 
administrative burdens.

(ii) Within 180 days of the date of this order, the Secretary of HHS shall 
direct HHS components, as the Secretary of HHS deems appropriate, to 
develop a strategy, in consultation with relevant agencies, to determine 
whether AI-enabled technologies in the health and human services sector 
maintain appropriate levels of quality, including, as appropriate, in the 
areas described in subsection (b)(i) of this section. This work shall 
include the development of AI assurance policy--to evaluate important 
aspects of the performance of AI-enabled healthcare tools--and 
infrastructure needs for enabling pre-market assessment and post-market 
oversight of AI-enabled healthcare-technology algorithmic system 
performance against real-world data.

(iii) Within 180 days of the date of this order, the Secretary of HHS 
shall, in consultation with relevant agencies as the Secretary of HHS deems 
appropriate, consider appropriate actions to advance the prompt 
understanding of, and compliance with, Federal nondiscrimination laws by 
health and human services providers that receive Federal financial 
assistance, as well as how those laws relate to AI. Such actions may 
include:

  (A) convening and providing technical assistance to health and human 
services providers and payers about their obligations under Federal 
nondiscrimination and privacy laws as they relate to AI and the potential 
consequences of noncompliance; and

  (B) issuing guidance, or taking other action as appropriate, in response 
to any complaints or other reports of noncompliance with Federal 
nondiscrimination and privacy laws as they relate to AI.

(iv) Within 365 days of the date of this order, the Secretary of HHS shall, 
in consultation with the Secretary of Defense and the Secretary of Veterans 
Affairs, establish an AI safety program that, in partnership with voluntary 
federally listed Patient Safety Organizations:

  (A) establishes a common framework for approaches to identifying and 
capturing clinical errors resulting from AI deployed in healthcare settings 
as well as specifications for a central tracking repository for associated 
incidents that cause harm, including through bias or discrimination, to 
patients, caregivers, or other parties;

  (B) analyzes captured data and generated evidence to develop, wherever 
appropriate, recommendations, best practices, or other informal guidelines 
aimed at avoiding these harms; and

  (C) disseminates those recommendations, best practices, or other informal 
guidance to appropriate stakeholders, including healthcare providers.

(v) Within 365 days of the date of this order, the Secretary of HHS shall 
develop a strategy for regulating the use of AI or AI-enabled tools in 
drug-development processes. The strategy shall, at a minimum:

  (A) define the objectives, goals, and high-level principles required for 
appropriate regulation throughout each phase of drug development;

  (B) identify areas where future rulemaking, guidance, or additional 
statutory authority may be necessary to implement such a regulatory system;

  (C) identify the existing budget, resources, personnel, and potential for 
new public/private partnerships necessary for such a regulatory system; and

  (D) consider risks identified by the actions undertaken to implement 
section 4 of this order.

                    (c) To promote the safe and responsible development 
                and use of AI in the transportation sector, in 
                consultation with relevant agencies:

(i) Within 30 days of the date of this order, the Secretary of 
Transportation shall direct the Nontraditional and Emerging Transportation 
Technology (NETT) Council to assess the need for information, technical 
assistance, and guidance regarding the use of AI in transportation. The 
Secretary of Transportation shall further direct the NETT Council, as part 
of any such efforts, to:

  (A) support existing and future initiatives to pilot transportation-
related applications of AI, as they align with policy priorities 
articulated in the Department of Transportation's (DOT) Innovation 
Principles, including, as appropriate, through technical assistance and 
connecting stakeholders;

  (B) evaluate the outcomes of such pilot programs in order to assess when 
DOT, or other Federal or State agencies, have sufficient information to 
take regulatory actions, as appropriate, and recommend appropriate actions 
when that information is available; and

  (C) establish a new DOT Cross-Modal Executive Working Group, which will 
consist of members from different divisions of DOT and coordinate 
applicable work among these divisions, to solicit and use relevant input 
from appropriate stakeholders.

(ii) Within 90 days of the date of this order, the Secretary of 
Transportation shall direct appropriate Federal Advisory Committees of the 
DOT to provide advice on the safe and responsible use of AI in 
transportation. The committees shall include the Advanced Aviation Advisory 
Committee, the Transforming Transportation Advisory Committee, and the 
Intelligent Transportation Systems Program Advisory Committee.

(iii) Within 180 days of the date of this order, the Secretary of 
Transportation shall direct the Advanced Research Projects Agency-
Infrastructure (ARPA-I) to explore the transportation-related opportunities 
and challenges of AI--including regarding software-defined AI enhancements 
impacting autonomous mobility ecosystems. The Secretary of Transportation 
shall further encourage ARPA-I to prioritize the allocation of grants to 
those opportunities, as appropriate. The work tasked to ARPA-I shall 
include soliciting input on these topics through a public consultation 
process, such as an RFI.

                    (d) To help ensure the responsible development and 
                deployment of AI in the education sector, the Secretary 
                of Education shall, within 365 days of the date of this 
                order, develop resources, policies, and guidance 
                regarding AI. These resources shall address safe, 
                responsible, and nondiscriminatory uses of AI in 
                education, including the impact AI systems have on 
                vulnerable and underserved communities, and shall be 
                developed in consultation with stakeholders as 
                appropriate. They shall also include the development of 
                an ``AI toolkit'' for education leaders implementing 
                recommendations from the Department of Education's AI 
                and the Future of Teaching and Learning report, 
                including appropriate human review of AI decisions, 
                designing AI systems to enhance trust and safety and 
                align with privacy-related laws and regulations in the 
                educational context, and developing education-specific 
                guardrails.
                    (e) The Federal Communications Commission is 
                encouraged to consider actions related to how AI will 
                affect communications networks and consumers, including 
                by:

(i) examining the potential for AI to improve spectrum management, increase 
the efficiency of non-Federal spectrum usage, and expand opportunities for 
the sharing of non-Federal spectrum;

(ii) coordinating with the National Telecommunications and Information 
Administration to create opportunities for sharing spectrum between Federal 
and non-Federal spectrum operations;

(iii) providing support for efforts to improve network security, 
resiliency, and interoperability using next-generation technologies that 
incorporate AI, including self-healing networks, 6G, and Open RAN; and

(iv) encouraging, including through rulemaking, efforts to combat unwanted 
robocalls and robotexts that are facilitated or exacerbated by AI and to 
deploy AI technologies that better serve consumers by blocking unwanted 
robocalls and robotexts.

                Sec. 9. Protecting Privacy. (a) To mitigate privacy 
                risks potentially exacerbated by AI--including by AI's 
                facilitation of the collection or use of information 
                about individuals, or the making of inferences about 
                individuals--the Director of OMB shall:

(i) evaluate and take steps to identify commercially available information 
(CAI) procured by agencies, particularly CAI that contains personally 
identifiable information and including CAI procured from data brokers and 
CAI procured and processed indirectly through vendors, in appropriate 
agency inventory and reporting processes (other than when it is used for 
the purposes of national security);

(ii) evaluate, in consultation with the Federal Privacy Council and the 
Interagency Council on Statistical Policy, agency standards and procedures 
associated with the collection, processing, maintenance, use, sharing, 
dissemination, and disposition of CAI that contains personally identifiable 
information (other than when it is used for the purposes of national 
security) to inform potential guidance to agencies on ways to mitigate 
privacy and confidentiality risks from agencies' activities related to CAI;

(iii) within 180 days of the date of this order, in consultation with the 
Attorney General, the Assistant to the President for Economic Policy, and 
the Director of OSTP, issue an RFI to inform potential revisions to 
guidance to agencies on implementing the privacy provisions of the E-
Government Act of 2002 (Public Law 107-347). The RFI shall seek feedback 
regarding how privacy impact assessments may be more effective at 
mitigating privacy risks, including those that are further exacerbated by 
AI; and

(iv) take such steps as are necessary and appropriate, consistent with 
applicable law, to support and advance the near-term actions and long-term 
strategy identified through the RFI process, including issuing new or 
updated guidance or RFIs or consulting other agencies or the Federal 
Privacy Council.

                    (b) Within 365 days of the date of this order, to 
                better enable agencies to use PETs to safeguard 
                Americans' privacy from the potential threats 
                exacerbated by AI, the Secretary of Commerce, acting 
                through the Director of NIST, shall create guidelines 
                for agencies to evaluate the efficacy of differential-
                privacy-guarantee protections, including for AI. The 
                guidelines shall, at a minimum, describe the 
                significant factors that bear on differential-privacy 
                safeguards and common risks to realizing differential 
                privacy in practice.
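                    For illustration only (this is background drawn 
                from the technical literature, not a requirement of 
                this order), the differential-privacy guarantee 
                referenced in subsection (b) above is most commonly 
                stated as follows: a randomized mechanism M is 
                (epsilon, delta)-differentially private if, for all 
                datasets D and D' differing in a single individual's 
                record and for all sets of outputs S,

                \[ \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta, \]

                where smaller values of epsilon and delta correspond 
                to stronger privacy protection. Factors that bear on 
                such safeguards typically include the chosen value of 
                epsilon, the sensitivity of the statistic being 
                released, and the noise mechanism used.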
                    (c) To advance research, development, and 
                implementation related to PETs:

(i) Within 120 days of the date of this order, the Director of NSF, in 
collaboration with the Secretary of Energy, shall fund the creation of a 
Research Coordination Network (RCN) dedicated to advancing privacy research 
and, in particular, the development, deployment, and scaling of PETs. The 
RCN shall serve to enable privacy researchers to share information, 
coordinate and collaborate in research, and develop standards for the 
privacy-research community.

(ii) Within 240 days of the date of this order, the Director of NSF shall 
engage with agencies to identify ongoing work and potential opportunities 
to incorporate PETs into their operations. The Director of NSF shall, where 
feasible and appropriate, prioritize research--including efforts to 
translate research discoveries into practical applications--that encourage 
the adoption of leading-edge PETs solutions for agencies' use, including 
through research engagement through the RCN described in subsection (c)(i) 
of this section.

(iii) The Director of NSF shall use the results of the United States-United 
Kingdom PETs Prize Challenge to inform the approaches taken, and 
opportunities identified, for PETs research and adoption.

                Sec. 10. Advancing Federal Government Use of AI.

                10.1. Providing Guidance for AI Management. (a) To 
                coordinate the use of AI across the Federal Government, 
                within 60 days of the date of this order and on an 
                ongoing basis as necessary, the Director of OMB shall 
                convene and chair an interagency council to coordinate 
                the development and use of AI in agencies' programs and 
                operations, other than the use of AI in national 
                security systems. The Director of OSTP shall serve as 
                Vice Chair for the interagency council. The interagency 
                council's membership shall include, at minimum, the 
                heads of the agencies identified in 31 U.S.C. 901(b), 
                the Director of National Intelligence, and other 
                agencies as identified by the Chair. Until agencies 
                designate their permanent Chief AI Officers consistent 
                with the guidance described in subsection 10.1(b) of 
                this section, they shall be represented on the 
                interagency council by an appropriate official at the 
                Assistant Secretary level or equivalent, as determined 
                by the head of each agency.

                    (b) To provide guidance on Federal Government use 
                of AI, within 150 days of the date of this order and 
                updated periodically thereafter, the Director of OMB, 
                in coordination with the Director of OSTP, and in 
                consultation with the interagency council established 
                in subsection 10.1(a) of this section, shall issue 
                guidance to agencies to strengthen the effective and 
                appropriate use of AI, advance AI innovation, and 
                manage risks from AI in the Federal Government. The 
                Director of OMB's guidance shall specify, to the extent 
                appropriate and consistent with applicable law:

(i) the requirement to designate at each agency within 60 days of the 
issuance of the guidance a Chief Artificial Intelligence Officer who shall 
hold primary responsibility in their agency, in coordination with other 
responsible officials, for coordinating their agency's use of AI, promoting 
AI innovation in their agency, managing risks from their agency's use of 
AI, and carrying out the responsibilities described in section 8(c) of 
Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy 
Artificial Intelligence in the Federal Government), and section 4(b) of 
Executive Order 14091;

(ii) the Chief Artificial Intelligence Officers' roles, responsibilities, 
seniority, position, and reporting structures;

(iii) for the agencies identified in 31 U.S.C. 901(b), the creation of 
internal Artificial Intelligence Governance Boards, or other appropriate 
mechanisms, at each agency within 60 days of the issuance of the guidance 
to coordinate and govern AI issues through relevant senior leaders from 
across the agency;

(iv) required minimum risk-management practices for Government uses of AI 
that impact people's rights or safety, including, where appropriate, the 
following practices derived from OSTP's Blueprint for an AI Bill of Rights 
and the NIST AI Risk Management Framework: conducting public consultation; 
assessing data quality; assessing and mitigating disparate impacts and 
algorithmic discrimination; providing notice of the use of AI; continuously 
monitoring and evaluating deployed AI; and granting human consideration and 
remedies for adverse decisions made using AI;

(v) specific Federal Government uses of AI that are presumed by default to 
impact rights or safety;

(vi) recommendations to agencies to reduce barriers to the responsible use 
of AI, including barriers related to information technology infrastructure, 
data, workforce, budgetary restrictions, and cybersecurity processes;

(vii) requirements that agencies identified in 31 U.S.C. 901(b) develop AI 
strategies and pursue high-impact AI use cases;

(viii) in consultation with the Secretary of Commerce, the Secretary of 
Homeland Security, and the heads of other appropriate agencies as 
determined by the Director of OMB, recommendations to agencies regarding:

  (A) external testing for AI, including AI red-teaming for generative AI, 
to be developed in coordination with the Cybersecurity and Infrastructure 
Security Agency;

  (B) testing and safeguards against discriminatory, misleading, 
inflammatory, unsafe, or deceptive outputs, as well as against producing 
child sexual abuse material and against producing non-consensual intimate 
imagery of real individuals (including intimate digital depictions of the 
body or body parts of an identifiable individual), for generative AI;

  (C) reasonable steps to watermark or otherwise label output from 
generative AI;

  (D) application of the mandatory minimum risk-management practices 
defined under subsection 10.1(b)(iv) of this section to procured AI;

  (E) independent evaluation of vendors' claims concerning both the 
effectiveness and risk mitigation of their AI offerings;

  (F) documentation and oversight of procured AI;

  (G) maximizing the value to agencies when relying on contractors to use 
and enrich Federal Government data for the purposes of AI development and 
operation;

  (H) provision of incentives for the continuous improvement of procured 
AI; and

  (I) training on AI in accordance with the principles set out in this 
order and in other references related to AI listed herein; and

(ix) requirements for public reporting on compliance with this guidance.

                    (c) To track agencies' AI progress, within 60 days 
                of the issuance of the guidance established in 
                subsection 10.1(b) of this section and updated 
                periodically thereafter, the Director of OMB shall 
                develop a method for agencies to track and assess their 
                ability to adopt AI into their programs and operations, 
                manage its risks, and comply with Federal policy on AI. 
                This method should draw on existing related efforts as 
                appropriate and should address, as appropriate and 
                consistent with applicable law, the practices, 
                processes, and capabilities necessary for responsible 
                AI adoption, training, and governance across, at a 
                minimum, the areas of information technology 
                infrastructure, data, workforce, leadership, and risk 
                management.
                    (d) To assist agencies in implementing the guidance 
                to be established in subsection 10.1(b) of this 
                section:

(i) within 90 days of the issuance of the guidance, the Secretary of 
Commerce, acting through the Director of NIST, and in coordination with the 
Director of OMB and the Director of OSTP, shall develop guidelines, tools, 
and practices to support implementation of the minimum risk-management 
practices described in subsection 10.1(b)(iv) of this section; and

(ii) within 180 days of the issuance of the guidance, the Director of OMB 
shall develop an initial means to ensure that agency contracts for the 
acquisition of AI systems and services align with the guidance described in 
subsection 10.1(b) of this section and advance the other aims identified in 
section 7224(d)(1) of the Advancing American AI Act (Public Law 117-263, 
div. G, title LXXII, subtitle B).

                    (e) To improve transparency for agencies' use of 
                AI, the Director of OMB shall, on an annual basis, 
                issue instructions to agencies for the collection, 
                reporting, and publication of agency AI use cases, 
                pursuant to section 7225(a) of the Advancing American 
                AI Act. Through these instructions, the Director shall, 
                as appropriate, expand agencies' reporting on how they 
                are managing risks from their AI use cases and update 
                or replace the guidance originally established in 
                section 5 of Executive Order 13960.

                    (f) To advance the responsible and secure use of 
                generative AI in the Federal Government:

(i) As generative AI products become widely available and common in online 
platforms, agencies are discouraged from imposing broad general bans or 
blocks on agency use of generative AI. Agencies should instead limit 
access, as necessary, to specific generative AI services based on specific 
risk assessments; establish guidelines and limitations on the appropriate 
use of generative AI; and, with appropriate safeguards in place, provide 
their personnel and programs with access to secure and reliable generative 
AI capabilities, at least for the purposes of experimentation and routine 
tasks that carry a low risk of impacting Americans' rights. To protect 
Federal Government information, agencies are also encouraged to employ 
risk-management practices, such as training their staff on proper use, 
protection, dissemination, and disposition of Federal information; 
negotiating appropriate terms of service with vendors; implementing 
measures designed to ensure compliance with record-keeping, cybersecurity, 
confidentiality, privacy, and data protection requirements; and deploying 
other measures to prevent misuse of Federal Government information in 
generative AI.

(ii) Within 90 days of the date of this order, the Administrator of General 
Services, in coordination with the Director of OMB, and in consultation 
with the Federal Secure Cloud Advisory Committee and other relevant 
agencies as the Administrator of General Services may deem appropriate, 
shall develop and issue a framework for prioritizing critical and emerging 
technologies offerings in the Federal Risk and Authorization Management 
Program authorization process, starting with generative AI offerings that 
have the primary purpose of providing large language model-based chat 
interfaces, code-generation and debugging tools, and associated application 
programming interfaces, as well as prompt-based image generators. This 
framework shall apply for no less than 2 years from the date of its 
issuance. Agency Chief Information Officers, Chief Information Security 
Officers, and authorizing officials are also encouraged to prioritize 
generative AI and other critical and emerging technologies in granting 
authorities for agency operation of information technology systems and any 
other applicable release or oversight processes, using continuous 
authorizations and approvals wherever feasible.

(iii) Within 180 days of the date of this order, the Director of the Office 
of Personnel Management (OPM), in coordination with the Director of OMB, 
shall develop guidance on the use of generative AI for work by the Federal 
workforce.

                    (g) Within 30 days of the date of this order, to 
                increase agency investment in AI, the Technology 
                Modernization Board shall consider, as it deems 
                appropriate and consistent with applicable law, 
                prioritizing funding for AI projects for the Technology 
                Modernization Fund for a period of at least 1 year. 
                Agencies are encouraged to submit to the Technology 
                Modernization Fund project funding proposals that 
                include AI--and particularly generative AI--in service 
                of mission delivery.
                    (h) Within 180 days of the date of this order, to 
                facilitate agencies' access to commercial AI 
                capabilities, the Administrator of General Services, in 
                coordination with the Director of OMB, and in 
                collaboration with the Secretary of Defense, the 
                Secretary of Homeland Security, the Director of 
                National Intelligence, the Administrator of the 
                National Aeronautics and Space Administration, and the 
                head of any other agency identified by the 
                Administrator of General Services, shall take steps 
                consistent with applicable law to facilitate access to 
                Federal Government-wide acquisition solutions for 
                specified types of AI services and products, such as 
                through the creation of a resource guide or other tools 
                to assist the acquisition workforce. Specified types of 
                AI capabilities shall include generative AI and 
                specialized computing infrastructure.

                    (i) The initial means, instructions, and guidance 
                issued pursuant to subsections 10.1(a)-(h) of this 
                section shall not apply to AI when it is used as a 
                component of a national security system, which shall be 
                addressed by the proposed National Security Memorandum 
                described in subsection 4.8 of this order.

                10.2. Increasing AI Talent in Government. (a) Within 45 
                days of the date of this order, to plan a national 
                surge in AI talent in the Federal Government, the 
                Director of OSTP and the Director of OMB, in 
                consultation with the Assistant to the President for 
                National Security Affairs, the Assistant to the 
                President for Economic Policy, the Assistant to the 
                President and Domestic Policy Advisor, and the 
                Assistant to the President and Director of the Gender 
                Policy Council, shall identify priority mission areas 
                for increased Federal Government AI talent, the types 
                of talent that are highest priority to recruit and 
                develop to ensure adequate implementation of this order 
                and use of relevant enforcement and regulatory 
                authorities to address AI risks, and accelerated hiring 
                pathways.

                    (b) Within 45 days of the date of this order, to 
                coordinate rapid advances in the capacity of the 
                Federal AI workforce, the Assistant to the President 
                and Deputy Chief of Staff for Policy, in coordination 
                with the Director of OSTP and the Director of OMB, and 
                in consultation with the National Cyber Director, shall 
                convene an AI and Technology Talent Task Force, which 
                shall include the Director of OPM, the Director of the 
                General Services Administration's Technology 
                Transformation Services, a representative from the 
                Chief Human Capital Officers Council, the Assistant to 
                the President for Presidential Personnel, members of 
                appropriate agency technology talent programs, a 
                representative of the Chief Data Officer Council, and a 
                representative of the interagency council convened 
                under subsection 10.1(a) of this section. The Task 
                Force's purpose shall be to accelerate and track the 
                hiring of AI and AI-enabling talent across the Federal 
                Government, including through the following actions:

(i) within 180 days of the date of this order, tracking and reporting 
progress to the President on increasing AI capacity across the Federal 
Government, including submitting to the President a report and 
recommendations for further increasing capacity;

(ii) identifying and circulating best practices for agencies to attract, 
hire, retain, train, and empower AI talent, including diversity, inclusion, 
and accessibility best practices, as well as to plan and budget adequately 
for AI workforce needs;

(iii) coordinating, in consultation with the Director of OPM, the use of 
fellowship programs and agency technology-talent programs and human-capital 
teams to build hiring capabilities, execute hires, and place AI talent to 
fill staffing gaps; and

(iv) convening a cross-agency forum for ongoing collaboration between AI 
professionals to share best practices and improve retention.

                    (c) Within 45 days of the date of this order, to 
                advance existing Federal technology talent programs, 
                the United States Digital Service, Presidential 
                Innovation Fellowship, United States Digital Corps, 
                OPM, and technology talent programs at agencies, with 
                support from the AI and Technology Talent Task Force 
                described in subsection 10.2(b) of this section, as 
                appropriate and permitted by law, shall develop and 
                begin to implement plans to support the rapid 
                recruitment of individuals as part of a Federal 
                Government-wide AI talent surge to accelerate the 
                placement of key AI and AI-enabling talent in high-
                priority areas and to advance agencies' data and 
                technology strategies.
                    (d) To meet the critical hiring need for qualified 
                personnel to execute the initiatives in this order, and 
                to improve Federal hiring practices for AI talent, the 
                Director of OPM, in consultation with the Director of 
                OMB, shall:

(i) within 60 days of the date of this order, conduct an evidence-based 
review on the need for hiring and workplace flexibility, including Federal 
Government-wide direct-hire authority for AI and related data-science and 
technical roles, and, where the Director of OPM finds such authority is 
appropriate, grant it; this review shall include the following job series 
at all General Schedule (GS) levels: IT Specialist (2210), Computer 
Scientist (1550), Computer Engineer (0854), and Program Analyst (0343) 
focused on AI, and any subsequently developed job series derived from these 
job series;

(ii) within 60 days of the date of this order, consider authorizing the use 
of excepted service appointments under 5 CFR 213.3102(i)(3) to address the 
need for hiring additional staff to implement directives of this order;

(iii) within 90 days of the date of this order, coordinate a pooled-hiring 
action informed by subject-matter experts and using skills-based 
assessments to support the recruitment of AI talent across agencies;

(iv) within 120 days of the date of this order, as appropriate and 
permitted by law, issue guidance for agency application of existing pay 
flexibilities or incentive pay programs for AI, AI-enabling, and other key 
technical positions to facilitate appropriate use of current pay 
incentives;

(v) within 180 days of the date of this order, establish guidance and policy 
on skills-based, Federal Government-wide hiring of AI, data, and technology 
talent in order to increase access to Federal AI, data, and technology roles 
for those with nontraditional academic backgrounds;

(vi) within 180 days of the date of this order, establish an interagency 
working group, staffed with both human-resources professionals and 
recruiting technical experts, to facilitate Federal Government-wide hiring 
of people with AI and other technical skills;

(vii) within 180 days of the date of this order, review existing Executive 
Core Qualifications (ECQs) for Senior Executive Service (SES) positions 
informed by data and AI literacy competencies and, within 365 days of the 
date of this order, implement new ECQs as appropriate in the SES assessment 
process;

(viii) within 180 days of the date of this order, complete a review of 
competencies for civil engineers (GS-0810 series) and, if applicable, other 
related occupations, and make recommendations for ensuring that adequate AI 
expertise and credentials in these occupations in the Federal Government 
reflect the increased use of AI in critical infrastructure; and

(ix) work with the Security, Suitability, and Credentialing Performance 
Accountability Council to assess mechanisms to streamline and accelerate 
personnel-vetting requirements, as appropriate, to support AI and fields 
related to other critical and emerging technologies.

                    (e) To expand the use of special authorities for AI 
                hiring and retention, agencies shall use all 
                appropriate hiring authorities, including Schedule A(r) 
                excepted service hiring and direct-hire authority, as 
                applicable and appropriate, to hire AI talent and AI-
                enabling talent rapidly. In addition to participating 
                in OPM-led pooled hiring actions, agencies shall 
                collaborate, where appropriate, on agency-led pooled 
                hiring under the Competitive Service Act of 2015 
                (Public Law 114-137) and other shared hiring. Agencies 
                shall also, where applicable, use existing incentives, 
                pay-setting authorities, and other compensation 
                flexibilities, similar to those used for cyber and 
                information technology positions, for AI and data-
                science professionals, as well as plain-language job 
                titles, to help recruit and retain these highly skilled 
                professionals. Agencies shall ensure that AI and other 
                related talent needs (such as technology governance and 
                privacy) are reflected in strategic workforce planning 
                and budget formulation.
                    (f) To facilitate the hiring of data scientists, 
                the Chief Data Officer Council shall develop a 
                position-description library for data scientists (job 
                series 1560) and a hiring guide to support agencies in 
                hiring data scientists.
                    (g) To help train the Federal workforce on AI 
                issues, the head of each agency shall implement--or 
                increase the availability and use of--AI training and 
                familiarization programs for employees, managers, and 
                leadership in technology as well as relevant policy, managerial, 
                procurement, regulatory, ethical, governance, and legal 
                fields. Such training programs should, for example, 
                empower Federal employees, managers, and leaders to 
                develop and maintain an operating knowledge of emerging 
                AI technologies to assess opportunities to use these 
                technologies to enhance the delivery of services to the 
                public, and to mitigate risks associated with these 
                technologies. Agencies that provide professional-
                development opportunities, grants, or funds for their 
                staff should take appropriate steps to ensure that 
                employees who do not serve in traditional technical 
                roles, such as policy, managerial, procurement, or 
                legal fields, are nonetheless eligible to receive 
                funding for programs and courses that focus on AI, 
                machine learning, data science, or other related 
                subject areas.
                    (h) Within 180 days of the date of this order, to 
                address gaps in AI talent for national defense, the 
                Secretary of Defense shall submit a report to the 
                President through the Assistant to the President for 
                National Security Affairs that includes:

(i) recommendations to address challenges in the Department of Defense's 
ability to hire certain noncitizens, including at the Science and 
Technology Reinvention Laboratories;

(ii) recommendations to clarify and streamline processes for accessing 
classified information for certain noncitizens through Limited Access 
Authorization at Department of Defense laboratories;

(iii) recommendations for the appropriate use of enlistment authority under 
10 U.S.C. 504(b)(2) for experts in AI and other critical and emerging 
technologies; and

(iv) recommendations for the Department of Defense and the Department of 
Homeland Security to work together to enhance the use of appropriate 
authorities for the retention of certain noncitizens of vital importance to 
national security by the Department of Defense and the Department of 
Homeland Security.

                Sec. 11. Strengthening American Leadership Abroad. (a) 
                To strengthen United States leadership of global 
                efforts to unlock AI's potential and meet its 
                challenges, the Secretary of State, in coordination 
                with the Assistant to the President for National 
                Security Affairs, the Assistant to the President for 
                Economic Policy, the Director of OSTP, and the heads of 
                other relevant agencies as appropriate, shall:

(i) lead efforts outside of military and intelligence areas to expand 
engagements with international allies and partners in relevant bilateral, 
multilateral, and multi-stakeholder fora to advance those allies' and 
partners' understanding of existing and planned AI-related guidance and 
policies of the United States, as well as to enhance international 
collaboration; and

(ii) lead efforts to establish a strong international framework for 
managing the risks and harnessing the benefits of AI, including by 
encouraging international allies and partners to support voluntary 
commitments similar to those that United States companies have made in 
pursuit of these objectives and coordinating the activities directed by 
subsections (b), (c), (d), and (e) of this section, and to develop common 
regulatory and other accountability principles for foreign nations, 
including to manage the risk that AI systems pose.

                    (b) To advance responsible global technical 
                standards for AI development and use outside of 
                military and intelligence areas, the Secretary of 
                Commerce, in coordination with the Secretary of State 
                and the heads of other relevant agencies as 
                appropriate, shall lead preparations for a coordinated 
                effort with key international allies and partners and 
                with standards development organizations, to drive the 
                development and implementation of AI-related consensus 
                standards, cooperation and coordination, and 
                information sharing. In particular, the Secretary of 
                Commerce shall:

(i) within 270 days of the date of this order, establish a plan for global 
engagement on promoting and developing AI standards, with lines of effort 
that may include:

  (A) AI nomenclature and terminology;

  (B) best practices regarding data capture, processing, protection, 
privacy, confidentiality, handling, and analysis;

  (C) trustworthiness, verification, and assurance of AI systems; and

  (D) AI risk management;

(ii) within 180 days of the date the plan is established, submit a report 
to the President on priority actions taken pursuant to the plan; and

(iii) ensure that such efforts are guided by principles set out in the NIST 
AI Risk Management Framework and United States Government National 
Standards Strategy for Critical and Emerging Technology.

                    (c) Within 365 days of the date of this order, to 
                promote safe, responsible, and rights-affirming 
                development and deployment of AI abroad:

(i) The Secretary of State and the Administrator of the United States 
Agency for International Development, in coordination with the Secretary of 
Commerce, acting through the Director of NIST, shall publish an AI in 
Global Development Playbook that incorporates the AI Risk Management 
Framework's principles, guidelines, and best practices into the social, 
technical, economic, governance, human rights, and security conditions of 
contexts beyond United States borders. As part of this work, the Secretary 
of State and the Administrator of the United States Agency for 
International Development shall draw on lessons learned from programmatic 
uses of AI in global development.

(ii) The Secretary of State and the Administrator of the United States 
Agency for International Development, in collaboration with the Secretary 
of Energy and the Director of NSF, shall develop a Global AI Research 
Agenda to guide the objectives and implementation of AI-related research in 
contexts beyond United States borders. The Agenda shall:

  (A) include principles, guidelines, priorities, and best practices aimed 
at ensuring the safe, responsible, beneficial, and sustainable global 
development and adoption of AI; and

  (B) address AI's labor-market implications across international contexts, 
including by recommending risk mitigations.

                    (d) To address cross-border and global AI risks to 
                critical infrastructure, the Secretary of Homeland 
                Security, in coordination with the Secretary of State, 
                and in consultation with the heads of other relevant 
                agencies as the Secretary of Homeland Security deems 
                appropriate, shall lead efforts with international 
                allies and partners to enhance cooperation to prevent, 
                respond to, and recover from potential critical 
                infrastructure disruptions resulting from incorporation 
                of AI into critical infrastructure systems or malicious 
                use of AI.

(i) Within 270 days of the date of this order, the Secretary of Homeland 
Security, in coordination with the Secretary of State, shall develop a plan 
for multilateral engagements to encourage the adoption of the AI safety and 
security guidelines for use by critical infrastructure owners and operators 
developed in section 4.3(a) of this order.

(ii) Within 180 days of establishing the plan described in subsection 
(d)(i) of this section, the Secretary of Homeland Security shall submit a 
report to the President on priority actions to mitigate cross-border risks 
to critical United States infrastructure.

                Sec. 12. Implementation. (a) There is established, 
                within the Executive Office of the President, the White 
                House Artificial Intelligence Council (White House AI 
                Council). The function of the White House AI Council is 
                to coordinate the activities of agencies across the 
                Federal Government to ensure the effective formulation, 
                development, communication, industry engagement 
                related to, and timely implementation of AI-related 
                policies, including policies set forth in this order.

                    (b) The Assistant to the President and Deputy Chief 
                of Staff for Policy shall serve as Chair of the White 
                House AI Council.
                    (c) In addition to the Chair, the White House AI 
                Council shall consist of the following members, or 
                their designees:

(i) the Secretary of State;

(ii) the Secretary of the Treasury;

(iii) the Secretary of Defense;

(iv) the Attorney General;

(v) the Secretary of Agriculture;

(vi) the Secretary of Commerce;

(vii) the Secretary of Labor;

(viii) the Secretary of HHS;

(ix) the Secretary of Housing and Urban Development;

(x) the Secretary of Transportation;

(xi) the Secretary of Energy;

(xii) the Secretary of Education;

(xiii) the Secretary of Veterans Affairs;

(xiv) the Secretary of Homeland Security;

(xv) the Administrator of the Small Business Administration;

(xvi) the Administrator of the United States Agency for International 
Development;

(xvii) the Director of National Intelligence;

(xviii) the Director of NSF;

(xix) the Director of OMB;

(xx) the Director of OSTP;

(xxi) the Assistant to the President for National Security Affairs;

(xxii) the Assistant to the President for Economic Policy;

(xxiii) the Assistant to the President and Domestic Policy Advisor;

(xxiv) the Assistant to the President and Chief of Staff to the Vice 
President;

(xxv) the Assistant to the President and Director of the Gender Policy 
Council;

(xxvi) the Chairman of the Council of Economic Advisers;

(xxvii) the National Cyber Director;

(xxviii) the Chairman of the Joint Chiefs of Staff; and

(xxix) the heads of such other agencies, independent regulatory agencies, 
and executive offices as the Chair may from time to time designate or 
invite to participate.

                    (d) The Chair may create and coordinate subgroups 
                consisting of White House AI Council members or their 
                designees, as appropriate.

                Sec. 13. General Provisions. (a) Nothing in this order 
                shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or 
the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget 
relating to budgetary, administrative, or legislative proposals.

                    (b) This order shall be implemented consistent with 
                applicable law and subject to the availability of 
                appropriations.

                    (c) This order is not intended to, and does not, 
                create any right or benefit, substantive or procedural, 
                enforceable at law or in equity by any party against 
                the United States, its departments, agencies, or 
                entities, its officers, employees, or agents, or any 
                other person.
                
                
                    (Presidential Sig.)

                THE WHITE HOUSE,

                    October 30, 2023.

[FR Doc. 2023-24283
Filed 10-31-23; 11:15 am]
Billing code 3395-F4-P