[Federal Register Volume 86, Number 143 (Thursday, July 29, 2021)]
[Notices]
[Pages 40810-40813]
From the Federal Register Online via the Government Publishing Office [www.gpo.gov]
[FR Doc No: 2021-16176]


-----------------------------------------------------------------------

DEPARTMENT OF COMMERCE

National Institute of Standards and Technology

[Docket Number: 210726-0151]


Artificial Intelligence Risk Management Framework

AGENCY: National Institute of Standards and Technology, Department of 
Commerce.

ACTION: Request for information.

-----------------------------------------------------------------------

SUMMARY: The National Institute of Standards and Technology (NIST) is 
developing a framework that can be used to improve the management of 
risks to individuals, organizations, and society associated with 
artificial intelligence (AI). The NIST Artificial Intelligence Risk 
Management Framework (AI RMF or Framework) is intended for voluntary 
use and to improve the ability to incorporate trustworthiness 
considerations into the design, development, use, and evaluation of 
AI products, services, and systems. This notice requests information to 
help inform, refine, and guide the development of the AI RMF. The 
Framework will be developed through a consensus-driven, open, and 
collaborative process that will include public workshops and other 
opportunities for stakeholders to provide input.

DATES: Comments in response to this notice must be received by 5:00 
p.m. Eastern time on August 19, 2021. Written comments in response to 
the RFI should be submitted according to the instructions in the 
ADDRESSES and SUPPLEMENTARY INFORMATION sections below. Submissions 
received after that date may not be considered.

ADDRESSES: Comments may be submitted by any of the following methods:
    • Electronic submission: Submit electronic public comments 
via the Federal e-Rulemaking Portal.
    1. Go to www.regulations.gov and enter NIST-2021-0004 in the search 
field,
    2. Click the ``Comment Now!'' icon, complete the required fields, 
and
    3. Enter or attach your comments.
    • Email: Comments in electronic form may also be sent to 
AIframework@nist.gov in any of the following formats: HTML; ASCII; 
Word; RTF; or PDF.
    Please submit comments only and include your name, organization's 
name (if any), and cite ``AI Risk Management Framework'' in all 
correspondence.

FOR FURTHER INFORMATION CONTACT: For questions about this RFI contact: 
Mark Przybocki (mark.przybocki@nist.gov), U.S. National Institute of 
Standards and Technology, MS 20899, 100 Bureau Drive, Gaithersburg, MD 
20899, telephone (301) 975-3347, email AIframework@nist.gov.
    Direct media inquiries to NIST's Office of Public Affairs at (301) 
975-2762. Users of telecommunication devices for the deaf, or a text 
telephone, may call the Federal Relay Service, toll free at 1-800-877-
8339.
    Accessible Format: On request by persons with disabilities to the 
contact person listed above, NIST will make the RFI available in 
alternate formats, such as Braille or large print.

SUPPLEMENTARY INFORMATION:

Genesis for Development of the AI Risk Management Framework

    Artificial intelligence (AI) is rapidly transforming our world.
    Surges in AI capabilities have led to a wide range of innovations. 
These new AI-enabled systems are benefiting many parts of society and 
the economy, from commerce and healthcare to transportation and 
cybersecurity. At the same time, new AI-based technologies, products, 
and services bring technical and societal challenges and risks, 
including ensuring that AI comports with ethical values. While there is 
no objective standard for ethical values, as they are grounded in the 
norms and legal expectations of specific societies or cultures, it is 
widely agreed that AI must be designed, developed, used, and evaluated 
in a trustworthy and responsible manner to foster public confidence and 
trust. Trust is established by ensuring that AI systems are cognizant 
of and are built to align with core values in society, and in ways

[[Page 40811]]

which minimize harms to individuals, groups, communities, and societies 
at large.
    Defining trustworthiness in meaningful, actionable, and testable 
ways remains a work in progress. Inside and outside the United States 
there are diverse views about what that entails, including who is 
responsible for instilling trustworthiness during the stages of design, 
development, use, and evaluation. There also are different ideas about 
how to assure conformity with principles and characteristics of AI 
trustworthiness.
    NIST is among the institutions addressing these issues. NIST aims 
to cultivate the public's trust in the design, development, use, and 
evaluation of AI technologies and systems in ways that enhance economic 
security and improve quality of life. NIST focuses on improving 
measurement science, standards, technology, and related tools, 
including evaluation and data. NIST is developing forward-thinking 
approaches that support innovation and confidence in AI systems. The 
agency's work on an AI RMF is consistent with recommendations by the 
National Security Commission on Artificial Intelligence \1\ and the 
Plan for Federal Engagement in Developing AI Technical Standards and 
Related Tools.\2\
---------------------------------------------------------------------------

    \1\ National Security Commission on Artificial Intelligence, 
Final Report, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.
    \2\ Plan for Federal Engagement in Developing AI Technical 
Standards and Related Tools, https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf.
---------------------------------------------------------------------------

    Congress has directed NIST to collaborate with the private and 
public sectors to develop a voluntary AI RMF.\3\ The Framework is 
intended to help designers, developers, users and evaluators of AI 
systems better manage risks across the AI lifecycle. For purposes of 
this RFI, ``managing'' means: Identifying, assessing, responding to, 
and communicating AI risks. ``Responding'' to AI risks means: Avoiding, 
mitigating, sharing, transferring, or accepting risk. ``Communicating'' 
AI risk means: Disclosing and negotiating risk and sharing with 
connected systems and actors in the domain of design, deployment and 
use. ``Design, development, use, and evaluation'' of AI systems 
includes procurement, monitoring, or sustainment of AI components and 
systems.
---------------------------------------------------------------------------

    \3\ H. Rept. 116-455--COMMERCE, JUSTICE, SCIENCE, AND RELATED 
AGENCIES APPROPRIATIONS BILL, 2021, CRPT-116hrpt455.pdf 
(congress.gov), and Section 5301 of the National Artificial 
Intelligence Initiative Act of 2020 (Pub. L. 116-283), https://www.congress.gov/116/bills/hr6395/BILLS-116hr6395enr.pdf.
---------------------------------------------------------------------------
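
    The definitions above amount to a small taxonomy of risk-management 
actions. As a purely illustrative sketch (not part of this notice, and 
with all names hypothetical), the relationships might be modeled as 
follows:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Response(Enum):
        # "Responding" per the RFI: avoiding, mitigating, sharing,
        # transferring, or accepting risk.
        AVOID = auto()
        MITIGATE = auto()
        SHARE = auto()
        TRANSFER = auto()
        ACCEPT = auto()

    @dataclass
    class AIRisk:
        # "Managing" per the RFI: identifying, assessing, responding to,
        # and communicating AI risks. All fields here are hypothetical.
        description: str    # the identified risk
        severity: int       # an assessment, e.g., 1 (low) to 5 (high)
        response: Response  # the chosen response
        communicated_to: list[str] = field(default_factory=list)

    # Example: recording a bias risk found during design.
    risk = AIRisk(
        description="training data under-represents a demographic group",
        severity=4,
        response=Response.MITIGATE,
        communicated_to=["developers", "downstream deployers"],
    )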

    The Framework aims to foster the development of innovative 
approaches to address characteristics of trustworthiness including 
accuracy, explainability and interpretability, reliability, privacy, 
robustness, safety, security (resilience), and mitigation of unintended 
and/or harmful bias, as well as of harmful uses. The Framework should 
consider and encompass principles such as transparency, fairness, and 
accountability during design, deployment, use, and evaluation of AI 
technologies and systems. With broad and complex uses of AI, the 
Framework should consider risks from unintentional, unanticipated, or 
harmful outcomes that arise from intended uses, secondary uses, and 
misuses of the AI. These characteristics and principles are generally 
considered as contributing to the trustworthiness of AI technologies 
and systems, products, and services. NIST is interested in whether 
stakeholders define or use other characteristics and principles.
    Among other purposes, the AI RMF is intended to be a tool that 
would complement and assist with broader aspects of enterprise risk 
management which could affect individuals, groups, organizations, or 
society.

AI RMF Development and Attributes

    NIST is soliciting input from all interested stakeholders, seeking 
to understand how individuals, groups and organizations involved with 
designing, developing, using, or evaluating AI systems might be better 
able to address the full scope of AI risk and how a framework for 
managing AI risks might be constructed. Stakeholders include but are 
not limited to industry, civil society groups, academic institutions, 
federal agencies, state, local, territorial, tribal, and foreign 
governments, standards developing organizations and researchers.
    NIST intends the Framework to provide a prioritized, flexible, 
risk-based, outcome-focused, and cost-effective approach that is useful 
to the community of AI designers, developers, users, evaluators, and 
other decision makers and is likely to be widely adopted. The 
Framework's development process will involve several iterations to 
encourage robust and continuing engagement and collaboration with 
interested stakeholders. This will include open, public workshops, 
along with other forms of outreach and feedback. This RFI is an 
important part of that process.
    NIST believes that the AI RMF should have the following attributes:
    1. Be consensus-driven and developed and regularly updated through 
an open, transparent process. All stakeholders should have the 
opportunity to contribute to the Framework's development. NIST has a 
long track record of successfully and collaboratively working with a 
range of stakeholders to develop standards and guidelines. NIST will 
model its approach on the open, transparent, and collaborative 
approaches used to develop the Framework for Improving Critical 
Infrastructure Cybersecurity (``Cybersecurity Framework'') \4\ as well 
as the Privacy Framework: A Tool for Improving Privacy through 
Enterprise Risk Management (``Privacy Framework'').\5\
---------------------------------------------------------------------------

    \4\ Framework for Improving Critical Infrastructure 
Cybersecurity (``Cybersecurity Framework''), https://www.nist.gov/cyberframework.
    \5\ Privacy Framework: A Tool for Improving Privacy through 
Enterprise Risk Management (``Privacy Framework''), https://www.nist.gov/privacy-framework/privacy-framework.
---------------------------------------------------------------------------

    2. Provide common definitions. The Framework should provide 
definitions and characterizations for aspects of AI risk and 
trustworthiness that are common and relevant across all sectors. The 
Framework should establish a common AI risk taxonomy, terminology, and 
agreed-upon definitions, including those of trust and trustworthiness.
    3. Use plain language that is understandable by a broad audience, 
including senior executives and those who are not AI professionals, 
while still of sufficient technical depth to be useful to practitioners 
across many domains.
    4. Be adaptable to many different organizations, AI technologies, 
lifecycle phases, sectors, and uses. The Framework should be scalable 
to organizations of all sizes, public or private, in any sector, and 
operating within or across domestic borders. It should be platform- and 
technology-agnostic and customizable. It should meet the needs of AI 
designers, developers, users, and evaluators alike.
    5. Be risk-based, outcome-focused, voluntary, and non-prescriptive. 
The Framework should focus on the value of trustworthiness and related 
needs, capabilities, and outcomes. It should provide a catalog of 
outcomes and approaches to be used voluntarily, rather than a set of 
one-size-fits-all requirements, in order to: Foster innovation in 
design, development, use and evaluation of trustworthy and responsible 
AI systems; inform education and workforce development; and promote 
research on and adoption of effective solutions. The Framework should 
assist those designing, developing, using, and evaluating AI to

[[Page 40812]]

better manage AI risks for their intended use cases or scenarios.
    6. Be readily usable as part of any enterprise's broader risk 
management strategy and processes.
    7. Be consistent, to the extent possible, with other approaches to 
managing AI risk. The Framework should, when possible, take advantage 
of and provide greater awareness of existing standards, guidelines, 
best practices, methodologies, and tools for managing AI risks whether 
presented as frameworks or in other formats. It should be law- and 
regulation-agnostic to support organizations' ability to operate under 
applicable domestic and international legal or regulatory regimes.
    8. Be a living document. The Framework should be capable of being 
readily updated as technology, understanding, and approaches to AI 
trustworthiness and uses of AI change and as stakeholders learn from 
implementing AI risk management. NIST expects there may be aspects of 
AI trustworthiness that are not sufficiently developed for inclusion in 
the initial Framework.
    As noted below, NIST solicits comments on these and potentially 
other desired attributes of an AI RMF, as well as on high-priority gaps 
in organizations' ability to manage AI risks.

Goals of This Request for Information (RFI)

    This RFI invites stakeholders to submit ideas, based on their 
experience as well as their research, to assist in prioritizing 
elements and development of the AI RMF. Stakeholders include but are 
not limited to industry, civil society groups, academic institutions, 
federal agencies, state, local, territorial, tribal, and foreign 
governments, standards developing organizations and researchers. The 
Framework is intended to address AI risk management related to 
individuals, groups or organizations involved in the design, 
development, use, and evaluation of AI systems.
    The goals of the Framework development process, generally, and this 
RFI, specifically, are to:
    1. Identify and better understand common challenges in the design, 
development, use, and evaluation of AI systems that might be addressed 
through a voluntary Framework;
    2. gain a greater awareness about the extent to which organizations 
are identifying, assessing, prioritizing, responding to, and 
communicating AI risk or have incorporated AI risk management 
standards, guidelines, and best practices, into their policies and 
practices; and
    3. specify high-priority gaps for which guidelines, best practices, 
and new or revised standards are needed and could be addressed by the 
AI RMF--or which would require further understanding, research, and 
development.

Details About Responses to This Request for Information

    When addressing the topics below, respondents may describe the 
practices of their organization or organizations with which they are 
familiar. They also may provide information about the type, size, and 
location of those organization(s) if they desire. Providing such 
information is optional and will not affect NIST's full consideration 
of the comment. Respondents are encouraged to provide generalized 
information based on research and potential practices as well as on 
current approaches and activities.
    Comments containing references, studies, research, and other 
empirical data that are not widely published (i.e., not readily 
available on the internet) should include copies of the referenced 
materials. All 
submissions, including attachments and other supporting materials, will 
become part of the public record and subject to public disclosure. NIST 
reserves the right to publish relevant comments publicly, unedited and 
in their entirety. All relevant comments received by the deadline will 
be made publicly available at https://www.nist.gov/itl/ai-risk-management-framework and at regulations.gov. Respondents are strongly 
encouraged to use the template available at: https://www.nist.gov/itl/ai-risk-management-framework.
    Personally identifiable information (PII), such as street 
addresses, phone numbers, account numbers or Social Security numbers, 
or names of other individuals, should not be included. NIST asks 
commenters to avoid including PII as NIST has no plans to redact PII 
from comments. Do not submit confidential business information, or 
otherwise sensitive or protected information. Comments that contain 
profanity, vulgarity, threats, or other inappropriate language or 
content will not be considered. NIST requests that commenters, to the 
best of their ability, only submit attachments that are accessible to 
people who rely upon assistive technology. A good resource for document 
accessibility can be found at: section508.gov/create/documents.

Specific Requests for Information

    The following statements are not intended to limit the topics that 
may be addressed. Responses may include any topic believed to have 
implications for the development of an AI RMF, regardless of whether 
the topic is included in this document. All relevant responses that 
comply with the requirements listed in the DATES and ADDRESSES sections 
of this RFI and set forth below will be considered.
    NIST is requesting information related to the following topics:
    1. The greatest challenges in improving how AI actors manage AI-
related risks--where ``manage'' means identify, assess, prioritize, 
respond to, or communicate those risks;
    2. How organizations currently define and manage characteristics of 
AI trustworthiness and whether there are important characteristics 
which should be considered in the Framework besides: Accuracy, 
explainability and interpretability, reliability, privacy, robustness, 
safety, security (resilience), and mitigation of harmful bias, or 
harmful outcomes from misuse of the AI;
    3. How organizations currently define and manage principles of AI 
trustworthiness and whether there are important principles which should 
be considered in the Framework besides: Transparency, fairness, and 
accountability;
    4. The extent to which AI risks are incorporated into different 
organizations' overarching enterprise risk management--including, but 
not limited to, the management of risks related to cybersecurity, 
privacy, and safety;
    5. Standards, frameworks, models, methodologies, tools, guidelines 
and best practices, and principles to identify, assess, prioritize, 
mitigate, or communicate AI risk and whether any currently meet the 
minimum attributes described above;
    6. How current regulatory or regulatory reporting requirements 
(e.g., local, state, national, international) relate to the use of AI 
standards, frameworks, models, methodologies, tools, guidelines and 
best practices, and principles;
    7. AI risk management standards, frameworks, models, methodologies, 
tools, guidelines and best practices, principles, and practices which 
NIST should consider to ensure that the AI RMF aligns with and supports 
other efforts;
    8. How organizations take into account benefits and issues related 
to inclusiveness in AI design, development, use and evaluation--and how 
AI design and development may be carried out in a way that reduces or 
manages the risk of potential negative impact on individuals, groups, 
and society.

[[Page 40813]]

    9. The appropriateness of the attributes NIST has developed for the 
AI Risk Management Framework. (See above, ``AI RMF Development and 
Attributes'');
    10. Effective ways to structure the Framework to achieve the 
desired goals, including, but not limited to, integrating AI risk 
management processes with organizational processes for developing 
products and services for better outcomes in terms of trustworthiness 
and management of AI risks. Respondents are asked to identify any 
current models which would be effective. These could include--but are 
not limited to--the NIST Cybersecurity Framework or Privacy Framework, 
which focus on outcomes, functions, categories and subcategories and 
also offer options for developing profiles reflecting current and 
desired approaches as well as tiers to describe degree of framework 
implementation (an illustrative sketch of this structure follows this 
list); and
    11. How the Framework could be developed to advance the 
recruitment, hiring, development, and retention of a knowledgeable and 
skilled workforce necessary to perform AI-related functions within 
organizations.
    12. The extent to which the Framework should include governance 
issues, including but not limited to the makeup of design and development 
teams, monitoring and evaluation, and grievance and redress.
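
    Item 10 above points to the Cybersecurity Framework's 
outcome-oriented structure of functions, categories, and subcategories, 
with profiles and implementation tiers. A minimal sketch of how such a 
structure could be represented, with all names hypothetical (this is an 
illustration, not a proposed design):

    from dataclasses import dataclass, field

    @dataclass
    class Subcategory:
        identifier: str  # e.g., a hypothetical "MAP-1.2"
        outcome: str     # the outcome statement

    @dataclass
    class Category:
        name: str
        subcategories: list[Subcategory] = field(default_factory=list)

    @dataclass
    class Function:
        # Top-level grouping, analogous to the CSF's Identify/Protect/etc.
        name: str
        categories: list[Category] = field(default_factory=list)

    @dataclass
    class Profile:
        # A selection of outcomes describing a current or target posture,
        # plus a tier (1-4) describing degree of framework implementation.
        selected: set[str]  # subcategory identifiers in scope
        tier: int

    def gap(current: Profile, target: Profile) -> set[str]:
        # Outcomes in the target profile not yet met by the current one.
        return target.selected - current.selected

    Under this reading, an organization's improvement roadmap is the gap 
between its current and target profiles.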
    Authority: 15 U.S.C. 272(b), (c), & (e); 15 U.S.C. 278g-3.

Alicia Chambers,
NIST Executive Secretariat.
[FR Doc. 2021-16176 Filed 7-28-21; 8:45 am]
BILLING CODE 3510-13-P

