NIST Launches the Trustworthy & Responsible Artificial Intelligence Resource Center

The News: The National Institute of Standards and Technology (NIST) has launched the new Trustworthy & Responsible Artificial Intelligence Resource Center, which will serve as a repository for much of the current federal guidance on AI and is intended to provide easy access to previously published resources on creating responsible AI systems. Read more from Nextgov.

Analyst Take: NIST’s launch of the new Trustworthy & Responsible Artificial Intelligence Resource Center is timely, with AI development moving at breakneck speed and the creation of responsible AI systems top of mind for many.

NIST’s announcement follows news from a few weeks ago that more than 1,100 technology experts, business leaders, and scientists, including Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk, have stepped up with warnings about labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying this technology poses a grave threat to humanity.

Many of these leaders signed an open letter calling for a pause on giant AI experiments, published on March 22, 2023, by the Future of Life Institute, whose mission is centered on “steering transformative technology toward benefitting life and away from extreme large-scale risks.” The four major risks Future of Life focuses on are artificial intelligence, biotech, nuclear weapons, and climate change, which says a lot about the significance of AI. Note that there are now more than 27,000 signatures on this open letter, including researchers and noted academics from all over the world, CEOs, and other leaders. Check out the list here for a glimpse into the people concerned about the rapid advancement of AI.

NIST’s Trustworthy & Responsible Artificial Intelligence Resource Center

NIST’s Trustworthy & Responsible Artificial Intelligence Resource Center will serve as a repository for much of the current federal guidance on AI, while also providing access to previously published materials. Building upon the previously released AI Risk Management Framework (AI RMF) and AI RMF Playbook, the Resource Center will support the AI industry by providing best practices for researching and developing socially responsible AI and machine learning systems. In the absence of overarching federal law, and given the concerns expressed by tech experts, business leaders, researchers, and scientists, this should prove a valuable resource.

The Trustworthy & Responsible Artificial Intelligence Resource Center provides quick access to:

  • The AI Risk Management Framework (AI RMF), which is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
  • The AI RMF Playbook, which provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF).
  • The Roadmap, which is designed to help identify key activities for advancing the AI RMF that could be carried out by NIST in collaboration with private and public sector organizations – or by those organizations independently. NIST adds that these could change as AI technology evolves.
  • A Glossary to provide interested parties with a broader awareness of the multiple meanings of commonly used terms within the interdisciplinary field of Trustworthy and Responsible AI.
  • Technical and Policy Documents — the Resource Center will provide direct links to NIST documents related to the AI RMF and NIST AI Publication Series, as well as NIST-funded external resources in the area of Trustworthy and Responsible AI.
  • Engagement and Events — provides links to workshops, visiting AI Fellows, student programs, and grants.

NIST says it expects to add enhancements to the Trustworthy & Responsible Artificial Intelligence Resource Center, which will include new document links, access to an international standards hub, metrics resources for AI systems testing, and software tools.

Wrapping up, the launch of the new NIST Trustworthy & Responsible Artificial Intelligence Resource Center is exciting news for public and private sector organizations looking to develop and deploy trustworthy and responsible AI technologies. The site already offers a wealth of resources, and it will grow even more robust as NIST adds enhancements. This is a great resource for those developing AI technologies.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

CAIDP Files Complaint with FTC Against OpenAI’s GPT-4 for Violating Consumer Protection Rules

How Organizations are Using AI and Digitization to Operate More Efficiently, Safely, and Sustainably

Intel and Hugging Face Discuss Compute and Ethical Issues Associated with Generative AI

Author Information

Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years' experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly's coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”
