
With AI RMF, NIST addresses artificial intelligence risks
The new framework could have wide-ranging implications for the private and public sectors. NIST is seeking comments on the current draft by April 29, 2022.
Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities to function more efficiently, reshaping shopping recommendations, approving credit, processing images, enabling predictive policing, and much more.
Like any digital technology, AI can suffer from a range of traditional security weaknesses as well as emerging concerns such as privacy, bias, inequality, and safety issues. To better manage the risks associated with AI, the National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF). The framework’s goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The initial draft of the framework builds on a concept paper NIST released in December 2021. NIST hopes the AI RMF will describe how the risks from AI-based systems differ from those in other domains and will encourage and equip a wide range of AI stakeholders to address those risks purposefully. NIST said the framework can be used to map compliance considerations beyond those it addresses, including existing regulations, laws, and other mandatory guidance.
This article appeared in CSO Online. To read the rest of the article, please visit here.