Articles

With AI RMF, NIST addresses artificial intelligence risks

The new framework could have wide-ranging implications for the private and public sectors. NIST is seeking comments on the current draft by April 29, 2022.

Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities to operate more efficiently, and reshaping shopping recommendations, credit approval, image processing, predictive policing, and much more.

Like any digital technology, AI can suffer from a range of traditional security weaknesses as well as emerging concerns such as privacy, bias, inequality, and safety issues. The National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF), to better manage the risks associated with AI. The framework’s goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

The initial draft of the framework builds on a concept paper released by NIST in December 2021. NIST hopes the AI RMF will describe how the risks from AI-based systems differ from those in other domains and will encourage and equip the many different stakeholders in AI to address those risks purposefully. NIST said the framework can be used to map compliance considerations beyond those it addresses directly, including existing regulations, laws, or other mandatory guidance.

This article appeared in CSO Online. To read the rest of the article, please visit here.

Articles

New AI privacy, security regulations likely coming with pending…

CISOs should prepare for new requirements to protect data collected for and generated by artificial intelligence algorithms.

Regulation surrounding artificial intelligence technologies will likely have a growing impact on how companies store, secure, and share data in the years ahead. The ethics of artificial intelligence (AI) use by law enforcement authorities, particularly facial recognition, has received a lot of attention. Still, the US is just at the beginning of what will likely be a surge in federal and state legislation governing what companies can and cannot do with algorithmically derived information.

“It’s really the wild west right now in terms of regulation of artificial intelligence,” Peter Stockburger, partner in the Data, Privacy, and Cybersecurity practice at global law firm Dentons, tells CSO. Much like the California Consumer Privacy Act (CCPA), which spelled out notice requirements that companies must send to consumers regarding their privacy protections, “a lot of people think that’s where the AI legislation is going to go, that you should be giving users notification that there’s automated decision making happening and getting their consent.”
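To make the notice-and-consent idea concrete, here is a minimal, hypothetical sketch (not part of the article or any specific law) of how an application might gate an automated decision on a recorded user consent. The `ConsentStore`, `score_applicant`, and `automated_credit_decision` names and the scoring rule are illustrative assumptions, not a prescribed compliance mechanism.

```python
# Hypothetical sketch: require recorded consent before running automated
# decision-making, in the spirit of the CCPA-style notice requirements
# discussed above. All names and thresholds here are illustrative.
from dataclasses import dataclass, field


@dataclass
class ConsentStore:
    """Illustrative in-memory record of which users consented to automated decisions."""
    consents: dict = field(default_factory=dict)

    def record(self, user_id: str, granted: bool) -> None:
        self.consents[user_id] = granted

    def has_consented(self, user_id: str) -> bool:
        return self.consents.get(user_id, False)


def score_applicant(income: float, debt: float) -> bool:
    """Stand-in for an algorithmic credit decision (purely illustrative rule)."""
    return debt / max(income, 1.0) < 0.4


def automated_credit_decision(store: ConsentStore, user_id: str,
                              income: float, debt: float) -> str:
    # Consent check happens before any automated decision-making runs;
    # without consent on file, the case is routed to a human instead.
    if not store.has_consented(user_id):
        return "Routed to manual review: no consent on file for automated decision-making."
    return "Approved" if score_applicant(income, debt) else "Declined"


if __name__ == "__main__":
    store = ConsentStore()
    store.record("user-123", granted=True)
    print(automated_credit_decision(store, "user-123", income=85_000, debt=20_000))
    print(automated_credit_decision(store, "user-456", income=85_000, debt=20_000))
```

The design point the sketch illustrates is simply that the consent check sits in front of the algorithm, so a missing or withdrawn consent changes the workflow rather than silently proceeding.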

AI encompasses a wide range of technical activities, from the creation of deepfakes to automated decision-making regarding credit scores, rental applications, job worthiness, and much more. On a day-to-day basis, many, if not most, companies now use formulas for business decision-making that could fall into the category of artificial intelligence.

This article appeared in CSO Online. To read the rest of the article, please visit here.
