CISOs should prepare for new requirements to protect data collected for and generated by artificial intelligence algorithms.
Regulation surrounding artificial intelligence technologies will likely have a growing impact on how companies store, secure, and share data in the years ahead. The ethics of artificial intelligence (AI), particularly the use of facial recognition by law enforcement authorities, have received a lot of attention. Still, the US is just at the beginning of what will likely be a surge in federal and state legislation governing what companies can and cannot do with algorithmically derived information.
“It’s really the wild west right now in terms of regulation of artificial intelligence,” Peter Stockburger, partner in the Data, Privacy, and Cybersecurity practice at global law firm Dentons, tells CSO. Much like the California Consumer Privacy Act (CCPA), which spelled out notice requirements that companies must send to consumers regarding their privacy protections, “a lot of people think that’s where the AI legislation is going to go, that you should be giving users notification that there’s automated decision making happening and getting their consent.”
AI encompasses a wide range of technical activities, from the creation of deepfakes to automated decision-making for credit scores, rental applications, job candidacy, and much more. On a day-to-day basis, many, if not most, companies now rely on algorithms for business decision-making that could fall into the category of artificial intelligence.
This article appeared in CSO Online.