Navigating the Complexities: Formulating Effective AI Risk Policy
Artificial Intelligence Risk Management Takes Center Stage at Black Hat USA

By | August 10, 20XX

BLACK HAT USA – Las Vegas – In a long-awaited shift, cybersecurity professionals are finally taking notice of the multilayered risks associated with artificial intelligence (AI). As discussions surrounding AI risk management gain momentum, CISOs, executives, board members, and AI/ML developers are all grappling with the question of how to set and enforce effective risk management policies.

Recognition of AI Risks

Hyrum Anderson, co-author of “Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them” and a distinguished ML engineer at Robust Intelligence, is heartened by the newfound attention to AI risks. Anderson notes that just a year ago, AI risk was dismissed as science fiction. However, with AI security now in the spotlight, there is genuine excitement at this year’s Black Hat USA conference.

Emerging Threats and Challenges

The conference, which opened with keynotes from Jeff Moss (founder of Black Hat and DEF CON) and Maria Markstedter of Azeria Labs, features key briefings on research uncovering emerging threats stemming from the use of AI systems. These threats include vulnerabilities in generative AI that make them prone to compromise and manipulation, AI-enhanced social engineering attacks, and the ease with which AI training data can be poisoned to undermine the reliability of ML models.

Will Pearce, AI red team lead for Nvidia, presented research at the conference showing that most training data is sourced from online platforms that are easy to manipulate. He revealed that for a mere $60, it is possible to control enough data to poison any consequential model.

Tackling Technical Challenges

Hyrum Anderson is using his time at Black Hat Arsenal to address the technical challenges associated with discovering and quantifying risk in AI systems. He unveiled the newly open-sourced AI Risk Database, a project he worked on in collaboration with researchers from MITRE and Indiana University.

Policy Challenges and Lessons

While Anderson tackles technical challenges, his co-author Ram Shankar Siva Kumar, an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, and Tech Policy Fellow at UC Berkeley, focuses on the policy aspects of AI risk management.

Siva Kumar, alongside Jonathon Penney, associate professor at Osgoode Hall Law School at York University, presented a session at the conference titled “Risks of AI Risk Policy: Five Lessons.” As public- and private-sector standards are released, such as the NIST AI RMF and the draft EU AI Act, enterprises will face difficult choices in deciding which standards to adhere to.

Complexity of AI Systems

Siva Kumar emphasized that AI systems are too complex to be governed by a single, unified standard: adhering to just one is not sufficient to address the intricate nature of these systems.

Challenges for Engineers

The second challenge lies in the technical vagueness of AI policy standards, making them difficult for engineers to implement. Compliance with these policies is not as simple as following a checklist; engineers struggle to navigate the complexities involved.

Tradeoffs in AI Risk Policies

Implementing security and resilience measures in AI systems comes with significant tradeoffs. Increasing a system’s robustness, for example, may worsen AI bias, creating tension between security and other desired properties of AI.

Competing Interests

Organizations will have to navigate competing interests as they strive for good AI risk management. Leadership and clear AI goals are crucial for prioritizing risks amidst technical tradeoffs.

Organizational Culture

Lastly, enforcing AI risk policies requires a fundamental shift in organizational culture. Compliance alone is insufficient; organizations must embrace collaboration and decisive action to effectively manage AI risks.

Conclusion

As the AI landscape continues to rapidly evolve, the need for robust risk management policies becomes increasingly important. While the challenges are complex and the path forward may not yet be entirely clear, the conversations and research taking place at Black Hat USA indicate that the cybersecurity community is committed to addressing the risks associated with AI. The collaboration between technical experts and policy professionals is critical for developing effective risk management strategies that protect us from potential AI vulnerabilities.


