The Biden-Harris Administration: Taking Actions Towards Cyber-Secure AI
The White House announcement on May 4, 2023, regarding Artificial Intelligence (AI) policies put cybersecurity at the forefront of concerns around AI. Given AI's potential dangers, from economic disruption to discrimination, the administration announced an event at DEF CON 31 where the nation's leading developers would submit their models to rigorous public vetting.
DEF CON AI Village Event
The AI Village event secured independent commitments from some of the nation's leading AI companies, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI, to a public evaluation of their AI systems conducted in line with responsible disclosure principles. By shining a light on the algorithmic flaws that enable racial discrimination, cybersecurity risks, and other harms, the event serves the essential purpose of having AI models evaluated thoroughly, independent of both government and the companies that develop them.
The Looming AI Threats
AI's cybersecurity risks go beyond the continuously evolving malware and phishing threats already seen worldwide. AI poses an existential challenge to the future of a safe internet because it allows hackers, and even non-technical actors, to spread malware at unprecedented scale. It also enables malicious actors to tailor phishing lures and build advanced malware for complex attack chains, expanding an already bloated cyberattack surface.
According to Chenxi Wang, head of Rain Capital, the most significant danger of AI is disinformation, which influences decision-making and leads to bad outcomes with long-lasting impacts.
Government Actions on Cyber-Secure AI
Treating AI cybersecurity as a national security issue, the Biden-Harris administration outlined specific measures to address these threats. The National Science Foundation plans to fund seven new National AI Research Institutes, including work on AI cybersecurity, while the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework to guide responsible AI use.
The May 4 announcement also revealed the Office of Management and Budget's plan to release draft policy guidance. Once published, that guidance is intended to encourage those developing and selling AI technologies to follow its requirements.
Conclusion
The DEF CON AI Village event and the government's initiatives raise awareness of AI's impending dangers and demand thorough evaluation of AI models. By following these guidelines, AI can be developed securely, ensuring its ethical and reliable use without endangering a safe internet.
Photo by Michael Dziedzic