Microsoft’s call for responsible AI: Why guidelines matter

"Microsoft's call for responsible AI: Why guidelines matter"microsoft,responsibleAI,guidelines,artificialintelligence

Microsoft Urges Lawmakers to Adopt New Guidelines for Responsible AI

On Thursday, May 25, 2023, Microsoft’s President, Brad Smith, proposed a blueprint for regulating artificial intelligence (AI) that calls for building on existing structures rather than starting from scratch. Smith’s proposal is the latest from industry leaders on how to regulate a technology that still lacks a dedicated regulatory framework. He presented a five-point plan for governing AI:

Implementing and building upon existing frameworks

Smith pointed to the National Institute of Standards and Technology (NIST) AI Risk Management Framework as an example of what regulators can build on. He also proposed an executive order requiring the federal government to procure AI services only from companies that commit to responsible-use principles.

Requiring effective brakes on AI deployments

Smith emphasized the need for safety brakes on AI systems, particularly those that control critical infrastructure, so that humans remain able to intervene and deployments stay responsible.

Developing a broader legal and regulatory framework

Smith suggested creating a legal framework to govern AI. He backed OpenAI CEO Sam Altman’s recommendation of a licensing regime for AI firms, and proposed placing AI specialists within regulatory agencies so they can evaluate products competently.

Promoting transparency

Smith committed to producing an annual transparency report for Microsoft’s AI products. He also called for greater transparency around AI-generated content so that people know when they are interacting with it.

Pursuing new public-private partnerships

Smith advised building global frameworks for responsible AI and encouraged public-private partnerships within and between nations.

Smith expressed hope that lawmakers would pass federal privacy legislation this year. He also stressed the need to address national security concerns, such as deepfakes and their potential use in foreign influence operations. As an example of AI’s constructive use, Smith cited Ukraine, where AI helped map, in near real time, 3,000 schools damaged by Russian forces to support war-crime investigations.

Internet Security and Commentary

The emphasis on responsible AI use is fundamental. The harms AI can inflict on society, from cyber-attacks to consumer fraud and algorithmic bias, are real concerns that demand attention. The regulatory framework Smith calls for must be both flexible and comprehensive if it is to keep pace with a rapidly evolving AI industry. Microsoft’s annual transparency report for its AI products should promote responsible use and build trust among consumers and regulators alike.

Editorial

Smith’s proposal is a positive step forward. Using the government’s buying power to shape the AI industry is an essential lever for promoting responsible AI use. Building on existing frameworks, promoting transparency, and developing a broader legal and regulatory framework will be critical to achieving effective AI regulation.

Advice

Any organization can be a target of cyber threats. Organizations should therefore build security into every stage of AI deployment, from procurement through operation. Pairing a robust regulatory framework with internal guidelines for responsible use can help mitigate many of the threats associated with AI.


"Microsoft
<< photo by Shot by Cerqueira >>
