The Evolution of Terrorism: Evaluating the Threats of Existential Terrorism and AI


Introduction

As the world becomes increasingly interconnected and technology continues to evolve at a rapid pace, the threat of terrorism has taken on a new dimension. While we often associate terrorism with acts of violence and destruction, the idea of “existential terrorism” is a concept that poses a far graver danger—one that could potentially lead to the extinction of humanity itself. This alarming notion has been the subject of a recent article published in the European Journal of Risk Regulation by Gary Ackerman, an associate professor at the State University of New York’s University at Albany College of Emergency Preparedness, Homeland Security, and Cybersecurity.

Defining Existential Terrorism

In this groundbreaking article, Ackerman and co-author Zachary Kallenborn examine the plausibility of terrorist organizations utilizing emerging technologies, particularly artificial intelligence (AI), to inflict catastrophic harm on humanity. They define existential terrorism as an act of terrorism that has the potential to cause the extinction of the human species or render it impossible for human flourishing to occur. This definition challenges the traditional perception of terrorism as being limited to localized violence and brings into focus a potential global existential threat.

The Role of AI in Existential Terrorism

Ackerman emphasizes that the likelihood of an individual or small group of terrorists directly causing human extinction is highly improbable without significant leverage. However, emerging technologies such as AI have the potential to act as force multipliers, enabling terrorists to exert substantial leverage and potentially cause harm at an extinction level. While the immediate concern of a malevolent AI capable of independently destroying humanity is speculative, the authors warn that the possibility cannot be entirely dismissed.

Biotechnology is currently the most plausible technology that terrorists could produce and deploy on their own to pose an existential threat. For instance, terrorists could engineer a self-replicating, highly contagious, and deadly pathogen capable of causing a global pandemic. Doing so, however, would require exceptional technical knowledge and specialized equipment. The authors therefore conclude that while terrorists are unlikely to cause human extinction directly, they could contribute to it indirectly by sabotaging or removing safeguards that protect against other existential risks.

The Importance of Exploring Extreme Scenarios

Ackerman acknowledges that some may consider the scenarios discussed in the article as far-fetched or even “crazy.” However, he argues that it is essential to consider such extreme scenarios to better prepare for future emerging threats, like AI. By exploring the most extreme possibilities, we gain a better understanding of lesser-known risks and can calibrate the likelihood of more plausible cases of terrorism.

Editorial: Vigilance, Preparation, and International Cooperation

The threat of existential terrorism, particularly through the utilization of AI, demands serious attention from policymakers, security experts, and the general public. While the immediate risk appears low, complacency and dismissiveness could prove disastrous in the long run. Instead, we must adopt a proactive approach to risk assessment, preparedness, and prevention.

First and foremost, governments and organizations around the world must invest in comprehensive research and analysis to identify and understand the potential risks associated with emerging technologies such as AI. This includes exploring the vulnerabilities of existing AI systems and developing safeguards to prevent potential misuse or hacking by terrorist entities.

International collaboration is crucial in addressing the threat of existential terrorism. Regardless of geopolitical rivalries, countries must recognize the importance of collective action to safeguard humanity’s future. Initiatives like the Global Catastrophic Risk Institute’s call for a moratorium on the development of advanced AI systems demonstrate the need for a global perspective and coordinated efforts to mitigate the risks.

While the focus has primarily been on the direct risks posed by AI, the authors’ argument regarding spoilers—terrorist actions that remove or impede safeguards—highlights the importance of not neglecting other existential threats. Safeguarding against existential risks, whether natural or human-induced, requires comprehensive strategies that encompass technological advancements, regulation, and international cooperation.

Critical Reflection and Looking Ahead

Ackerman acknowledges that, based on their research, the immediate risk of terrorists directly causing the end of humanity is relatively low. However, he stresses the need for continued vigilance and ongoing research to stay ahead of potential threats. As technology advances and the capabilities of terrorist organizations evolve, we must remain adaptable and responsive to emerging challenges.

The work of Gary Ackerman and his colleagues at the State University of New York’s University at Albany College of Emergency Preparedness, Homeland Security, and Cybersecurity (CEHC) exemplifies the importance of academic institutions in addressing complex issues like existential terrorism. By pushing boundaries, conducting thought experiments, and engaging with speculative scenarios, scholars contribute to our collective understanding and preparedness.

As we embark on an increasingly interconnected and technologically driven future, society and its decision-makers must remain ever-vigilant. The risks associated with existential terrorism and the potential misuse of AI demand cautious navigation and proactive measures to preserve the safety and well-being of humanity.

Sources:
Ackerman, G., & Kallenborn, Z. (2023). Existential Terrorism: Can Terrorists Destroy Humanity? European Journal of Risk Regulation. DOI: 10.1017/err.2023.48
University at Albany. (2023, September 28). Q&A: Assessing the risks of existential terrorism and AI. Tech Xplore. Retrieved from https://techxplore.com/news/2023-09-qa-existential-terrorismai.html
