There are ghosts in your machine: A cybersecurity researcher can make self-driving cars hallucinate
A New Threat to Autonomous Vehicles
In the world of cybersecurity, new threats emerge constantly, but a recent discovery has uncovered a particularly alarming vulnerability in autonomous vehicles. Kevin Fu, a professor of engineering and computer science at Northeastern University, has demonstrated a new form of cyberattack that he calls “Poltergeist attacks.” These attacks exploit the optical image stabilization hardware found in most modern cameras, including those used in self-driving cars. By driving the stabilization sensors at their resonant frequencies, Fu and his team were able to induce blurred, false images that deceive the object detection algorithms in these vehicles.
The Power of Optical Illusions
Unlike many other cyberattacks that aim to jam or interfere with technology, Poltergeist attacks go a step further by creating “false coherent realities.” Using machine learning, the attackers craft optical illusions that cause the computers to misinterpret their surroundings. While the resulting blurred images may appear innocuous to the human eye, to an autonomous vehicle’s object detection algorithm they can become people, stop signs, or any other object the attacker desires.
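The researchers' actual attack induces this blur physically, by exciting resonance in the camera's stabilization hardware. As a rough illustration of the kind of directional motion blur involved, here is a minimal NumPy sketch; the kernel construction, the toy frame, and all function names are my own for demonstration and are not drawn from the research itself:

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    """Build a normalized linear motion-blur kernel.

    Crudely approximates the streaking a camera produces when its
    image-stabilization hardware oscillates in one direction
    (illustrative only; not the researchers' model).
    """
    kernel = np.zeros((length, length))
    center = length // 2
    angle = np.deg2rad(angle_deg)
    # Trace a line through the kernel center at the given angle.
    for t in np.linspace(-center, center, length * 4):
        x = int(round(center + t * np.cos(angle)))
        y = int(round(center + t * np.sin(angle)))
        if 0 <= x < length and 0 <= y < length:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def convolve2d(image, kernel):
    """Naive same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A synthetic 32x32 "frame": a bright vertical bar on a dark background,
# standing in for a crisp object edge a detector would normally see.
frame = np.zeros((32, 32))
frame[:, 14:18] = 1.0

# The smeared frame an oscillating sensor might capture.
blurred = convolve2d(frame, motion_blur_kernel(9, 45))
```

The point of the attack is that this smearing is not random noise: by choosing the oscillation carefully, the blurred frame can be steered toward patterns that a detection network confidently misreads, which is where the machine learning in the attack comes in.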
The Implications and Dangers
The implications of Poltergeist attacks are significant, particularly for autonomous systems mounted on fast-moving vehicles. Fu warns that if left unaddressed, these vulnerabilities could have dire consequences. A driverless car could be made to see a stop sign where there isn’t one, potentially causing it to stop suddenly on a busy road. Conversely, an attacker could erase a real object, such as a person or another car, from the vehicle’s perception, causing the car to keep moving and collide with something it cannot see.
Building Trust in Autonomous Technologies
As machine learning and autonomous technologies become more prevalent, the need for robust cybersecurity measures becomes increasingly urgent. Fu emphasizes the importance of designing out these vulnerabilities at the engineering level to ensure the safety and trustworthiness of these technologies. If consumers lack confidence in the security of autonomous systems, their adoption may stall, setting back widespread use.
Conclusion
The discovery of Poltergeist attacks serves as a stark reminder of the ever-evolving nature of cybersecurity threats. As we continue to integrate autonomous vehicles and other advanced technologies into our lives, it is crucial to prioritize security and invest in research and development to protect against emerging threats. Engineers and technologists must work together to address vulnerabilities and ensure the safe and reliable operation of these technologies. Failure to do so could have severe consequences for consumers, companies, and the advancement of technology as a whole.