The Defenders’ Challenge: Preparing for the Era of Deepfakes


Imposter Scammers Exploiting Videoconferencing Technology Amid the Pandemic

The year 2021 saw the rise of scammers using videoconferencing technology to pull off business email compromise (BEC) schemes. One of the most common tactics involved impersonating a business executive, usually the CEO, using deepfakes to mimic their voice and appearance. With only a still picture of the executive and audio generated by deep neural networks, fraudsters could persuade employees to send money or initiate wire transfers. According to the FBI’s 2022 Congressional report on wire fraud, these techniques helped drive more than $2.7 billion in business email compromise losses in 2022.

Defense Against Deepfakes

The growing use of generative AI technologies by scammers points to the need for more comprehensive software that can detect AI-generated images and audio. Pindrop, a renowned voice-identity security firm, has already made headway in this area: the company claims it can identify the engine that created a deepfake with 99% accuracy. However, given how rapidly deepfakes evolve, Pindrop and other companies will need to work tirelessly to keep up with the latest advances in fraudulent techniques. For now, human defenses are less effective than AI defenses: humans correctly identify deepfake videos only 57% of the time, while a leading machine-learning detection model identified deepfakes 84% of the time, according to research from MIT and Johns Hopkins University published in January 2022.

Telltale Signs of Machine-Generation

Currently, humans can still identify deepfakes because machine-generated media leaves behind telltale artifacts, such as unnatural hand movements and strange audio intonations. However, these artifacts are rapidly disappearing, and soon machines will be able to generate content without obvious traces of distortion or artificiality. At that point, humans will have to rely on AI assistants to counter AI-enabled fraudsters and avoid falling prey to deepfake scams.

Detecting Liveness to Prevent Replay Attacks

The state-of-the-art defense against deepfakes is not facial or voice matching, but detecting whether a live human is actually present on the other side of the microphone or camera. Multimodal biometrics, multimodal liveness detection, and a host of anti-spoofing techniques can analyze the metadata and environment of an image or audio stream to detect spoofing and other types of fraud. Despite the value of such technology, however, many institutions’ verification processes still include no liveness tests. Journalists have demonstrated in the past that they could bypass financial institutions’ voice-identification systems. Pindrop’s Balasubramaniyan says the company has found many financial institutions vulnerable to replay attacks, reinforcing the need for vigilance against this form of fraud.
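One common liveness technique is a challenge-response check: the verifier issues a random phrase that the caller must speak aloud, so a pre-recorded clip, which was captured before the phrase existed, cannot answer it. The sketch below illustrates that idea only; the function names, the word list, and the plain-text comparison (standing in for real speech recognition on live audio) are all illustrative assumptions, not any vendor's actual API.

```python
import secrets

# Illustrative word list; a real system would draw from a much
# larger vocabulary to make the challenge unpredictable.
WORDS = ["amber", "falcon", "seven", "ridge", "cobalt", "maple"]

def issue_challenge(num_words: int = 3) -> str:
    """Return a random phrase the caller must speak aloud.

    A replayed recording cannot contain a phrase that was chosen
    after the recording was made, which is what defeats replay.
    """
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

def verify_response(challenge: str, transcript: str) -> bool:
    """Check the caller's spoken response against the challenge.

    In a real deployment `transcript` would come from speech
    recognition run on the caller's live audio; here we simply
    compare normalized text to keep the sketch self-contained.
    """
    return transcript.strip().lower() == challenge.strip().lower()
```

The security of this scheme rests on the randomness of the challenge and on verifying that the response is fresh audio, which is why production systems pair it with acoustic anti-spoofing rather than relying on the transcript alone.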

Evolution of Tactics

As technology advances, criminals adapt their modus operandi. For example, pre-recorded clips will give way to voice-conversion attacks, in which generative-AI systems convert a fraudster’s voice into the target’s voice in real time. Microsoft’s VALL-E points to an even more potent attack: it can create a credible voice deepfake from just a 3-second audio clip. As attackers invest more resources in creating deepfakes, the cat-and-mouse game between defenders and attackers will continue. Balasubramaniyan argues that undetectable deepfakes will become possible, but only for attackers with significant resources and a considerable amount of source material. Defenders will therefore need to focus on preventing deepfakes at scale, raising the bar high enough that only well-resourced experts can create them.

Conclusion

The rising dependence on technology, coupled with the increasing demand for contactless interactions, has made deepfake technology popular with fraudsters. As AI defenses advance, scammers are devising ever more complex and innovative methods to trick people out of their money. It is therefore imperative to stay aware of the latest technology and keep up with new developments to avoid falling victim to such attacks. Vigilance is the key to combating emerging technologies aimed at subverting institutional or personal security.


