Since April 2025, the FBI has been tracking a cyberattack campaign in which malicious actors use artificial intelligence to generate fake audio (deepfakes) and impersonate senior U.S. government officials. These attacks, which combine smishing (fraudulent text messages) and vishing (fake voice calls), aim to gain access to the personal accounts of current and former officials, as well as those of their contacts. Once compromised, these accounts can be used to obtain sensitive information or carry out financial fraud.
Attackers initiate contact with messages that appear to come from senior government officials, seeking to establish trust with the victim. They then redirect the conversation to platforms under the cybercriminals' control, where they attempt to obtain login credentials or sensitive information. The FBI warns that the authenticity of any message claiming to come from a high-ranking official should never be assumed without proper verification.
The ease with which this fake audio can be created is concerning. With only a few seconds of original recording, current AI tools can replicate voices that are nearly indistinguishable to the human ear. This poses a significant challenge for fraud detection and prevention, especially as AI technologies continue to advance rapidly.
These attacks not only put the security of individuals at risk but also threaten the integrity of government institutions and public trust. The FBI urges vigilance toward suspicious communications and recommends additional security measures, such as two-step verification and cybersecurity education, to mitigate the risks associated with these new forms of AI-powered fraud.
Source: Bleeping Computer