Voice assistants powered by artificial intelligence, such as those developed by Amazon, Apple, and Google, can be easily manipulated into sharing users’ private information due to a newly discovered security flaw, researchers have warned.
A team of computer scientists from the University of California, Irvine, and the University of California, Riverside, found that hackers could use a technique known as spear phishing to trick AI assistants like Alexa, Siri, and Google Assistant into revealing sensitive details, including passwords, bank account information, and social security numbers.
Spear phishing involves sending tailored messages to individuals, often posing as a trusted contact or company, to deceive them into sharing personal data or clicking malicious links.
The researchers demonstrated that this method could be used to send fraudulent commands to voice assistants, exploiting their reliance on voice recognition and lack of robust user authentication.
For instance, an attacker could send a text message from a spoofed contact posing as a family member, asking the assistant to disclose a user’s credit card details or calendar schedule.
“We showed that voice assistants are susceptible to impersonation attacks, where a malicious actor can mimic a trusted contact to extract sensitive information,” said lead researcher Professor Mohammad Abdullah of UC Irvine.
“This is particularly concerning because these devices are integrated into millions of homes and trusted to manage personal data.”
The study tested eight leading voice assistants and found that all were vulnerable to some form of spear phishing attack.
The researchers used both text and voice-based commands, including seemingly innocuous requests like asking for a calendar event or a recent purchase, to manipulate the devices into revealing private details.
The findings were presented at the Network and Distributed System Security Symposium in San Diego in February 2025.
The researchers emphasized that the flaw stems from the assistants’ design, which prioritizes user convenience over stringent security measures, such as multi-factor authentication.
Amazon, Apple, and Google were informed of the vulnerabilities prior to the study’s publication. Amazon stated that it has “robust measures” to protect Alexa users and is “continuously improving” its security protocols.
Apple and Google did not immediately respond to requests for comment from the BBC.
The researchers urged users to be cautious about the information they share with voice assistants and to regularly review the privacy settings of their devices.
They also recommended disabling voice purchasing features and enabling additional authentication, such as PIN codes, where available.
Professor Abdullah highlighted the broader implications of the flaw, noting that as AI assistants become more integrated into daily life—managing everything from smart home devices to financial transactions—the risks of such vulnerabilities grow.
“These devices are trusted by users, but that trust can be exploited if security isn’t prioritized,” he said.
The study underscores the need for stronger safeguards in AI voice technology, particularly as its use expands in homes and businesses worldwide.
This article is based on a report by Shiona McCallum, published by BBC News on August 11, 2025. Read the original at BBC News. Additional context was drawn from posts on X discussing AI voice assistant security concerns.














