
DU Cyber Security Researcher Unveils Risks of Virtual Assistants


By Connor Mokrzycki

Sanchari Das’ award-winning study reveals numerous vulnerabilities in increasingly common software tools like Amazon Alexa and Google Assistant.

Photo: View of the Ritchie School of Engineering and Computer Science.

Virtual assistants make our lives easier—but are they safe?

Seeking an answer to this question, Sanchari Das, assistant professor of computer science in the Ritchie School of Engineering and Computer Science, examined widely used virtual assistant (VA) apps and found several concerning security and privacy practices that could expose users’ private data to malicious actors. The paper on the findings, which she co-authored with research intern Borna Kalhor, won the Best Position Paper Award at the 2024 International Conference on Information Systems Security and Privacy.

As VA apps have become more prevalent and expanded their ability to handle complex tasks, Das wanted to know how much user data the apps can access and how many permissions a user must grant. Das, who has a background in user-focused privacy and security research, investigated eight of the most popular VAs for Android phones, including Google Assistant, Amazon Alexa, Microsoft Cortana and more. “They gather a lot of data, and usually a person who is using a virtual assistant does not realize that,” she says.

Photo: Sanchari Das

By inspecting each app’s underlying code and functions, Das uncovered several concerning vulnerabilities, including the use of weak encryption methods; non-SSL (Secure Sockets Layer) certificates, which provide an avenue for DNS hijacking attacks; and the execution of raw SQL queries, which potentially allows SQL injection attacks. Similar exploits have resulted in major cybersecurity incidents across numerous industries in the past. Das and her students also manually tested each of the applications in simulated environments. “We looked into what type of data is being asked of the users and how we can protect it,” she says.
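
To make the last of those risks concrete, here is a minimal Kotlin sketch, assuming Android’s standard SQLite API. It is a hypothetical illustration, not code from the apps in the study; the table, function names and input are invented.

    import android.database.Cursor
    import android.database.sqlite.SQLiteDatabase

    // Vulnerable pattern: concatenating user input into a raw SQL string
    // means a crafted value such as  x' OR '1'='1  can change the query's logic.
    fun findNotesUnsafe(db: SQLiteDatabase, userInput: String): Cursor =
        db.rawQuery("SELECT * FROM notes WHERE title = '$userInput'", null)

    // Safer pattern: a parameterized query, so SQLite binds the input as
    // plain data rather than interpreting it as SQL.
    fun findNotesSafe(db: SQLiteDatabase, userInput: String): Cursor =
        db.rawQuery("SELECT * FROM notes WHERE title = ?", arrayOf(userInput))

Parameterized queries are the standard defense against this class of attack, because the user’s input can never alter the structure of the query itself.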

Improper data handling, encryption and authentication practices were not the only findings of note, Das says. Many of the useful features provided by virtual assistants, like using voice commands to send a text, start or stop music, or navigate by way of Google or Apple maps, require access to information on the device they run on, often without the user being aware, giving malicious actors another potential avenue to exploit.
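
To illustrate that permission surface, here is a small Kotlin sketch, assuming a standard Android app context. The permission constants are real Android identifiers; the helper function itself is hypothetical, not taken from the apps studied.

    import android.Manifest
    import android.content.Context
    import android.content.pm.PackageManager
    import androidx.core.content.ContextCompat

    // Reports which of a few VA-relevant sensitive permissions the app
    // currently holds at runtime.
    fun grantedSensitivePermissions(context: Context): List<String> {
        val sensitive = listOf(
            Manifest.permission.RECORD_AUDIO,         // voice commands
            Manifest.permission.READ_CONTACTS,        // sending texts by voice
            Manifest.permission.ACCESS_FINE_LOCATION  // map navigation
        )
        return sensitive.filter { permission ->
            ContextCompat.checkSelfPermission(context, permission) ==
                PackageManager.PERMISSION_GRANTED
        }
    }

Each of these permissions unlocks a convenience the features above depend on, which is why VA apps tend to accumulate them; auditing which ones are actually granted is a first step a cautious user can take.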

Also present in several of the VA apps were trackers, tools built into an app to gather data. Some are useful, Das notes, reporting on crashes and bugs, while others collect information on a user’s interactions with the app and even their location. While trackers can help improve an app’s performance and utility, they can also be used to predict users’ behavior and serve targeted advertisements. The personal information being collected could be used maliciously, Das says. “There are identification details and profiling details. We have seen previously that profiling information can be really concerning, in terms of the racial and other aspects as well.”

Limiting the exposure of your personal data is not as simple as choosing not to use VAs: the apps come pre-installed on most smartphones and computers, often without an option to fully remove them. While some data-gathering applications provide ways to limit what data is collected, they generally require users to opt out rather than asking them to opt in before their data is collected and used.

Patching the vulnerabilities in the code is just the start of addressing the security and privacy concerns found in VA software. Laws like the European Union’s General Data Protection Regulation, the Colorado Privacy Act and other statewide data privacy statutes have gone into effect in recent years, restricting how data can be collected and used. According to Das, the platforms that host the applications, like the Google Play Store, could also be stricter in not allowing insecure and unsafe applications to be distributed, so that “it's not even coming to the users to put their data at risk.”

And a major component, Das says, is ensuring that users are able to make informed decisions, including requiring them to opt in to data collection rather than opt out of it. Communicating the implications of security and privacy vulnerabilities to users, regardless of their technical knowledge, is no easy task, but researchers in the field are working to develop ways to clearly indicate a safety and security score for every application, much like a nutrition label on food items.

Das is continuing to research how applications collect and use user data, along with the risks they pose, but plans to expand the scope of the research beyond virtual assistants. The goal, she says, is developing a framework that automatically identifies security and privacy vulnerabilities so that software distributors, individual users, and organizational users alike know the risks and can make better-informed decisions.
