
The rapid rise of AI-powered applications brings innovation, but also security blind spots. As AI systems become integral to our daily lives, their security must keep pace with their capabilities. This is the focus of our AI Security Testing Series, where we analyze popular AI applications for vulnerabilities that could put users at risk.
In our last analysis, we tested Deepseek’s Android app and uncovered critical security flaws. This time, we turn our attention to Perplexity AI, one of the most widely used AI-powered search assistants. While its AI capabilities are impressive, our security testing revealed multiple vulnerabilities—some of which mirror past mistakes seen in Deepseek, while others expose new risks.
So how does Perplexity AI measure up? Will it fare better than Deepseek, or will it reveal similar, or even new, vulnerabilities? More importantly, what do these findings tell us about the broader state of security in AI-driven applications?
Our security assessment uncovered several of the same vulnerabilities previously identified in Deepseek’s Android app (see the side-by-side comparison table below), along with five newly identified weaknesses.
Together, these flaws expose users to risks including data theft, account takeover, and reverse engineering attacks. Below, we break down the five newly identified vulnerabilities and their potential impact.
Credential exposure
Critical
Perplexity AI contains hardcoded secrets, such as API keys, directly within the application’s code. Attackers who decompile the app can easily extract these secrets and misuse them to gain unauthorized access to backend services, potentially leading to data leaks and system compromise.
In 2019, a major security breach affected an enterprise SaaS provider when attackers exploited hardcoded API keys in their mobile application. This allowed unauthorized access to sensitive customer data, exposing millions of records.
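Decompiling an APK (for example with apktool or jadx) and scanning the output for key-shaped strings is often all it takes to recover embedded secrets. As a rough illustration, here is a minimal sketch of the kind of pattern matching a tester, or an attacker, might run over decompiled sources; the patterns and the sample snippet are illustrative assumptions, not Perplexity’s actual code:

```python
import re

# Illustrative patterns for a few common secret formats; real scanners
# (trufflehog, gitleaks, MobSF) ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),  # Google API key
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*[\"'][^\"']{16,}[\"']"),
]

def find_hardcoded_secrets(source_text: str) -> list:
    """Return substrings of decompiled source that look like embedded secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source_text))
    return hits
```

The fix is to keep secrets off the device entirely: route privileged calls through a backend the app authenticates to per user, so there is nothing to extract from the binary.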
Cross-Site Request Forgery (CSRF) and data theft
High
Perplexity AI’s API responses set a wildcard CORS origin (*), allowing any website to read responses from the app’s backend. This opens the door to malicious sites making requests on behalf of users and silently extracting sensitive data.
A 2022 data breach at a fintech company resulted from a CORS misconfiguration that allowed attackers to hijack user sessions. Malicious websites could send authenticated requests to the company’s backend, extracting financial data without users’ knowledge.
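A misconfiguration like this is visible from the response headers alone. The following is a simplified sketch of the check a tester might apply to captured responses; the function name is ours, and actively probing a production backend requires authorization:

```python
def has_wildcard_cors(headers: dict) -> bool:
    """Return True when Access-Control-Allow-Origin lets any origin read
    the response. Header names are compared case-insensitively, since
    HTTP header casing is not significant."""
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("access-control-allow-origin") == "*"
```

The remediation is an explicit allow-list: echo back only known-trusted origins rather than `*`, and never combine permissive origins with credentialed requests.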
Man-in-the-Middle (MitM) attack
Critical
The Perplexity AI app does not implement SSL pinning, making it vulnerable to interception attacks where hackers can spoof secure connections and steal user data.
In 2017, a high-profile mobile banking app suffered from MitM attacks due to a lack of SSL pinning. Hackers intercepted communication between the app and the bank’s servers, enabling them to steal user credentials and transaction data.
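On Android, pinning is typically implemented with OkHttp’s CertificatePinner or a network security configuration, but the core idea fits in a few lines: compare a hash of the certificate the server actually presented against a value bundled with the app, and abort the connection on mismatch. A simplified illustration, with a helper name of our own choosing:

```python
import hashlib

def certificate_matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of the certificate the server presented
    (DER bytes, e.g. from ssl.getpeercert(binary_form=True)) against a
    fingerprint shipped with the app. Production pinning, such as OkHttp's
    CertificatePinner, usually hashes the SubjectPublicKeyInfo instead, so
    pins survive routine certificate renewal."""
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex.lower()
```

Without a check like this, a proxy presenting any CA-signed (or user-trusted) certificate can sit silently between the app and its servers.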
Reverse engineering & code tampering
High
Lack of bytecode obfuscation makes Perplexity’s app logic easy to reverse-engineer, exposing critical code paths and vulnerabilities.
A popular ride-hailing app was reverse-engineered by attackers in 2020, enabling them to create fraudulent versions of the app that bypassed payment systems, resulting in substantial financial losses for the company.
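Without obfuscation, decompiled class and method names read like documentation. A crude heuristic makes the difference concrete; this sketch is purely illustrative of why readable names help an attacker, not a substitute for enabling R8/ProGuard shrinking and obfuscation in the build:

```python
def looks_obfuscated(class_name: str) -> bool:
    """Heuristic: R8/ProGuard shrink identifiers to one or two characters,
    so mostly-short name segments suggest obfuscation, while descriptive
    names like com.example.auth.AuthTokenManager hand an attacker a map
    of the app's logic."""
    segments = class_name.split(".")
    short = sum(1 for segment in segments if len(segment) <= 2)
    return short / len(segments) > 0.5
```

Obfuscation is not a security boundary on its own, but it raises the cost of locating the exact code paths, such as payment or auth logic, that tampering attacks target.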
Debugging exploitation
High
Perplexity does not detect if ADB debugging or developer options are enabled, making it easier for attackers to manipulate the app in a controlled environment.
In 2021, a mobile game was widely hacked when attackers used ADB debugging to modify in-game parameters. This led to fraudulent in-app purchases and significant revenue loss.
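The check itself is trivial, which is what makes its absence notable: on-device, an app can read Settings.Global.ADB_ENABLED and refuse to handle sensitive data while debugging is on, and a tester can read the same flag over adb. A small sketch of parsing that output; the helper name is ours:

```python
def adb_debugging_enabled(settings_output: str) -> bool:
    """Interpret the output of `adb shell settings get global adb_enabled`:
    the shell prints '1' when USB debugging is on, and '0' or 'null'
    otherwise."""
    return settings_output.strip() == "1"
```

Combined with root and emulator detection, a check like this makes it harder to manipulate the app in a controlled analysis environment.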
If you have the Perplexity app on your phone, these security flaws put your personal data at risk right now:
Hackers can exploit these vulnerabilities to steal your personal data, including sensitive login credentials.
Weak security settings make it easier for malicious websites to siphon off your data silently.
Without robust protections, attackers can intercept your searches and even steal your login details.
The app lacks protections against hacking tools, leaving your device vulnerable to remote attacks.
Removing the Perplexity AI app from your phone is safest until these issues are addressed.
Stick to well-reviewed apps with strong security measures to protect your data.
Regularly update your phone’s software and enable security features like Play Protect to detect threats.
Here’s a breakdown of the key vulnerabilities found in both apps. This comparison highlights the areas where both apps share similar flaws and where Perplexity has unique risks, offering a clearer picture of each app's security posture.
| Security Issue | Deepseek Android | Perplexity Android | Impact |
| --- | --- | --- | --- |
| Hardcoded Secrets | 🟥 Yes | 🟥 Yes | Data Theft |
| Data Exposure (CORS Misconfigurations) | 🟥 Yes | 🟥 Yes | Data Theft |
| SSL Pinning Not Implemented | 🟥 Yes | 🟥 Yes | Account Takeover |
| Bytecode Not Obfuscated | 🟥 Yes | 🟥 Yes | Reverse Engineering |
| ADB/Debugging Protections Missing | 🟥 Yes | 🟥 Yes | Debugging Exploitation |
| Unsecured Network Configuration | 🟥 Yes | 🟥 Yes | Man-in-the-Middle (MitM) |
| No SSL Validation or Certificate Pinning | 🟥 Yes | 🟥 Yes | Impersonation Attack |
| Weak Root Detection | 🟥 Yes | 🟥 Yes | Privilege Escalation |
| Susceptibility to the StrandHogg Vulnerability | 🟥 Yes | 🟥 Yes | Phishing & Identity Theft |
| Exposure to Janus Vulnerability | 🟥 Yes | 🟥 Yes | APK Modification & Malware Injection |
| Tapjacking Attacks | 🟥 Yes | 🟥 Yes | UI Manipulation |
Perplexity doesn’t just repeat Deepseek’s mistakes—it builds on them.
Every vulnerability we found in Deepseek is also present in Perplexity, plus five additional weaknesses that widen the attack surface. This isn’t just an oversight; it’s a pattern.
AI applications are evolving fast, but their security isn’t keeping up.
With CORS misconfigurations, lack of bytecode obfuscation, and missing debugging protections, Perplexity is even more exposed than Deepseek.
These gaps make it easier for attackers to steal data, reverse-engineer the app, and manipulate its behavior.
If Deepseek was a warning sign, Perplexity is a full-blown security hazard.
While Perplexity has more vulnerabilities overall, Deepseek has its own set of critical flaws, such as unsecured network configurations and exposure to advanced threats like StrandHogg and Janus. These risks make Deepseek a prime target for sophisticated attacks that can hijack user sessions and inject malware.
The AI security gap is growing.
AI applications are becoming more powerful, but their security flaws are multiplying just as fast.
With both Deepseek and Perplexity failing fundamental security checks, it’s clear that security isn’t a priority in AI development. Until that changes, users will continue to bear the risks.
Subho Halder, CEO and co-founder of Appknox, notes:
Our testing highlights critical vulnerabilities in Perplexity AI that expose users to a variety of risks, including data theft, reverse engineering, and exploitation. It’s crucial for the developers to address these issues swiftly.
In the meantime, users should be cautious about using the app, particularly for sensitive activities.
As AI applications like Perplexity continue to evolve, so too must our approach to securing them. The vulnerabilities uncovered in our analysis—whether common oversights like hardcoded secrets or more advanced threats like reverse engineering—highlight the pressing need for robust security measures in AI-powered solutions.
While Perplexity AI shares some critical flaws with Deepseek, it also introduces new risks that should not be overlooked.
For developers, this underscores the importance of proactive security practices—implementing SSL pinning, securing API keys, and addressing CORS misconfigurations are just the beginning.
For users, the message is clear: while AI applications bring tremendous value, they must be trusted only when their security is assured. Until these vulnerabilities are addressed, it’s prudent to remain cautious about using apps that expose sensitive data to unnecessary risk.
At Appknox, we are committed to uncovering security risks in AI-driven apps and helping improve the security posture of these emerging technologies. Stay tuned for future posts in this series, where we continue to analyze popular AI applications and their vulnerabilities.
AI security isn’t a future problem—it’s happening now. If you’re navigating these risks or have insights to share, join our LinkedIn group to discuss the latest threats, best practices, and real-world security challenges with industry peers.
Let’s push for a safer AI future—together.