5 Minutes
AI browsers: convenience and a hidden security risk
Agentic AI browsers promise to automate routine web tasks: summarize emails, plan trips, compare products, and even make purchases on behalf of users. That convenience is driving rapid interest from major tech players, with Microsoft integrating Copilot into Edge, OpenAI testing Operator, and Google developing Project Mariner. But a new cybersecurity analysis suggests that these autonomous assistants could become a potent attack surface for fraud and phishing.
Guardio's Scamlexity report and real-world tests
Cybersecurity startup Guardio published a report called Scamlexity that explores how so-called agentic AI browsers perform when faced with common scams. The team focused on Perplexity's Comet, the most accessible agentic browser today, and put it through several staged attacks. In one scenario researchers spun up a convincing fake retail site mimicking a major brand and instructed Comet to buy a smartwatch. Despite obvious red flags such as an odd URL and a malformed logo, the AI completed the transaction and exposed payment information.
In another simulation a phishing email with a malicious link was opened, and Comet submitted banking credentials into a cloned bank login page. A third exploit used prompt injection embedded in a webpage to trick the agent into downloading a file. The experiments underline a worrying trend: AI assistants can be even more gullible than humans when it comes to classic scams.
Why agentic AI is uniquely vulnerable
The core problem is that current agentic AI systems are designed to follow instructions and automate decisions, but they lack the contextual common sense humans use to identify deception. Tasks that appear simple—navigating to a website, reading a logo, verifying a URL—contain many subtle signals that an AI may ignore or misinterpret. Prompt injection, credential stuffing, and fake storefronts are low-cost, high-reward tactics for attackers targeting these assistants.

Product features that matter for security
When evaluating AI browsers and assistants, security features should be central. Important capabilities include:
- Built-in phishing detection and URL verification
- Credential handling policies that prevent automatic form submission
- Prompt injection guards and sandboxed execution of web content
- Audit logs and user confirmation steps for financial transactions
Comparisons: agentic AI versus human users and traditional browsers
Compared with human users, agentic AI can execute tasks faster and at scale, but often lacks skepticism. Traditional browsers rely on plugins and reputation-based blacklists for protection; agentic AI needs integrated, context-aware scam detection to serve as a trustworthy assistant. In head-to-head scenarios, AI may outperform humans on data processing yet underperform when judgement and intuition are required.
Advantages and use cases of agentic AI—if secured properly
With robust security, agentic AI browsers can deliver real value: automated ecommerce purchasing, travel booking, research synthesis, and enterprise automation. For businesses, AI assistants can cut operational costs and speed procurement cycles. For consumers, they simplify multitasking and improve accessibility. But these advantages depend on embedding scam detection and safe defaults into product design.
Market relevance and what vendors must do
Major vendors are already racing to ship agentic AI features, which makes the issue urgent. Without proactive security controls, agentic AI could become a new, large-scale attack vector that fraudsters exploit. Developers should prioritize secure-by-design principles: integrate phishing heuristics, require explicit user approval before transactions, disable automatic credential submission, and implement prompt-injection protections. Regulators and enterprise buyers should demand transparency about safety measures before widespread deployment.
Practical recommendations for users and organizations
- Enable two-factor authentication and limit stored payment credentials in AI-enabled browsers.
- Require manual confirmation for purchases and sensitive actions performed by AI assistants.
- Keep browser extensions up to date and use specialized anti-phishing tools alongside agentic AI.
- For enterprises, sandbox AI agents and monitor audit trails for automated workflows.
Agentic AI browsers represent a major leap in productivity, but Guardio's Scamlexity report is a clear reminder that security lagging behind innovation can create costly vulnerabilities. As AI assistants become mainstream, developers, security teams, and users must work together to bake in scam detection and safe defaults before convenience becomes the weakest link.

Comments