<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Prompt Injection on Fondsites</title><link>https://fondsites.com/tags/prompt-injection/</link><description>Recent content in Prompt Injection on Fondsites</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 13 May 2026 16:10:13 +0300</lastBuildDate><atom:link href="https://fondsites.com/tags/prompt-injection/feed.xml" rel="self" type="application/rss+xml"/><item><title>AI Agent Prompt Injection: Working With Untrusted Content</title><link>https://fondsites.com/ai-agents/guidebooks/agent-prompt-injection-untrusted-content/</link><pubDate>Tue, 12 May 2026 00:00:00 +0000</pubDate><guid>https://fondsites.com/ai-agents/guidebooks/agent-prompt-injection-untrusted-content/</guid><description>&lt;p&gt;An AI agent does not read instructions only from the person who assigned the task. It also reads the world: web pages, files, tickets, emails, database records, calendar invites, chat transcripts, search results, code comments, and tool outputs. Much of that material is useful evidence. Some of it is stale, mistaken, adversarial, or written for a different audience. A prompt injection problem begins when untrusted content stops being treated as material to inspect and starts being treated as an instruction to obey.&lt;/p&gt;</description></item></channel></rss>