<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Architecture on Fondsites</title><link>https://fondsites.com/tags/architecture/</link><description>Recent content in Architecture on Fondsites</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 06 May 2026 09:49:57 +0300</lastBuildDate><atom:link href="https://fondsites.com/tags/architecture/feed.xml" rel="self" type="application/rss+xml"/><item><title>How AI Agents Work: Models, Tools, Memory, and Guardrails</title><link>https://fondsites.com/ai-agents/guidebooks/how-ai-agents-work/</link><pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate><guid>https://fondsites.com/ai-agents/guidebooks/how-ai-agents-work/</guid><description>&lt;p&gt;&lt;img
 src="https://fondsites.com/ai-agents/images/guidebooks/how-ai-agents-work.avif"
 alt="A layered technical workspace showing an AI agent architecture with model core, memory files, tool connectors, sandboxed computer, approval gate, and monitoring console represented as abstract panels, realistic editorial technology photography, no readable text"
 loading="eager"
 decoding="async" fetchpriority="high"&gt;
&lt;/p&gt;
&lt;p&gt;From the outside, an AI agent looks mysterious because it seems to work through a task on its own. Inside, the parts are understandable.&lt;/p&gt;
&lt;p&gt;There is a model that can interpret the goal. There are tools that let it act. There is context that tells it what has happened so far. There are rules that limit what it may do. There is often an orchestrator that decides which agent or tool handles which part. And there should be logs, evaluations, and a way for a person to interrupt.&lt;/p&gt;</description></item></channel></rss>