Setup
What will happen:
- Claude reads a sample inbox
- Picks the most important email
- Researches context via web search
- Drafts a reply
Watch the center panel to see each step in real time.
Sample inbox loaded:
- Sarah Chen — API rate limits question
- Marcus Johnson — Website redesign feedback
- Priya Patel — Series A follow-up
- GitHub — Issue #342 memory leak
- Alex Rivera — Mentorship session
Agent Loop Timeline
Click "Run Agent" to start the loop.
Each step will appear here as a card.
How Agents Work
Trigger
Every agent starts with a trigger — something that kicks off the loop. Here it's a button click. In production it could be a cron job, a webhook, a Slack message, or a new row in a database.
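The point above can be sketched in a few lines: every trigger path funnels into the same agent entrypoint. `run_agent` and the trigger payloads here are hypothetical illustrations, not the demo's actual code.

```python
def run_agent(trigger: dict) -> str:
    """Hypothetical entry point for the agent loop; here it just echoes the trigger."""
    return f"agent started by {trigger['source']}"

# A button click, a webhook, and a cron tick all look identical to the agent:
print(run_agent({"source": "button_click"}))
print(run_agent({"source": "webhook", "payload": {"event": "new_email"}}))
print(run_agent({"source": "cron", "schedule": "*/5 * * * *"}))
```

The agent doesn't care what woke it up; the trigger just supplies the initial context.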
Tools
Tools are functions the LLM can request. The model doesn't execute them — it outputs a structured request saying "call this function with these arguments." Your code runs the function and feeds the result back.
tools = [
{ name: "fetch_emails", ... },
{ name: "draft_reply", ... },
]
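A minimal sketch of that request/execute handshake, assuming a simple dict shape for the model's tool call (the real API schema has more fields). The model only emits data; a dispatcher maps the name to a real function. `fetch_emails` and `draft_reply` here are stand-in implementations.

```python
def fetch_emails(folder: str = "inbox") -> list:
    # Stand-in for a real mail API call.
    return ["Sarah Chen — API rate limits question"]

def draft_reply(to: str, body: str) -> str:
    return f"Draft to {to}: {body}"

TOOLS = {"fetch_emails": fetch_emails, "draft_reply": draft_reply}

def execute(tool_call: dict):
    """Run the function the model asked for and return its result."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["input"])

# What the model actually outputs is just structured data like this:
call = {"name": "fetch_emails", "input": {"folder": "inbox"}}
print(execute(call))  # your code runs it; the result goes back to the model
```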
Data / Memory
The conversation history is the agent's working memory. Each tool result gets appended to messages[] so Claude can reference it in the next iteration. No separate memory system needed.
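Concretely, each iteration appends two things to the history: the assistant's tool request and a user-role tool result. The block shapes below roughly follow the Anthropic Messages API's content blocks, but this is a sketch, not the full schema.

```python
# The "memory" is just the messages list.
messages = [{"role": "user", "content": "Handle my inbox."}]

# The assistant asks for a tool:
messages.append({
    "role": "assistant",
    "content": [{"type": "tool_use", "id": "call_1",
                 "name": "fetch_emails", "input": {}}],
})

# Your code runs it and appends the result so the next call can see it:
messages.append({
    "role": "user",
    "content": [{"type": "tool_result", "tool_use_id": "call_1",
                 "content": "5 emails: Sarah Chen, Marcus Johnson, ..."}],
})

print(len(messages))  # history grows each turn; no separate memory store
```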
MCPs vs APIs
Client-side tools (like fetch_emails): your code calls the external API and returns the result to Claude.
Server-side tools (like web_search): Anthropic runs the tool on their side; you just get the result back. MCP generalizes the same idea — a standard protocol for connecting the model to outside capabilities without your code as middleman.
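The difference shows up in your loop as two code paths. A sketch, with illustrative block shapes (not the exact API schema): client-side blocks need your dispatcher; server-side blocks arrive already executed.

```python
def handle_tool_block(block: dict, local_tools: dict) -> dict:
    """Route a tool-related content block to the right handler."""
    if block["type"] == "tool_use":
        # Client-side: your code executes the function and returns the result.
        fn = local_tools[block["name"]]
        return {"executed_by": "your code", "result": fn(**block["input"])}
    if block["type"] == "server_tool_result":
        # Server-side: the provider already ran it; you only see the result.
        return {"executed_by": "provider", "result": block["content"]}
    raise ValueError(f"unexpected block type: {block['type']}")

local = {"fetch_emails": lambda: ["3 emails"]}
print(handle_tool_block({"type": "tool_use", "name": "fetch_emails", "input": {}}, local))
print(handle_tool_block({"type": "server_tool_result", "content": "search hits"}, local))
```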
The Loop
The whole agent is a while loop. That's it. The "intelligence" is Claude deciding which tool to call next (or whether to stop).
while not done:
    response = claude(messages, tools)
    messages.append(response)
    for tool_call in response.tool_calls:
        result = execute(tool_call)
        messages.append(result)
    if response.stop_reason == "end_turn":
        done = True
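The loop above can be run end to end with a stubbed model. `fake_claude` stands in for the real API call: it requests one tool, then ends its turn. Everything else is the same loop.

```python
def fake_claude(messages, tools):
    """Stub model: asks for fetch_emails once, then stops."""
    turns = sum(1 for m in messages if m["role"] == "assistant")
    if turns == 0:
        return {"stop_reason": "tool_use",
                "tool_calls": [{"name": "fetch_emails", "input": {}}]}
    return {"stop_reason": "end_turn", "tool_calls": []}

def execute(tool_call):
    # Stand-in for running the real tool.
    return {"role": "user", "content": f"result of {tool_call['name']}"}

messages = [{"role": "user", "content": "Handle my inbox."}]
tools = ["fetch_emails"]

done = False
while not done:
    response = fake_claude(messages, tools)
    messages.append({"role": "assistant", "content": "..."})
    for tool_call in response["tool_calls"]:
        messages.append(execute(tool_call))
    if response["stop_reason"] == "end_turn":
        done = True

print(len(messages))  # user, assistant, tool result, final assistant
```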
End Action
The agent decides when it's done. When Claude returns stop_reason: "end_turn" instead of requesting another tool, the loop exits. The agent chose to stop — nobody told it to.