What is Agno?

Agno is a high-performance SDK and runtime for multi-agent systems. Use it to build, run and manage multi-agent systems in your cloud.

Agno is the fastest framework for building agents with built-in memory, knowledge, session management, human-in-the-loop, and best-in-class MCP support. You can compose agents into multi-agent teams or step-based agentic workflows.

In 10 lines of code, we can build an Agent that uses tools to achieve a task.

from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.hackernews import HackerNewsTools

agent = Agent(
    model=Claude(id="claude-sonnet-4-5"),  # the LLM that powers the agent
    tools=[HackerNewsTools()],             # lets the agent search Hacker News
    markdown=True,                         # format responses as markdown
)
agent.print_response("Write a report on trending startups and products.", stream=True)

But the real advantage of Agno is its AgentOS runtime:

  1. You get a pre-built FastAPI app for serving your agents, teams and workflows, meaning you start building your AI product on day one (see the sketch after this list). This is a remarkable advantage over other solutions.
  2. You also get a UI that connects directly to the pre-built FastAPI app. Use it to test, monitor and manage your system. This gives you unmatched visibility and control.
  3. Your AgentOS runs in your cloud and you get complete privacy because no data ever leaves your system. This is a major benefit for security-conscious enterprises that can't send data to external services.
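
To make the first point concrete, here is a minimal sketch of serving the agent from above through AgentOS. The agno.os.AgentOS import path, the get_app/serve calls, and the my_os.py module name are assumptions based on the quickstart shape; treat the quickstart as the authoritative version.

# my_os.py - the file name is illustrative
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.os import AgentOS  # assumed import path for the AgentOS runtime
from agno.tools.hackernews import HackerNewsTools

agent = Agent(
    model=Claude(id="claude-sonnet-4-5"),
    tools=[HackerNewsTools()],
    markdown=True,
)

# Wrap the agent in an AgentOS instance and expose its pre-built FastAPI app.
agent_os = AgentOS(agents=[agent])
app = agent_os.get_app()

if __name__ == "__main__":
    # Serve the FastAPI app locally; the "module:attribute" string follows
    # the usual uvicorn convention.
    agent_os.serve(app="my_os:app", reload=True)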

For organizations building agents, Agno provides the complete solution. You get the fastest framework for building agents (speed of development and execution), a pre-built FastAPI app that gets you building your product on day one, and a control plane for managing your system.

We bring a novel architecture that no other framework provides: your AgentOS runs securely in your cloud, and the control plane connects directly to it from your browser. You don't need to send data to any external services or pay retention costs; you get complete privacy and control.

Getting started

If you're new to Agno, follow our quickstart to build your first Agent and run it using the AgentOS.

After that, check out the examples gallery and build real-world applications with Agno.

Documentation, Community & More Examples

Setup Your Coding Agent to Use Agno

For LLMs and AI assistants to understand and navigate Agno's documentation, we provide llms.txt and llms-full.txt files.

These files are built for AI systems to efficiently parse and reference our documentation.

IDE Integration

When building Agno agents, using Agno documentation as a source in your IDE is a great way to speed up your development. Here's how to integrate with Cursor:

  1. In Cursor, go to the "Cursor Settings" menu.
  2. Find the "Indexing & Docs" section.
  3. Add https://docs.agno.com/llms-full.txt to the list of documentation URLs.
  4. Save the changes.

Now Cursor will have access to the Agno documentation. You can do the same with other IDEs like VS Code, Windsurf, etc.

Performance

At Agno, we're obsessed with performance. Why? Because even simple AI workflows can spawn thousands of Agents. Scale that to a modest number of users and performance becomes a bottleneck. Agno is designed for building highly performant agentic systems:

  • Agent instantiation: ~3 μs on average
  • Memory footprint: ~6.5 KiB on average

Tested on an Apple M4 MacBook Pro.

While an Agent's run-time is bottlenecked by inference, we must do everything possible to minimize execution time, reduce memory usage, and parallelize tool calls. These numbers may seem trivial at first, but our experience shows that they add up even at a reasonably small scale.

Instantiation Time

Let's measure the time it takes for an Agent with 1 tool to start up. We'll run the evaluation 1000 times to get a baseline measurement.

You should run the evaluation yourself on your own machine; please do not take these results at face value.

# Setup virtual environment
./scripts/perf_setup.sh
source .venvs/perfenv/bin/activate
# OR Install dependencies manually
# pip install openai agno langgraph langchain_openai

# Agno
python cookbook/evals/performance/instantiate_agent_with_tool.py

# LangGraph
python cookbook/evals/performance/comparison/langgraph_instantiation.py
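
If you just want a rough, self-contained sense of what the instantiation eval measures, without the cookbook scripts, a timeit sketch along these lines works. The model is deliberately omitted so no provider SDK or API key is needed; that is an assumption of the sketch, and you can add a model to match the cookbook eval.

import timeit

from agno.agent import Agent
from agno.tools.hackernews import HackerNewsTools

def instantiate_agent():
    # Instantiation only - no model is attached and no API call is made.
    return Agent(tools=[HackerNewsTools()])

runs = 1000
total = timeit.timeit(instantiate_agent, number=runs)
print(f"Average instantiation time: {total / runs * 1e6:.2f} µs")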

The following evaluation is run on an Apple M4 MacBook Pro. It also runs as a GitHub Action on this repo.

LangGraph is on the right; we start it first and give it a head start.

Agno is on the left; notice how it finishes before LangGraph is halfway through its runtime measurement and before it has even started its memory measurement. That's how fast Agno is.

Video: agno_vs_langgraph_perf.mp4

Memory Usage

To measure memory usage, we use the tracemalloc library. We first calculate a baseline memory usage by running an empty function, then run the Agent 1000 times and calculate the difference. This gives a (reasonably) isolated measurement of the memory usage of the Agent.

We recommend running the evaluation yourself on your own machine, and digging into the code to see how it works. If we've made a mistake, please let us know.
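
As a rough illustration of that approach (not the cookbook eval itself), a tracemalloc sketch could look like the following; the agent construction mirrors the instantiation sketch above, and the per-agent figure is approximate.

import tracemalloc

from agno.agent import Agent
from agno.tools.hackernews import HackerNewsTools

RUNS = 1000

def measure_kib_per_call(factory):
    # Keep every result alive so its allocations are still counted, then
    # report the average memory per call in KiB.
    tracemalloc.start()
    results = [factory() for _ in range(RUNS)]
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del results
    return current / 1024 / RUNS

baseline = measure_kib_per_call(lambda: None)  # empty-function baseline
agents = measure_kib_per_call(lambda: Agent(tools=[HackerNewsTools()]))

print(f"Approx. memory per Agent: {agents - baseline:.2f} KiB")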

Conclusion

Agno agents are designed for performance, and while we do share some benchmarks against other frameworks, we should be mindful that accuracy and reliability are more important than speed.

Given that each framework is different and we won't be able to tune their performance like we do with Agno, for future benchmarks we'll only be comparing against ourselves.

Contributions

We welcome contributions; read our contributing guide to get started.

Telemetry

Agno logs which model an agent used so we can prioritize updates to the most popular providers. You can disable this by setting AGNO_TELEMETRY=false in your environment.
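
For example, you can export AGNO_TELEMETRY=false in your shell, or set it from Python before any Agno code runs. Exactly when the variable is read is an implementation detail, so setting it as early as possible is the safe assumption.

import os

# Opt out of Agno's anonymous model-usage logging for this process.
# Set the variable before importing or creating any agents.
os.environ["AGNO_TELEMETRY"] = "false"

from agno.agent import Agent  # imported after the flag is set

agent = Agent()  # telemetry is disabled for everything this process creates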

⬆️ Back to Top
