A sophisticated autonomous agent system that learns from experience, orchestrates parallel task execution, and adapts its behavior based on past successes and failures.
- DeepAgents Framework: Built on deepagents for sophisticated agent orchestration
- PostgreSQL Vector Storage: Production-ready vector database with pgvector for semantic memory search
- Multi-Dimensional Learning: Captures tactical, strategic, and meta-level insights from every execution
- Parallel Sub-Agents: Four specialized sub-agents (learning-query, execution-specialist, reflection-analyst, planning-strategist)
- Python Sandbox: Secure code execution in a Pyodide WebAssembly environment
- Visualization Support: Automatic capture of matplotlib plots, PIL images, and pandas DataFrames
- REST API Server: FastAPI endpoints for memory retrieval and pattern analysis (port 8001)
- Narrative Memory: Creates human-readable narratives of experiences with deep reflection
- Execution Analysis: Automatic detection of inefficiencies, redundancies, and optimization opportunities
- Virtual File System: Safe file operations through deepagents' virtual filesystem
- LangSmith Integration: Complete observability of agent execution and learning processes
- Web UI: React-based interface for visual interaction with the agent (port 10300)
- Python 3.11 or higher
- uv package manager (recommended) or pip
- OpenAI API key (or a key for another supported LLM provider)
- Docker and Docker Compose (for UI)
# Clone the repository
git clone https://github.com/johannhartmann/learning-agent.git
cd learning-agent
# Install with uv (recommended)
uv sync --all-extras
# Or install with pip
pip install -e ".[dev]"
# Copy environment variables
cp .env.example .env
# Edit .env and add your API keys
import asyncio

from learning_agent.agent import create_learning_agent


async def main():
    # Create the learning agent
    agent = create_learning_agent()

    # Process a task - the agent will plan, execute, and learn
    state = {
        "messages": [
            {
                "role": "user",
                "content": "Create a Python function to calculate fibonacci numbers and test it",
            }
        ]
    }
    result = await agent.ainvoke(state)

    # Extract the last message as the summary
    if result.get("messages"):
        last_msg = result["messages"][-1]
        print(f"Result: {last_msg.content if hasattr(last_msg, 'content') else last_msg}")

    # The agent has now learned from this experience;
    # future similar tasks will be executed more efficiently.


asyncio.run(main())
The Learning Agent includes a React-based web UI for visual interaction with the agent system.
# Build and start all services (PostgreSQL, API server, LangGraph server, UI)
docker compose up -d
# Or use the Make command
make docker-up
# Access the services:
# - Web UI: http://localhost:10300
# - LangGraph Server: http://localhost:2024
# - API Server: http://localhost:8001
# - PostgreSQL: localhost:5433
# Start all services (PostgreSQL, API Server, LangGraph Server, UI)
docker compose up -d
# Start specific services
docker compose up -d postgres # Just database
docker compose up -d server # LangGraph + API server
docker compose up -d ui # Just UI
# View logs
docker compose logs -f # All services
docker compose logs -f postgres # Specific service
# Stop services
docker compose down
# Stop and remove volumes (caution: deletes data)
docker compose down -v
# Rebuild images
docker compose build
# Or use Make commands (wrapper around docker compose)
make docker-build # Build images
make docker-up # Start all services
make docker-logs # View logs
make docker-down # Stop services
make docker-clean # Clean up including volumes
from learning_agent.agent import create_learning_agent
from learning_agent.state import LearningAgentState

# Create a deepagents-based learning agent
agent = create_learning_agent()

# Create the initial state
initial_state: LearningAgentState = {
    "messages": [{"role": "user", "content": "Your task here"}],
    "todos": [],
    "files": {},
    "memories": [],
    "patterns": [],
    "learning_queue": [],
}

# Execute the task (await this inside an async function or running event loop)
result = await agent.ainvoke(initial_state)
The agent includes a secure Python sandbox powered by Pyodide (Python compiled to WebAssembly) for safe code execution:
# The agent can execute Python code safely
state = {"messages": [{"role": "user", "content":
"Analyze this data and create a visualization: [1,2,3,4,5]"}]}
result = await agent.ainvoke(state)
# The sandbox supports:
# - Data analysis with pandas, numpy, scipy
# - Matplotlib visualizations (captured as base64 images)
# - PIL/Pillow image processing
# - Stateful execution (variables persist between calls)
# - Package installation from PyPI (with allow_network=True)
Sandbox Features:
- Isolated Execution: Runs in WebAssembly environment, isolated from host system
- Visualization Capture: Automatically captures matplotlib plots and PIL images as base64
- Data Analysis: Pre-loaded with pandas, numpy, and scientific computing libraries
- Stateful Sessions: Variables and imports persist across multiple executions (see the sketch after this list)
- Safe Testing: Test algorithms and logic before writing to files
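Because sandbox sessions are stateful, work done in one turn can be built on in the next. The sketch below reuses the `create_learning_agent()` / `ainvoke()` pattern from the quick start; the conversational phrasing is illustrative, and it assumes the sandbox session is scoped to the conversation so state carries across turns:

```python
import asyncio

from learning_agent.agent import create_learning_agent


async def main():
    agent = create_learning_agent()

    # First turn: build some state inside the sandbox.
    state = {
        "messages": [
            {
                "role": "user",
                "content": "In the Python sandbox, load [1, 2, 3, 4, 5] into a pandas Series called data.",
            }
        ]
    }
    result = await agent.ainvoke(state)

    # Second turn: continue the same conversation so the sandbox variable can be reused.
    result["messages"].append(
        {"role": "user", "content": "Now plot data with matplotlib and summarize the chart."}
    )
    result = await agent.ainvoke(result)
    print(result["messages"][-1])


asyncio.run(main())
```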
The Learning Agent combines the DeepAgents framework with PostgreSQL vector storage, an API server, and a web UI:
graph TD
UI[React Web UI] -->|HTTP| API[FastAPI Server :8001]
UI -->|WebSocket| Server[LangGraph Server :2024]
CLI[CLI Interface] --> Agent[DeepAgents Agent]
Server --> Agent
Agent --> Tools[Core Tools]
Agent --> SubAgents[Sub-Agents via Task Tool]
Tools --> PT[Planning Tool - write_todos]
Tools --> FS[File System - read/write/edit]
Tools --> LT[Learning Tools]
Tools --> PS[Python Sandbox - code execution]
SubAgents --> LQ[learning-query]
SubAgents --> ES[execution-specialist]
SubAgents --> RA[reflection-analyst]
SubAgents --> PST[planning-strategist]
Agent -->|Background Learning| NL[NarrativeLearner]
NL -->|Stores| PG[(PostgreSQL + pgvector :5433)]
API -->|Queries| PG
LT -->|Search| PG
PG -->|Stores| Memories[Multi-Dimensional Memories]
PG -->|Stores| Patterns[Execution Patterns]
PG -->|Stores| Embeddings[Vector Embeddings]
- React Web UI (Port 10300): TypeScript-based interface for visual interaction
- FastAPI Server (Port 8001): REST API for memory retrieval and pattern analysis
- LangGraph Server (Port 2024): Serves the agent as an API endpoint
- PostgreSQL + pgvector (Port 5433): Production-ready vector database for semantic search
- DeepAgents Agent: Core agent built with `create_deep_agent()` that directly integrates learning (see the sketch after this list)
- Python Sandbox: Pyodide-based WebAssembly environment for safe code execution with visualization support
- NarrativeLearner: Background processor that converts conversations into multi-dimensional learnings
- Specialized Sub-Agents: Four agents for different execution aspects
- Learning Tools: Memory search, pattern application, and learning queue management
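For orientation, the sketch below shows how deepagents typically composes a main agent with dict-configured sub-agents via `create_deep_agent()`. The prompts, descriptions, and empty tool list are placeholders, and the exact keyword arguments are an assumption based on the deepagents package; the project's own wiring lives in `learning_agent.agent`.

```python
# Illustrative only: composing a deep agent with sub-agents using deepagents.
# Prompts and tools here are placeholders, not this repository's real configuration.
from deepagents import create_deep_agent

subagents = [
    {
        "name": "learning-query",
        "description": "Searches past memories and patterns relevant to the current task.",
        "prompt": "Retrieve prior experiences relevant to the user's task.",
    },
    {
        "name": "execution-specialist",
        "description": "Carries out concrete implementation steps.",
        "prompt": "Execute well-scoped implementation tasks.",
    },
    {
        "name": "reflection-analyst",
        "description": "Reviews finished work for inefficiencies and anti-patterns.",
        "prompt": "Analyze completed executions and point out improvements.",
    },
    {
        "name": "planning-strategist",
        "description": "Breaks large goals into ordered, parallelizable todos.",
        "prompt": "Produce step-by-step plans for complex tasks.",
    },
]

agent = create_deep_agent(
    tools=[],  # the learning and sandbox tools would be passed here
    instructions="You are a learning agent that plans, executes, and reflects.",
    subagents=subagents,
)
```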
The FastAPI server provides REST endpoints for accessing memories and patterns:
- `GET /memories` - Retrieve stored memories with optional search
- `GET /patterns` - Get identified patterns with confidence scores
- `GET /learning-queue` - View items queued for learning
- `GET /execution-stats` - Get execution efficiency metrics
# Get recent memories
curl "http://localhost:8001/memories?limit=10"
# Search memories by content
curl "http://localhost:8001/memories?search=fibonacci"
# Get high-confidence patterns
curl "http://localhost:8001/patterns?min_confidence=0.8"
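The same queries can be issued from Python; here is a minimal sketch using `requests`. The endpoints and query parameters come from the list above, while the shape of the JSON responses is an assumption:

```python
import requests

BASE_URL = "http://localhost:8001"

# Recent memories
memories = requests.get(f"{BASE_URL}/memories", params={"limit": 10}, timeout=10).json()

# Content search
fibonacci_memories = requests.get(
    f"{BASE_URL}/memories", params={"search": "fibonacci"}, timeout=10
).json()

# High-confidence patterns
patterns = requests.get(
    f"{BASE_URL}/patterns", params={"min_confidence": 0.8}, timeout=10
).json()

# Assumes list-shaped responses; adjust to the actual payload format.
print(f"{len(memories)} recent memories, {len(patterns)} high-confidence patterns")
```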
The project includes comprehensive test coverage with CI/CD integration:
# Run all tests
make test
# Run specific test categories
make test-unit # Unit tests
make test-integration # Integration tests
# Run with coverage
make test-coverage
# Type checking
make typecheck
# Linting
make lint
- GitHub Actions: Automated testing on every push
- Test Matrix: Python 3.11 and 3.12
- Quality Checks: Linting, type checking, security scanning
- Coverage: Unit tests with pytest-cov
- API Key Safety: Tests skip when API keys unavailable
The agent provides full observability through LangSmith:
- Trace Every Decision: Complete visibility into planning and execution
- Learning Metrics: Track pattern recognition and application
- Performance Monitoring: Latency, token usage, and cost tracking
- Anomaly Detection: Automatic detection of behavioral changes
Configure LangSmith in your `.env` file:
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=your_api_key
LANGSMITH_PROJECT=learning-agent
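The same configuration can be set programmatically before the agent is created; a sketch, assuming tracing is picked up from the environment as in standard LangChain/LangGraph setups:

```python
import asyncio
import os

# Set the LangSmith variables before creating the agent so runs are traced
# to the configured project (values mirror the .env entries above).
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "your_api_key"
os.environ["LANGSMITH_PROJECT"] = "learning-agent"

from learning_agent.agent import create_learning_agent  # noqa: E402


async def main():
    agent = create_learning_agent()
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Summarize your most recent learnings."}]}
    )
    print(result["messages"][-1])


asyncio.run(main())
```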
# Install development dependencies
make install-dev
# Install pre-commit hooks
make pre-commit-install
# Run all quality checks
make check
make help # Show all available commands
make test # Run tests
make lint # Run linting
make format # Format code
make typecheck # Type checking
make security # Security scan
make deadcode # Find unused code
- Linting: Ruff with extensive rule set
- Type Checking: Strict mypy configuration
- Security: Bandit for security issues
- Dead Code: Vulture for unused code detection
- Coverage: Pytest-cov with 52% traditional coverage + LangSmith probabilistic testing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run tests (`make test-all`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
The Learning Agent uses PostgreSQL with the pgvector extension for production-ready vector storage:
- Multi-Dimensional Learning Storage: Stores tactical, strategic, and meta-level insights
- Semantic Search: Vector similarity search for finding relevant past experiences
- Execution Pattern Analysis: Tracks tool usage, efficiency scores, and anti-patterns
- Scalable: Production-ready database that can handle millions of memories
- Memories Table: Stores conversation memories with embeddings
- Patterns Table: Stores identified patterns and their confidence scores
- Learning Queue: Tracks items queued for explicit learning
- Execution Metadata: Stores tool sequences, timings, and efficiency metrics
DATABASE_URL=postgresql://learning_agent:learning_agent_pass@localhost:5433/learning_memories
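For debugging, memories can also be queried directly with pgvector's distance operators. The sketch below is illustrative only: the `memories` table and its `content`/`embedding` columns are assumptions about the schema described above, and the intended access path remains the agent's storage layer and the REST API.

```python
import numpy as np
import psycopg
from openai import OpenAI
from pgvector.psycopg import register_vector

DATABASE_URL = "postgresql://learning_agent:learning_agent_pass@localhost:5433/learning_memories"

# Embed the query with the same embedding model configured for the agent.
client = OpenAI()
query_embedding = np.array(
    client.embeddings.create(
        model="text-embedding-3-small",
        input="fibonacci implementation",
    ).data[0].embedding
)

with psycopg.connect(DATABASE_URL) as conn:
    register_vector(conn)
    # <=> is pgvector's cosine distance operator; smaller means more similar.
    rows = conn.execute(
        "SELECT content, embedding <=> %s AS distance "
        "FROM memories ORDER BY distance LIMIT 5",
        (query_embedding,),
    ).fetchall()

for content, distance in rows:
    print(f"{distance:.3f}  {content[:80]}")
```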
The agent is highly configurable through environment variables:
# LLM Configuration
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0.7
# API Keys
OPENAI_API_KEY=your_api_key
ANTHROPIC_API_KEY=your_api_key # Optional
LANGSMITH_API_KEY=your_api_key
# Database Configuration
DATABASE_URL=postgresql://learning_agent:learning_agent_pass@localhost:5433/learning_memories
# Learning Configuration
ENABLE_LEARNING=true
LEARNING_CONFIDENCE_THRESHOLD=0.9
PATTERN_RETENTION_DAYS=90
# Performance
MAX_PARALLEL_AGENTS=10
TASK_TIMEOUT_SECONDS=300
# Embedding Configuration
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
See .env.example for all options.
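As an illustration of how the LLM_* variables map onto a model, the sketch below uses python-dotenv and LangChain's `init_chat_model`; the project's own settings loader may work differently, so treat this as a standalone example rather than the repository's code:

```python
# Illustrative sketch: turning the LLM_* variables into a chat model.
import os

from dotenv import load_dotenv
from langchain.chat_models import init_chat_model

load_dotenv()  # reads .env into the environment

llm = init_chat_model(
    model=os.getenv("LLM_MODEL", "gpt-4o-mini"),
    model_provider=os.getenv("LLM_PROVIDER", "openai"),
    temperature=float(os.getenv("LLM_TEMPERATURE", "0.7")),
)

print(llm.invoke("Say hello in one word.").content)
```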
This project is in active development.
- PostgreSQL vector storage with pgvector
- Multi-dimensional learning extraction (tactical, strategic, meta)
- Web UI with React/TypeScript
- REST API for memory access
- Python sandbox with Pyodide for safe code execution
- Automatic visualization capture (matplotlib, PIL, pandas)
- Execution pattern analysis
- Support for multiple LLM providers (OpenAI, Anthropic, Ollama, etc.)
- CI/CD pipeline with GitHub Actions
- Long-term memory consolidation
- Advanced pattern recognition algorithms
- Collaborative multi-agent learning
- Memory pruning and optimization
- Real-time learning dashboard
License to be determined.
Built with:
- LangChain - LLM orchestration
- LangGraph - State machine orchestration
- LangSmith - Observability and testing
- Rich - Terminal UI
Johann Hartmann - @johannhartmann
Project Link: https://github.com/johannhartmann/learning-agent