Define the next generation of human-robot interaction. Built by PerceptusLabs
Intentus is a scalable, self-learning orchestration server that brings audio-visual intelligence to robots and autonomous agents. With memory, modular tools, and a structured planning system, it can perceive, reason, and act—adapting intelligently with each interaction.
“The future isn’t just about robots that respond — it’s about robots that understand, plan, and grow.”
- **Vision-Language-Action (VLA) Pipeline**: Send structured text representations of audio/visual input; Intentus interprets, plans, executes, and adapts.
- **Web-Enabled Agent**: Includes a web-browsing toolchain for open-ended question answering, research, or data retrieval.
- **Tool-Expandable**: Add custom robot skills via natural language in under 100 lines. Tools live in `/tools` and can be hot-loaded.
- **Memory-Backed Planning**: Multi-step agents with persistent memory and feedback-driven self-improvement.
- **Production-Ready**: API-based architecture, secured with API key support, logging, and environment configuration.
- **Asynchronous & Scalable**: Modern Python backend with fast async processing, customizable agent parameters, and execution lifecycles.
```bash
git clone https://github.com/Perceptus-Labs/Intentus.git
cd Intentus
python -m venv venv
source venv/bin/activate
pip install -e .
python intentus/examples/agent_demo.py
```
Demo Task: “What is the capital of France?”
You’ll see:
- Multi-step reasoning
- Query analysis
- Final output
- Agent memory dump
- Execution time and path
💡 See real-time agent logs in `example.log`; outputs are saved to `example_outputs/`.
- Your robot/system sends a structured intention (transcribed speech, vision cues, context, etc.)
- Intentus parses and analyzes the command with optional tools
- It plans actions, reasons over them, and executes tools
- Feedback updates its memory, and it continues iterating
- It returns a full report on results, reasoning, and future suggestions (see the sketch after this list)
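A conceptual, self-contained sketch of that loop is below. Every name in it (the state dict, the stubbed tool call, the report shape) is an illustrative stand-in, not the actual Intentus internals:

```python
# Conceptual sketch of the orchestration loop: parse -> plan -> execute ->
# update memory -> iterate -> report. All names here are illustrative
# stand-ins, not the actual Intentus implementation.
def orchestrate(intention: dict, max_steps: int = 5) -> dict:
    memory: list[str] = []
    state = {"goal": intention["description"], "observations": []}  # parse & analyze

    for step in range(1, max_steps + 1):
        action = f"lookup({state['goal']})"         # plan: pick a tool call
        result = {"output": "Paris", "done": True}  # execute: stubbed tool result
        memory.append(f"Step {step}: {action} -> {result['output']}")  # feedback -> memory
        state["observations"].append(result["output"])
        if result["done"]:                          # stop once the goal is met
            break

    return {  # full report: results, reasoning trace, steps taken
        "success": True,
        "final_output": state["observations"][-1],
        "steps_taken": step,
        "memory": memory,
    }

print(orchestrate({"description": "What is the capital of France?"}))
```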
Developers can extend robot abilities via natural language definitions and lightweight Python handlers:
```python
# Create a new tool file in intentus/tools/
# Describe your tool's purpose in plain English
# Add a function with input/output contracts
```
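For example, a new tool file might look like the following sketch. The class shape, attribute names, and sensor stub are assumptions for illustration; match them to the existing tools in `/tools`:

```python
# intentus/tools/battery_check_tool.py
# Hypothetical example: the class shape and attribute names below are
# assumptions, not the actual Intentus tool interface.

class Battery_Check_Tool:
    """Checks the robot's battery level."""

    # Plain-English purpose the planner can read when deciding to call this tool
    description = "Returns the robot's remaining battery percentage and a low-battery flag."

    def run(self, threshold: float = 20.0) -> dict:
        """Input: low-battery threshold in percent. Output: level and a low-battery flag."""
        level = 87.5  # stub: replace with a real sensor or driver read
        return {"battery_percent": level, "low_battery": level < threshold}
```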
Your robot can learn to:
- Navigate new terrain
- Run factory checks
- Diagnose mechanical issues
- Interface with APIs or microcontrollers
🔧 Tool architecture is fully modular and scalable.
The orchestrator receives intention payloads from your robot/Go backend:
```bash
pip install -r requirements.txt
export ORCHESTRATOR_API_KEY="your-api-key"
export ORCHESTRATOR_HOST="0.0.0.0"
export ORCHESTRATOR_PORT="8000"
python main.py
```
The server is now available at `http://localhost:8000`.
The `/orchestrate` endpoint receives a structured intention payload:
```json
{
  "session_id": "session-123",
  "intention_type": "user_query",
  "description": "User asked about the weather",
  "confidence": 0.95,
  "transcript": "What's the weather like today?",
  "environment_context": "User is in San Francisco, CA",
  "timestamp": 1703123456
}
```
Returns:
```json
{
  "success": true,
  "query_analysis": "...",
  "base_response": "...",
  "final_output": "...",
  "execution_time": 2.5,
  "steps_taken": 3,
  "memory": ["Step 1: ...", "Step 2: ..."]
}
```
Authentication: `Authorization: Bearer your-api-key`
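Putting it together, a minimal client call might look like this sketch, assuming the server from the setup above is running locally and the third-party `requests` package is installed:

```python
# Minimal client sketch for the /orchestrate endpoint, using the payload
# and response fields documented above.
import requests

payload = {
    "session_id": "session-123",
    "intention_type": "user_query",
    "description": "User asked about the weather",
    "confidence": 0.95,
    "transcript": "What's the weather like today?",
    "environment_context": "User is in San Francisco, CA",
    "timestamp": 1703123456,
}

resp = requests.post(
    "http://localhost:8000/orchestrate",
    json=payload,
    headers={"Authorization": "Bearer your-api-key"},  # API key auth
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
print(result["final_output"], result["steps_taken"])
```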
Imagine a robot assistant in a kitchen:

- Sees an image of spilled flour (visual cue)
- Hears “What do I do now?” (audio command)
- Intentus processes the environment:
  - Identifies the spill
  - Suggests a cleanup plan
  - Executes the cleaning tool
  - Stores feedback on whether the plan succeeded or failed
This is closed-loop intention orchestration — and it’s just the beginning.
Customize the agent in `main.py` > `get_agent()`:

```python
AgentConfig(
    llm_engine="gpt-4.1-mini",
    enabled_tools=["Wikipedia_Knowledge_Searcher_Tool"],
    verbose=True,
    max_steps=5,
    temperature=0.7,
)
```
```bash
python test_orchestrator.py
```

Make sure `API_KEY` in the test script matches your environment variable.
- `/orchestrate` is protected by an API key
- Set `ORCHESTRATOR_API_KEY`
- Authentication is optional but highly recommended
Intentus is a foundational system in robot cognition:
- It’s tool-agnostic and can support a fleet of heterogeneous robots
- It bridges perception, language, and control
- It allows natural language tool creation — no low-level firmware work required
- It logs full memory and reasoning chains, giving you total auditability
- It is modular, API-first, and cloud-compatible
Let’s redefine how robots understand and respond to the world.
We are building the future of robotic cognition — creating agents that not only follow instructions, but think, adapt, and evolve.
Visit perceptuslabs.com for more.