A pipeline integration for Open WebUI that connects to the Letta AI API, providing streaming responses with event emitters and development tools.
- 🔄 Streaming responses with real-time updates
- 🤔 Reasoning steps displayed in status field
- 📊 Usage statistics tracking
- 🛠️ Development mode with detailed logging
- 🔧 Configurable settings via UI
- 🔌 Tool integration with Open WebUI
- ✅ Comprehensive test suite
The pipeline handles three types of messages:
- `assistant_message` - Main response content
- `reasoning_message` - Reasoning steps shown in status
- `usage_statistics` - Token usage and performance stats
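A minimal dispatcher for these three message types might look like the sketch below. The chunk structure (a dict with a `message_type` key) and the handler name are assumptions for illustration, not the pipeline's actual internals:

```python
# Hypothetical sketch: route a parsed Letta stream chunk by message type.
# The dict layout ("message_type", "content", "reasoning", "usage" keys)
# is assumed for illustration.

def handle_chunk(chunk: dict) -> tuple[str, str]:
    """Return a (destination, text) pair for a parsed stream chunk."""
    msg_type = chunk.get("message_type")
    if msg_type == "assistant_message":
        # Main response content goes straight into the chat body
        return ("content", chunk.get("content", ""))
    if msg_type == "reasoning_message":
        # Reasoning steps are surfaced in the status field
        return ("status", f"🤔 {chunk.get('reasoning', '')}")
    if msg_type == "usage_statistics":
        # Token usage and performance stats
        usage = chunk.get("usage", {})
        return ("status", f"📊 {usage.get('total_tokens', 0)} tokens")
    # Unknown types are ignored
    return ("status", "")
```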
Status updates use emojis for better visibility:
- 🔄 Processing request...
- 🤔 Reasoning steps
- ✓ Response complete
- ⚠️ Error messages
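In Open WebUI, status updates like these are typically pushed through the event-emitter callback the pipe receives, using a `{"type": "status", ...}` payload. A small sketch (the exact payload fields may vary by Open WebUI version, and `fake_emitter` is a stand-in for the real callback):

```python
import asyncio

# Sketch of emitting emoji status updates via Open WebUI's event-emitter
# callback. The payload shape follows Open WebUI's status-event convention;
# exact fields may differ between versions.

async def emit_status(event_emitter, description: str, done: bool = False):
    await event_emitter({
        "type": "status",
        "data": {"description": description, "done": done},
    })

async def demo():
    events = []

    async def fake_emitter(event):  # stand-in for Open WebUI's callback
        events.append(event)

    await emit_status(fake_emitter, "🔄 Processing request...")
    await emit_status(fake_emitter, "✓ Response complete", done=True)
    return events

events = asyncio.run(demo())
```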
- Clone this repository:

```bash
git clone https://github.com/oculairmedia/letta-pipeline.git
cd letta-pipeline
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Set up environment variables:
```bash
export LETTA_BASE_URL="https://letta2.oculair.ca"
export LETTA_AGENT_ID="your-agent-id"
export LETTA_PASSWORD="your-password"
```

- Import and initialize the pipeline:
```python
from letta import Pipe

pipe = Pipe()
```

- Use the pipeline in Open WebUI:
```python
# The pipeline will be automatically registered with Open WebUI
# and will appear in the model selection dropdown
```

- Enable development mode:
```python
pipe.valves.DEV_MODE = True
pipe.valves.LOG_RAW_CHUNKS = True
pipe.valves.LOG_PARSED_CHUNKS = True
pipe.valves.LOG_EVENTS = True
pipe.valves.SAVE_RESPONSES = True
```

- Run tests:
```bash
python test_letta_dev.py
```

MIT License