This starter template helps you get started quickly with the BeeAI framework for Python.
- 🔒 Safely execute arbitrary Python code via the BeeAI Code Interpreter.
- ⚡ Fully fledged Python project setup with linting and formatting.
- Python Version 3.11+
- Poetry Version 2.0+ for Python package management - See installation guide
- Container runtime with Compose support (e.g., Docker)
- LLM Provider - external (WatsonX, OpenAI, Groq, ...) or local (Ollama)
- IDE/Code Editor (e.g., WebStorm, VSCode) - Optional but recommended for smooth configuration handling
Step 1: Clone this repository or use it as a template
git clone https://github.com/i-am-bee/beeai-framework-py-starter.git
cd beeai-framework-py-starter
Step 2: Install dependencies
poetry install
Step 3: Install the shell plugin and activate the Poetry environment (Poetry 2.0 removed the built-in shell command, so the plugin is required)
poetry self add poetry-plugin-shell
poetry shell
Step 4: Create a .env file with the contents from .env.template
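A minimal sketch of this step, run from the repository root (the exact variable names to fill in depend on your LLM provider and are listed in .env.template):

```shell
# Create .env from the template if the template is present;
# otherwise create an empty .env to fill in by hand.
if [ -f .env.template ]; then
  cp .env.template .env
else
  touch .env
fi
```

Afterwards, open .env in your editor and add your provider credentials.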
Step 5: Install and start Ollama, then pull the llama3.1 model:
ollama pull llama3.1
Step 6: Start all services related to beeai-code-interpreter
poe infra --type start
Note
beeai-code-interpreter runs on http://127.0.0.1:50081
Get complete visibility of the agent's inner workings via OpenInference Instrumentation for BeeAI.
Before updating the framework version in this repository, be sure that OpenInference Instrumentation for BeeAI supports it.
- (Optional) To see spans in Phoenix, first start a Phoenix server. This can be done with a single command using Docker:
docker run -p 6006:6006 -i -t arizephoenix/phoenix
or via Homebrew:
brew install i-am-bee/beeai/arize-phoenix
brew services start arize-phoenix
See https://docs.beeai.dev/observability/agents-traceability for more details.
- Run the agent
python beeai_framework_starter/agent_observe.py
- You should see your spans exported in your console. If you've set up a locally running Phoenix server, head to localhost:6006 to see your spans.
Now that you’ve set up your project, let’s run the agent example located at /beeai_framework_starter/agent.py.
You have two options:
Option 1: Interactive mode
python beeai_framework_starter/agent.py
Option 2: Define your prompt up front
python beeai_framework_starter/agent.py <<< "I am going out tomorrow morning to walk around Boston. What should I plan to wear?"
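If the `<<<` syntax is new to you, it is a bash/zsh "here-string": it feeds the quoted text to the program's standard input, as if you had typed it interactively. A quick illustration with `cat`:

```shell
# "<<<" sends the string to the command's stdin; this is equivalent to
# piping it in with echo: echo "..." | cat
cat <<< "I am going out tomorrow morning."
# prints: I am going out tomorrow morning.
```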
Note
Notice that this prompt triggers the agent to call a tool.
Now let's run the code interpreter agent example located at /beeai_framework_starter/agent_code_interpreter.py.
Try the Python Tool and ask the agent to perform a complex calculation:
python beeai_framework_starter/agent_code_interpreter.py <<< "Calculate 534*342?"
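You can verify the expected result yourself; the agent should compute the same product:

```shell
# Multiply the two numbers directly to check the agent's answer.
python3 -c "print(534 * 342)"   # prints 182628
```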
Try the SandBox tool and run a custom Python function get_riddle():
python beeai_framework_starter/agent_code_interpreter.py <<< "Generate a riddle"
This example demonstrates a multi-agent workflow where different agents work together to provide a comprehensive understanding of a location.
The workflow includes three agents:
- Researcher: Gathers information about the location using the Wikipedia tool.
- WeatherForecaster: Retrieves and reports weather details using the OpenMeteo API.
- DataSynthesizer: Combines the historical and weather data into a final summary.
To run the workflow:
python beeai_framework_starter/agent_workflow.py
For additional examples to try, check out the examples directory of the BeeAI framework for Python repository.
If you are developing with this repository as a base, or updating this template, see additional information in [DEVELOP.md].