An AI-powered code optimisation system that leverages large language models (LLMs) to analyse a code-base, generate targeted tests and suggest performance improvements. The entire pipeline is orchestrated with Prefect 2, requiring no external message broker.
## Features

- Automated static analysis and optimisation suggestions driven by LLMs
- Prefect-based workflow orchestration for easy local and cloud execution
- Declarative deployments via `prefect deployment` / `.serve()`
- Prometheus metrics & OpenTelemetry tracing for full observability
## Prerequisites

- Python 3.9+
- OpenAI API key
- (Optional) Prefect CLI if you want the standalone UI:

  ```bash
  pip install prefect
  ```
## Installation

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd code_optima
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file with your configuration:

  ```bash
  # Required
  OPENAI_API_KEY=your_api_key_here

  # Optional – override defaults
  CODE_OPTIMA_API_BASE_URL=http://localhost:3000  # Persist stage results to a dashboard
  MAX_SPEND_PER_REPO=10.0  # USD budget cap for a single optimisation run
  PERF_TARGET_PCT=5.0      # Target performance uplift (percentage)
  ```
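As an illustration of how these variables fit together, a service could map them onto a typed settings object roughly like the sketch below. The `Settings` class and `load_settings` helper are hypothetical, not the project's actual code, and the real service reads `.env` first (e.g. via python-dotenv) rather than relying on the process environment alone.

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    openai_api_key: str
    api_base_url: str
    max_spend_per_repo: float  # USD budget cap for a single run
    perf_target_pct: float     # target performance uplift, in percent


def load_settings() -> Settings:
    # Hypothetical sketch: read straight from the environment,
    # falling back to the defaults documented in the README.
    return Settings(
        openai_api_key=os.environ.get("OPENAI_API_KEY", ""),
        api_base_url=os.environ.get("CODE_OPTIMA_API_BASE_URL", "http://localhost:3000"),
        max_spend_per_repo=float(os.environ.get("MAX_SPEND_PER_REPO", "10.0")),
        perf_target_pct=float(os.environ.get("PERF_TARGET_PCT", "5.0")),
    )
```

Centralising the parsing like this keeps type conversions (string → float) in one place instead of scattered across the pipeline.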
## Quick start

The quickest way to spin up the optimisation workflow locally is:

```bash
python -m src.service
```

This command will:

1. Load environment variables from `.env`.
2. Register a Prefect deployment for the `optimization_flow`.
3. Start an ephemeral Prefect API, worker and UI (available at http://127.0.0.1:4200).
4. Begin listening for flow runs immediately.
If you would rather use an existing Prefect backend (e.g. Prefect Cloud or `prefect server start`), just export `PREFECT_API_URL` before launching the service and skip step 3.
## Usage

Trigger an optimisation run from Python, the Prefect UI, or `prefect deploy/run`. Here's a quick programmatic example:
```python
from prefect.deployments import run_deployment

result = run_deployment(
    name="optimization_flow",
    parameters={
        "repo_path": "/absolute/path/to/repository",
        "config": {
            "max_spend": 5.0,
            "perf_target": 10.0,
        },
    },
    timeout=3600,  # seconds
)
print(result.state)     # FlowRunState object
print(result.result())  # Dict with stage outputs
```

The returned dictionary contains keys for each pipeline stage (analysis, tests, verification). During execution a concise version of each stage result is also POSTed to `CODE_OPTIMA_API_BASE_URL` (if defined) so external dashboards can store historical runs.
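The dashboard hand-off could look something like the following sketch. The `/runs` endpoint, the payload shape, and both helper names are invented for illustration; the README only guarantees that a concise stage result is POSTed to `CODE_OPTIMA_API_BASE_URL` when it is defined.

```python
import json
import os
import urllib.request


def build_stage_payload(stage: str, result: dict) -> dict:
    """Reduce a full stage result to the concise form a dashboard might store."""
    return {
        "stage": stage,                        # "analysis" | "tests" | "verification"
        "summary": result.get("summary", ""),  # short human-readable description
        "metrics": result.get("metrics", {}),  # numeric details, e.g. hotspot counts
    }


def post_stage_result(stage: str, result: dict) -> bool:
    """POST a concise stage result; no-op when no dashboard is configured."""
    base_url = os.environ.get("CODE_OPTIMA_API_BASE_URL")
    if not base_url:
        return False
    req = urllib.request.Request(
        f"{base_url}/runs",  # hypothetical endpoint
        data=json.dumps(build_stage_payload(stage, result)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return 200 <= resp.status < 300
```

Keeping payload construction separate from transport makes the "concise version" easy to unit-test without a running dashboard.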
## Observability

- Prefect UI: Real-time DAG visualisation and logs at http://127.0.0.1:4200 (or your Prefect Cloud workspace).
- Prometheus: Metrics exposed on `/metrics` for scraping.
- OpenTelemetry: Spans exported to any OTLP backend via `OTEL_EXPORTER_OTLP_ENDPOINT`.
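For reference, what Prometheus scrapes from `/metrics` is plain text in the exposition format. A minimal renderer for gauge values looks like the sketch below; the metric name is invented for the example, and a real service would normally use the `prometheus_client` library instead.

```python
def render_prometheus_metrics(metrics: dict[str, float]) -> str:
    """Render a flat name→value mapping in Prometheus text exposition format."""
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append(f"# TYPE {name} gauge")  # type hint line for the scraper
        lines.append(f"{name} {value}")       # sample line: name, space, value
    return "\n".join(lines) + "\n"
```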
## Development

- Install development dependencies:

  ```bash
  pip install -r requirements-dev.txt
  ```

- Run tests:

  ```bash
  pytest
  ```

- Run linting:

  ```bash
  flake8 src tests
  ```
## Architecture

- Prefect Flow – Orchestrates the optimisation pipeline (analysis → tests → verification) with retries and caching.
- Agents –
  - `CodexAnalyzerAgent`: pinpoints hotspots & performance issues using LLM context.
  - `BaselineBuilderAgent`: builds the project and executes/synthesises tests.
  - (More specialised agents coming soon.)
- Observability stack – Prometheus metrics and OpenTelemetry tracing built-in.
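Stripped of Prefect, the flow's overall shape can be sketched in plain Python. The stage bodies, return keys, and the `run_with_retries` helper below are placeholders; in the real project each stage is a Prefect task with retries and caching handled by the orchestrator.

```python
from typing import Any, Callable, Dict


def run_with_retries(task: Callable[[], Dict[str, Any]], retries: int = 2) -> Dict[str, Any]:
    """Tiny stand-in for Prefect task retries: re-invoke until success or exhaustion."""
    last_err = None
    for _ in range(retries + 1):
        try:
            return task()
        except Exception as err:  # a real task would log and back off here
            last_err = err
    raise last_err


def optimization_pipeline(repo_path: str) -> Dict[str, Any]:
    """analysis → tests → verification, each stage feeding the next."""
    analysis = run_with_retries(lambda: {"hotspots": [], "repo": repo_path})
    tests = run_with_retries(lambda: {"passed": True, "baseline": analysis["hotspots"]})
    verification = run_with_retries(lambda: {"improved": tests["passed"]})
    return {"analysis": analysis, "tests": tests, "verification": verification}
```

The returned dictionary mirrors the stage-keyed result described in the Usage section above.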
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
## License

MIT License – see LICENSE file for details