Coffee-Boyy/code-optima-zer

Code Optima

An AI-powered code optimisation system that uses large language models (LLMs) to analyse a codebase, generate targeted tests, and suggest performance improvements. The entire pipeline is orchestrated with Prefect 2 and requires no external message broker.

Features

  • Automated static analysis and optimisation suggestions driven by LLMs
  • Prefect-based workflow orchestration for easy local and cloud execution
  • Declarative deployments via prefect deployment / .serve()
  • Prometheus metrics & OpenTelemetry tracing for full observability

Prerequisites

  • Python 3.9+
  • OpenAI API key
  • (Optional) Prefect CLI if you want the standalone UI: pip install prefect

Installation

  1. Clone the repository:
    git clone <repository-url>
    cd code_optima
  2. Create a virtual environment:
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:
    pip install -r requirements.txt
  4. Create a .env file with your configuration:
    # Required
    OPENAI_API_KEY=your_api_key_here
    
    # Optional – override defaults
    CODE_OPTIMA_API_BASE_URL=http://localhost:3000  # Persist stage results to a dashboard
    MAX_SPEND_PER_REPO=10.0                         # USD budget cap for a single optimisation run
    PERF_TARGET_PCT=5.0                             # Target performance uplift (percentage)
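The settings above can be read at startup roughly as follows. This is a minimal sketch; the helper name load_settings and the returned keys are illustrative, not part of the codebase, but the environment variable names and defaults match the table above:

```python
import os

def load_settings(env=os.environ):
    """Read Code Optima settings from the environment (illustrative helper)."""
    return {
        "openai_api_key": env["OPENAI_API_KEY"],              # required
        "api_base_url": env.get("CODE_OPTIMA_API_BASE_URL"),  # optional dashboard
        "max_spend": float(env.get("MAX_SPEND_PER_REPO", "10.0")),
        "perf_target_pct": float(env.get("PERF_TARGET_PCT", "5.0")),
    }
```

A tool such as python-dotenv can populate os.environ from the .env file before a helper like this runs.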

Running the service (development)

The quickest way to spin up the optimisation workflow locally is:

python -m src.service

This command will:

  1. Load environment variables from .env.
  2. Register a Prefect deployment for optimization_flow.
  3. Start an ephemeral Prefect API, worker and UI (available at http://127.0.0.1:4200).
  4. Begin listening for flow runs immediately.

If you would rather use an existing Prefect backend (e.g. Prefect Cloud or a server started with prefect server start), export PREFECT_API_URL before launching the service; step 3 is then skipped.
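For example, to reuse a locally running Prefect server instead of the ephemeral one (the URL below assumes the default prefect server start address):

```shell
# Point the service at an existing Prefect backend
export PREFECT_API_URL="http://127.0.0.1:4200/api"
python -m src.service
```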

Usage

Trigger an optimisation run from Python, the Prefect UI, or the Prefect CLI (prefect deployment run). Here's a quick programmatic example:

from prefect.deployments import run_deployment

flow_run = run_deployment(
    name="optimization_flow",
    parameters={
        "repo_path": "/absolute/path/to/repository",
        "config": {
            "max_spend": 5.0,
            "perf_target": 10.0,
        },
    },
    timeout=3600,  # seconds to wait for the run to finish
)

print(flow_run.state)           # final State of the flow run
print(flow_run.state.result())  # dict with stage outputs

The returned dictionary contains keys for each pipeline stage (analysis, tests, verification). During execution a concise version of each stage result is also POSTed to CODE_OPTIMA_API_BASE_URL (if defined) so external dashboards can store historical runs.
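A sketch of how such a POST might look using only the standard library; the /stages path and the payload shape are assumptions for illustration, not the actual dashboard API:

```python
import json
import os
import urllib.request

BASE_URL = os.environ.get("CODE_OPTIMA_API_BASE_URL")

def summarize(stage: str, result: dict) -> bytes:
    """JSON body for one stage; drops bulky nested values to stay concise."""
    concise = {k: v for k, v in result.items() if not isinstance(v, (list, dict))}
    return json.dumps({"stage": stage, "result": concise}).encode()

def post_stage(stage: str, result: dict) -> None:
    if BASE_URL is None:  # dashboard persistence is optional
        return
    req = urllib.request.Request(
        f"{BASE_URL}/stages",  # hypothetical endpoint path
        data=summarize(stage, result),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)
```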

Monitoring & observability

  • Prefect UI: Real-time DAG visualisation and logs at http://127.0.0.1:4200 (or your Prefect Cloud workspace).
  • Prometheus: Metrics exposed on /metrics for scraping.
  • OpenTelemetry: Spans exported to any OTLP backend via OTEL_EXPORTER_OTLP_ENDPOINT.
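For example, to send spans to a local OTLP collector (OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry variable; the port assumes the collector's default gRPC listener):

```shell
# Export traces to a collector running on the default OTLP gRPC port
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
python -m src.service
```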

Development

  1. Install development dependencies:
    pip install -r requirements-dev.txt
  2. Run tests:
    pytest
  3. Run linting:
    flake8 src tests

Architecture

  1. Prefect Flow – Orchestrates the optimisation pipeline (analysis → tests → verification) with retries and caching.
  2. Agents
    • CodexAnalyzerAgent: pinpoints hotspots & performance issues using LLM context.
    • BaselineBuilderAgent: builds the project and executes/synthesises tests.
    • (More specialised agents coming soon.)
  3. Observability stack – Prometheus metrics and OpenTelemetry tracing built-in.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

License

MIT License – see LICENSE file for details
