Random experiments with LLMs.
You need to run `ollama serve` to use the DeepSeek and Llama models.
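Before launching an agent, you can verify that the server is up. A minimal sketch using only the standard library, assuming Ollama's default listen address (`http://localhost:11434`):

```python
from urllib.request import urlopen
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def ollama_running(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


if not ollama_running():
    print("Ollama is not reachable -- start it with: ollama serve")
```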
Place a `.env` file at the root of the repository, as shown in the example below:
```
# openai
OPENAI_API_KEY=***
# google for gemini
GOOGLE_API_KEY=***
# serper
SERPER_API_KEY=***
# tavily
TAVILY_API_KEY=***
# slack
SLACK_USER_TOKEN=***
# langsmith (optional)
LANGSMITH_TRACING=***
LANGSMITH_ENDPOINT=***
LANGSMITH_API_KEY=***
LANGSMITH_PROJECT=***
```

Then run:
```shell
make install && source .env && source .venv/bin/activate
```

To start working with the SWE team, run:
```shell
python -m llm_experiments --agent swe
```

If you just want a simple agent, run:
```shell
python -m llm_experiments
```

Additionally, you can work with a more specialized agent by running:
```shell
python -m llm_experiments --agent slack
```

To see all available agent options, run:
```shell
python -m llm_experiments --help
```
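The `--agent` selection above can be sketched with `argparse`. This is a hypothetical reconstruction of the CLI, not the repository's actual parser; the real entry point may define more options:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch: one optional --agent flag choosing a specialized agent;
    # omitting it falls back to the simple agent (default None).
    parser = argparse.ArgumentParser(prog="llm_experiments")
    parser.add_argument(
        "--agent",
        choices=["swe", "slack"],
        default=None,
        help="run a specialized agent instead of the simple one",
    )
    return parser


args = build_parser().parse_args(["--agent", "slack"])
print(args.agent)  # → slack
```

With this shape, `python -m llm_experiments --help` prints the flag and its allowed values automatically.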