A CLI and GitHub Action that uses AI (specifically, Large Language Models, or LLMs) to automatically generate pull requests from GitHub issues and pull requests.
This tool delivers the ultimate "Vibe Coding" experience, allowing humans to focus solely on writing issues while AI handles all implementation details. Our vision is to create a workflow where developers only need to describe what they want, and the AI translates those requirements into working code.
- Multiple AI Coding Tools: Supports various coding tools, including:
  - Aider: An interactive AI pair programming tool
  - Codex CLI: OpenAI's coding agent
  - Claude Code: Anthropic's agentic coding tool
  - Gemini CLI: Google's AI coding assistant
- Planning Capability: Uses LLMs to analyze source code and develop implementation strategies before making any changes.
- Flexible Integration: Works as both a CLI tool and a GitHub Action
Currently, we use gen-pr to create scaffolding pull requests. We have five different gen-pr workflows to generate PRs with various combinations of LLMs and coding tools.
We review the pull requests, select the best one, and then continue to write code on the PR manually. This approach has successfully reduced the amount of manually written code.
Below is an analysis report generated by calc-ai-contrib for June 2025, during the development of Exercode.
In the report, "AI" indicates the amount of code written fully automatically by gen-pr. We are very satisfied with the results and consider the AI contribution ratio a key metric for measuring our productivity.
```text
╔══════════════════════════════════════════════════╗
║ CONTRIBUTION ANALYSIS REPORT                      ║
╠══════════════════════════════════════════════════╣
║ Date: 2025-06-01 to 2025-06-30 (PRs: 91)          ║
║ Total Edits: 9,925 (+6,729 / -3,196)              ║
╠══════════════════════════════════════════════════╣
║ AI vs Human: [█████░░░░░░░░░░░░░░░░░] 24% / 76%   ║
║ Contributors: 1 AI, 6 Human                       ║
╚══════════════════════════════════════════════════╝

📊 DETAILED BREAKDOWN
────────────────────────────────────────
🤖 AI   : [████░░░░░░░░░░░░] 24% | 2,365 Edits (+1,685 / -680)
👥 Human: [████████████░░░░] 76% | 7,560 Edits (+5,044 / -2,516)
```
Requirements:

- For development:
- For execution:
  - Node.js and npx (for `@openai/codex`, `@anthropic-ai/claude-code`, and `@google/gemini-cli`)
  - Python (for `aider`)
  - `gh` (GitHub CLI)
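As a quick sanity check, you can confirm from a shell that the execution prerequisites above are installed and on your `PATH`:

```bash
# Verify the runtime prerequisites listed above are available.
node --version
npx --version
python --version   # or python3 --version; only needed when using Aider
gh --version
```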
For GitHub Actions usage, see `action.yml` and `.github/workflows/gen-pr-example.yml`.
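For orientation, here is a minimal workflow sketch. The input names simply mirror the CLI flags shown in the examples that follow (`issue-number`, `coding-tool`), and the action reference is a placeholder; they are assumptions, so consult `action.yml` and the bundled example workflow for the actual interface and required secrets.

```yaml
# Hypothetical sketch only: the action reference and input names are assumptions,
# not taken from action.yml. Adjust to match the real action interface.
name: Generate PR with AI
on:
  issues:
    types: [labeled]

jobs:
  gen-pr:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: <owner>/gen-pr@main          # placeholder; see action.yml
        with:
          issue-number: ${{ github.event.issue.number }}
          coding-tool: claude-code
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```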
Here are some examples for creating PRs for issue #89.
Claude Code:
```bash
npx --yes gen-pr@latest --issue-number 89 --coding-tool claude-code
```

Codex:
```bash
npx --yes gen-pr@latest --issue-number 89 --coding-tool codex-cli
```

Gemini CLI:
```bash
npx --yes gen-pr@latest --issue-number 89 --coding-tool gemini-cli
```

gen-pr can generate an implementation plan by reading files in the target repository using Repomix.
This feature is particularly useful for non-agentic coding tools like Aider.
Gemini 2.5 Pro (`gemini/gemini-2.5-pro`) for planning and Aider for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model gemini/gemini-2.5-pro --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --aider-extra-args="--model gemini/gemini-2.5-pro --edit-format diff-fenced --test-cmd='yarn check-for-ai' --auto-test"
```

Claude Opus 4.1 on Bedrock (`bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0`) for planning and Aider for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0 --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --aider-extra-args="--model bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0 --test-cmd='yarn check-for-ai' --auto-test"
```

Gemini 2.5 Pro (`gemini/gemini-2.5-pro`) for planning and Claude Code for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model gemini/gemini-2.5-pro --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --coding-tool claude-code
```

o4-mini (`openai/o4-mini`) for planning and Codex for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model openai/o4-mini --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --coding-tool codex-cli
```

DeepSeek R1 on OpenRouter (`openrouter/deepseek/deepseek-r1-0528:free`) for planning and Gemini CLI for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model openrouter/deepseek/deepseek-r1-0528:free --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --coding-tool gemini-cli
```

Grok 4 (`xai/grok-4`) for planning and Aider for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model xai/grok-4 --reasoning-effort high --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --aider-extra-args="--model gemini/gemini-2.5-pro --edit-format diff-fenced --test-cmd='yarn check-for-ai' --auto-test"
```

Local Gemma 3n via Ollama (`ollama/gemma3n`) for planning and Aider for coding:
```bash
npx --yes gen-pr@latest --issue-number 89 --planning-model ollama/gemma3n --repomix-extra-args="--compress --remove-empty-lines --include 'src/**/*.ts'" --aider-extra-args="--model ollama/gemma3n --edit-format diff-fenced --test-cmd='yarn check-for-ai' --auto-test"
```

For PR (#103)
Codex:
```bash
npx --yes gen-pr@latest --issue-number 103 --coding-tool codex-cli
```

You can create a YAML configuration file named `gen-pr.config.yml` or `gen-pr.config.yaml` in the root of your repository to set default values for options. This config file works for both CLI usage and GitHub Actions. Command-line flags (CLI) or workflow inputs (GitHub Actions) will override values in this file. For example:
```yaml
repomix-extra-args: "--compress --remove-empty-lines --include 'src/**/*.ts'"
aider-extra-args: '--model gemini/gemini-2.5-pro --edit-format diff-fenced --no-gitignore'
coding-tool: claude-code
test-command: 'yarn check-for-ai'
```

The tool requires model names defined by LiteLLM, in the format `provider/model-name`:
- OpenAI: `openai/gpt-4.1`, `openai/o4-mini`, and more
- Azure OpenAI: `azure/gpt-4.1`, `azure/o4-mini`, and more
- Google Gemini: `gemini/gemini-2.5-pro`, `gemini/gemini-2.5-flash`, and more
- Anthropic: `anthropic/claude-4-sonnet-latest`, `anthropic/claude-3-5-haiku-latest`, and more
- AWS Bedrock: `bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0`, `bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0`, and more
- Google Vertex AI: `vertex/gemini-2.5-pro`, `vertex/gemini-2.5-flash`, and more
- xAI: `xai/grok-4`, `xai/grok-3`, `xai/grok-3-mini`, and more
- OpenRouter: `openrouter/deepseek/deepseek-r1-0528:free`, `openrouter/deepseek/deepseek-chat-v3-0324:free`, and more
- Ollama: `ollama/gemma3n`, `ollama/deepseek-r1`, `ollama/qwen3`, and more
Each provider uses standard environment variables for authentication:
- Coding Tools
  - Codex CLI: `OPENAI_API_KEY`
  - Claude Code: `ANTHROPIC_API_KEY` or `CLAUDE_CODE_OAUTH_TOKEN`
  - Gemini CLI: `GEMINI_API_KEY` (or `GOOGLE_GENERATIVE_AI_API_KEY`)
- Planning Models
  - OpenAI: `OPENAI_API_KEY`
  - Anthropic: `ANTHROPIC_API_KEY`
  - Google Gemini: `GEMINI_API_KEY` (or `GOOGLE_GENERATIVE_AI_API_KEY`)
  - Azure OpenAI: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_VERSION`
  - AWS Bedrock: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION` (or `AWS_REGION_NAME`)
  - Google Vertex AI: `GOOGLE_APPLICATION_CREDENTIALS` or default service account
  - xAI: `XAI_API_KEY`
  - OpenRouter: `OPENROUTER_API_KEY`
  - Ollama: `OLLAMA_BASE_URL` (default: `http://localhost:11434`), `OLLAMA_API_KEY` (optional)
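Putting it together, a typical CLI run exports the variables for the providers you use before invoking gen-pr. The sketch below assumes Gemini 2.5 Pro for planning and Claude Code for coding; the variable names come from the list above, and the values are placeholders:

```bash
# Placeholder values: substitute your own credentials for the providers you run.
export GEMINI_API_KEY="<your-gemini-api-key>"        # planning model (Google Gemini)
export ANTHROPIC_API_KEY="<your-anthropic-api-key>"  # coding tool (Claude Code)

npx --yes gen-pr@latest --issue-number 89 --planning-model gemini/gemini-2.5-pro --coding-tool claude-code
```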
Apache License 2.0