This sample shows how to deploy an AI-powered GitHub repository chat tool using Mastra, a TypeScript AI framework. Mastra-nextjs lets you chat with and understand any GitHub repository by fetching file trees, file contents, pull requests, and issues, making it easy to navigate codebases of any size.
- Repository Analysis: Enter a GitHub repository URL and instantly start a conversation about it
- Code Exploration: Navigate file trees, view file contents, and understand code structure
- PR & Issue Access: Query information about pull requests and issues directly in chat
- Large Codebase Support: Powered by Google's Gemini Flash model with its large context window
- Intuitive UI: Built with assistant-UI for a seamless chat experience with retries, copy, and message branching
- Download Defang CLI
- (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
- (Optional for local development) Docker CLI
To run the application locally for development, use the development compose file:
```bash
docker compose -f compose.dev.yaml up
```

This will:
- Start PostgreSQL with volume persistence for local development
- Expose PostgreSQL on port 5432 for direct access if needed
- Start the Next.js application on port 3000 with hot reload
You can access mastra-nextjs at http://localhost:3000 once the containers are running.
For this sample, you will need to provide the following configuration. Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the action will automatically deploy them for you.
Your Google Generative AI API key for accessing the Gemini Flash model. You can get one from Google AI Studio.
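For reference, the AI SDK's Google provider reads this key from the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable by default. A minimal sketch of wiring Gemini Flash into a Mastra agent (the model ID and agent options here are illustrative, not necessarily the sample's exact code):

```ts
// Illustrative sketch: giving a Mastra agent a Gemini Flash model via the
// AI SDK Google provider. The provider picks up GOOGLE_GENERATIVE_AI_API_KEY
// from the environment; the model ID and instructions below are assumptions.
import { google } from "@ai-sdk/google";
import { Agent } from "@mastra/core/agent";

export const repoAgent = new Agent({
  name: "repo-chat",
  instructions: "You answer questions about a GitHub repository.",
  model: google("gemini-1.5-flash"),
});
```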
The password for your Postgres database. You need to set this before deploying for the first time.
You can easily set this to a random string using `defang config set POSTGRES_PASSWORD --random`
The PostgreSQL database connection string. This will be automatically configured when using BYOC managed database services. It should look something like this: postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...].
Set to true to enable SSL for the database connection, or false to disable it. (This can be set directly in the Docker Compose file.)
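As an illustration of how these two values fit together (the sample persists conversation history through Mastra's memory package, so its actual wiring may differ), a connection string in that format is typically consumed like this, with SSL toggled by the flag above:

```ts
// Illustrative only: consuming the connection string and SSL flag from the
// environment with the standard node-postgres client. The DB_SSL variable
// name below is a placeholder; the real flag is defined in the Compose file.
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // postgresql://user:password@host:5432/dbname
  ssl: process.env.DB_SSL === "true" ? { rejectUnauthorized: false } : false,
});

export default pool;
```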
A GitHub personal access token used to increase API rate limits when fetching repository data. This is optional but recommended for better performance. A token scoped to public repositories is sufficient, unless you want to chat about private repositories you have access to.
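For context, anonymous GitHub API requests are limited to 60 per hour, while authenticated requests get 5,000 per hour. A minimal sketch of passing the token when fetching repository data (the sample's actual fetching code may differ):

```ts
// Illustrative sketch: passing an optional GITHUB_TOKEN to the GitHub REST API.
// Requests work without a token, but authenticated calls have much higher rate limits.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// List the most recent open pull requests for a repository.
const { data: pulls } = await octokit.rest.pulls.list({
  owner: "DefangLabs",
  repo: "defang",
  state: "open",
  per_page: 10,
});
```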
- Enter a GitHub repository URL in the input field (e.g., https://github.com/DefangLabs/defang)
- Start chatting with mastra-nextjs about the repository
- Use commands like:
- "Show me the file structure"
- "What are the recent pull requests?"
- "Explain the purpose of [filename]"
- "How many open issues are there?"
Mastra-nextjs uses a tool-based approach rather than traditional RAG systems, making it more efficient for large codebases. When you provide a repository URL, Mastra-nextjs uses tools to:
- Fetch the repository's file tree
- Access file contents on demand
- Retrieve information about pull requests and issues
- Store conversation history using Mastra's memory package
The large context window of Gemini Flash allows the agent to keep more of the codebase in context at once, making the conversation more coherent and informed.
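To make the tool-based approach concrete, here is a minimal sketch of what a file-tree tool might look like using Mastra's `createTool` helper with a Zod schema; the id, schema, and fetch logic are illustrative rather than the sample's actual implementation:

```ts
// Illustrative sketch of a Mastra tool that fetches a repository's file tree
// on demand. The id, schema, and return shape are assumptions for illustration.
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const getFileTree = createTool({
  id: "get-file-tree",
  description: "Fetch the file tree of a GitHub repository",
  inputSchema: z.object({
    owner: z.string(),
    repo: z.string(),
    ref: z.string().default("HEAD"),
  }),
  outputSchema: z.object({ paths: z.array(z.string()) }),
  execute: async ({ context }) => {
    const res = await fetch(
      `https://api.github.com/repos/${context.owner}/${context.repo}/git/trees/${context.ref}?recursive=1`,
      {
        headers: process.env.GITHUB_TOKEN
          ? { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` }
          : {},
      },
    );
    const data = (await res.json()) as { tree: { path: string }[] };
    return { paths: data.tree.map((entry) => entry.path) };
  },
});
```

The agent decides when to call tools like this during a conversation, pulling in only the files and metadata it needs instead of embedding the entire repository up front.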
> [!NOTE]
> Download Defang CLI
Deploy your application to the Defang Playground by opening up your terminal and typing:
```bash
defang compose up
```

If you want to deploy to your own cloud account, you can use Defang BYOC.
> [!WARNING]
> Extended deployment time: This sample creates a managed PostgreSQL database which may take upwards of 20 minutes to provision on first deployment. Subsequent deployments are much faster (2-5 minutes).
This sample is based on Mastra's repo-chat sample.
Title: Mastra & Next.js
Short Description: An AI-powered tool for chatting with GitHub repositories using Mastra and Google Gemini.
Tags: AI, GitHub, Mastra, Next.js, PostgreSQL, TypeScript
Languages: TypeScript, JavaScript, Docker