# Multi-model AI chat app
## Features

- Multi-model support: Choose between OpenAI (`gpt-4o`, `gpt-4-turbo`, `gpt-3.5-turbo`) and Gemini (`gemini-2.5-flash`, `gemini-2.5-pro`, `gemini-2.0-flash`) models for each chat message.
- Microservice architecture: The backend model service is decoupled from the web frontend using go-micro.
- Modern web UI: Responsive, full-screen chat interface that works on desktop and mobile. Input is auto-focused for fast chatting.
- OpenAI & Gemini integration: The backend calls the OpenAI or Gemini API to generate responses from the selected model.
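Since the model can change per message, the backend has to route each request to the right provider. A minimal sketch of that routing, assuming a simple prefix rule (the helper name `providerFor` is hypothetical, not part of the actual service):

```go
package main

import (
	"fmt"
	"strings"
)

// providerFor maps a requested model name to its backend API.
// The prefix-based rule is an assumption for illustration; the model
// names themselves come from the feature list above.
func providerFor(model string) string {
	if strings.HasPrefix(model, "gemini") {
		return "gemini"
	}
	// gpt-4o, gpt-4-turbo, and gpt-3.5-turbo all route to OpenAI.
	return "openai"
}

func main() {
	for _, m := range []string{"gpt-4o", "gemini-2.5-pro"} {
		fmt.Printf("%s -> %s\n", m, providerFor(m))
	}
}
```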
## Usage

Set your OpenAI API key (for OpenAI models):

```sh
export OPENAI_API_KEY=xxx
```

Set your Gemini API key (for Gemini models):

```sh
export GEMINI_API_KEY=yyy
```
Run it using Micro:

```sh
micro run
```

Then go to http://localhost:8080.
## Structure

- `model/` - Go microservice for model inference (calls the OpenAI or Gemini API)
  - `handler/handler.go` - implements the model handler logic
  - `proto/` - protobuf definitions and generated code
  - `main.go` - service entrypoint
- `web/` - Go web service for the chat UI
  - `main.go` - serves the HTML/JS frontend and proxies chat requests to the model service
## Coming soon

- Chat history
- User login
- So much more