Ollama WebUI

Ollama WebUI is a minimalist, easy-to-use web interface built with FastAPI and Vue.js, designed to interact with local AI models via Ollama.

[Screenshot: Ollama WebUI chat interface]

Features

  • Simple, responsive web interface.
  • System prompts (custom instructions) to define AI behavior.
  • FastAPI backend + Vue.js frontend.
  • Seamless integration with local Ollama models.
  • Docker support for quick deployment.

Installation

1. Clone the repository

git clone https://github.com/wilmerm/ollama-webui.git
cd ollama-webui

2. Backend setup

pip install -r requirements.txt

3. Frontend setup

cd frontend/vue-app
npm install
npm run build

4. Start the application

./start.sh

Running with Docker

docker compose build
docker compose up -d

Environment Variables (.env)

# Required:
DEFAULT_MODEL=llama3.1:latest     # See more models: https://ollama.com/library/
OLLAMA_BASE_URL=http://127.0.0.1:11434

# Optional:
DEFAULT_TIMEOUT=30                # Increase if model is heavy or system resources are limited.
DEFAULT_TEMPERATURE=0.5           # Controls creativity of AI responses.
GUNICORN_WORKERS=1                # Number of Gunicorn workers.
VITE_SERVER_BASE_URL=http://127.0.0.1:7000  # Only set if backend runs on a different URL/port.
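As an illustration of how these variables behave, here is a minimal sketch of reading them with the documented defaults. This is not the project's actual configuration code; the function name and dictionary keys are hypothetical.

```python
# Illustrative only: reads the documented environment variables with
# their documented defaults. DEFAULT_MODEL and OLLAMA_BASE_URL are
# required, so missing values raise a KeyError.
import os

def load_settings() -> dict:
    return {
        "default_model": os.environ["DEFAULT_MODEL"],        # required
        "ollama_base_url": os.environ["OLLAMA_BASE_URL"],    # required
        "default_timeout": int(os.getenv("DEFAULT_TIMEOUT", "30")),
        "default_temperature": float(os.getenv("DEFAULT_TEMPERATURE", "0.5")),
        "gunicorn_workers": int(os.getenv("GUNICORN_WORKERS", "1")),
    }
```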

Ollama Configuration

To allow the backend inside Docker to connect to Ollama, edit the Ollama systemd service file (for example with sudo systemctl edit ollama) and add:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Then reload the unit and restart Ollama so the change takes effect:

sudo systemctl daemon-reload
sudo systemctl restart ollama

Installing Ollama

Download from the official website: ➡️ https://ollama.com/download

Or install via shell:

curl -fsSL https://ollama.com/install.sh | sh

Useful Ollama Commands

Download a model:

ollama pull llama3.1:latest

Run a model:

ollama run llama3.1:latest

Note: You don’t need to run the model manually when using Ollama WebUI. The app will automatically start the model specified in .env if it’s not already running.
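Under the hood, talking to Ollama is a plain HTTP call to its local API. The following sketch shows what a non-streaming request to Ollama's /api/generate endpoint can look like; the endpoint and the model/prompt/stream/options fields are part of Ollama's public API, while the helper names and default values here are illustrative, not the project's actual code.

```python
# Illustrative sketch of calling Ollama's HTTP API with the standard library.
import json
import urllib.request

OLLAMA_BASE_URL = "http://127.0.0.1:11434"

def build_generate_payload(model: str, prompt: str, temperature: float = 0.5) -> dict:
    """Assemble the JSON body for a non-streaming /api/generate call."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    }

def generate(model: str, prompt: str, timeout: int = 30) -> str:
    """POST to Ollama and return the generated text (requires Ollama running)."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```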


System Prompts (Custom Instructions)

Ollama WebUI supports system prompts to customize AI behavior. Click the "⚙️ Instrucciones del Sistema" (System Instructions) section above the chat to:

  • Define custom AI personality and behavior
  • Set context that persists throughout the conversation
  • Use preset examples or create your own instructions
  • Enable/disable as needed during conversations

Example system prompts:

  • "You are an IT security expert. Always answer from a cybersecurity perspective."
  • "Always respond in Spanish and help me learn the language."
  • "You are a helpful assistant who answers concisely and clearly."
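In terms of Ollama's chat API, a system prompt is simply a message with role "system" sent ahead of the user's messages, which is how it persists as context for the whole conversation. A minimal sketch (the message format is Ollama's /api/chat format; the function name is hypothetical, not the project's code):

```python
# Illustrative: prepend an optional system prompt to the message list
# sent to Ollama's /api/chat endpoint.
def build_chat_messages(system_prompt, user_text):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return messages
```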

For detailed documentation, see docs/SYSTEM_PROMPTS.md.


Contributing

Contributions are welcome!

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature/your-feature).
  3. Commit your changes (git commit -m 'Add feature').
  4. Push to your fork (git push origin feature/your-feature).
  5. Open a Pull Request.

License

This project is licensed under the MIT License — see the LICENSE file for details.

Credits

Wilmer Martinez (Author & Maintainer)
