Supports completions, chat, retries, and robust error handling. Built for local dev, CI, and production.
For DeepSeek AI model capabilities, see DeepSeek documentation.
Below are screenshots showing the evolution of the DeepSeek Wrapper web UI and features over time:
Pre-release
Initial UI and feature set before public release.
Tool Status & Caching Panel
Enhanced tool status and caching panel: see per-tool status, cache stats, and manage tool caches directly from the UI.
- Current date & time information
- Multiple formats (ISO, US, EU)
- No external API required
- Tool integration framework
- Built-in tools (Weather, Calculator)
- Custom tool creation system
- Tool status dashboard: visualize tool health, API key status, and cache performance in real time
A modern, session-based chat interface for DeepSeek, built with FastAPI and Jinja2.
To run locally:
```bash
uvicorn src.deepseek_wrapper.web:app --reload
```
Then open http://localhost:8000 in your browser.
Web UI Features:
- Chat with DeepSeek LLM (session-based history)
- Async backend for fast, non-blocking responses
- Reset conversation button
- Timestamps, avatars, and chat bubbles
- Markdown rendering in assistant responses
- Loading indicator while waiting for LLM
- Error banner for API issues
- Tool configuration in settings panel with API key management
For a comprehensive guide to using the web interface, see the Web UI Guide.
Install with:

```bash
pip install -r requirements.txt
pip install -e .  # for local development
```
For detailed installation instructions, see the Getting Started Guide.
Basic usage (sync and async):

```python
import asyncio

from deepseek_wrapper import DeepSeekClient

client = DeepSeekClient()
result = client.generate_text("Hello world!", max_tokens=32)
print(result)

# Async usage
async def main():
    result = await client.async_generate_text("Hello async world!", max_tokens=32)
    print(result)

asyncio.run(main())
```
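The wrapper is described above as supporting retries and robust error handling. Below is a minimal sketch of a defensive call, assuming the package exposes a `DeepSeekError` exception type; that name is an assumption, not confirmed by this README.

```python
from deepseek_wrapper import DeepSeekClient

# Assumption: the wrapper raises a package-level DeepSeekError once its
# internal retries (see max_retries in Configuration) are exhausted.
try:
    from deepseek_wrapper import DeepSeekError  # hypothetical exception name
except ImportError:
    DeepSeekError = Exception  # fall back to a broad catch

client = DeepSeekClient()
try:
    result = client.generate_text("Hello world!", max_tokens=32)
    print(result)
except DeepSeekError as exc:
    # Reaching here means the request failed even after retrying.
    print(f"Request failed after retries: {exc}")
```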
Giving the model real-time awareness:

```python
from deepseek_wrapper import DeepSeekClient
from deepseek_wrapper.utils import get_realtime_info

# Get real-time date information as JSON
realtime_data = get_realtime_info()
print(realtime_data)  # Prints current date in multiple formats

# Create a client with real-time awareness
client = DeepSeekClient()

# Use in a system prompt
system_prompt = f"""You are a helpful assistant with real-time awareness.

Current date and time information:
{realtime_data}
"""

# Send a message with the real-time-aware system prompt
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What's today's date?"},
]
response = client.chat_completion(messages)
print(response)  # Will include the current date
```
Using the built-in tools:

```python
from deepseek_wrapper import DeepSeekClient, DateTimeTool, WeatherTool, CalculatorTool

# Create a client and register tools
client = DeepSeekClient()
client.register_tool(DateTimeTool())
client.register_tool(WeatherTool())
client.register_tool(CalculatorTool())

# Create a conversation
messages = [
    {"role": "user", "content": "What's the weather in London today? Also, what's the square root of 144?"}
]

# Get a response with tool usage
response, tool_usage = client.chat_completion_with_tools(messages)

# Print the final response
print(response)

# See which tools were used
for tool in tool_usage:
    print(f"Used {tool['tool']} with args: {tool['arguments']}")
```
For a complete API reference and advanced usage, see the API Reference.
- Set `DEEPSEEK_API_KEY` in your `.env` or environment
- Optionally set `DEEPSEEK_BASE_URL`, `timeout`, `max_retries`
- See `.env.example`

Default model: `deepseek-chat` (per DeepSeek docs)
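A placeholder `.env` might look like the sketch below. Values are illustrative; `.env.example` in the repo is the authoritative template, and the environment spellings for `timeout` and `max_retries` are assumptions.

```bash
# .env — illustrative placeholders only; copy from .env.example
DEEPSEEK_API_KEY=sk-your-key-here
DEEPSEEK_BASE_URL=https://api.deepseek.com  # optional override
# TIMEOUT=30        # assumed env spelling for the timeout setting
# MAX_RETRIES=3     # assumed env spelling for the retry limit
```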
For deployment options and environment configurations, see the Deployment Guide.
All methods accept extra keyword args for model parameters (e.g., `temperature`, `top_p`, etc.).
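For example (parameter values here are illustrative):

```python
result = client.generate_text(
    "Write a haiku about retries.",
    max_tokens=64,
    temperature=0.7,  # sampling temperature, forwarded to the API
    top_p=0.9,        # nucleus-sampling cutoff, also forwarded
)
```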
Run the tests with coverage:

```bash
pytest --cov=src/deepseek_wrapper
```

- Run `pre-commit install` to enable hooks
Note: The model selection feature is currently under development and is **not yet functional**.
The DeepSeek Wrapper will soon support switching between different DeepSeek models:
- `deepseek-chat`
- `deepseek-coder`
- `deepseek-llm-67b-chat`
- `deepseek-llm-7b-chat`
- `deepseek-reasoner`
When complete, users will be able to:
- Select different models through the settings panel
- See the currently active model in the UI
- Configure model-specific settings, such as extracting only final answers from reasoning models