
Async/sync Python client for DeepSeek LLM API. Supports completions, chat, streaming, function calling (tools), retries, and robust error handling. Easy API key management, pluggable logging, type hints, and Pydantic models. Built for scripts, web backends, CI/CD, and production.


DeepSeek Wrapper



A modern, async/sync Python client for the DeepSeek LLM API.

Supports completions, chat, streaming, function calling (tools), retries, and robust error handling. Built for local dev, CI, and production.

DeepSeek Wrapper UI

Documentation

  • Getting Started: quick setup guide
  • Web UI Guide: using the web interface
  • Features: detailed feature list
  • API Reference: API documentation for developers
  • Deployment Guide: deployment options and configurations
  • FAQ: frequently asked questions

For DeepSeek AI model capabilities, see DeepSeek documentation.

Version Progress

Below are screenshots showing the evolution of the DeepSeek Wrapper web UI and features over time:

Pre-release

Pre-release UI

Initial UI and feature set before public release.

Tool Status & Caching Panel

Tool status and caching panel

Enhanced tool status and caching panel: see per-tool status, cache stats, and manage tool caches directly from the UI.

Features

Modern API

  • Sync & async support
  • Type hints throughout
  • Clean error handling

Advanced Web UI

  • Session-based chat history
  • Markdown rendering
  • File uploads & processing

Real-Time Awareness

  • Current date & time information
  • Multiple formats (ISO, US, EU)
  • No external API required

Production Ready

  • Automatic retries with backoff
  • 100% test coverage
  • Environment variable config

Function Calling

  • Tool integration framework
  • Built-in tools (Weather, Calculator)
  • Custom tool creation system
  • Tool status dashboard: visualize tool health, API key status, and cache performance in real time

API Key Management

  • Integrated settings panel
  • Secure API key storage in .env
  • Tool configuration UI

Coming Soon

  • Model selection (in development)
  • Custom model parameters
  • Model-specific optimizations

Web UI (FastAPI)

A modern, session-based chat interface for DeepSeek, built with FastAPI and Jinja2.

To run locally:

uvicorn src.deepseek_wrapper.web:app --reload

Then open http://localhost:8000 in your browser.

Web UI Features:

  • Chat with DeepSeek LLM (session-based history)
  • Async backend for fast, non-blocking responses
  • Reset conversation button
  • Timestamps, avatars, and chat bubbles
  • Markdown rendering in assistant responses
  • Loading indicator while waiting for LLM
  • Error banner for API issues
  • Tool configuration in settings panel with API key management

For a comprehensive guide to using the web interface, see the Web UI Guide.

Installation

pip install -r requirements.txt
pip install -e .  # for local development

For detailed installation instructions, see the Getting Started Guide.

Usage (Python)

from deepseek_wrapper import DeepSeekClient
client = DeepSeekClient()
result = client.generate_text("Hello world!", max_tokens=32)
print(result)

# Async usage
import asyncio
async def main():
    result = await client.async_generate_text("Hello async world!", max_tokens=32)
    print(result)
# asyncio.run(main())
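Streaming is also supported. The sketch below illustrates only the consumption pattern (iterate chunks, render incrementally, accumulate the full text); the stream here is a stand-in generator, since the wrapper's exact streaming method name and signature are not shown in this README:

```python
# Illustration of consuming a streamed completion. The chunk source is a
# stand-in, not the wrapper's confirmed streaming API.

def consume_stream(chunks):
    """Accumulate streamed text chunks into the full response."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)               # each chunk is a piece of generated text
        print(chunk, end="", flush=True)  # render incrementally as it arrives
    return "".join(parts)

# Stand-in for whatever iterable a streaming call would yield:
fake_stream = iter(["Hel", "lo, ", "world!"])
full_text = consume_stream(fake_stream)  # -> "Hello, world!"
```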

Real-Time Date Awareness

from deepseek_wrapper import DeepSeekClient
from deepseek_wrapper.utils import get_realtime_info

# Get real-time date information as JSON
realtime_data = get_realtime_info()
print(realtime_data)  # Prints current date in multiple formats

# Create a client with real-time awareness
client = DeepSeekClient()

# Use in a system prompt
system_prompt = f"""You are a helpful assistant with real-time awareness.
Current date and time information:
{realtime_data}
"""

# Send a message with the real-time-aware system prompt
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What's today's date?"}
]

response = client.chat_completion(messages)
print(response)  # Will include the current date

Function Calling with Tools

from deepseek_wrapper import DeepSeekClient, DateTimeTool, WeatherTool, CalculatorTool

# Create a client and register tools
client = DeepSeekClient()
client.register_tool(DateTimeTool())
client.register_tool(WeatherTool())
client.register_tool(CalculatorTool())

# Create a conversation
messages = [
    {"role": "user", "content": "What's the weather in London today? Also, what's the square root of 144?"}
]

# Get a response with tool usage
response, tool_usage = client.chat_completion_with_tools(messages)

# Print the final response
print(response)

# See which tools were used
for tool in tool_usage:
    print(f"Used {tool['tool']} with args: {tool['arguments']}")
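The wrapper also supports custom tools. As a hedged sketch only (the attribute names and `run()` signature below mirror common function-calling conventions; the actual `Tool` base class and registration hooks in `deepseek_wrapper` may differ), a custom tool typically needs a name, a description, a JSON-schema parameter spec, and an execution method:

```python
# Hypothetical custom tool. Attribute names (name, description, parameters)
# and the run() signature are illustrative assumptions, not the wrapper's
# confirmed interface.

class StringReverseTool:
    name = "reverse_string"
    description = "Reverses the characters of a string."
    parameters = {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "Text to reverse"},
        },
        "required": ["text"],
    }

    def run(self, text: str) -> str:
        return text[::-1]

tool = StringReverseTool()
print(tool.run("DeepSeek"))  # -> "keeSpeeD"
```

A tool like this would then be passed to `client.register_tool(...)` as shown above.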

For a complete API reference and advanced usage, see the API Reference.

Configuration

  • Set DEEPSEEK_API_KEY in your .env or environment
  • Optionally set DEEPSEEK_BASE_URL, timeout, max_retries
  • See .env.example
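A minimal `.env` might look like the following (the base URL value is illustrative; treat `.env.example` in the repo as the authoritative template):

```shell
DEEPSEEK_API_KEY=your-api-key-here
# Optional override; defaults to the DeepSeek API endpoint
DEEPSEEK_BASE_URL=https://api.deepseek.com
```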

Default model: deepseek-chat (per DeepSeek docs)

For deployment options and environment configurations, see the Deployment Guide.
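The `max_retries` setting mentioned above typically drives a retry-with-exponential-backoff loop. This is a self-contained sketch of that general pattern, not the wrapper's actual implementation:

```python
# Generic retry-with-exponential-backoff pattern (illustrative only).

import time

def with_retries(fn, max_retries=3, base_delay=0.5):
    """Call fn(); on failure, sleep base_delay * 2**attempt and retry."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Usage: a flaky call that succeeds on its third invocation.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # -> "ok"
```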

API Reference

All methods accept extra keyword arguments for model parameters (e.g., temperature, top_p).

Testing

pytest --cov=src/deepseek_wrapper

Contributing

  • Run pre-commit install to enable hooks

Links

  • License
  • Contributing
Model Selection (Coming Soon)

Note: the model selection feature is currently under development and is not yet functional.

The DeepSeek Wrapper will soon support switching between different DeepSeek models:

  • deepseek-chat
  • deepseek-coder
  • deepseek-llm-67b-chat
  • deepseek-llm-7b-chat
  • deepseek-reasoner

When complete, users will be able to:

  1. Select different models through the settings panel
  2. See the currently active model in the UI
  3. Configure model-specific settings, such as extracting only final answers from reasoning models
