
OpenAI-Compatible Translation provider for XUnity.AutoTranslator

A custom translation provider for XUnity.AutoTranslator.
This project is compatible with the OpenAI API.

Installation

Requirements

Dependencies

You can install the requirements with:

pip3 install -r requirements.txt
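
If you prefer to keep the dependencies isolated, installing into a virtual environment also works (a minimal sketch; the `.venv` directory name is arbitrary):

```bash
# Optional: create and activate a virtual environment, then install the dependencies.
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
```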

Configuration

CLI-configuration

```bash
usage: main.py [-h] [--base-url BASE_URL] [--api-key API_KEY] [--model-name MODEL_NAME] [--temperature TEMPERATURE] [--max-tokens MAX_TOKENS]
            [--frequency-penalty FREQUENCY_PENALTY] [--presence-penalty PRESENCE_PENALTY] [--host HOST] [--port PORT] [--use-history]
            [--max-history MAX_HISTORY] [--use-latest-history] [--db-type DB_TYPE] [--cache-translation] [--use-cached-translation]
            [--use-latest-records] [--init-latest-records INIT_LATEST_RECORDS] [--postgres-host POSTGRES_HOST] [--postgres-port POSTGRES_PORT]
            [--postgres-user POSTGRES_USER] [--postgres-password POSTGRES_PASSWORD] [--postgres-db POSTGRES_DB] [--sqlite-db-path SQLITE_DB_PATH]
            [--log-file LOG_FILE] [--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [--task-template TASK_TEMPLATE] [--specify-language]
            [--language-template LANGUAGE_TEMPLATE] [--src-start SRC_START] [--src-end SRC_END] [--tgt-start TGT_START] [--tgt-end TGT_END]
            [--use-system-prompt] [--system-prompt SYSTEM_PROMPT] [--config CONFIG]

Application Configuration CLI

options:
-h, --help            show this help message and exit
--base-url BASE_URL   Base URL for OpenAI API
--api-key API_KEY     openai
--model-name MODEL_NAME
                        OpenAI model name
--temperature TEMPERATURE
                        Model temperature (randomness control)
--max-tokens MAX_TOKENS
                        Maximum number of tokens to generate
--frequency-penalty FREQUENCY_PENALTY
                        Penalty for repeated tokens
--presence-penalty PRESENCE_PENALTY
                        Penalty for new tokens
--host HOST           Server host address
--port PORT           Server port
--use-history         Enable history usage
--max-history MAX_HISTORY
                        Maximum number of history records
--use-latest-history  Use latest history records
--db-type DB_TYPE     Database type to use
--cache-translation   Enable translation caching
--use-cached-translation
                        Use cached translations if available
--use-latest-records  Use latest database records
--init-latest-records INIT_LATEST_RECORDS
                        Number of initial latest records
--postgres-host POSTGRES_HOST
                        PostgreSQL server host
--postgres-port POSTGRES_PORT
                        PostgreSQL server port
--postgres-user POSTGRES_USER
                        PostgreSQL username
--postgres-password POSTGRES_PASSWORD
                        PostgreSQL password
--postgres-db POSTGRES_DB
                        PostgreSQL database name
--sqlite-db-path SQLITE_DB_PATH
                        Path to the SQLite database file
--log-file LOG_FILE   Log file path
--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        Logging level
--task-template TASK_TEMPLATE
                        Template for the translation task
--specify-language    Specify source and target languages in the prompt
--language-template LANGUAGE_TEMPLATE
                        Template for specifying languages
--src-start SRC_START
                        Start tag for the source language
--src-end SRC_END     End tag for the source language
--tgt-start TGT_START
                        Start tag for the target language
--tgt-end TGT_END     End tag for the target language
--use-system-prompt   Enable system prompt
--system-prompt SYSTEM_PROMPT
                        System prompt to be used
--config CONFIG       Path to the TOML configuration file
```

For Docker, you can override these arguments with environment variables; the available variables are listed in the Dockerfile.

Pre-configuration file

For Docker, set the CONFIG environment variable to the target path and mount the configuration file at that path.
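
A minimal sketch of what such a file might contain, assuming the TOML keys mirror the CLI flag names above (the exact schema is not documented here, so treat the key names as placeholders):

```toml
# Hypothetical config.toml -- key names are assumed to mirror the CLI flags;
# check the repository for the actual schema.
base_url = "https://api.openai.com/v1"
api_key = "api_key_here"
model_name = "gpt-3.5-turbo"
host = "0.0.0.0"
port = 5000
cache_translation = true
use_cached_translation = true
sqlite_db_path = "translation.db"
```

The server would then be started with `python main.py --config config.toml`, or, in Docker, with the CONFIG environment variable pointing at the mounted file.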

Usage

Local Usage

  1. Install the dependencies.
  2. Configure the provider in XUnity.AutoTranslator:
    1. Change the provider to 'CustomTranslate' in the [Service] section.
          [Service]
          Endpoint=CustomTranslate

    2. Set the server URL in the [Custom] section.
          [Custom]
          Url=http://<server_url>:<server_port>/translate

  3. Start main.py. The recommended arguments are:
          python main.py --model-name <model_name_to_use> --base-url <model_url_to_use> --api-key <api_key_to_use> --use-latest-history --cache-translation --use-cached-translation --use-latest-records --specify-language --use-system-prompt

Docker Usage

docker run

If you want to run it with docker run, use the command below.

```bash
docker run -d \
  --name xunity \
  -e BASE_URL="https://api.openai.com/v1" \
  -e API_KEY="api_key_here" \
  -e MODEL_NAME="gpt-3.5-turbo" \
  -e SQLITE_DB_PATH="translation.db" \
  -v ./translation.db:translation.db \
  -p 5000:5000 \
  ghcr.io/returntofirst/xunity-autotranslator-openai:latest
```

This command will serve the API on port 5000.

docker compose

If you want to run this container with Docker Compose, use docker-compose.yaml to configure the container; a sketch is shown below.
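
A minimal sketch of what such a compose file might look like, mirroring the docker run example above (the container-side volume path is an assumption; the repository's own docker-compose.yaml is the reference):

```yaml
# Hypothetical docker-compose.yaml sketch based on the `docker run` example above.
services:
  xunity:
    image: ghcr.io/returntofirst/xunity-autotranslator-openai:latest
    environment:
      BASE_URL: "https://api.openai.com/v1"
      API_KEY: "api_key_here"
      MODEL_NAME: "gpt-3.5-turbo"
      SQLITE_DB_PATH: "translation.db"
    volumes:
      # Container path is assumed; match it to SQLITE_DB_PATH inside the image.
      - ./translation.db:/app/translation.db
    ports:
      - "5000:5000"
```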

Kubernetes

If you want to run this container on Kubernetes, use k8s-example.yaml to configure the deployment; a sketch is shown below.
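
A minimal sketch of a Deployment and Service based on the docker run example above (names, labels, and the Secret reference are illustrative; the repository's k8s-example.yaml is the authoritative file):

```yaml
# Hypothetical Kubernetes manifest sketch; see k8s-example.yaml for the real one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xunity-translator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xunity-translator
  template:
    metadata:
      labels:
        app: xunity-translator
    spec:
      containers:
        - name: translator
          image: ghcr.io/returntofirst/xunity-autotranslator-openai:latest
          ports:
            - containerPort: 5000
          env:
            - name: BASE_URL
              value: "https://api.openai.com/v1"
            - name: MODEL_NAME
              value: "gpt-3.5-turbo"
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: openai-api-key   # create this Secret separately
                  key: api-key
---
apiVersion: v1
kind: Service
metadata:
  name: xunity-translator
spec:
  selector:
    app: xunity-translator
  ports:
    - port: 5000
      targetPort: 5000
```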

Features

Translation Provider

  • CustomTranslate Endpoint: Integrate with XUnity.AutoTranslator using a custom endpoint that leverages OpenAI's translation capabilities.

Advanced Translation Features

  • Translation Caching: Store translated text in a local database to avoid redundant API calls and improve performance.
  • Context-Aware Translation: Use previous and latest chat history to provide more contextually accurate translations.
  • Configurable Prompts: Customize the system and task prompts used in translation requests via configuration files.
  • Supported Languages: Support for multiple source and target languages that LLMs can handle.

API Endpoints

  • Translation: Perform translation using OpenAI's API.

    • Endpoint: /translate
    • Method: GET
    • Parameters: to={tgt_lang}&from={src_lang}&text={src_text}
    • Description: Sends a text translation request to the OpenAI API (example requests are shown after this list).
  • Reset: Reset the chat history to start fresh.

    • Endpoint: /reset
    • Method: GET
    • Description: Clears the current chat history.
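
Example requests against a running server, assuming it listens on port 5000 as in the Docker example (host, port, and language codes are illustrative):

```bash
# Translate a string (URL-encode the text parameter).
curl "http://localhost:5000/translate?to=en&from=ja&text=%E3%81%93%E3%82%93%E3%81%AB%E3%81%A1%E3%81%AF"

# Clear the accumulated chat history.
curl "http://localhost:5000/reset"
```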

TODO

  • Docs
    • Installation
    • Requirements
    • Usage
    • Configuration
  • Refactor
    • Server
    • Logger
    • Error handling
  • UI
    • WebUI
      • Control configs
      • Reset
      • Custom prompts
      • System prompt
      • Task template
      • Translation history
        • Pre-translated dictionary
        • Remove bad translation
  • Deployment
    • Github Actions
    • Docker-compose
    • Kubernetes
