A comprehensive Python toolkit for developers building AI-powered applications with OpenAI, ChatGPT, and other Large Language Models (LLMs). This package provides essential infrastructure for OpenAI function calling, conversation history management, token usage tracking, and AI service integration to accelerate your AI development workflow.
- 🔧 OpenAI Function Calling: Seamlessly register, validate, and execute AI tools with OpenAI's function calling API
- 🧠 LLM Integration: Connect with OpenAI GPT models and other AI services with robust error handling
- 📊 Conversation History: Track and manage multi-turn conversations with proper tool call handling
- 💰 Token Usage Tracking: Monitor and analyze token consumption with flexible billing integration
- 📝 Structured Logging: Comprehensive logging system for debugging and monitoring AI interactions
- 🔄 Development Mode: Rapid iteration with auto-reloading development server
- 🤖 Telegram Bot Example: Production-ready reference implementation showing real-world application
The project is organized as a proper Python package for easy installation and reuse:
ai_tools_core/ # Core package
├── __init__.py # Public API exports
├── tools.py # Tool registry implementation
├── logger.py # Logging utilities
├── cli/ # Command-line interface
├── history/ # Conversation history management
├── services/ # AI service integrations
└── utils/ # Utility functions
Once published, you can install the package directly from PyPI:
# Basic installation
pip install ai-tools-core
# With development dependencies
pip install ai-tools-core[dev]
# With Telegram bot integration
pip install ai-tools-core[telegram]

You can also install the package directly from the repository:
pip install -e .

Or with extra dependencies:
# For development
pip install -e ".[dev]"
# For Telegram bot integration
pip install -e ".[telegram]"Building applications with OpenAI and other LLMs presents unique challenges that this toolkit solves:
Building applications with OpenAI and other LLMs presents unique challenges that this toolkit solves:

- OpenAI Function Calling Made Easy: Simplifies the complex process of implementing OpenAI's function calling API
- Token Efficiency: Optimized conversation management to reduce token usage and costs
- Production-Ready Architecture: Battle-tested components used in real-world applications
- Flexible Integration: Works with multiple AI providers (OpenAI, Anthropic, etc.) through a unified interface
- Modular Design: Use only the components you need for your specific application
- Best Practices Built-In: Implements industry standards for AI safety, error handling, and performance
- Developer Experience: Rapid development with hot-reloading and comprehensive debugging tools
- Python 3.8+
- OpenAI API Key
- Telegram Bot Token (optional, only for the bot example)
After installation, you can import and use the package in your Python code:
# Import core components
from ai_tools_core import ToolRegistry, get_logger
from ai_tools_core.services import get_openai_service
from ai_tools_core.history import get_history_manager, MessageRole
# Create a tool registry
registry = ToolRegistry()
# Register a tool
@registry.register()
def hello_world(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"
# Use the OpenAI service
openai_service = get_openai_service()
response = openai_service.generate_response([
    {"role": "user", "content": "Tell me a joke"}
])
print(response)
To set up a development environment from the repository:

- Clone the repository:

  git clone https://github.com/yourusername/ai-tools-playground.git
  cd ai-tools-playground

- Create a virtual environment:

  python -m venv .venv

- Activate the virtual environment:

  - Windows: .venv\Scripts\activate
  - macOS/Linux: source .venv/bin/activate

- Install dependencies:

  pip install -r requirements.txt

- Create a .env file with your configuration (see the sketch after this list for reading these values from Python):

  OPENAI_API_KEY=your_openai_api_key
  TELEGRAM_BOT_TOKEN=your_telegram_bot_token
  LOG_LEVEL=INFO

- Run the application:

  python src/main.py
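The values in .env need to reach the process environment; below is a minimal sketch of loading and reading them yourself with the third-party python-dotenv package (an assumption, since the toolkit may already load the file internally):

# Minimal sketch: load .env into the process environment and read the values.
# Assumes the python-dotenv package; the toolkit itself may handle this for you.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_api_key = os.getenv("OPENAI_API_KEY")
telegram_bot_token = os.getenv("TELEGRAM_BOT_TOKEN")  # optional, bot example only
log_level = os.getenv("LOG_LEVEL", "INFO")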
This toolkit is designed to be modular and flexible. Here are some ways to implement OpenAI function calling and build AI-powered applications:
from ai_tools_core import ToolRegistry, log_tool_execution
# Create a tool registry for OpenAI function calling
registry = ToolRegistry()
# Register a tool with the decorator pattern
@registry.register()
def get_weather(location: str, unit: str = "fahrenheit") -> str:
    """Get current weather information for a specific location.

    Args:
        location: City name or geographic location
        unit: Temperature unit (celsius or fahrenheit)

    Returns:
        Weather information including temperature and conditions
    """
    # In a real implementation, you would call a weather API
    return f"Weather for {location}: Sunny, 75°F"
# Execute a tool directly
result = registry.execute_tool("get_weather", location="New York")
print(result) # Output: Weather for New York: Sunny, 75°F
# Get OpenAI-compatible function schemas for ChatGPT API
schemas = registry.get_openai_schemas()
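Those schemas can be handed straight to the official openai Python client (v1+), and any resulting tool call routed back through the registry. This is a sketch: the gpt-4o-mini model name is an arbitrary choice, and it assumes get_openai_schemas() returns entries in the format expected by the tools parameter.

# Sketch: send the registry's schemas to the Chat Completions API and
# dispatch any tool call back to the registry.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any tool-capable model works here
    messages=[{"role": "user", "content": "What's the weather in New York?"}],
    tools=schemas,
)

message = completion.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    arguments = json.loads(call.function.arguments)
    result = registry.execute_tool(call.function.name, **arguments)
    print(result)  # e.g. "Weather for New York: Sunny, 75°F"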
The toolkit includes a conversation history manager that properly handles tool calls:

from ai_tools_core.history import get_history_manager, MessageRole
# Get the history manager
history_manager = get_history_manager()
# Create a new conversation
user_id = "user123"
conversation_id = history_manager.create_conversation(user_id)
# Add messages to the conversation
history_manager.add_message(conversation_id, MessageRole.SYSTEM,
                            "You are a helpful assistant.")
history_manager.add_message(conversation_id, MessageRole.USER,
                            "What's the weather in New York?")
# Format messages for OpenAI
from ai_tools_core.history import create_message_formatter
formatter = create_message_formatter("openai")
# Retrieve the stored conversation and format it for the OpenAI API.
# (get_conversation is an assumed accessor; use whatever lookup your history manager exposes.)
conversation = history_manager.get_conversation(conversation_id)
openai_messages = formatter.format_messages(conversation)
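Putting those pieces together, a single conversation turn might look like the sketch below; MessageRole.ASSISTANT is assumed to exist alongside SYSTEM and USER, and the reply is stored back into the same conversation.

# Sketch of one conversation turn: call the model with the formatted history,
# then record the assistant's reply. MessageRole.ASSISTANT is an assumption.
from ai_tools_core.services import get_openai_service

openai_service = get_openai_service()
reply = openai_service.generate_response(openai_messages)

history_manager.add_message(conversation_id, MessageRole.ASSISTANT, reply)
print(reply)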
Token usage can be tracked automatically on every call to the OpenAI service:

from ai_tools_core.services import get_openai_service, get_tool_service
from ai_tools_core.usage import InMemoryUsageTracker
# Create a usage tracker to monitor token consumption
usage_tracker = InMemoryUsageTracker()
# Get the OpenAI service with token tracking
openai_service = get_openai_service(usage_tracker=usage_tracker)
# Generate a response with GPT-4o or other models
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke about programming."}
]
response = openai_service.generate_response(
    messages,
    user_id="user123",  # Optional tracking identifier
    session_id="session456"  # Optional session tracking
)
print(response)
# Process messages with OpenAI function calling
tool_service = get_tool_service()
tools = registry.get_openai_schemas()
response = tool_service.process_with_tools(messages, tools)
# Get token usage statistics
usage_stats = usage_tracker.get_current_usage()
print(f"Total tokens used: {usage_stats['total_tokens']}")The toolkit includes a reference implementation of a Telegram bot that demonstrates how to use all the components together:
The toolkit includes a reference implementation of a Telegram bot that demonstrates how to use all the components together:

from ai_tools_core import ToolRegistry
from ai_tools_core.services import get_openai_service
from bot.telegram_bot import create_bot
# Create your tools
registry = ToolRegistry()
@registry.register()
def hello_world(name: str) -> str:
    return f"Hello, {name}!"
# Create the bot with your tools
bot = create_bot(registry)
bot.run()
For faster development iterations:

python dev.py

This starts the server with hot-reload capability, automatically restarting when you make changes to the code.
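If you want the same behaviour in your own scripts, one common approach is the third-party watchfiles package; the sketch below is an illustration only, not necessarily how dev.py is implemented.

# Sketch of a dev.py-style hot-reload runner built on `watchfiles`.
import runpy

from watchfiles import run_process


def start() -> None:
    # Re-run the application entry point on every restart.
    runpy.run_path("src/main.py", run_name="__main__")


if __name__ == "__main__":
    # Restart start() whenever a file under src/ changes.
    run_process("src", target=start)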
ai-tools-core/
├── .env                    # Environment variables (not in repo)
├── .env.example            # Example environment variables
├── README.md               # Project documentation
├── progress.md             # Project progress tracking
├── pyproject.toml          # Package configuration
├── setup.py                # Package setup script
└── src/
    ├── ai_tools_core/      # Core package
    │   ├── __init__.py     # Package exports
    │   ├── tools.py        # Tool registry implementation
    │   ├── logger.py       # Logging utilities
    │   ├── cli/            # Command-line interface
    │   ├── history/        # Conversation history management
    │   ├── services/       # AI service integrations
    │   └── utils/          # Utility functions
    ├── bot/                # Example Telegram bot
    │   ├── telegram_bot.py # Bot implementation
    │   └── handlers.py     # Message handlers
    └── main.py             # Example application entry point

AI Tools Core includes a flexible system for tracking token usage and integrating with your own billing systems:
from ai_tools_core.usage import UsageTracker, UsageEvent
from ai_tools_core.services import get_openai_service
# Create a custom usage tracker for your billing system
class MyBillingTracker(UsageTracker):
    def track_usage(self, event: UsageEvent) -> None:
        # Log the usage event
        print(f"Model: {event.model}, Tokens: {event.input_tokens + event.output_tokens}")
        print(f"Cost estimate: ${self.calculate_cost(event)}")
        # In a real implementation, you would store this in a database
        # or send it to your billing service

    def calculate_cost(self, event: UsageEvent) -> float:
        # Example pricing (adjust based on actual OpenAI pricing)
        rates = {
            "gpt-4o": {"input": 0.00001, "output": 0.00003},
            "gpt-4o-mini": {"input": 0.000005, "output": 0.000015},
        }
        model_rates = rates.get(event.model, rates["gpt-4o-mini"])
        input_cost = event.input_tokens * model_rates["input"]
        output_cost = event.output_tokens * model_rates["output"]
        return input_cost + output_cost

    def get_current_usage(self, **kwargs) -> dict:
        # Return usage statistics
        return {"total_tokens": 1000, "estimated_cost": 0.02}
# Use your custom tracker with the OpenAI service
tracker = MyBillingTracker()
service = get_openai_service(usage_tracker=tracker)
# Now all API calls will be tracked through your billing system
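For a quick offline check of a custom tracker, you can feed it a hand-built event. The UsageEvent constructor arguments below are hypothetical, inferred only from the attributes used in the class above; the real class may require more fields.

# Hypothetical smoke test -- adjust the fields to UsageEvent's actual signature.
event = UsageEvent(model="gpt-4o-mini", input_tokens=120, output_tokens=80)
tracker.track_usage(event)
print(tracker.get_current_usage())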
- OpenAI Function Calling Documentation
- OpenAI API Reference
- OpenAI Tokenizer
- Telegram Bot API
- Python Telegram Bot Library
Make sure the package is properly installed. Try reinstalling with:
pip uninstall ai-tools-core
pip install ai-tools-core

If you encounter errors related to the OpenAI API key:
- Check that your API key is correctly set in the .env file
- Verify that your API key has sufficient credits
- Ensure you're using the correct environment variable name: OPENAI_API_KEY
If tools are failing to execute:
- Check the logs for detailed error messages (see the logging sketch after this list)
- Verify that tool parameters match the expected types
- Ensure the tool is properly registered in the registry
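A minimal sketch for surfacing those errors while debugging: get_logger is part of the package's public exports, but passing it a module name, and raising LOG_LEVEL to DEBUG in .env, are assumptions.

# Run a tool directly and log the full traceback if it fails.
from ai_tools_core import ToolRegistry, get_logger

logger = get_logger(__name__)  # assumed to accept a name like logging.getLogger
registry = ToolRegistry()


@registry.register()
def echo(text: str) -> str:
    """Return the text unchanged."""
    return text


try:
    registry.execute_tool("echo", text="ping")
except Exception:
    # With LOG_LEVEL=DEBUG in .env, the log output should show which
    # parameter or registration step failed.
    logger.exception("Tool execution failed")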
If you encounter issues with the ai-tools command:
- Make sure the package is installed in the active Python environment
- Try reinstalling with pip install -e . from the repository root
- Verify that your PATH includes the Python scripts directory
MIT
- OpenAI function calling Python
- ChatGPT API toolkit
- GPT-4 function calling implementation
- LLM application framework
- AI conversation management
- OpenAI token usage tracking
- AI tools registry Python
- Telegram ChatGPT bot example
- OpenAI API Python wrapper
- AI development toolkit
⭐ If you find this project helpful, please star it on GitHub! ⭐