Welcome to the Function Calling Examples repository! This collection of Python scripts demonstrates the power and flexibility of the Chat Completions API, specifically focusing on function calling capabilities. Whether you're an AI enthusiast or an engineer, these examples will help you understand and leverage function calling in your projects.
Function calling allows you to extend the capabilities of language models by integrating custom functions. The model can choose to call these functions based on user input, enabling dynamic and context-aware interactions. This is particularly useful for tasks like retrieving real-time data, performing calculations, or interacting with external APIs.
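To make this concrete, a tool is described to the model as a JSON schema. Here is a minimal sketch using the `openai` Python package; the schema is illustrative (the `get_current_weather` name mirrors the first example script, but the exact fields are an assumption, not the repository's code):

```python
# A minimal, illustrative tool definition for the Chat Completions API.
# The name mirrors func_get_weather.py; the exact schema there may differ.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. 'San Francisco'",
                    }
                },
                "required": ["location"],
            },
        },
    }
]
```

The model never executes anything itself; it only returns the name and arguments of the tool it wants called, and your code does the rest.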
- Parallel and Sequential Function Calling: Learn how to call multiple functions either simultaneously or in a specific order.
- Generating Prompt Suggestions: See how the model can suggest prompts based on conversation history.
- Conversation Summarization: Automatically summarize chat history to maintain context.
- Timed Activation of Assistant Behavior: Implement functions that trigger actions at specific intervals.
- Asynchronous Programming and Streaming Responses: Handle complex interactions with asynchronous function calls and real-time streaming responses.
- JSON Mode: Utilize JSON Mode for structured outputs and seamless integration with Pydantic classes (a short sketch follows this list).
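As a taste of the JSON Mode item above, here is a minimal sketch using the `openai` Python package and Pydantic. The `PromptSuggestions` class and the prompts are hypothetical, invented for this illustration; see `func_conversation_history.py` and `func_structured_outputs.py` for the real implementations:

```python
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical Pydantic class, used only for this illustration.
class PromptSuggestions(BaseModel):
    suggestions: list[str]

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # enables JSON Mode
    messages=[
        # JSON Mode requires mentioning JSON somewhere in the prompt.
        {"role": "system",
         "content": "Return a JSON object with a 'suggestions' array of strings."},
        {"role": "user",
         "content": "Suggest three follow-up prompts about the weather."},
    ],
)

# JSON Mode guarantees syntactically valid JSON, so parsing is safe;
# Pydantic then validates it against the expected shape.
parsed = PromptSuggestions.model_validate_json(response.choices[0].message.content)
print(parsed.suggestions)
```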
If you are unfamiliar with function calling, here are some resources to get you started:
- Prompting Guide / function_calling
- OpenAI / function-calling
- Azure OpenAI / function-calling
- Ollama / functions
- OpenAI / structured outputs
The steps in function calling, as represented by numbers 1 through 5 in the image, are as follows:
1. Tool Definitions + Messages: The developer defines the tool (e.g., `get_weather(location)`) and sends a message to the model, such as "What's the weather in Paris?"
2. Tool Calls: The model identifies the appropriate tool to call and generates a function call, such as `get_weather("paris")`.
3. Execute Function Code: The developer executes the function code (e.g., `get_weather("paris")`) and retrieves the result, such as `{"temperature": 14}`.
4. Results: The result of the function execution (`{"temperature": 14}`) is sent back to the model, along with all prior messages for context.
5. Final Response: The model incorporates the function result into its final response, such as "It's currently 14°C in Paris."
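Putting the five steps together, here is a minimal, hedged sketch of the full round trip with the `openai` Python package; `get_weather` and its canned return value are stand-ins for real code. Because the steps run in a loop, the same pattern handles parallel calls (several entries in `tool_calls` at once) and sequential calls (a new round of tool calls after results come back):

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_weather(location: str) -> dict:
    """Stand-in implementation; a real script would query a weather API."""
    return {"temperature": 14}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

# Step 1: tool definitions + the user's message.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    message = response.choices[0].message
    if not message.tool_calls:       # Step 5: no more tools wanted -> final answer
        print(message.content)       # e.g. "It's currently 14°C in Paris."
        break
    messages.append(message)         # keep the assistant's tool-call turn (Step 2)
    for call in message.tool_calls:  # Steps 3-4: execute each call, return results
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
```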
- `func_get_weather.py`: (Start here!) A simple program with a single native function, `get_current_weather`, that the model is made aware of. Given the user's input, the model tells us to call the function/tool; our code invokes the function and adds its response back to the model, supplying additional context. Finally, the assistant responds to the user with the temperature in San Francisco, Tokyo, and Paris. This example also utilizes parallel function calling.
- `func_get_weather_streaming.py`: An example of how to stream the response from the model while also checking whether the model wants to make a function/tool call. It extends the `func_get_weather` example; see the streaming sketch after this list.
- `func_conversation_history.py`: A simple program showcasing semantic functionality for 1) summarizing conversation history and 2) providing prompt suggestions based on conversation history. It also shows how to utilize JSON Mode.
- `func_sequential_calls.py`: An example of sequential function calling. In certain scenarios, achieving the desired output requires calling multiple functions in a specific order, where the output of one function becomes the input to another. By giving the model adequate tools, context, and instructions, it can achieve complex operations by breaking them down into smaller, more manageable steps.
- `func_timing_count_chat.py`: Shows how to do 'X' every 'frequency', and how to manage state outside the conversation. A function increments a counter via function calling, counting user inputs before the assistant says something specific to the user. It also shows how to do something once a week by checking whether a week has passed and then editing the system prompt.
- `func_structured_outputs.py`: Demonstrates how to parse raw text into structured JSON using GPT-4o. It includes Pydantic classes for defining the structure and prints the parsed menu in a formatted way.
- `func_async_streaming_chat.py`: An example script demonstrating asynchronous client calls and streaming responses within a chat loop. It supports function calling, enabling dynamic and interactive conversations, and provides a practical example of managing complex interactions in a chat-based interface.
- `func_async_streaming_chat_server.py`: (Most complicated) An extension of the `func_async_streaming_chat` script. It handles asynchronous client calls, function calling, and streaming responses within a chat loop, and also demonstrates how to format and handle server-client payloads effectively, ensuring proper communication between server and client.
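The trickiest part of the streaming example is that a tool call arrives in fragments: the function name and arguments are spread across many chunks and must be accumulated before anything can be executed. Below is a minimal, hedged sketch of that common accumulation pattern; the repository's actual implementation may differ in detail:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{  # same illustrative tool as in the earlier sketches
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)

tool_calls = {}  # index -> accumulated {id, name, arguments}
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:                    # plain text: print as it arrives
        print(delta.content, end="", flush=True)
    for part in delta.tool_calls or []:  # tool-call fragments: accumulate
        call = tool_calls.setdefault(
            part.index, {"id": "", "name": "", "arguments": ""}
        )
        if part.id:
            call["id"] = part.id
        if part.function and part.function.name:
            call["name"] += part.function.name
        if part.function and part.function.arguments:
            call["arguments"] += part.function.arguments

for call in tool_calls.values():         # now complete; safe to parse and execute
    print(call["name"], json.loads(call["arguments"]))
```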
To use this project, follow these steps:
- Clone the Repository:
  `git clone <repository-url>`
- Navigate to the Project Directory:
  `cd <project-directory>`
- Set Up a Python Virtual Environment and Activate It:
  `python3 -m venv env`
  `source env/bin/activate`
- Install the Required Packages:
  `pip install -r requirements.txt`
- Copy the `.env.sample` File to a New File Called `.env`:
  `cp .env.sample .env`
- Configure the Environment Settings:
  - For Azure OpenAI:
    ```
    API_HOST=azure
    AZURE_OPENAI_ENDPOINT=https://<YOUR-AZURE-OPENAI-SERVICE-NAME>.openai.azure.com
    AZURE_OPENAI_API_KEY=<YOUR-AZURE-OPENAI-API-KEY>
    AZURE_OPENAI_API_VERSION=2024-08-01-preview
    AZURE_OPENAI_DEPLOYMENT_NAME=<YOUR-AZURE-DEPLOYMENT-NAME>
    ```
  - For OpenAI.com:
    ```
    API_HOST=openai
    OPENAI_KEY=<YOUR-OPENAI-API-KEY>
    OPENAI_MODEL=gpt-4
    ```
  - For Ollama:
    ```
    API_HOST=ollama
    OLLAMA_ENDPOINT=http://localhost:11434/v1
    OLLAMA_MODEL=llama2
    ```
- Run the Project:
  `python <program.py>`
- Navigate to the Parent of the Project Directory:
  `cd ..`
- Open in VS Code:
  `code <project-folder-name>`
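For orientation, here is one way the scripts might consume these settings, assuming `python-dotenv` plus the `openai` package's OpenAI-compatible clients; the repository's actual wiring may differ:

```python
import os
from dotenv import load_dotenv
from openai import AzureOpenAI, OpenAI

load_dotenv()  # reads the .env file created above

API_HOST = os.getenv("API_HOST", "openai")

if API_HOST == "azure":
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    )
    model = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]
elif API_HOST == "ollama":
    # Ollama exposes an OpenAI-compatible endpoint; a key is required but unused.
    client = OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="none")
    model = os.environ["OLLAMA_MODEL"]
else:
    client = OpenAI(api_key=os.environ["OPENAI_KEY"])
    model = os.environ["OPENAI_MODEL"]

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```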
Contributions are welcome! If you would like to contribute to this project, please follow these guidelines:
- Fork the Repository
- Create a New Branch:
  `git checkout -b <branch-name>`
- Make Your Changes and Commit Them:
  `git commit -m 'Add some feature'`
- Push to the Branch:
  `git push origin <branch-name>`
- Submit a Pull Request
This project is licensed under the MIT License.