fix: preserve reasoning_effort parameter for GPT-5 models #241
Description
Fixes #237 - GPT-5-mini reasoning_effort Parameter Issue
Problem
When using the `reasoning_effort` parameter with GPT-5 models, the following error occurred:

```
Exception: OpenAI API error: Completions.create() got an unexpected keyword argument 'reasoning'
```
This prevented users from utilizing GPT-5's reasoning capabilities through LangExtract, making the library incompatible with OpenAI's latest models.
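For context, a call along these lines is what would surface the error before this fix (a hypothetical usage sketch; the `language_model_params` keyword and the other argument names are illustrative assumptions, not confirmed API):

```python
import langextract as lx

# Hypothetical repro sketch: the exact lx.extract() signature and the
# language_model_params name are assumptions for illustration only.
result = lx.extract(
    text_or_documents="Patient reports mild headache and nausea.",
    prompt_description="Extract each symptom mentioned in the text.",
    examples=[],  # real usage would supply few-shot example objects
    model_id="gpt-5-mini",
    language_model_params={"reasoning_effort": "minimal"},  # raised the error above
)
```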
Root Cause
The `_normalize_reasoning_params()` method in `src/langextract/providers/openai.py` incorrectly transformed the parameter from:

```python
{"reasoning_effort": "minimal"}  # ✅ Expected by OpenAI API
```

into:

```python
{"reasoning": {"effort": "minimal"}}  # ❌ Invalid nested structure
```

This transformation caused OpenAI's API to reject the request with an "unexpected keyword argument" error.
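At the SDK level, the difference between the two shapes looks roughly like this (sketch only; the flat `reasoning_effort` keyword is what the Chat Completions endpoint accepts, while the nested `reasoning` object is the shape used by the separate Responses API):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Before the fix, the provider effectively sent the nested shape, which
# Chat Completions rejects with "unexpected keyword argument 'reasoning'":
#   client.chat.completions.create(
#       model="gpt-5-mini",
#       messages=[{"role": "user", "content": "..."}],
#       reasoning={"effort": "minimal"},  # ❌ nested, Responses-API style
#   )

# What Chat Completions expects is the flat keyword:
response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Say hello in one word."}],
    reasoning_effort="minimal",  # ✅ top-level parameter
)
print(response.choices[0].message.content)
```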
Solution
- Updated `_normalize_reasoning_params()` to detect GPT-5 models and preserve `reasoning_effort` as a top-level parameter (see the sketch after this list)
- Supports the `gpt-5`, `gpt-5-mini`, and `gpt-5-nano` variants
- Added `verbosity` parameter support for GPT-5 models
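A minimal sketch of the detection logic described above (illustrative only; the real method in `openai.py` belongs to the provider class and its signature may differ):

```python
def _normalize_reasoning_params(model_id: str, params: dict) -> dict:
    """Sketch: keep reasoning_effort/verbosity flat for GPT-5, drop them otherwise."""
    normalized = dict(params)
    # GPT-5 family: gpt-5, gpt-5-mini, gpt-5-nano.
    is_gpt5 = model_id.startswith("gpt-5")
    if not is_gpt5:
        # Non-GPT-5 chat models do not accept these keywords at all.
        normalized.pop("reasoning_effort", None)
        normalized.pop("verbosity", None)
    # For GPT-5 models the parameters pass through unchanged as top-level
    # Chat Completions keyword arguments (no nested {"reasoning": {...}}).
    return normalized


# Example:
print(_normalize_reasoning_params("gpt-5-mini", {"reasoning_effort": "minimal"}))
# -> {'reasoning_effort': 'minimal'}
print(_normalize_reasoning_params("gpt-4o", {"reasoning_effort": "minimal"}))
# -> {}
```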
Technical Changes

File: `src/langextract/providers/openai.py`

- Updated `_normalize_reasoning_params()` method with GPT-5 model detection
- Updated `_process_single_prompt()` to pass through `reasoning_effort` and `verbosity`
- Updated `infer()` method to collect GPT-5-specific parameters
File: `tests/test_gpt5_reasoning_fix.py` (new)

Impact
How Has This Been Tested?
Unit Testing
```
$ python -m pytest tests/test_gpt5_reasoning_fix.py -v
platform win32 -- Python 3.12.10, pytest-8.4.2, pluggy-1.6.0
collected 4 items

tests/test_gpt5_reasoning_fix.py::TestGPT5ReasoningEffort::test_gpt5_reasoning_effort_preserved PASSED [ 25%]
tests/test_gpt5_reasoning_fix.py::TestGPT5ReasoningEffort::test_gpt4_reasoning_effort_removed PASSED [ 50%]
tests/test_gpt5_reasoning_fix.py::TestGPT5ReasoningEffort::test_gpt5_variants_supported PASSED [ 75%]
tests/test_gpt5_reasoning_fix.py::TestGPT5ReasoningEffort::test_api_call_with_reasoning_effort PASSED [100%]
========== 4 passed in 6.89s ==========
```
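For reference, the first of those tests could look roughly like the sketch below (a hypothetical reconstruction, not the literal contents of `tests/test_gpt5_reasoning_fix.py`; the `OpenAILanguageModel` class name, constructor arguments, and the private method call are assumptions):

```python
import pytest

from langextract.providers import openai as openai_provider


class TestGPT5ReasoningEffort:
    """Hypothetical sketch of the behavior under test; actual test code may differ."""

    @pytest.mark.parametrize("model_id", ["gpt-5", "gpt-5-mini", "gpt-5-nano"])
    def test_gpt5_reasoning_effort_preserved(self, model_id):
        model = openai_provider.OpenAILanguageModel(model_id=model_id, api_key="test-key")
        params = model._normalize_reasoning_params({"reasoning_effort": "minimal"})
        assert params.get("reasoning_effort") == "minimal"
        assert "reasoning" not in params
```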
Checklist
Integration Testing
Before Fix:

```
Exception: OpenAI API error: Completions.create() got an unexpected keyword argument 'reasoning'
```

After Fix:

```
$ python test_integration_fix.py
LangExtract: model=gpt-5-mini [00:04]
```