fix: Add _raw_response to AdapterBase #1739
Conversation
this simple code was failing for me:

```python
import instructor
from pydantic import BaseModel

MODEL = "openai/o3"
client = instructor.from_provider(MODEL)

class Name(BaseModel):
    name: str

(completion, raw) = client.chat.completions.create_with_completion(
    model=MODEL.split("/")[-1],
    messages=[
        {"role": "system", "content": "extract info when possible"},
        {"content": "my name is Enrico", "role": "user"},
    ],
    response_model=Name,
    max_retries=3,
)
print(raw.usage)
```
Important
Looks good to me! 👍
Reviewed everything up to c6221a1 in 1 minute and 41 seconds.

- Reviewed 20 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments. View those below.
1. instructor/processing/response.py:254
   - Draft comment: When setting `_raw_response` on `model.content` for `AdapterBase`, please ensure that `model.content` is not None (or verify it has the expected attribute) to avoid potential runtime errors.
   - Reason this comment was not posted: usefulness confidence = 0%, below the 85% threshold. The comment asks the author to ensure a condition is met rather than pointing out a specific issue; it is a general cautionary note, which violates the rule against asking the author to verify things.
2. instructor/processing/response.py:357
   - Draft comment: In the synchronous branch, the same logic attaches `_raw_response` to `model.content`. Consider verifying that `model.content` is valid and, if possible, refactoring this duplicated logic into a helper function.
   - Reason this comment was not posted: decided after close inspection that it was likely wrong and/or not actionable (usefulness confidence = 80% vs. a threshold of 85%). The duplication observation is accurate, and the suggestion to refactor into a helper function is reasonable and actionable. The validation part matters less, since this code path is specific to `AdapterBase` instances, which should guarantee that content exists. The refactoring point alone would justify keeping the comment; the less relevant validation suggestion does not detract significantly from it.
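The refactoring the draft comment suggests could look roughly like this; a minimal sketch with toy stand-in classes (`AdapterBase`, `Name`, and `attach_raw_response` here are simplified assumptions for illustration, not instructor's actual implementation):

```python
# Toy stand-ins for the pattern in instructor/processing/response.py;
# these simplified classes are assumptions, not the library's real types.
class AdapterBase:
    def __init__(self, content):
        self.content = content

class Name:
    def __init__(self, name):
        self.name = name

def attach_raw_response(model, response):
    """Attach the raw completion to the wrapped content, guarding against None.

    Shared by the sync and async branches so the logic lives in one place.
    """
    if isinstance(model, AdapterBase) and model.content is not None:
        model.content._raw_response = response
    return model

raw = {"usage": {"total_tokens": 42}}
model = attach_raw_response(AdapterBase(Name("Enrico")), raw)
print(model.content._raw_response["usage"]["total_tokens"])  # 42
```

Calling one helper from both `process_response` and `process_response_async` would also give the None guard a single home.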
Can I help with anything here?
Hello!
@edge7 I think there is an issue with the code you provided. According to the docs, the correct code should be:

```python
import instructor
from pydantic import BaseModel

MODEL = "openai/o3"
client = instructor.from_provider(MODEL)

class Name(BaseModel):
    name: str

(name, completion) = client.chat.completions.create_with_completion(
    model=MODEL.split("/")[-1],
    messages=[
        {"role": "system", "content": "extract info when possible"},
        {"content": "my name is Enrico", "role": "user"},
    ],
    response_model=Name,
    max_retries=3,
)
print(completion.usage)
```
What is the difference? Does your code work without any change? Cheers
Hmm, indeed, looking at it again it's not different, yet locally it works.
But I am not using … Instead I use …
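For what it's worth, the two snippets in this thread only differ in local variable names: `create_with_completion` returns the parsed model first and the raw completion second, regardless of what the caller binds them to. A toy illustration (the stub function below is an assumption standing in for the real API, not instructor's implementation):

```python
# Stub mimicking the (parsed_model, raw_completion) return shape;
# an illustrative assumption, not instructor's real implementation.
def create_with_completion():
    parsed = {"name": "Enrico"}            # stands in for the Name instance
    raw = {"usage": {"total_tokens": 17}}  # stands in for the raw completion
    return parsed, raw

# Both unpackings bind the same objects; only the local names differ.
(completion, raw) = create_with_completion()    # edge7's naming
(name, completion2) = create_with_completion()  # the docs' naming
assert raw == completion2  # the second element is the raw completion either way
```

So `print(raw.usage)` in the first snippet and `print(completion.usage)` in the second read the same object; the first naming is just misleading.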
Important

Adds `_raw_response` to `AdapterBase` models in `process_response_async` and `process_response` to ensure raw response accessibility.

- Adds a `_raw_response` attribute to `model.content` for `AdapterBase` instances in the `process_response_async` and `process_response` functions.
- Exposes the raw completion on `AdapterBase` models, allowing access to metadata like token usage.
- Modifies `process_response_async` and `process_response` in `response.py` to attach `_raw_response` to `model.content` for `AdapterBase`.
- The same attachment logic is duplicated for `AdapterBase` in both functions.

This description was created automatically for c6221a1 and will update as commits are pushed.