
Conversation

@MuggleJinx (Collaborator) commented Oct 24, 2025

CAMEL Abstractions for OpenAI Responses API — Phase 0 & 1

Summary

This RFC proposes a model-agnostic messaging and response abstraction that
enables CAMEL to support the OpenAI Responses API while preserving full
backward compatibility with the existing Chat Completions plumbing. (issue #3028)

Phase 0 catalogs the current dependency surface. Phase 1 introduces new
abstractions and a Chat Completions adapter, delivering a pure refactor with
zero functional differences.

Motivation

The codebase directly consumes ChatCompletionMessageParam as request
messages and expects ChatCompletion responses (e.g., in ChatAgent).
The OpenAI Responses API uses segmented inputs and a Response object with
different streaming and parsing semantics. A direct swap would break agents,
memories, token counting, and tool handling.

We therefore introduce CAMEL-native types that can be adapted both to legacy
Chat Completions and to Responses, enabling a staged migration.

Goals

  • Keep all existing entry points on Chat Completions in Phase 1.
  • Provide model-agnostic CamelMessage and CamelModelResponse types.
  • Add an adapter to map Chat Completions <-> CAMEL abstractions.
  • Preserve all behaviours and tests (no functional diffs).
  • Lay groundwork for Phase 2/3 (backend and agent migration, then Responses backend).

Non-Goals (Phase 1)

  • No migration of backends or agents to emit/consume the new types by default.
  • No implementation of Responses streaming, structured parsing, or reasoning traces.

Design

New Modules

  • camel/core/messages.py

    • CamelContentPart — minimal content fragment (type: text|image_url).
    • CamelMessage — model-agnostic message with role, content parts, optional name/tool_call_id.
    • Converters:
      • openai_messages_to_camel(List[OpenAIMessage]) -> List[CamelMessage]
      • camel_messages_to_openai(List[CamelMessage]) -> List[OpenAIMessage]
  • camel/responses/model_response.py

    • CamelToolCall — normalized tool call (id, name, args).
    • CamelUsage — normalized usage with raw attached.
    • CamelModelResponse — id, model, created, output_messages, tool_call_requests, finish_reasons, usage, and raw (provider response).
  • camel/responses/adapters/chat_completions.py

    • adapt_chat_to_camel_response(ChatCompletion) -> CamelModelResponse.
    • Future hooks for streaming/structured parsing (not implemented in Phase 1).
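To make the proposed surface concrete, here is a minimal sketch of the new types and one converter. The type names come from the RFC; the exact field shapes and the text-only flattening in the converter are assumptions for illustration, not the landed implementation:

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class CamelContentPart:
    type: str  # "text" | "image_url"
    text: Optional[str] = None
    image_url: Optional[str] = None


@dataclass
class CamelMessage:
    role: str  # "system" | "user" | "assistant" | "tool"
    content: List[CamelContentPart]
    name: Optional[str] = None
    tool_call_id: Optional[str] = None


@dataclass
class CamelToolCall:
    id: str
    name: str
    args: Dict[str, Any]


@dataclass
class CamelUsage:
    prompt_tokens: Optional[int] = None
    completion_tokens: Optional[int] = None
    total_tokens: Optional[int] = None
    raw: Any = None  # provider-specific usage object, attached verbatim


@dataclass
class CamelModelResponse:
    id: str
    model: str
    created: Optional[int]
    output_messages: List[CamelMessage]
    tool_call_requests: Optional[List[CamelToolCall]]
    finish_reasons: List[str]
    usage: Optional[CamelUsage]
    raw: Any  # original provider response


def camel_messages_to_openai(msgs: List[CamelMessage]) -> List[Dict[str, Any]]:
    """Flatten CAMEL messages into Chat Completions dicts (text-only sketch)."""
    out: List[Dict[str, Any]] = []
    for m in msgs:
        text = "".join(p.text or "" for p in m.content if p.type == "text")
        d: Dict[str, Any] = {"role": m.role, "content": text}
        if m.name:
            d["name"] = m.name
        if m.tool_call_id:
            d["tool_call_id"] = m.tool_call_id
        out.append(d)
    return out
```

The real converters would also need to carry image parts and assistant tool calls through the round trip; this sketch only shows the direction of the mapping.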

Type Relaxation

camel/agents/_types.py:ModelResponse.response is relaxed to Any to decouple
agent plumbing from provider schemas. Existing tests pass MagicMock here, and
the change avoids tight coupling when adapters are introduced.

Compatibility

  • Phase 1 preserves behaviour: agents still receive ChatCompletion from the
    model backend; the adapter is exercised via unit tests and can be opted into
    in later phases.
  • No changes to BaseMessage or memory/token APIs in this phase.

Testing

  • test/responses/test_chat_adapter.py builds a minimal ChatCompletion via
    construct() and validates:
    • Text content mapping to CamelModelResponse.output_messages.
    • Tool call mapping to CamelToolCall.
    • Finish reasons and raw attachment.
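The test shape can be sketched without the OpenAI SDK by standing in for the object that `ChatCompletion.construct()` produces. Everything below is a hypothetical stand-in: `extract_tool_calls` mimics only the tool-call path of `adapt_chat_to_camel_response`, and the `SimpleNamespace` tree mirrors the attribute shape the adapter reads:

```python
import json
from types import SimpleNamespace

# Stand-in for ChatCompletion.construct(...): attribute access only.
chat = SimpleNamespace(
    id="chatcmpl-1",
    model="gpt-4o-mini",
    created=0,
    choices=[
        SimpleNamespace(
            finish_reason="tool_calls",
            message=SimpleNamespace(
                role="assistant",
                content=None,
                tool_calls=[
                    SimpleNamespace(
                        id="call_1",
                        function=SimpleNamespace(
                            name="add", arguments='{"a": 1, "b": 2}'
                        ),
                    )
                ],
            ),
        )
    ],
)


def extract_tool_calls(resp):
    """Minimal stand-in for the adapter's tool-call mapping."""
    calls = []
    for choice in resp.choices:
        for tc in getattr(choice.message, "tool_calls", None) or []:
            calls.append(
                {
                    "id": tc.id,
                    "name": tc.function.name,
                    "args": json.loads(tc.function.arguments),
                }
            )
    return calls


calls = extract_tool_calls(chat)
assert calls == [{"id": "call_1", "name": "add", "args": {"a": 1, "b": 2}}]
assert [c.finish_reason for c in chat.choices] == ["tool_calls"]
```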

Alternatives Considered

  • Migrating agents directly to Responses in one step — rejected due to scope
    and risk; the adapter path enables incremental, testable rollout.

Rollout Plan

  1. Phase 0 (this RFC): agreement on types, locations, adapter surface.
  2. Phase 1 (this PR): land abstractions, Chat adapter, unit tests, type relaxation.
  3. Phase 2: retrofit OpenAI backends and agents to consume/emit CAMEL types,
    adjust streaming/tool-calls to operate over CamelModelResponse, and migrate
    token counting to work from abstract messages.
  4. Phase 3: add OpenAIResponsesModel implementing client.responses.{create,parse,stream}
    with converters from CamelMessage segments and back into CamelModelResponse.
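For Phase 3, the CamelMessage-to-Responses conversion could look like the sketch below. The segment shapes (`input_text` for user/system text, `output_text` for assistant text) follow the Responses API's content-part convention, but the function signature and the dict-based message shape here are illustrative assumptions, not the planned converter:

```python
from typing import Any, Dict, List


def camel_to_responses_input(
    messages: List[Dict[str, Any]],
) -> List[Dict[str, Any]]:
    """Sketch: map role/text messages to Responses-style input segments.

    The real converter would operate on CamelMessage and also handle
    image parts, tool results, and tool-call echoes.
    """
    segments: List[Dict[str, Any]] = []
    for m in messages:
        part_type = "output_text" if m["role"] == "assistant" else "input_text"
        segments.append(
            {
                "role": m["role"],
                "content": [{"type": part_type, "text": m["text"]}],
            }
        )
    return segments


segs = camel_to_responses_input(
    [
        {"role": "user", "text": "What is CAMEL?"},
        {"role": "assistant", "text": "A multi-agent framework."},
    ]
)
```

The resulting list would be passed as the `input` argument to `client.responses.create(...)` in the Phase 3 backend.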

Future Work

  • Extend CamelContentPart to include audio/video and tool fragments.
  • Introduce unified streaming interfaces and structured parsing adapters.
  • Reasoning trace capture and parallel tool call normalization for Responses.

@github-actions github-actions bot added the Review Required PR need to be reviewed label Oct 24, 2025
@coderabbitai coderabbitai bot (Contributor) commented Oct 24, 2025

Review skipped: auto reviews are disabled on this repository. To trigger a single review, invoke the @coderabbitai review command.

Note: CodeRabbit detected other AI code review bot(s) in this pull request and will avoid duplicating their findings, which may lead to a less comprehensive review.


@MuggleJinx MuggleJinx changed the title feat: add abstraction of camel layer feat: add CAMEL abstraction for future support of new API style Oct 24, 2025
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 29 to 48
    def _choice_tool_calls_to_camel(
        choice_msg: Any,
    ) -> Optional[List[CamelToolCall]]:
        tool_calls = getattr(choice_msg, "tool_calls", None)
        if not tool_calls:
            return None
        result: List[CamelToolCall] = []
        for tc in tool_calls:
            func = getattr(tc, "function", None)
            name = getattr(func, "name", None) if func else None
            args_str = getattr(func, "arguments", "{}") if func else "{}"
            try:
                import json

                args = json.loads(args_str) if isinstance(args_str, str) else {}
            except Exception:
                args = {}
            result.append(
                CamelToolCall(id=getattr(tc, "id", ""), name=name or "", args=args)
            )


P1: Preserve tool call IDs when adapting dict-based responses

The helper _choice_tool_calls_to_camel only reads tool call fields with getattr, which works for OpenAI SDK objects but silently drops data when the tool_calls list contains plain dicts. Many parts of the codebase synthesize ChatCompletion instances via ChatCompletion.construct(...) and pass dictionaries (see tests and model wrappers), so calling adapt_chat_to_camel_response on those objects yields CamelToolCall(id="", name="", args={}). Downstream consumers cannot route tool calls without the id or name. The adapter should also handle dict inputs (e.g. via tc.get("id")) to retain the tool call metadata.


@MuggleJinx (Collaborator, Author) commented

Hi all, please share your suggestions or comments on the design of the CAMEL adapter layer, which aims for compatibility with the current ChatCompletion style while allowing future extensions.

@Wendong-Fan (Member) commented

thanks @MuggleJinx for the RFC,

Currently in CAMEL all our messages are standardized and processed as ChatCompletion. Given this, our existing ChatCompletion format already seems to serve the function of the CamelMessage you're proposing. Is it necessary to introduce this new CamelMessage layer?

If there are interface alignment challenges with directly adapting the Response object, wouldn't the most straightforward approach be to add a conversion layer within OpenAIModel? This layer could simply transform the information from the Response interface back into our existing ChatCompletion format.

@Wendong-Fan Wendong-Fan added Waiting for Update PR has been reviewed, need to be updated based on review comment and removed Review Required PR need to be reviewed labels Oct 25, 2025
@fengju0213 (Collaborator) commented

(quoting @Wendong-Fan's comment above)

I also agree that we can take the Response and convert it directly into ChatAgentResponse.
