
Conversation

@jjmata (Collaborator) commented Oct 1, 2025

Summary

  • load the assistant system prompt from Langfuse when available and fall back to the existing hardcoded template (sketched below)
  • plumb prompt metadata through the assistant pipeline and include it when logging OpenAI generations to Langfuse
  • attach prompt content and version to Langfuse trace generations for better observability
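
Roughly, the intended flow (helper names come from the diff walkthrough below; the wrapper method name and exact signatures are assumptions, not the merged code):

# Sketch of the Langfuse-first, fallback-second configuration described above.
def default_instructions_config(preferred_currency:, preferred_date_format:)
  langfuse_default_instructions(
    preferred_currency: preferred_currency,
    preferred_date_format: preferred_date_format
  ) || {
    content: fallback_default_instructions(
      preferred_currency: preferred_currency,
      preferred_date_format: preferred_date_format
    ),
    prompt: nil # no Langfuse metadata when falling back
  }
end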

⚠️ Work remaining:

  • the code fetches the prompt from Langfuse on every single request, which adds needless latency; the result should be cached or memoized
  • the code assumes Langfuse prompts are always "chat" and will break if "text" prompts are used (see here); a defensive sketch follows this list
  • add VCR-backed tests that replay recorded Langfuse responses (a rough test sketch appears under Testing below)
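
On the second point above, the Langfuse Ruby SDK exposes prompt.prompt as a String for "text" prompts and an Array of messages for "chat" prompts, so a defensive extraction could look something like this (a sketch, not the code currently in this branch):

# Sketch: tolerate both "chat" (Array of messages) and "text" (String) prompts.
def extract_prompt_content(compiled_prompt)
  case compiled_prompt
  when String
    compiled_prompt
  when Array
    compiled_prompt.filter_map { |message| message[:content] || message["content"] }.join("\n\n")
  end
end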

Testing

  • bundle exec ruby -c app/models/assistant/configurable.rb
  • bundle exec ruby -c app/models/assistant.rb
  • bundle exec ruby -c app/models/assistant/responder.rb
  • bundle exec ruby -c app/models/provider/llm_concept.rb
  • bundle exec ruby -c app/models/provider/openai.rb
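
The checks above are syntax-only. A VCR-backed test (per the remaining-work list) might be shaped like this; the fixture, cassette name, and config_for call shape are assumptions about this codebase, not existing code:

require "test_helper"

class AssistantConfigurableTest < ActiveSupport::TestCase
  test "falls back to the static template when Langfuse is unreachable" do
    # Cassette recorded with Langfuse returning an error / no prompt.
    VCR.use_cassette("langfuse/prompt_fetch_failure") do
      config = Assistant.config_for(chats(:one)) # hypothetical call shape
      assert config[:instructions].present?
      assert_nil config[:instructions_prompt]
    end
  end
end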

https://chatgpt.com/codex/tasks/task_e_68dd6b795bdc8332baa635ea3985454f

Summary by CodeRabbit

  • New Features

    • Assistant can include an optional managed prompt alongside instruction content; prompts can be sourced externally and adapt to your preferred currency and date format.
    • Prompt metadata (name, version, template) is now captured and attached when available.
  • Improvements

    • More consistent, higher-quality responses in streaming and non‑streaming chats.
    • Robust fallback when external prompt data is unavailable and improved logging of prompt context.

@jjmata added the codex (Touched by OpenAI Codex somehow) label on Oct 1, 2025, via ChatGPT Codex Connector

coderabbitai bot commented Oct 1, 2025

Walkthrough

Adds an optional instructions_prompt through the Assistant, Responder, and Provider layers. Assistant::Configurable now fetches and compiles Langfuse-backed prompts with a static fallback, exposing both instructions (the compiled content) and instructions_prompt (the prompt object); the OpenAI provider logs prompt metadata to Langfuse, and the chat_response signatures are extended accordingly.

Changes

  • Assistant core (app/models/assistant.rb): Added instructions_prompt to public readers and to the initialize(chat, instructions: nil, instructions_prompt: nil, functions: []) signature.
  • Responder (app/models/assistant/responder.rb): initialize now accepts instructions_prompt: nil; stores and exposes @instructions_prompt; passes instructions_prompt into the LLM invocation.
  • Configurable defaults & Langfuse (app/models/assistant/configurable.rb): config_for now computes instructions and instructions_prompt via new helpers; adds the Langfuse integration helpers langfuse_default_instructions, compile_langfuse_prompt, extract_prompt_content, interpolate_template, fallback_default_instructions, and langfuse_client, with logging/error handling and a static fallback.
  • LLM interface (app/models/provider/llm_concept.rb): chat_response signature expanded to include instructions_prompt: nil (inserted between instructions and functions).
  • OpenAI provider & Langfuse logging (app/models/provider/openai.rb): chat_response accepts and forwards instructions_prompt; log_langfuse_generation accepts a prompt: param and conditionally attaches prompt metadata (id/name/version/template/content) to Langfuse generation traces; streaming and non-streaming flows updated to pass prompt metadata.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant UI as UI/Caller
  participant Asst as Assistant
  participant Cfg as Assistant::Configurable
  participant LF as Langfuse Client
  participant Resp as Assistant::Responder
  participant LLM as Provider::OpenAI

  UI->>Asst: initialize(chat, instructions: nil, instructions_prompt: nil)
  Asst->>Cfg: config_for(chat.user_prefs)
  alt Langfuse available
    Cfg->>LF: fetch prompt object (name/version/template)
    LF-->>Cfg: prompt object
    Cfg->>Cfg: compile(template, currency/date) --> extract content
    Cfg-->>Asst: {instructions: content, instructions_prompt: prompt object}
  else Fallback
    Cfg-->>Asst: {instructions: static content, instructions_prompt: nil}
  end

  UI->>Asst: respond(message)
  Asst->>Resp: initialize(message:, instructions:, instructions_prompt:, llm:)
  Resp->>LLM: chat_response(..., instructions:, instructions_prompt:, functions:, ...)
  LLM->>LLM: call OpenAI API (stream/non-stream)
  alt Langfuse tracing available
    LLM->>LF: trace.generation(..., prompt: prompt metadata)
  end
  LLM-->>Resp: response
  Resp-->>UI: reply

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

A rabbit taps prompts with delicate paws,
Langfuse hums templates without a pause.
Instructions tucked in a neat little scroll,
Compiled, logged, and sent — the warren on a roll.
Hop, respond, and trace the chat 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title Check: ✅ Passed. The title “Fetch assistant prompt configuration from Langfuse” clearly and concisely conveys the primary change of loading the assistant’s system prompt from Langfuse, which aligns with the pull request’s main objective. It is specific to the key behavior change without extraneous details or generic wording.
  • Description Check: ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/integrate-langfuse-prompt-tracking



coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/models/provider/openai.rb (1)

141-145: Add error handling and memoization to match the pattern in Assistant::Configurable.

This implementation differs from the langfuse_client method in app/models/assistant/configurable.rb (lines 138-145):

  1. Missing error handling: Initialization failures will propagate as exceptions rather than returning nil
  2. Missing memoization: Uses direct assignment instead of ||=, creating a new client on every call

Apply this diff to align with the established pattern:

 def langfuse_client
   return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

-  @langfuse_client = Langfuse.new
+  @langfuse_client ||= Langfuse.new
+rescue => e
+  Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
+  nil
 end
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cbc653a and 099b937.

📒 Files selected for processing (5)
  • app/models/assistant.rb (2 hunks)
  • app/models/assistant/configurable.rb (3 hunks)
  • app/models/assistant/responder.rb (3 hunks)
  • app/models/provider/llm_concept.rb (1 hunks)
  • app/models/provider/openai.rb (4 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
app/models/**/*.rb

📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)

Domain models should not call Provider::Registry directly; use a Provided concern within the model’s namespace to select providers and expose convenience methods

Use ActiveRecord validations for forms and complex domain constraints.

app/models/**/*.rb: Place business logic in POROs and model classes under app/models
Models should answer questions about themselves (e.g., prefer account.balance_series over service objects)
Implement complex validations and business logic with ActiveRecord validations
Model-level validations may mirror DB constraints but are not strictly required

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
app/models/provider/**/*.rb

📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)

Concrete provider classes must be under the Provider:: namespace, inherit from Provider, wrap calls with with_provider_response, and raise on invalid/unavailable data

Files:

  • app/models/provider/llm_concept.rb
  • app/models/provider/openai.rb
app/**/*.rb

📄 CodeRabbit inference engine (AGENTS.md)

Place Rails application Ruby code (models, controllers, services, jobs, mailers, components) under app/

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
**/*.rb

📄 CodeRabbit inference engine (AGENTS.md)

Ruby style: 2-space indentation; snake_case for methods/variables; CamelCase for classes/modules

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
**/*.{rb,js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{rb,js,jsx,ts,tsx}: Make changes atomic, testable, and explain their impact briefly in code suggestions.
Respect existing tests and add tests when changing critical logic.

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
**/app/**/*.{rb,erb,js,css}

📄 CodeRabbit inference engine (CLAUDE.md)

Push Rails to its limits before adding new dependencies; a strong technical/business reason is required for new dependencies.

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
**/app/models/**/*.rb

📄 CodeRabbit inference engine (CLAUDE.md)

Business logic should primarily reside in models; use concerns and POROs for organization.

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
**/*.{rb,erb,haml,slim}

📄 CodeRabbit inference engine (.cursor/rules/general-rules.mdc)

**/*.{rb,erb,haml,slim}: Use Current.user for the current user; do not use current_user
Use Current.family for the current family; do not use current_family
Ignore i18n methods; hardcode strings in English for now (do not use I18n.t, t, or similar)

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
app/{helpers,models}/**/*.rb

📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)

Format currencies, numbers, and dates on the server side (Ruby) before sending to the client

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
app/{models,controllers,views}/**/*.{rb,erb}

📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)

Avoid N+1 queries

Files:

  • app/models/provider/llm_concept.rb
  • app/models/assistant.rb
  • app/models/assistant/responder.rb
  • app/models/provider/openai.rb
  • app/models/assistant/configurable.rb
🧬 Code graph analysis (4)
app/models/assistant.rb (2)
app/models/assistant/responder.rb (3)
  • chat (102-104)
  • initialize (1-105)
  • initialize (2-8)
app/models/assistant/configurable.rb (1)
  • config_for (5-16)
app/models/assistant/responder.rb (2)
app/models/assistant.rb (2)
  • initialize (18-23)
  • function_tool_caller (75-81)
app/models/provider/openai/chat_config.rb (2)
  • initialize (1-36)
  • initialize (2-5)
app/models/provider/openai.rb (1)
app/models/assistant/configurable.rb (1)
  • langfuse_client (139-146)
app/models/assistant/configurable.rb (1)
app/models/provider/openai.rb (1)
  • langfuse_client (141-145)
🪛 GitHub Check: ci / lint
app/models/provider/openai.rb

[failure] 147-147:
Layout/IndentationConsistency: Inconsistent indentation detected.

app/models/assistant/configurable.rb

[failure] 67-67:
Layout/EndAlignment: end at 67, 18 is not aligned with content = case at 60, 8.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci / test
🔇 Additional comments (7)
app/models/assistant.rb (1)

4-36: Instructions prompt plumbing looks solid

Nice job threading instructions_prompt from configuration into the initializer and responder without disturbing existing behavior.

app/models/assistant/responder.rb (1)

2-69: Good propagation of prompt metadata

Passing instructions_prompt through the responder and into the LLM call keeps the new metadata flowing end-to-end.

app/models/provider/llm_concept.rb (1)

25-25: LGTM!

The instructions_prompt parameter is well-positioned in the signature and maintains backward compatibility with its nil default.

app/models/provider/openai.rb (4)

65-65: LGTM!

The instructions_prompt parameter addition matches the interface definition in Provider::LlmConcept.


111-119: LGTM!

Prompt metadata is correctly plumbed through the streaming path to Langfuse logging.


123-132: LGTM!

Prompt metadata is correctly plumbed through the non-streaming path to Langfuse logging, maintaining symmetry with the streaming implementation.


166-177: No action needed for version duplication. The top-level generation_options[:version] tracks metrics and the nested metadata[:prompt][:version] is purely contextual; per Langfuse docs this does not conflict.

    trace.update(output: output)
  rescue => e
    Rails.logger.warn("Langfuse logging failed: #{e.message}")

  def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)

⚠️ Potential issue | 🟡 Minor

Fix indentation to align with Ruby style guidelines.

The method definition has inconsistent indentation.

Apply this diff:

-  def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
+def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)

Based on static analysis hints.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
+def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
🧰 Tools
🪛 GitHub Check: ci / lint

[failure] 147-147:
Layout/IndentationConsistency: Inconsistent indentation detected.

🤖 Prompt for AI Agents
In app/models/provider/openai.rb around line 147, the method definition `def
log_langfuse_generation...` is indented inconsistently; reformat the line to
follow Ruby style (2-space indentation) so it aligns with the other method
definitions in the class/module scope — move the `def` left to the same
indentation level as neighboring methods and ensure the parameter list and
closing `end` remain aligned accordingly.


coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
app/models/assistant/configurable.rb (1)

84-95: Fix case expression alignment (duplicate issue).

This alignment issue was already flagged in a previous review. The case expression needs to be reformatted so the when clauses and closing end align properly with the assignment.

Apply the previously suggested diff:

-        case compiled_prompt
-        when String
-          compiled_prompt
-        when Array
-          compiled_prompt.filter_map do |message|
-            message[:content] || message["content"]
-          end.join("\n\n")
-        else
-          nil
-        end
+        content =
+          case compiled_prompt
+          when String
+            compiled_prompt
+          when Array
+            compiled_prompt.filter_map do |message|
+              message[:content] || message["content"]
+            end.join("\n\n")
+          else
+            nil
+          end

Note: Line 55 assigns to content, so this diff should be adjusted for the context where extract_prompt_content returns the value directly.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 099b937 and d8804ea.

📒 Files selected for processing (1)
  • app/models/assistant/configurable.rb (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
app/models/provider/openai.rb (1)
  • langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci / test
🔇 Additional comments (5)
app/models/assistant/configurable.rb (5)

9-13: LGTM! Clean refactoring to support prompt metadata.

The extraction of instructions_config and splitting into :instructions and :instructions_prompt keys is a clean way to plumb prompt metadata through the pipeline while maintaining backward compatibility with the content-only use case.


28-42: LGTM! Clear fallback strategy.

The conditional logic cleanly separates Langfuse-backed prompts from the static fallback, and consistently returns a hash with :content and :prompt keys in both branches.


69-82: LGTM! Solid nil handling in variable preparation.

The transform_values approach converting nils to empty strings ensures the template compiler receives clean string values, preventing interpolation errors.


97-152: LGTM! Well-structured fallback prompt.

The static template provides comprehensive instructions and properly interpolates currency/date preferences. The heredoc format keeps the prompt readable and maintainable.


44-67: No action needed: prompt.prompt correctly returns the template
Langfuse Ruby SDK defines prompt.prompt as the raw prompt template (string for text prompts, array for chat prompts) per SDK docs (langfuse.com)

Comment on lines +154 to +161
def langfuse_client
  return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

  @langfuse_client ||= Langfuse.new
rescue => e
  Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
  nil
end

🛠️ Refactor suggestion | 🟠 Major

Extract duplicated langfuse_client to a shared concern.

This method is nearly identical to Provider::OpenAI#langfuse_client (lines 140-144 in app/models/provider/openai.rb). To follow DRY principles and maintain consistency, consider extracting this to a shared concern (e.g., LangfuseClientConcern) that both classes can include.

Create a new concern at app/models/concerns/langfuse_client_concern.rb:

module LangfuseClientConcern
  extend ActiveSupport::Concern

  class_methods do
    def langfuse_client
      return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

      @langfuse_client ||= Langfuse.new
    rescue => e
      Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
      nil
    end
  end
end

Then include it in both Assistant::Configurable and Provider::OpenAI:

 module Assistant::Configurable
   extend ActiveSupport::Concern
+  include LangfuseClientConcern

   class_methods do
     # ...
-
-      def langfuse_client
-        return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?
-
-        @langfuse_client ||= Langfuse.new
-      rescue => e
-        Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
-        nil
-      end
   end
 end

As per coding guidelines.

🤖 Prompt for AI Agents
In app/models/assistant/configurable.rb around lines 154 to 161 and
app/models/provider/openai.rb around lines 140 to 144, the langfuse_client
method is duplicated; extract it into a new concern at
app/models/concerns/langfuse_client_concern.rb that defines the class-level
langfuse_client (using ActiveSupport::Concern with class_methods), move the
existing implementation there (keeping the same ENV checks, memoization and
rescue logging), then remove the method from both Assistant::Configurable and
Provider::OpenAI and include the new LangfuseClientConcern in each class so they
share the single implementation.


coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/models/provider/openai.rb (1)

141-145: Fix memoization in langfuse_client.

The method uses @langfuse_client = instead of @langfuse_client ||=, causing a new Langfuse client to be instantiated on every call. This defeats memoization and may create unnecessary connections.

Apply this diff:

 def langfuse_client
   return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

-  @langfuse_client = Langfuse.new
+  @langfuse_client ||= Langfuse.new
 end
♻️ Duplicate comments (2)
app/models/provider/openai.rb (1)

147-147: Fix indentation.

This is the same indentation issue flagged in the previous review. The method definition should align with the class/module scope.

Based on static analysis hints.

app/models/assistant/configurable.rb (1)

183-190: Duplication flagged in previous review.

This method is duplicated between Assistant::Configurable and Provider::OpenAI. The previous review suggested extracting to a shared concern. Note that the Provider::OpenAI version has a memoization bug (uses = instead of ||=), so when extracting, use this correct implementation.

As per coding guidelines.

🧹 Nitpick comments (2)
app/models/assistant/configurable.rb (2)

64-64: Simplify ID extraction.

The nested conditional with inline rescue is hard to follow.

Consider:

-          id: prompt.respond_to?(:id) ? prompt.id : (prompt[:id] rescue nil),
+          id: prompt.try(:id) || prompt[:id],

50-52: Address TODO and remove debug logging before production.

The TODO comment and Rails.logger.warn statements suggest this code is experimental. Resolve the resilience issue or document why the logging is needed.

Do you want me to help implement robust handling for different prompt types (chat vs. text), or should this TODO be tracked in a separate issue?

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d8804ea and 17f5c02.

📒 Files selected for processing (2)
  • app/models/assistant/configurable.rb (3 hunks)
  • app/models/provider/openai.rb (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
app/models/provider/openai.rb (1)
  • langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci / test
🔇 Additional comments (9)
app/models/provider/openai.rb (4)

61-72: LGTM!

The instructions_prompt parameter is correctly added as an optional keyword argument and properly threaded through to Langfuse logging.


111-119: LGTM!

The streaming path correctly passes instructions_prompt to Langfuse logging alongside other generation metadata.


123-132: LGTM!

The non-streaming path correctly propagates instructions_prompt to Langfuse logging.


156-184: Well-structured prompt metadata augmentation.

The logic correctly builds a base generation_options hash and conditionally augments it with prompt metadata when available. The use of compact on the nested metadata hash ensures clean output.
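
Paraphrased, the augmentation under review has roughly this shape (reconstructed from the notes above and the file summary, not copied from the diff; local variable names such as instructions are assumptions):

generation_options = {
  name: name,
  model: model,
  input: input,
  output: output,
  usage: usage,
  session_id: session_id
}.compact

if prompt
  generation_options[:version] = prompt.try(:version) # top-level, tracked as a metric
  generation_options[:metadata] = {
    prompt: {
      id: prompt.try(:id),
      name: prompt.try(:name),
      version: prompt.try(:version),
      template: prompt.try(:prompt), # raw template fetched from Langfuse
      content: instructions          # compiled text actually sent to the model
    }.compact # drops any fields the prompt object does not provide
  }
end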

app/models/assistant/configurable.rb (5)

5-16: LGTM!

The refactoring correctly splits instruction configuration into content and prompt metadata, enabling downstream Langfuse integration.


28-42: LGTM!

The delegation to Langfuse with graceful fallback is well-structured and maintains a consistent return shape.


75-94: LGTM!

The method builds a comprehensive variables map and provides a defensive fallback for templates that don't support native compilation. The transform_values ensures nil safety.


113-124: LGTM!

The method correctly handles multiple compiled prompt formats and defensively checks for both symbol and string keys in message hashes.


126-181: LGTM!

The fallback prompt template is comprehensive and correctly interpolates user preferences for currency and date formatting.

Comment on lines +54 to +58
compiled_prompt = compile_langfuse_prompt(
  prompt.prompt.dig(0, "content"),
  preferred_currency: preferred_currency,
  preferred_date_format: preferred_date_format
)

⚠️ Potential issue | 🟠 Major

Validate prompt structure before accessing nested content.

Line 55 assumes prompt.prompt is an array with a hash at index 0 containing a "content" key. If the Langfuse prompt structure differs, this will silently return nil or raise an error.

Consider adding validation:

-        compiled_prompt = compile_langfuse_prompt(
-          prompt.prompt.dig(0, "content"),
+        raw_content = prompt.prompt.dig(0, "content")
+        unless raw_content
+          Rails.logger.warn("Langfuse prompt structure unexpected: missing content at prompt.prompt[0]['content']")
+          return nil
+        end
+
+        compiled_prompt = compile_langfuse_prompt(
+          raw_content,
           preferred_currency: preferred_currency,
           preferred_date_format: preferred_date_format
         )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        compiled_prompt = compile_langfuse_prompt(
-          prompt.prompt.dig(0, "content"),
-          preferred_currency: preferred_currency,
-          preferred_date_format: preferred_date_format
-        )
+        raw_content = prompt.prompt.dig(0, "content")
+        unless raw_content
+          Rails.logger.warn("Langfuse prompt structure unexpected: missing content at prompt.prompt[0]['content']")
+          return nil
+        end
+
+        compiled_prompt = compile_langfuse_prompt(
+          raw_content,
+          preferred_currency: preferred_currency,
+          preferred_date_format: preferred_date_format
+        )
🤖 Prompt for AI Agents
In app/models/assistant/configurable.rb around lines 54 to 58, the code assumes
prompt.prompt is an array with a hash at index 0 containing "content"; add
validation to check that prompt.prompt is an Array, that it has at least one
element, and that the first element responds to [] with a "content" key (or use
safe navigation), and then either raise a clear ArgumentError or return a
sensible default if the structure is invalid; update the call to
compile_langfuse_prompt to use the validated content (or fallback) and include a
short error message referencing the prompt id/context when raising.

@jjmata self-assigned this Oct 1, 2025

coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
app/models/assistant/configurable.rb (1)

44-73: Consider caching compiled prompts.

Each call to langfuse_default_instructions fetches and compiles the prompt from Langfuse. For frequently accessed prompts (e.g., default_instructions), consider memoizing the result keyed by (prompt.version, preferred_currency, preferred_date_format) to reduce API calls and compilation overhead.
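
One wrinkle: keying a cache by prompt.version still requires a fetch to learn the current version, so a short TTL around the whole fetch-and-compile step is a simpler variant. A minimal sketch, assuming Rails.cache is configured (the cache key and 5-minute TTL are arbitrary choices, not part of this PR):

# Sketch: avoid hitting Langfuse on every request by caching the compiled
# result per preference pair for a short TTL.
def langfuse_default_instructions(preferred_currency:, preferred_date_format:)
  Rails.cache.fetch(
    ["assistant_instructions", preferred_currency, preferred_date_format],
    expires_in: 5.minutes
  ) do
    # ...existing fetch-and-compile logic...
  end
end

Note that Rails.cache.fetch stores nil block results by default, so pass skip_nil: true (or return the fallback inside the block) to keep a transient Langfuse outage from being cached for the full TTL.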

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17f5c02 and 805d405.

📒 Files selected for processing (1)
  • app/models/assistant/configurable.rb (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
app/models/provider/openai.rb (1)
  • langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: ci / test
🔇 Additional comments (7)
app/models/assistant/configurable.rb (7)

5-16: LGTM: Clean integration of prompt metadata.

The refactoring to extract instructions_config and plumb through both content and prompt metadata is well-structured and maintains backward compatibility.


28-42: LGTM: Robust fallback logic.

The conditional logic cleanly handles both Langfuse-backed and fallback instructions, maintaining a consistent return structure.


75-94: LGTM: Flexible compilation strategy.

The dual-path approach (SDK compilation vs. local interpolation) provides good resilience when working with different Langfuse SDK versions or prompt formats.


96-111: LGTM: Recursive interpolation with symbol key fix applied.

The interpolation logic correctly handles nested structures and the symbol key conversion (line 101) addresses the mismatch flagged in previous reviews.
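
For readers without the diff open, a recursive interpolator of this sort can be sketched as follows (a plausible reconstruction using Langfuse's {{variable}} placeholder convention, not the exact helper under review):

# Sketch: recursively substitute {{key}} placeholders through strings,
# arrays of messages, and hashes; symbol keys stringify to match the template.
def interpolate_template(template, variables)
  case template
  when String
    variables.reduce(template) do |result, (key, value)|
      result.gsub("{{#{key}}}", value.to_s)
    end
  when Array
    template.map { |item| interpolate_template(item, variables) }
  when Hash
    template.transform_values { |value| interpolate_template(value, variables) }
  else
    template
  end
end

For example, interpolate_template("Use {{preferred_currency}}.", { preferred_currency: "USD" }) would return "Use USD.".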


113-124: LGTM: Robust content extraction.

The method defensively handles both symbol and string keys (line 119), accommodating different Langfuse prompt structures.


126-181: LGTM: Comprehensive fallback prompt.

The static fallback maintains feature parity with Langfuse-driven prompts and provides clear, actionable instructions for the assistant.


183-190: Verify error handling doesn't mask configuration issues.

The rescue block (lines 187-189) catches all exceptions and returns nil, which could mask configuration errors (e.g., missing gems, network issues during initialization). Consider rescuing more specific exceptions or logging at error level instead of warn for initialization failures.

Based on learnings (coding guidelines recommend explicit error handling).
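
One way to act on this, sketched with a placeholder exception class (the errors Langfuse.new actually raises are not verified here and should be substituted in):

def langfuse_client
  return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

  @langfuse_client ||= Langfuse.new
rescue ArgumentError => e # placeholder: rescue the SDK's real configuration errors
  Rails.logger.error("Langfuse client initialization failed: #{e.class}: #{e.message}")
  nil
end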

@jjmata marked this pull request as draft on October 1, 2025 at 20:39
