Fetch assistant prompt configuration from Langfuse #177

base: main
Conversation
Walkthrough

Adds an optional `instructions_prompt` through the Assistant, Responder, and Provider layers; `Assistant::Configurable` now fetches and compiles Langfuse-backed prompts with a static fallback and exposes both `instructions` (content) and `instructions_prompt` (prompt object); the OpenAI provider logs prompt metadata to Langfuse, and `chat_response` signatures are extended accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as UI/Caller
    participant Asst as Assistant
    participant Cfg as Assistant::Configurable
    participant LF as Langfuse Client
    participant Resp as Assistant::Responder
    participant LLM as Provider::OpenAI
    UI->>Asst: initialize(chat, instructions: nil, instructions_prompt: nil)
    Asst->>Cfg: config_for(chat.user_prefs)
    alt Langfuse available
        Cfg->>LF: fetch prompt object (name/version/template)
        LF-->>Cfg: prompt object
        Cfg->>Cfg: compile(template, currency/date), extract content
        Cfg-->>Asst: {instructions: content, instructions_prompt: prompt object}
    else Fallback
        Cfg-->>Asst: {instructions: static content, instructions_prompt: nil}
    end
    UI->>Asst: respond(message)
    Asst->>Resp: initialize(message:, instructions:, instructions_prompt:, llm:)
    Resp->>LLM: chat_response(..., instructions:, instructions_prompt:, functions:, ...)
    LLM->>LLM: call OpenAI API (stream/non-stream)
    alt Langfuse tracing available
        LLM->>LF: trace.generation(..., prompt: prompt metadata)
    end
    LLM-->>Resp: response
    Resp-->>UI: reply
```
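The configuration half of the diagram above can be sketched in plain Ruby. This is an illustrative sketch only: `LangfuseStub`, the `{{variable}}` template format, and the fallback text are assumptions standing in for the real Langfuse SDK and the app's static prompt.

```ruby
# Stub for a Langfuse client; returns nil when no prompt is configured
# (this stands in for the real SDK, which is not modeled here).
LangfuseStub = Struct.new(:template) do
  def fetch_prompt
    template && { "name" => "default_instructions", "template" => template }
  end
end

# Mirrors the diagram: compile a Langfuse-backed prompt when available,
# otherwise fall back to static content with a nil prompt object.
def config_for(prefs, client:)
  prompt = client.fetch_prompt
  if prompt
    content = prompt["template"]
      .gsub("{{preferred_currency}}", prefs[:currency].to_s)
      .gsub("{{preferred_date_format}}", prefs[:date_format].to_s)
    { instructions: content, instructions_prompt: prompt }
  else
    { instructions: "Static fallback instructions.", instructions_prompt: nil }
  end
end

prefs = { currency: "USD", date_format: "%Y-%m-%d" }
with_langfuse = config_for(prefs, client: LangfuseStub.new("Use {{preferred_currency}}."))
fallback      = config_for(prefs, client: LangfuseStub.new(nil))
```

Both branches return the same two-key shape, which is what lets the Assistant thread `instructions_prompt` through without branching downstream.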
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/models/provider/openai.rb (1)
Lines 141-145: Add error handling and memoization to match the pattern in `Assistant::Configurable`.

This implementation differs from the `langfuse_client` method in `app/models/assistant/configurable.rb` (lines 138-145):

- Missing error handling: initialization failures will propagate as exceptions rather than returning nil
- Missing memoization: uses direct assignment instead of `||=`, creating a new client on every call

Apply this diff to align with the established pattern:
```diff
 def langfuse_client
   return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

-  @langfuse_client = Langfuse.new
+  @langfuse_client ||= Langfuse.new
+rescue => e
+  Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
+  nil
 end
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)

- app/models/assistant.rb (2 hunks)
- app/models/assistant/configurable.rb (3 hunks)
- app/models/assistant/responder.rb (3 hunks)
- app/models/provider/llm_concept.rb (1 hunks)
- app/models/provider/openai.rb (4 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
app/models/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Domain models should not call Provider::Registry directly; use a Provided concern within the model’s namespace to select providers and expose convenience methods
Use ActiveRecord validations for forms and complex domain constraints.
app/models/**/*.rb:
- Place business logic in POROs and model classes under app/models
- Models should answer questions about themselves (e.g., prefer account.balance_series over service objects)
- Implement complex validations and business logic with ActiveRecord validations
- Model-level validations may mirror DB constraints but are not strictly required
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
app/models/provider/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Concrete provider classes must be under the Provider:: namespace, inherit from Provider, wrap calls with with_provider_response, and raise on invalid/unavailable data
Files:
app/models/provider/llm_concept.rb
app/models/provider/openai.rb
app/**/*.rb
📄 CodeRabbit inference engine (AGENTS.md)
Place Rails application Ruby code (models, controllers, services, jobs, mailers, components) under app/
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
**/*.rb
📄 CodeRabbit inference engine (AGENTS.md)
Ruby style: 2-space indentation; snake_case for methods/variables; CamelCase for classes/modules
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
**/*.{rb,js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{rb,js,jsx,ts,tsx}:
- Make changes atomic, testable, and explain their impact briefly in code suggestions.
- Respect existing tests and add tests when changing critical logic.
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
**/app/**/*.{rb,erb,js,css}
📄 CodeRabbit inference engine (CLAUDE.md)
Push Rails to its limits before adding new dependencies; a strong technical/business reason is required for new dependencies.
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
**/app/models/**/*.rb
📄 CodeRabbit inference engine (CLAUDE.md)
Business logic should primarily reside in models; use concerns and POROs for organization.
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
**/*.{rb,erb,haml,slim}
📄 CodeRabbit inference engine (.cursor/rules/general-rules.mdc)
**/*.{rb,erb,haml,slim}:
- Use `Current.user` for the current user; do not use `current_user`
- Use `Current.family` for the current family; do not use `current_family`
- Ignore i18n methods; hardcode strings in English for now (do not use `I18n.t`, `t`, or similar)
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
app/{helpers,models}/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Format currencies, numbers, and dates on the server side (Ruby) before sending to the client
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
app/{models,controllers,views}/**/*.{rb,erb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Avoid N+1 queries
Files:
app/models/provider/llm_concept.rb
app/models/assistant.rb
app/models/assistant/responder.rb
app/models/provider/openai.rb
app/models/assistant/configurable.rb
🧬 Code graph analysis (4)
app/models/assistant.rb (2)
- app/models/assistant/responder.rb (3): chat (102-104), initialize (1-105), initialize (2-8)
- app/models/assistant/configurable.rb (1): config_for (5-16)

app/models/assistant/responder.rb (2)
- app/models/assistant.rb (2): initialize (18-23), function_tool_caller (75-81)
- app/models/provider/openai/chat_config.rb (2): initialize (1-36), initialize (2-5)

app/models/provider/openai.rb (1)
- app/models/assistant/configurable.rb (1): langfuse_client (139-146)

app/models/assistant/configurable.rb (1)
- app/models/provider/openai.rb (1): langfuse_client (141-145)
🪛 GitHub Check: ci / lint
app/models/provider/openai.rb
[failure] 147-147:
Layout/IndentationConsistency: Inconsistent indentation detected.
app/models/assistant/configurable.rb
[failure] 67-67: Layout/EndAlignment: `end` at 67, 18 is not aligned with `content = case` at 60, 8.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci / test
🔇 Additional comments (7)
app/models/assistant.rb (1)
Lines 4-36: Instructions prompt plumbing looks solid. Nice job threading `instructions_prompt` from configuration into the initializer and responder without disturbing existing behavior.

app/models/assistant/responder.rb (1)

Lines 2-69: Good propagation of prompt metadata. Passing `instructions_prompt` through the responder and into the LLM call keeps the new metadata flowing end-to-end.

app/models/provider/llm_concept.rb (1)

Lines 25-25: LGTM! The `instructions_prompt` parameter is well-positioned in the signature and maintains backward compatibility with its nil default.

app/models/provider/openai.rb (4)

Lines 65-65: LGTM! The `instructions_prompt` parameter addition matches the interface definition in `Provider::LlmConcept`.

Lines 111-119: LGTM! Prompt metadata is correctly plumbed through the streaming path to Langfuse logging.

Lines 123-132: LGTM! Prompt metadata is correctly plumbed through the non-streaming path to Langfuse logging, maintaining symmetry with the streaming implementation.

Lines 166-177: No action needed for version duplication. The top-level `generation_options[:version]` tracks metrics and the nested `metadata[:prompt][:version]` is purely contextual; per Langfuse docs this does not conflict.
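The two version fields discussed above can be sketched as follows. Key names mirror the review comment (`version`, `metadata[:prompt][:version]`), but the exact Langfuse field layout and the model name are assumptions, not the real implementation.

```ruby
# Sketch: a top-level version used for metrics, plus contextual
# prompt metadata nested under :metadata. .compact drops nil fields
# so partial prompt objects produce clean output.
def generation_options(model:, prompt: nil)
  options = { model: model, version: "v1" }  # top-level version (metrics)
  if prompt
    options[:metadata] = {
      prompt: { name: prompt[:name], version: prompt[:version] }.compact
    }
  end
  options
end

opts  = generation_options(model: "gpt-4.1", prompt: { name: "default_instructions", version: 3 })
plain = generation_options(model: "gpt-4.1")
```

Because the nested prompt version lives under `:metadata`, it never collides with the top-level `version` key.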
app/models/provider/openai.rb
Outdated
```ruby
  trace.update(output: output)
rescue => e
  Rails.logger.warn("Langfuse logging failed: #{e.message}")
    def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
```
Fix indentation to align with Ruby style guidelines.
The method definition has inconsistent indentation.
Apply this diff:
```diff
-    def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
+  def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
```
Based on static analysis hints.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
  def log_langfuse_generation(name:, model:, input:, output:, usage: nil, session_id: nil, user_identifier: nil, prompt: nil)
```
🧰 Tools
🪛 GitHub Check: ci / lint
[failure] 147-147:
Layout/IndentationConsistency: Inconsistent indentation detected.
🤖 Prompt for AI Agents
In app/models/provider/openai.rb around line 147, the method definition `def
log_langfuse_generation...` is indented inconsistently; reformat the line to
follow Ruby style (2-space indentation) so it aligns with the other method
definitions in the class/module scope — move the `def` left to the same
indentation level as neighboring methods and ensure the parameter list and
closing `end` remain aligned accordingly.
Actionable comments posted: 1
♻️ Duplicate comments (1)
app/models/assistant/configurable.rb (1)

Lines 84-95: Fix case expression alignment (duplicate issue). This alignment issue was already flagged in a previous review. The `case` expression needs to be reformatted so the `when` clauses and closing `end` align properly with the assignment.

Apply the previously suggested diff:

```diff
-        case compiled_prompt
-        when String
-          compiled_prompt
-        when Array
-          compiled_prompt.filter_map do |message|
-            message[:content] || message["content"]
-          end.join("\n\n")
-        else
-          nil
-        end
+        content =
+          case compiled_prompt
+          when String
+            compiled_prompt
+          when Array
+            compiled_prompt.filter_map do |message|
+              message[:content] || message["content"]
+            end.join("\n\n")
+          else
+            nil
+          end
```

Note: Line 55 assigns to `content`, so this diff should be adjusted for the context where `extract_prompt_content` returns the value directly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- app/models/assistant/configurable.rb (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
- app/models/provider/openai.rb (1): langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci / test
🔇 Additional comments (5)
app/models/assistant/configurable.rb (5)

Lines 9-13: LGTM! Clean refactoring to support prompt metadata. The extraction of `instructions_config` and splitting into `:instructions` and `:instructions_prompt` keys is a clean way to plumb prompt metadata through the pipeline while maintaining backward compatibility with the content-only use case.

Lines 28-42: LGTM! Clear fallback strategy. The conditional logic cleanly separates Langfuse-backed prompts from the static fallback, and consistently returns a hash with `:content` and `:prompt` keys in both branches.

Lines 69-82: LGTM! Solid nil handling in variable preparation. The `transform_values` approach converting nils to empty strings ensures the template compiler receives clean string values, preventing interpolation errors.

Lines 97-152: LGTM! Well-structured fallback prompt. The static template provides comprehensive instructions and properly interpolates currency/date preferences. The heredoc format keeps the prompt readable and maintainable.

Lines 44-67: No action needed: `prompt.prompt` correctly returns the template. The Langfuse Ruby SDK defines `prompt.prompt` as the raw prompt template (string for text prompts, array for chat prompts) per SDK docs (langfuse.com).
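The nil-safe variable preparation praised above can be shown in two lines. This is a minimal sketch; the variable names are illustrative, not the file's actual keys.

```ruby
# Convert nil preference values to empty strings so the template
# compiler always receives clean strings.
variables = { preferred_currency: "USD", preferred_date_format: nil }
safe_variables = variables.transform_values { |v| v.nil? ? "" : v.to_s }
```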
```ruby
def langfuse_client
  return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

  @langfuse_client ||= Langfuse.new
rescue => e
  Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
  nil
end
```
🛠️ Refactor suggestion | 🟠 Major

Extract duplicated `langfuse_client` to a shared concern.

This method is nearly identical to `Provider::OpenAI#langfuse_client` (lines 140-144 in `app/models/provider/openai.rb`). To follow DRY principles and maintain consistency, consider extracting this to a shared concern (e.g., `LangfuseClientConcern`) that both classes can include.

Create a new concern at `app/models/concerns/langfuse_client_concern.rb`:
```ruby
module LangfuseClientConcern
  extend ActiveSupport::Concern

  class_methods do
    def langfuse_client
      return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

      @langfuse_client ||= Langfuse.new
    rescue => e
      Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
      nil
    end
  end
end
```
Then include it in both `Assistant::Configurable` and `Provider::OpenAI`:

```diff
 module Assistant::Configurable
   extend ActiveSupport::Concern
+  include LangfuseClientConcern

   class_methods do
     # ...
-
-    def langfuse_client
-      return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?
-
-      @langfuse_client ||= Langfuse.new
-    rescue => e
-      Rails.logger.warn("Langfuse client initialization failed: #{e.message}")
-      nil
-    end
   end
 end
```

As per coding guidelines.
🤖 Prompt for AI Agents
In app/models/assistant/configurable.rb around lines 154 to 161 and
app/models/provider/openai.rb around lines 140 to 144, the langfuse_client
method is duplicated; extract it into a new concern at
app/models/concerns/langfuse_client_concern.rb that defines the class-level
langfuse_client (using ActiveSupport::Concern with class_methods), move the
existing implementation there (keeping the same ENV checks, memoization and
rescue logging), then remove the method from both Assistant::Configurable and
Provider::OpenAI and include the new LangfuseClientConcern in each class so they
share the single implementation.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/models/provider/openai.rb (1)

Lines 141-145: Fix memoization in `langfuse_client`. The method uses `@langfuse_client =` instead of `@langfuse_client ||=`, causing a new Langfuse client to be instantiated on every call. This defeats memoization and may create unnecessary connections.

Apply this diff:

```diff
 def langfuse_client
   return unless ENV["LANGFUSE_PUBLIC_KEY"].present? && ENV["LANGFUSE_SECRET_KEY"].present?

-  @langfuse_client = Langfuse.new
+  @langfuse_client ||= Langfuse.new
 end
```
♻️ Duplicate comments (2)
app/models/provider/openai.rb (1)

Lines 147-147: Fix indentation. This is the same indentation issue flagged in the previous review. The method definition should align with the class/module scope. Based on static analysis hints.

app/models/assistant/configurable.rb (1)

Lines 183-190: Duplication flagged in previous review. This method is duplicated between `Assistant::Configurable` and `Provider::OpenAI`. The previous review suggested extracting to a shared concern. Note that the `Provider::OpenAI` version has a memoization bug (uses `=` instead of `||=`), so when extracting, use this correct implementation. As per coding guidelines.
🧹 Nitpick comments (2)
app/models/assistant/configurable.rb (2)

Lines 64-64: Simplify ID extraction. The nested conditional with inline rescue is hard to follow. Consider:

```diff
-        id: prompt.respond_to?(:id) ? prompt.id : (prompt[:id] rescue nil),
+        id: prompt.try(:id) || prompt[:id],
```

Lines 50-52: Address TODO and remove debug logging before production. The TODO comment and `Rails.logger.warn` statements suggest this code is experimental. Resolve the resilience issue or document why the logging is needed. Do you want me to help implement robust handling for different prompt types (chat vs. text), or should this TODO be tracked in a separate issue?
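The two prompt shapes that the ID-extraction line must handle can be exercised outside Rails. In this sketch, `OpenStruct` stands in for a Langfuse prompt object and the hash stands in for a plain API payload; both are assumptions for illustration.

```ruby
require "ostruct"

# The original (pre-simplification) extraction from the nitpick:
# object prompts expose #id, hash prompts expose [:id].
def extract_prompt_id(prompt)
  prompt.respond_to?(:id) ? prompt.id : (prompt[:id] rescue nil)
end

object_prompt = OpenStruct.new(id: "prompt_123")  # responds to #id
hash_prompt   = { id: "prompt_456" }              # Hash has no #id method
```

The suggested `prompt.try(:id) || prompt[:id]` covers the same two shapes with less nesting, at the cost of relying on ActiveSupport's `try`.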
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- app/models/assistant/configurable.rb (3 hunks)
- app/models/provider/openai.rb (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
- app/models/provider/openai.rb (1): langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci / test
🔇 Additional comments (9)
app/models/provider/openai.rb (4)

Lines 61-72: LGTM! The `instructions_prompt` parameter is correctly added as an optional keyword argument and properly threaded through to Langfuse logging.

Lines 111-119: LGTM! The streaming path correctly passes `instructions_prompt` to Langfuse logging alongside other generation metadata.

Lines 123-132: LGTM! The non-streaming path correctly propagates `instructions_prompt` to Langfuse logging.

Lines 156-184: Well-structured prompt metadata augmentation. The logic correctly builds a base `generation_options` hash and conditionally augments it with prompt metadata when available. The use of `compact` on the nested metadata hash ensures clean output.

app/models/assistant/configurable.rb (5)
Lines 5-16: LGTM! The refactoring correctly splits instruction configuration into content and prompt metadata, enabling downstream Langfuse integration.

Lines 28-42: LGTM! The delegation to Langfuse with graceful fallback is well-structured and maintains a consistent return shape.

Lines 75-94: LGTM! The method builds a comprehensive variables map and provides a defensive fallback for templates that don't support native compilation. The `transform_values` ensures nil safety.

Lines 113-124: LGTM! The method correctly handles multiple compiled prompt formats and defensively checks for both symbol and string keys in message hashes.

Lines 126-181: LGTM! The fallback prompt template is comprehensive and correctly interpolates user preferences for currency and date formatting.
```ruby
compiled_prompt = compile_langfuse_prompt(
  prompt.prompt.dig(0, "content"),
  preferred_currency: preferred_currency,
  preferred_date_format: preferred_date_format
)
```
Validate prompt structure before accessing nested content.

Line 55 assumes `prompt.prompt` is an array with a hash at index 0 containing a `"content"` key. If the Langfuse prompt structure differs, this will silently return `nil` or raise an error.

Consider adding validation:

```diff
-    compiled_prompt = compile_langfuse_prompt(
-      prompt.prompt.dig(0, "content"),
+    raw_content = prompt.prompt.dig(0, "content")
+    unless raw_content
+      Rails.logger.warn("Langfuse prompt structure unexpected: missing content at prompt.prompt[0]['content']")
+      return nil
+    end
+
+    compiled_prompt = compile_langfuse_prompt(
+      raw_content,
       preferred_currency: preferred_currency,
       preferred_date_format: preferred_date_format
     )
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
raw_content = prompt.prompt.dig(0, "content")
unless raw_content
  Rails.logger.warn("Langfuse prompt structure unexpected: missing content at prompt.prompt[0]['content']")
  return nil
end

compiled_prompt = compile_langfuse_prompt(
  raw_content,
  preferred_currency: preferred_currency,
  preferred_date_format: preferred_date_format
)
```
🤖 Prompt for AI Agents
In app/models/assistant/configurable.rb around lines 54 to 58, the code assumes
prompt.prompt is an array with a hash at index 0 containing "content"; add
validation to check that prompt.prompt is an Array, that it has at least one
element, and that the first element responds to [] with a "content" key (or use
safe navigation), and then either raise a clear ArgumentError or return a
sensible default if the structure is invalid; update the call to
compile_langfuse_prompt to use the validated content (or fallback) and include a
short error message referencing the prompt id/context when raising.
Actionable comments posted: 0
🧹 Nitpick comments (1)

app/models/assistant/configurable.rb (1)

Lines 44-73: Consider caching compiled prompts. Each call to `langfuse_default_instructions` fetches and compiles the prompt from Langfuse. For frequently accessed prompts (e.g., `default_instructions`), consider memoizing the result keyed by `(prompt.version, preferred_currency, preferred_date_format)` to reduce API calls and compilation overhead.
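The memoization suggested above can be sketched as below. The cache store, key tuple, and template text are assumptions for illustration; in the real app the cache would need an invalidation strategy tied to Langfuse prompt versions.

```ruby
# Cache compiled prompts keyed by (version, currency, date format).
# The first call compiles; subsequent calls with the same key return
# the cached string without recompiling.
COMPILED_PROMPTS = {}

def compiled_instructions(version:, currency:, date_format:)
  COMPILED_PROMPTS[[version, currency, date_format]] ||= begin
    template = "v{{version}}: amounts in {{currency}}, dates as {{date_format}}"
    template.gsub("{{version}}", version.to_s)
            .gsub("{{currency}}", currency)
            .gsub("{{date_format}}", date_format)
  end
end

first  = compiled_instructions(version: 3, currency: "USD", date_format: "%Y-%m-%d")
second = compiled_instructions(version: 3, currency: "USD", date_format: "%Y-%m-%d")
```

Keying on the prompt version means a new Langfuse deploy naturally produces a cache miss rather than serving stale content.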
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- app/models/assistant/configurable.rb (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/models/assistant/configurable.rb (1)
- app/models/provider/openai.rb (1): langfuse_client (141-145)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci / test
🔇 Additional comments (7)
app/models/assistant/configurable.rb (7)

Lines 5-16: LGTM: Clean integration of prompt metadata. The refactoring to extract `instructions_config` and plumb through both content and prompt metadata is well-structured and maintains backward compatibility.

Lines 28-42: LGTM: Robust fallback logic. The conditional logic cleanly handles both Langfuse-backed and fallback instructions, maintaining a consistent return structure.

Lines 75-94: LGTM: Flexible compilation strategy. The dual-path approach (SDK compilation vs. local interpolation) provides good resilience when working with different Langfuse SDK versions or prompt formats.

Lines 96-111: LGTM: Recursive interpolation with symbol key fix applied. The interpolation logic correctly handles nested structures, and the symbol key conversion (line 101) addresses the mismatch flagged in previous reviews.
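The recursive interpolation with symbol-key lookup described in the comment above can be sketched like this. The `{{variable}}` delimiter and function name are assumptions based on the review's description, not the file's exact code.

```ruby
# Recursively interpolate {{variable}} placeholders over strings,
# arrays, and hashes; variable names are symbolized for lookup,
# matching the symbol-key fix the review refers to.
def interpolate(value, variables)
  case value
  when String
    value.gsub(/\{\{(\w+)\}\}/) { variables[Regexp.last_match(1).to_sym].to_s }
  when Array
    value.map { |v| interpolate(v, variables) }
  when Hash
    value.transform_values { |v| interpolate(v, variables) }
  else
    value
  end
end

vars = { currency: "USD" }
```

Unknown placeholders interpolate to empty strings (`nil.to_s`), which keeps compilation from raising on partial variable maps.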
Lines 113-124: LGTM: Robust content extraction. The method defensively handles both symbol and string keys (line 119), accommodating different Langfuse prompt structures.
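The dual-key content extraction praised above matches the shape shown in the earlier review diff; this standalone sketch handles a text prompt (String) and a chat prompt (Array of messages with symbol or string keys).

```ruby
# Extract plain instruction text from a compiled prompt: pass strings
# through, and join chat-message contents (either key style) with
# blank lines. Unrecognized shapes fall through to nil.
def extract_prompt_content(compiled)
  case compiled
  when String
    compiled
  when Array
    compiled.filter_map { |m| m[:content] || m["content"] }.join("\n\n")
  end
end
```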
Lines 126-181: LGTM: Comprehensive fallback prompt. The static fallback maintains feature parity with Langfuse-driven prompts and provides clear, actionable instructions for the assistant.

Lines 183-190: Verify error handling doesn't mask configuration issues. The rescue block (lines 187-189) catches all exceptions and returns `nil`, which could mask configuration errors (e.g., missing gems, network issues during initialization). Consider rescuing more specific exceptions or logging at error level instead of warn for initialization failures. Based on learnings (coding guidelines recommend explicit error handling).
Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68dd6b795bdc8332baa635ea3985454f