Support OpenRouter Provider with model selector #139

Conversation
Important: Review skipped, draft detected. Please check the settings in the CodeRabbit UI; you can disable this status message in the CodeRabbit configuration.

Walkthrough: Adds configurable AI provider support with a new AI Settings UI, controller, routes, and Stimulus controller. Introduces OpenRouter and Ollama providers, parsing/streaming utilities, registry selection, and related settings fields. Updates availability checks, chat form defaults, and consent messaging. Adjusts the dev DB port and example envs. Adds tests for the controller, providers, and registry.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Admin as Admin User
    participant UI as AI Settings Page (Stimulus)
    participant C as Settings::AiController
    participant Reg as Provider::Registry
    participant Prov as Provider (OpenRouter/Ollama)
    Admin->>UI: Click "Test Connection" (provider)
    UI->>C: POST /settings/ai/test_connection {provider, creds}
    C->>Reg: preferred_ai_provider (or build by provider)
    Reg-->>C: Provider instance
    C->>Prov: models / check support
    Prov-->>C: Result (success/failure)
    C-->>UI: JSON {success, message, models?}
    UI-->>Admin: Render result
```
```mermaid
sequenceDiagram
    autonumber
    participant App as Assistant
    participant Reg as Provider::Registry
    participant Prov as Provider
    participant LLM as LLM API
    App->>Reg: preferred_ai_provider (by Setting.ai_provider)
    Reg-->>App: Provider instance
    App->>Prov: chat_response(prompt, model, options)
    Prov->>LLM: Request (streaming or non-stream)
    alt Streaming
        LLM-->>Prov: delta events
        Prov-->>App: stream chunks (output_text / response)
    else Non-stream
        LLM-->>Prov: full response
        Prov-->>App: parsed response
    end
    App-->>App: Build AssistantMessage / handle output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 30
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/models/assistant.rb (1)
Lines 66-71: Persist tool_calls; current assignment is never saved.

`assistant_message.tool_calls = ...` isn't persisted. Save or `update!` to avoid losing tool calls.

```diff
- if data[:function_tool_calls].present?
-   assistant_message.tool_calls = data[:function_tool_calls]
-   latest_response_id = data[:id]
+ if data[:function_tool_calls].present?
+   assistant_message.update!(tool_calls: data[:function_tool_calls])
+   latest_response_id = data[:id]
```
🧹 Nitpick comments (31)
.env.test.example (1)

Lines 1-1: Default SELF_HOSTED=true in test example; add context and confirm CI env.

Good if tests rely on self-hosted flows. Please:
- Ensure CI sets SELF_HOSTED=true explicitly.
- Add a brief comment so contributors don't flip it unintentionally.

Suggested tweak:

```diff
+# Enable self-hosted features used in AI provider tests (OpenRouter/Ollama)
+# Set to false if running against hosted mode locally
 SELF_HOSTED=true
```

.env.local.example (1)

Lines 2-2: Local example defaults to SELF_HOSTED=true; verify intent and dev UX.

This will hide billing and enable AI settings by default. If that's intended, add guidance; otherwise consider defaulting to false.

Suggested tweak:

```diff
-SELF_HOSTED=true
+# Set to true to enable local AI provider configuration (e.g., Ollama).
+# Leave false for cloud-style defaults (billing visible, etc.).
+SELF_HOSTED=true
```

app/views/messages/_chat_form.html.erb (1)
Lines 10-11: Extract AI model default logic into a helper and guard against blank values.

Use `Setting.ai_model.presence` (instead of `||`) to avoid empty strings, and move the fallback `"openai/gpt-4o-mini"` into a view helper so the template stays logic-free. Both `MessagesController` and `ChatsController` already permit `:ai_model`, so this refactor will work end to end.
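A minimal sketch of such a helper (the module, method name, and constant are illustrative, not part of this PR):

```ruby
# Hypothetical helper; exact name and location are up to the PR author.
module ChatsHelper
  DEFAULT_AI_MODEL = "openai/gpt-4o-mini"

  # `presence` turns "" into nil, so a blank Setting.ai_model falls
  # through to the default instead of leaking into the form.
  def default_chat_model
    Setting.ai_model.presence || DEFAULT_AI_MODEL
  end
end
```

The template can then call `default_chat_model` instead of inlining the fallback.

app/views/chats/_ai_consent.html.erb (2)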
Lines 12-17: Make the non–self-hosted branch a link too for consistent UX.

Provide a link in both branches.

```diff
- <% if self_hosted? %>
-   <%= link_to "AI Settings", settings_ai_path, class: "underline" %> or Self-Hosting settings.
- <% else %>
-   AI Settings section of your account.
- <% end %>
+ <% if self_hosted? %>
+   <%= link_to "AI Settings", settings_ai_path, class: "underline" %> or Self-Hosting settings.
+ <% else %>
+   Visit the <%= link_to "AI Settings", settings_ai_path, class: "underline" %> section of your account.
+ <% end %>
```

Lines 8-8: Use design-system token instead of raw gray.

Replace `text-gray-600` with `text-secondary` per Tailwind token guidance.

```diff
- <p class="text-gray-600 mb-4 text-sm text-center">
+ <p class="text-secondary mb-4 text-sm text-center">
```

test/models/provider/openrouter_test.rb (1)
Lines 1-39: Add boundary tests for the 25-item limit in both methods.

Ensure that `auto_categorize` and `auto_detect_merchants` accept exactly 25 transactions without error (mocha is already configured via test/test_helper.rb).

```diff
 end
+
+test "auto categorize allows up to 25 transactions" do
+  transactions = (1..25).map { |i| { id: i.to_s, name: "Transaction #{i}" } }
+  @subject.stubs(:with_provider_response).yields
+  Provider::Openrouter::AutoCategorizer.any_instance.stubs(:auto_categorize).returns([])
+  assert_nothing_raised { @subject.auto_categorize(transactions: transactions, model: @subject_model) }
+end
+
+test "auto detect merchants allows up to 25 transactions" do
+  transactions = (1..25).map { |i| { id: i.to_s, description: "Transaction #{i}" } }
+  @subject.stubs(:with_provider_response).yields
+  Provider::Openrouter::AutoMerchantDetector.any_instance.stubs(:auto_detect_merchants).returns([])
+  assert_nothing_raised { @subject.auto_detect_merchants(transactions: transactions, model: @subject_model) }
+end
```

app/models/assistant/provided.rb (1)
Lines 5-9: Compact provider list before search; keep the fallback, but prefer a provider that actually supports the model.

Minor tidy that avoids calling `&.` on nils. Consider handling the case where the fallback doesn't support the requested model.

```diff
- provider = registry.providers.find { |provider| provider&.supports_model?(ai_model) }
+ provider = registry.providers.compact.find { |p| p.supports_model?(ai_model) }

- provider || registry.preferred_ai_provider
+ provider || registry.preferred_ai_provider
```

If `supports_model?` may hit the network (e.g., dynamic model discovery), cache provider-to-model support with a short TTL to reduce latency.
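A short-TTL cache along these lines would bound that cost (cache key and TTL are illustrative assumptions):

```ruby
# Hypothetical sketch: memoize provider-to-model support briefly so
# repeated lookups don't re-hit the provider's API.
def provider_supports?(provider, ai_model)
  Rails.cache.fetch([ "llm_supports_model", provider.class.name, ai_model ], expires_in: 5.minutes) do
    provider.supports_model?(ai_model)
  end
end
```

app/views/settings/hostings/show.html.erb (1)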
Lines 5-16: Replace raw Tailwind colors with design-system tokens and add semantic HTML/i18n.

- Use functional tokens from maybe-design-system.css instead of blue-50/200/600/700/900.
- Prefer a semantic container (aside) with accessible labeling.
- Move hardcoded strings to i18n.

```diff
- <div class="p-4 bg-blue-50 border border-blue-200 rounded-lg">
-   <div class="flex items-start gap-3">
-     <%= icon("info", class: "text-blue-600 mt-0.5") %>
-     <div>
-       <h3 class="text-sm font-medium text-blue-900">AI Settings Moved</h3>
-       <p class="text-sm text-blue-700 mt-1">
-         AI provider configuration has moved to the dedicated
-         <%= link_to "AI Settings", settings_ai_path, class: "underline font-medium" %> page.
-       </p>
-     </div>
-   </div>
- </div>
+ <aside class="p-4 bg-container border border-primary rounded-lg" role="note" aria-labelledby="ai-settings-moved-title">
+   <div class="flex items-start gap-3">
+     <%= icon("info", class: "text-primary mt-0.5") %>
+     <div>
+       <h3 id="ai-settings-moved-title" class="text-sm font-medium text-primary"><%= t(".ai_settings_moved") %></h3>
+       <p class="text-sm text-secondary mt-1">
+         <%= t(".ai_settings_moved_body_html", link: link_to(t(".ai_settings_link"), settings_ai_path, class: "underline font-medium")).html_safe %>
+       </p>
+     </div>
+   </div>
+ </aside>
```

Remember to add the new translation keys under config/locales/views/settings/en.yml.
test/models/provider/registry_test.rb (3)

Lines 5-13: Fix misleading comment about default provider.

The default isn't OpenAI; Setting.ai_provider defaults to "openrouter". Clarify the comment.

```diff
- # Test default (OpenAI)
+ # Explicitly prefer OpenAI
```

Lines 12-12: Prefer assert_kind_of over assert_instance_of.

Allows subclass/proxy flexibility without making tests brittle.

```diff
- assert_instance_of Provider::Openai, provider
+ assert_kind_of Provider::Openai, provider
```

Apply similarly to Openrouter/Ollama/fallback assertions.

Also applies to: 22-22, 32-32, 44-44
Lines 59-67: Avoid poking private API unless necessary.

Calling a private method via `send` is fine in a pinch; alternatively, assert on names by comparing against Registry.for_concept(:llm).methods, or expose a small public predicate. Optional.
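One possible shape for that predicate (hypothetical; not in the PR):

```ruby
class Provider::Registry
  # Hypothetical public predicate so tests can avoid `send` on private API.
  # respond_to?(name, true) also sees private methods, such as the
  # per-provider factory methods (:openai, :openrouter, :ollama).
  def provider_available?(name)
    respond_to?(name, true) && !send(name).nil?
  end
end
```

The test could then assert `registry.provider_available?(:openrouter)` on public API.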
app/models/user.rb (1)
Lines 106-123: Visibility layout is fine; minor style optional.

Keeping helper predicates private is good. The `public` keyword mid-file is unconventional but valid; optionally group private sections to reduce churn.
app/models/provider/registry.rb (2)
Lines 79-85: Normalize Ollama base_url.

Trim the trailing slash to avoid double slashes in downstream clients.

```diff
- base_url = ENV.fetch("OLLAMA_BASE_URL", Setting.ollama_base_url)
+ base_url = ENV.fetch("OLLAMA_BASE_URL", Setting.ollama_base_url)
+ base_url = base_url&.chomp("/")
```
Lines 119-129: available_providers for :llm looks correct; consider updating call sites.

Family::AutoCategorizer/AutoMerchantDetector still hardcode Registry.get_provider(:openai). Consider switching them to Registry.for_concept(:llm).preferred_ai_provider or a Provided concern for LLM to honor settings, as sketched below.
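Following the project's Provided convention, the call sites could go through something like this (file path and method name are a sketch, not the PR's code):

```ruby
# app/models/family/provided.rb (hypothetical)
module Family::Provided
  extend ActiveSupport::Concern

  # Honors Setting.ai_provider instead of hardcoding :openai.
  def llm_provider
    Provider::Registry.for_concept(:llm).preferred_ai_provider
  end
end
```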
app/models/provider/openrouter/chat_config.rb (1)
Lines 7-16: Harden tool descriptors (defaults and presence).

Default `strict` to true and fail fast when `params_schema` is missing to avoid malformed requests.

```diff
- def tools
-   functions.map do |fn|
+ def tools
+   functions.map do |fn|
      {
        type: "function",
        name: fn[:name],
        description: fn[:description],
-       parameters: fn[:params_schema],
-       strict: fn[:strict]
+       parameters: fn.fetch(:params_schema),
+       strict: fn.fetch(:strict, true)
      }
    end
  end
```

app/models/assistant.rb (2)
Lines 53-54: Avoid writing nil latest_response_id.

When first text arrives, `latest_response_id` may be nil; skip the update to avoid an unnecessary write.

```diff
- chat.update_latest_response!(latest_response_id)
+ chat.update_latest_response!(latest_response_id) if latest_response_id.present?
```
Lines 38-38: Avoid reaching into responder internals.

`instance_variable_get(:@llm)` is brittle. If you need class info, expose it via a public method on Responder or skip logging.
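A public reader would keep the logging without the brittleness (method name is hypothetical):

```ruby
class Assistant::Responder
  # Hypothetical reader so callers can log the provider class without
  # instance_variable_get.
  def llm_class_name
    @llm.class.name
  end
end
```

test/controllers/settings/ai_controller_test.rb (1)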
Lines 9-13: Reduce brittleness of H2 text assertion.

Hard-coded English can fail with i18n changes. Prefer a data-test id or a less specific selector.

```diff
- assert_select "h2", "AI Provider Settings"
+ assert_select "[data-test-id=ai-provider-settings-title]"
```

app/models/provider/ollama/auto_categorizer.rb (1)
Lines 20-21: Avoid logging full AI response (may contain PII).

Truncate or redact to reduce PII leakage in logs. Apply:

```diff
- Rails.logger.info("Ollama categorization response: #{content}")
+ Rails.logger.info("Ollama categorization response: #{content.to_s.truncate(500)}")
```

app/views/settings/ai/show.html.erb (2)
Lines 31-37: Remove raw dark-mode color utilities on labels.

Prefer tokens; drop dark:hover:bg-gray-700. Apply:

```diff
-<label class="flex items-start gap-3 p-3 border border-secondary rounded-lg cursor-pointer hover:bg-surface-hover dark:hover:bg-gray-700 transition-colors"
+<label class="flex items-start gap-3 p-3 border border-secondary rounded-lg cursor-pointer hover:bg-surface-hover transition-colors"
```

Also applies to: 50-56

Lines 68-73: Avoid inline style toggling; use Stimulus to add/remove 'hidden'.

Inline styles fight with CSS and JS. Prefer the hidden class for the default state and let ai-settings toggle visibility. Apply:

```diff
- <div data-ai-settings-target="providerSettings"
-      data-provider="openrouter"
-      class="space-y-4"
-      style="<%= Setting.ai_provider != 'openrouter' && Setting.ai_provider.present? ? 'display: none;' : '' %>">
+ <div data-ai-settings-target="providerSettings"
+      data-provider="openrouter"
+      class="space-y-4 <%= (Setting.ai_provider.present? && Setting.ai_provider != 'openrouter') ? 'hidden' : '' %>">

- <div data-ai-settings-target="providerSettings"
-      data-provider="ollama"
-      class="space-y-4"
-      style="<%= Setting.ai_provider != 'ollama' ? 'display: none;' : '' %>">
+ <div data-ai-settings-target="providerSettings"
+      data-provider="ollama"
+      class="space-y-4 <%= (Setting.ai_provider != 'ollama') ? 'hidden' : '' %>">
```

Also applies to: 115-119
app/controllers/settings/ai_controller.rb (2)
Lines 41-45: Return proper HTTP status on errors.

Use 422 for invalid provider and rescued errors.

```diff
   else
-    render json: { success: false, message: "Unknown provider" }
+    render json: { success: false, message: "Unknown provider" }, status: :unprocessable_entity
   end
 rescue => e
-  render json: { success: false, message: "Connection failed: #{e.message}" }
+  render json: { success: false, message: "Connection failed: #{e.message}" }, status: :unprocessable_entity
```

Lines 57-66: Models: call the public method, set status codes, avoid silent fallbacks.

Small correctness/consistency tweaks.

```diff
 when "ollama"
   base_url = params[:base_url].presence || Setting.ollama_base_url
   provider_instance = Provider::Ollama.new(base_url)
-  models = provider_instance.send(:available_models)
+  models = provider_instance.available_models
@@
-  render json: { success: true, models: models }
+  render json: { success: true, models: models }
@@
   else
-    render json: { success: false, message: "Unknown provider" }
+    render json: { success: false, message: "Unknown provider" }, status: :unprocessable_entity
   end
 rescue => e
-  render json: { success: false, message: "Failed to get models: #{e.message}" }
+  render json: { success: false, message: "Failed to get models: #{e.message}" }, status: :unprocessable_entity
```

app/models/provider/ollama.rb (3)
Lines 104-106: Fix newline joining in error backtrace log.

`'\n'` in single quotes is a literal backslash-n.

```diff
- Rails.logger.error ">>> Ollama error backtrace: #{error.backtrace.first(5).join('\n')}"
+ Rails.logger.error ">>> Ollama error backtrace: #{error.backtrace.first(5).join("\n")}"
```

Lines 74-80: Reduce verbose info logs or downgrade to debug; redact the prompt.

Current logs may leak prompts/PII and are noisy. Prefer `debug` and minimal content.

```diff
- Rails.logger.info ">>> Ollama chat_response called with prompt: '#{prompt}', model: '#{model}'"
- Rails.logger.info ">>> Ollama instructions: #{instructions.inspect}"
- Rails.logger.info ">>> Ollama streamer present: #{streamer.present?}"
+ Rails.logger.debug ">>> Ollama chat_response called (model: '#{model}', streamer: #{streamer.present?})"
+ Rails.logger.debug ">>> Ollama instructions present?: #{instructions.present?}"
@@
- Rails.logger.info ">>> Ollama built messages: #{messages.inspect}"
+ Rails.logger.debug ">>> Ollama built messages (count=#{messages.size})"
@@
- Rails.logger.info ">>> Ollama final payload: #{payload.inspect}"
+ Rails.logger.debug ">>> Ollama final payload keys: #{payload.keys}"
```

(Consider similar downgrades in streaming handlers where feasible.)

Also applies to: 92-94, 189-238, 240-274, 288-334
Lines 38-40: Potential latency: supports_model? may trigger network.

`available_models` calls Ollama; cache results briefly or accept the cost. Not blocking.
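If caching is worth it, a minute-level memo inside the provider would do (names are assumptions; `fetch_models_from_api` stands in for whatever private method performs the HTTP request):

```ruby
# Hypothetical sketch inside Provider::Ollama.
def available_models
  stale = @models_fetched_at.nil? || @models_fetched_at < 1.minute.ago
  if @available_models.nil? || stale
    @available_models = fetch_models_from_api
    @models_fetched_at = Time.current
  end
  @available_models
end
```

app/javascript/controllers/ai_settings_controller.js (2)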
Lines 40-43: Prevent default click/submit behavior in testConnection.

Avoids accidental form submission or page navigation.

```diff
 async testConnection(event) {
+  event.preventDefault()
   const button = event.currentTarget
   const provider = button.dataset.provider
```

Lines 4-4: Optional: remove the unused `modelSelect` target if not used.

Keeps the controller lean.

```diff
- static targets = ["providerOption", "providerSettings", "connectionResult", "openrouterApiKey", "ollamaBaseUrl", "modelSelect"]
+ static targets = ["providerOption", "providerSettings", "connectionResult", "openrouterApiKey", "ollamaBaseUrl"]
```

app/models/provider/openrouter/auto_merchant_detector.rb (2)
Lines 28-28: Rename for clarity and consistency.

The method name says "categorizations" but it returns merchants. Renaming improves readability and parity with the Ollama helper.

```diff
- build_response(extract_categorizations(response))
+ build_response(extract_merchants(response))
@@
- def extract_categorizations(response)
+ def extract_merchants(response)
    JSON.parse(response.dig("output").first.dig("content").first.dig("text"))["merchants"]
  rescue JSON::ParserError, NoMethodError => e
    Rails.logger.error("Failed to parse OpenRouter auto-merchant-detection response: #{e.message}")
    []
  end
```

Also applies to: 68-73
Lines 68-73: Consider surfacing parse failures to the caller instead of returning [].

Per provider guidelines, invalid AI data should raise so `with_provider_response` can transform it; returning `[]` masks failures. I can wire this to raise `Provider::Openrouter::Error` and validate that the result count matches `transactions.size`. Want a patch?

app/models/provider/openrouter.rb (1)

Lines 21-29: Update OpenRouter headers to reflect this app.

`HTTP-Referer` and `X-Title` still reference "Maybe Finance". Use your product URL/name.

```diff
- extra_headers: {
-   "HTTP-Referer" => "https://maybe.co",
-   "X-Title" => "Maybe Finance"
- }
+ extra_headers: {
+   "HTTP-Referer" => ENV.fetch("APP_BASE_URL", "https://example.com"),
+   "X-Title" => ENV.fetch("APP_TITLE", "Sure")
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- package-lock.json is excluded by !**/package-lock.json

📒 Files selected for processing (30)
- .devcontainer/docker-compose.yml (1 hunks)
- .env.local.example (1 hunks)
- .env.test.example (1 hunks)
- app/controllers/settings/ai_controller.rb (1 hunks)
- app/javascript/controllers/ai_settings_controller.js (1 hunks)
- app/models/assistant.rb (3 hunks)
- app/models/assistant/provided.rb (1 hunks)
- app/models/provider/ollama.rb (1 hunks)
- app/models/provider/ollama/auto_categorizer.rb (1 hunks)
- app/models/provider/ollama/auto_merchant_detector.rb (1 hunks)
- app/models/provider/openrouter.rb (1 hunks)
- app/models/provider/openrouter/auto_categorizer.rb (1 hunks)
- app/models/provider/openrouter/auto_merchant_detector.rb (1 hunks)
- app/models/provider/openrouter/chat_config.rb (1 hunks)
- app/models/provider/openrouter/chat_parser.rb (1 hunks)
- app/models/provider/openrouter/chat_stream_parser.rb (1 hunks)
- app/models/provider/registry.rb (3 hunks)
- app/models/setting.rb (1 hunks)
- app/models/user.rb (2 hunks)
- app/views/chats/_ai_consent.html.erb (1 hunks)
- app/views/messages/_chat_form.html.erb (1 hunks)
- app/views/settings/_settings_nav.html.erb (1 hunks)
- app/views/settings/ai/show.html.erb (1 hunks)
- app/views/settings/hostings/show.html.erb (1 hunks)
- config/locales/views/settings/en.yml (2 hunks)
- config/routes.rb (1 hunks)
- test/controllers/settings/ai_controller_test.rb (1 hunks)
- test/models/provider/ollama_test.rb (1 hunks)
- test/models/provider/openrouter_test.rb (1 hunks)
- test/models/provider/registry_test.rb (1 hunks)
🧰 Additional context used
📓 Path-based instructions (27)
{app,lib}/**/*.{rb,erb}
📄 CodeRabbit inference engine (CLAUDE.md)
{app,lib}/**/*.{rb,erb}
: Use Current.user for the authenticated user; do not use current_user
Use Current.family for the current family; do not use current_family
Files:
app/views/messages/_chat_form.html.erb
app/models/assistant/provided.rb
app/views/chats/_ai_consent.html.erb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/views/settings/hostings/show.html.erb
app/models/setting.rb
app/views/settings/ai/show.html.erb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
app/controllers/settings/ai_controller.rb
app/views/settings/_settings_nav.html.erb
{app/views,app/components}/**/*.html.erb
📄 CodeRabbit inference engine (CLAUDE.md)
{app/views,app/components}/**/*.html.erb
: In views, always use the icon helper; never call lucide_icon directly
Prefer semantic HTML and server-rendered Hotwire patterns (Turbo) over client-side JS widgets
Use Tailwind functional tokens (e.g., text-primary, bg-container, border-primary) instead of raw color utilities like text-white or bg-white
Keep domain/business logic out of view templates; views should primarily render data
Stimulus controllers must use declarative actions (data-action and data-*-target) rather than inline JavaScript
Pass data to Stimulus via data-*-value attributes; do not embed inline JavaScript
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
{app/**/*.rb,app/**/*.erb}
📄 CodeRabbit inference engine (.cursor/rules/general-rules.mdc)
{app/**/*.rb,app/**/*.erb}
: Use `Current.user` for the current user; do not use `current_user` in Rails code and templates
Use `Current.family` for the current family; do not use `current_family` in Rails code and templates
Files:
app/views/messages/_chat_form.html.erb
app/models/assistant/provided.rb
app/views/chats/_ai_consent.html.erb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/views/settings/hostings/show.html.erb
app/models/setting.rb
app/views/settings/ai/show.html.erb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
app/controllers/settings/ai_controller.rb
app/views/settings/_settings_nav.html.erb
app/views/**/*.html.erb
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
app/views/**/*.html.erb
: Prefer native semantic HTML elements over custom JS components (e.g., `<dialog>` for modals, `<details>`/`<summary>` for disclosures)
Use Turbo Frames to segment pages instead of JS-driven client-side solutions where possible
Prefer client-side form validation when possible for UX
app/views/**/*.html.erb
: Use ViewComponents when the element has complex logic/styling, is reused across contexts, needs variants/sizes or slots, requires interactivity/Stimulus, or needs accessibility/ARIA support
Use partials when the element is primarily static HTML with minimal logic, used in few contexts, simple template content, has no variants/configuration, and is mainly for content organization
Prefer components over partials: if a suitable component exists in app/components, use it; otherwise consider a partial or create a component based on the criteria
Integrate Stimulus declaratively in views: HTML declares actions/targets; controllers respond (avoid imperative JS in templates)
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
{app/helpers/**/*.rb,app/models/**/*.rb,app/views/**/*.html.erb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Format currencies, numbers, and dates server-side (helpers/models) and pass formatted values to Stimulus for display only
Files:
app/views/messages/_chat_form.html.erb
app/models/assistant/provided.rb
app/views/chats/_ai_consent.html.erb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/views/settings/hostings/show.html.erb
app/models/setting.rb
app/views/settings/ai/show.html.erb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
app/views/settings/_settings_nav.html.erb
{app/views/**/*.{html,html+turbo-stream}.erb,app/helpers/**/*.rb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Always use the icon helper (icon) from application_helper.rb; never call lucide_icon directly
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
{app/javascript/controllers/**/*.@(js|ts),app/components/**/*.@(js|ts),app/views/**/*.html.erb,app/components/**/*.html.erb}
📄 CodeRabbit inference engine (.cursor/rules/stimulus_conventions.mdc)
Use declarative Stimulus actions in ERB (data-action) instead of imperative event listeners in controllers (e.g., avoid addEventListener in connect/initializers).
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/javascript/controllers/ai_settings_controller.js
app/views/settings/_settings_nav.html.erb
{app/components/**/*.@(js|ts),app/views/**/*.html.erb,app/components/**/*.html.erb}
📄 CodeRabbit inference engine (.cursor/rules/stimulus_conventions.mdc)
Component Stimulus controllers under app/components must only be used within their component views, not in app/views.
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
{app/views/**,app/helpers/**,app/javascript/controllers/**}
📄 CodeRabbit inference engine (.cursor/rules/ui-ux-design-guidelines.mdc)
{app/views/**,app/helpers/**,app/javascript/controllers/**}
: Author styles assuming TailwindCSS v4.x is in use across the app
Always start by referencing app/assets/tailwind/maybe-design-system.css to understand the base primitives, functional tokens, and component tokens
Prefer functional tokens from maybe-design-system.css over raw colors/utilities (e.g., use `text-primary` over `text-white`, `bg-container` over `bg-white`, `border border-primary` over `border border-gray-200`)
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/javascript/controllers/ai_settings_controller.js
app/views/settings/_settings_nav.html.erb
{app/views/**,app/helpers/**}
📄 CodeRabbit inference engine (.cursor/rules/ui-ux-design-guidelines.mdc)
Always generate semantic HTML
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
{app/views/**/*.html.erb,app/components/**/*.html.erb}
📄 CodeRabbit inference engine (.cursor/rules/view_conventions.mdc)
{app/views/**/*.html.erb,app/components/**/*.html.erb}
: Keep domain logic out of views; compute classes and logic in component/view helpers, not inside ERB templates
Pass data from Rails to Stimulus via data-*-value attributes; do not embed inline JavaScript in templates
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/hostings/show.html.erb
app/views/settings/ai/show.html.erb
app/views/settings/_settings_nav.html.erb
app/views/**/_*.html.erb
📄 CodeRabbit inference engine (.cursor/rules/view_conventions.mdc)
Name partials with an underscore prefix (e.g., _trend_change.html.erb, _form_errors.html.erb)
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/_settings_nav.html.erb
app/views/!(shared)/**/_*.html.erb
📄 CodeRabbit inference engine (.cursor/rules/view_conventions.mdc)
Place context-specific partials in the relevant controller's view directory (not in shared/)
Files:
app/views/messages/_chat_form.html.erb
app/views/chats/_ai_consent.html.erb
app/views/settings/_settings_nav.html.erb
test/**/*_test.rb
📄 CodeRabbit inference engine (CLAUDE.md)
test/**/*_test.rb
: Use VCR for external API testing
Use the mocha gem for stubs and mocks in tests
Files:
test/controllers/settings/ai_controller_test.rb
test/models/provider/registry_test.rb
test/models/provider/openrouter_test.rb
test/models/provider/ollama_test.rb
test/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)
test/**/*.rb
: Write tests with Minitest; do not use RSpec in this codebase
Use fixtures for test data; do not use factories
For tests needing many records, use Rails helpers to create them and inline the creation in the test (e.g., helpers in test/support)
Test boundaries correctly: for query methods, assert returned output; for command methods, assert the command was invoked with correct params
Do not test one class’s implementation details in another class’s test suite
Use the mocha gem for stubs and mocks
Prefer OpenStruct for simple mock instances, or define a small mock class for complex cases
Only mock what’s necessary; don’t mock return values you don’t assert on
Files:
test/controllers/settings/ai_controller_test.rb
test/models/provider/registry_test.rb
test/models/provider/openrouter_test.rb
test/models/provider/ollama_test.rb
app/models/**/{*,**/*.rb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Use Rails concerns and POROs organized around model traits in app/models and app/models/concerns
Files:
app/models/assistant/provided.rb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/models/setting.rb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
app/models/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
app/models/**/*.rb
: Models should answer questions about themselves (e.g., Account#balance_series) rather than external service objects
Use ActiveRecord validations for complex validations and business logic (may mirror DB constraints for UX)
Domain models should not call Provider::Registry directly; instead, add a Provided concern within the model's namespace to choose providers and expose convenience methods (e.g., app/models/<model>/provided.rb).
Files:
app/models/assistant/provided.rb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/models/setting.rb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
{app/models/**/*.rb,app/controllers/**/*.rb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Avoid N+1 queries
Files:
app/models/assistant/provided.rb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/models/setting.rb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/user.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/assistant.rb
app/models/provider/openrouter/chat_parser.rb
app/controllers/settings/ai_controller.rb
app/models/**/provided.rb
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Place Provided concerns next to their domain model under the model’s namespace as provided.rb (e.g., app/models/exchange_rate/provided.rb).
Files:
app/models/assistant/provided.rb
app/models/provider/{!(concepts)/**/*.rb,!(registry).rb}
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Concrete provider implementations must live under the Provider:: namespace, inherit from Provider, wrap operations with with_provider_response, and raise on invalid data.
Files:
app/models/provider/ollama.rb
app/models/provider/ollama/auto_merchant_detector.rb
app/models/provider/openrouter.rb
app/models/provider/openrouter/chat_stream_parser.rb
app/models/provider/ollama/auto_categorizer.rb
app/models/provider/openrouter/auto_categorizer.rb
app/models/provider/openrouter/chat_config.rb
app/models/provider/openrouter/auto_merchant_detector.rb
app/models/provider/openrouter/chat_parser.rb
app/javascript/controllers/**/*.{js,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
app/javascript/controllers/**/*.{js,ts}
: Keep Stimulus controllers lightweight and simple (fewer than 7 targets), single-responsibility, with a clear public API and private internals
Global Stimulus controllers must live in app/javascript/controllers/
Files:
app/javascript/controllers/ai_settings_controller.js
app/javascript/**/*.{js,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
app/javascript/**/*.{js,ts}
: Prefer query params for UI state instead of localStorage/sessionStorage
Perform currency/number/date formatting on the server; avoid client-side formatting
Files:
app/javascript/controllers/ai_settings_controller.js
{app/javascript/**/*.js,app/controllers/**/*.rb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Prefer query params in the URL for UI state over localStorage/sessionStorage; use DB only if persistence is necessary
Files:
app/javascript/controllers/ai_settings_controller.js
app/controllers/settings/ai_controller.rb
app/javascript/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Avoid implementing currency/number/date formatting in client-side Stimulus/JS; keep controllers for display behavior
Files:
app/javascript/controllers/ai_settings_controller.js
{app/javascript/controllers/**/*.@(js|ts),app/components/**/*.@(js|ts)}
📄 CodeRabbit inference engine (.cursor/rules/stimulus_conventions.mdc)
{app/javascript/controllers/**/*.@(js|ts),app/components/**/*.@(js|ts)}
: Keep Stimulus controllers lightweight: prefer fewer than 7 targets; use private methods and expose a clear public API.
Keep Stimulus controllers focused: no domain logic in controllers; aim for single responsibility; leverage Stimulus callbacks, actions, targets, values, and classes.
Files:
app/javascript/controllers/ai_settings_controller.js
app/javascript/controllers/**/*_controller.js
📄 CodeRabbit inference engine (.cursor/rules/view_conventions.mdc)
Global Stimulus controllers in app/javascript/controllers/ can be used across any view
Files:
app/javascript/controllers/ai_settings_controller.js
{app/javascript/controllers/**/*_controller.js,app/components/**/*_controller.js}
📄 CodeRabbit inference engine (.cursor/rules/view_conventions.mdc)
Use Stimulus targets to reference DOM elements instead of manual document.getElementById calls
Files:
app/javascript/controllers/ai_settings_controller.js
🧠 Learnings (3)
📚 Learning: 2025-08-12T01:03:01.210Z
Learnt from: CR
PR: we-promise/sure#0
File: .cursor/rules/project-design.mdc:0-0
Timestamp: 2025-08-12T01:03:01.210Z
Learning: Applies to app/models/**/*.rb : Domain models should not call Provider::Registry directly; instead, add a Provided concern within the model’s namespace to choose providers and expose convenience methods (e.g., app/models/<model>/provided.rb).
Applied to files:
test/models/provider/registry_test.rb
app/models/assistant/provided.rb
app/models/provider/registry.rb
📚 Learning: 2025-08-12T01:03:01.210Z
Learnt from: CR
PR: we-promise/sure#0
File: .cursor/rules/project-design.mdc:0-0
Timestamp: 2025-08-12T01:03:01.210Z
Learning: Applies to app/models/provider/{!(concepts)/**/*.rb,!(registry).rb} : Concrete provider implementations must live under the Provider:: namespace, inherit from Provider, wrap operations with with_provider_response, and raise on invalid data.
Applied to files:
test/models/provider/registry_test.rb
app/models/assistant/provided.rb
app/models/provider/ollama.rb
app/models/provider/registry.rb
app/models/provider/openrouter.rb
test/models/provider/ollama_test.rb
📚 Learning: 2025-08-12T01:00:40.409Z
Learnt from: CR
PR: we-promise/sure#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-12T01:00:40.409Z
Learning: Applies to app/javascript/controllers/**/*.{js,ts} : Keep Stimulus controllers lightweight and simple (fewer than 7 targets), single-responsibility, with a clear public API and private internals
Applied to files:
app/javascript/controllers/ai_settings_controller.js
🧬 Code graph analysis (18)
- test/controllers/settings/ai_controller_test.rb (1): test/models/provider/registry_test.rb [test (3-67)]
- test/models/provider/registry_test.rb (2): app/models/assistant/provided.rb [registry (13-15)]; app/models/provider/registry.rb [for_concept (11-13), preferred_ai_provider (106-114)]
- test/models/provider/openrouter_test.rb (1): app/models/provider/openrouter.rb [include (1-164), supports_model? (32-34), auto_categorize (36-56), auto_detect_merchants (58-78)]
- app/models/assistant/provided.rb (3): app/models/provider/registry.rb [providers (93-95), preferred_ai_provider (106-114)]; app/models/provider/ollama.rb [supports_model? (38-40)]; app/models/provider/openrouter.rb [supports_model? (32-34)]
- app/models/provider/ollama.rb (7): app/models/provider/openrouter.rb [include (1-164), initialize (21-30), supports_model? (32-34), auto_categorize (36-56), auto_detect_merchants (58-78), chat_response (80-138)]; app/models/provider/registry.rb [include (1-131), initialize (88-91)]; app/models/provider/openrouter/chat_parser.rb [new (1-59), initialize (4-6), messages (32-45), function_requests (47-58)]; app/models/provider/openrouter/chat_stream_parser.rb [new (1-28), initialize (4-6)]; app/models/provider/ollama/auto_categorizer.rb [initialize (1-105), initialize (4-9), auto_categorize (11-23)]; app/models/provider/ollama/auto_merchant_detector.rb [initialize (1-106), initialize (4-9), auto_detect_merchants (11-23)]; app/models/provider.rb [with_provider_response (24-44)]
- app/models/provider/registry.rb (6): app/models/provider/openrouter/chat_parser.rb [new (1-59)]; app/models/provider/openrouter/chat_stream_parser.rb [new (1-28)]; app/models/family/auto_merchant_detector.rb [new (1-98)]; app/models/family/auto_categorizer.rb [new (1-83)]; app/models/exchange_rate/provided.rb [provider (5-9)]; app/models/security/provided.rb [provider (7-11), provider (110-112)]
- app/models/provider/ollama/auto_merchant_detector.rb (3): app/models/provider/ollama.rb [initialize (34-36), auto_detect_merchants (57-70), simple_chat (130-153)]; app/models/provider/ollama/auto_categorizer.rb [initialize (1-105), initialize (4-9), system_prompt (30-49), user_message (51-76), build_response (93-100), parse_response (78-91)]; app/models/provider/openrouter/auto_merchant_detector.rb [initialize (1-123), initialize (4-9), auto_detect_merchants (11-29), build_response (53-61), normalize_ai_value (63-66)]
- app/models/user.rb (2): config/application.rb [config (10-44)]; app/controllers/concerns/invitable.rb [self_hosted? (14-16)]
- app/models/provider/openrouter.rb (7): app/models/provider/registry.rb [include (1-131), initialize (88-91)]; app/models/provider/openrouter/chat_parser.rb [new (1-59), initialize (4-6), parsed (8-15), messages (32-45)]; app/models/provider/openrouter/chat_stream_parser.rb [new (1-28), initialize (4-6), parsed (8-18)]; app/models/provider/openrouter/auto_categorizer.rb [initialize (1-124), initialize (4-9), auto_categorize (11-29), instructions (31-49)]; app/models/provider/openrouter/auto_merchant_detector.rb [initialize (1-123), initialize (4-9), auto_detect_merchants (11-29), instructions (31-46)]; app/models/provider/openrouter/chat_config.rb [initialize (1-36), initialize (2-5), build_input (19-32), tools (7-17)]; app/models/provider.rb [with_provider_response (24-44)]
- test/models/provider/ollama_test.rb (3): app/models/provider/ollama.rb [include (5-335), supports_model? (38-40), auto_categorize (42-55), auto_detect_merchants (57-70)]; app/models/provider/ollama/auto_categorizer.rb [auto_categorize (11-23)]; app/models/provider/ollama/auto_merchant_detector.rb [auto_detect_merchants (11-23)]
- app/models/provider/openrouter/chat_stream_parser.rb (1): app/models/provider/openrouter/chat_parser.rb [new (1-59), initialize (4-6), parsed (8-15)]
- app/models/provider/ollama/auto_categorizer.rb (3): app/models/provider/ollama.rb [initialize (34-36), auto_categorize (42-55), simple_chat (130-153)]; app/models/provider/ollama/auto_merchant_detector.rb [initialize (1-106), initialize (4-9), system_prompt (30-49), user_message (51-75), build_response (92-100), parse_response (77-90)]; app/models/provider/openrouter/auto_categorizer.rb [initialize (1-124), initialize (4-9), auto_categorize (11-29), build_response (56-63), normalize_category_name (65-67)]
- app/models/provider/openrouter/auto_categorizer.rb (3): app/models/provider/ollama/auto_categorizer.rb [initialize (1-105), initialize (4-9), auto_categorize (11-23), build_response (93-100), normalize_category_name (102-104)]; app/models/provider/openrouter.rb [initialize (21-30), auto_categorize (36-56)]; app/models/provider/openrouter/auto_merchant_detector.rb [initialize (1-123), initialize (4-9), developer_message (98-122), json_schema (75-96), instructions (31-46), build_response (53-61), extract_categorizations (68-73)]
- app/models/provider/openrouter/chat_config.rb (4): app/models/assistant.rb [initialize (13-17)]; app/models/provider/openrouter.rb [initialize (21-30)]; app/models/provider/openrouter/auto_categorizer.rb [initialize (1-124), initialize (4-9)]; app/models/provider/openrouter/auto_merchant_detector.rb [initialize (1-123), initialize (4-9)]
- app/models/provider/openrouter/auto_merchant_detector.rb (3): app/models/provider/ollama/auto_merchant_detector.rb [initialize (1-106), initialize (4-9), auto_detect_merchants (11-23), build_response (92-100), normalize_ai_value (102-105)]; app/models/provider/openrouter.rb [initialize (21-30), auto_detect_merchants (58-78)]; app/models/provider/openrouter/auto_categorizer.rb [initialize (1-124), initialize (4-9), developer_message (98-123), json_schema (76-96), instructions (31-49), build_response (56-63), extract_categorizations (69-74)]
- app/models/assistant.rb (5): app/models/chat.rb [update_latest_response! (41-43)]; app/models/assistant/provided.rb [get_model_provider (4-10)]; app/models/assistant/responder.rb [on (9-11), respond (13-31)]; app/models/assistant/broadcastable.rb [stop_thinking (9-11)]; app/models/assistant_message.rb [append_text! (8-11)]
- app/models/provider/openrouter/chat_parser.rb (2): app/models/provider/openrouter/chat_stream_parser.rb [new (1-28), initialize (4-6), parsed (8-18)]; app/models/provider/openrouter/chat_config.rb [initialize (1-36), initialize (2-5)]
- app/controllers/settings/ai_controller.rb (4): app/javascript/controllers/ai_settings_controller.js [provider (42-42), params (53-53)]; app/models/provider/ollama.rb [supports_model? (38-40)]; app/models/provider/openrouter.rb [supports_model? (32-34)]; config/application.rb [config (10-44)]
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: ci / test
🔇 Additional comments (13)
.devcontainer/docker-compose.yml (1)
Lines 59-59: Switch to host port 5433 — avoids local Postgres conflicts; sync docs/tooling.

Looks good. Please update any README, scripts, or DB client examples that reference 5432 on the host.
config/routes.rb (1)
Lines 60-63: AI settings routes added — check controller auth/CSRF and guideline usage.

Routes LGTM. Verify Settings::AiController:
- Enforces authentication/authorization and CSRF on POST test_connection.
- Uses Current.user and Current.family per guidelines.
- Responds with JSON for models/test_connection as expected by Stimulus.
app/views/settings/_settings_nav.html.erb (1)
Lines 26-26: Approve new "AI Settings" nav item.

The i18n key `ai_settings_label` is defined in config/locales/views/settings/en.yml, and there are no `lucide_icon` calls in app/views, confirming use of the icon helper.

config/locales/views/settings/en.yml (2)
Lines 4-6: LGTM: Adds page title for AI Settings. Keys are well-namespaced and consistent.
Lines 92-92: LGTM: Adds nav label for AI Settings. Matches the new settings section.
test/models/provider/registry_test.rb (1)
Lines 47-57: Guard against ambient credentials.

If CI has any of OPENAI_ACCESS_TOKEN/OPENROUTER_API_KEY/OLLAMA_BASE_URL set, this test can fail. The setup/teardown change above addresses this; verify locally.
app/models/user.rb (1)
Lines 91-92: ai_available? logic LGTM. Correctly permits AI in managed mode and requires any provider in self-hosted mode.
app/models/provider/registry.rb (1)
Lines 71-77: OpenRouter provider factory LGTM. Consistent with the existing OpenAI pattern; returns nil when not configured.
app/models/assistant.rb (1)
Lines 2-2: Verify Provided vs Assistant::Provided collision.

No top-level `Provided` module was found in the codebase — confirm which `Provided` you're including and ensure its methods (e.g. `get_model_provider`) don't overlap with those in `Assistant::Provided`.

test/controllers/settings/ai_controller_test.rb (1)
Lines 15-27: Isolate global Setting mutations between tests.

These tests mutate global settings and can leak state. Add a teardown to reset relevant keys and reduce flakiness.

```diff
 class Settings::AiControllerTest < ActionDispatch::IntegrationTest
   setup do
     @user = users(:family_admin)
     sign_in @user
   end
+
+  teardown do
+    Setting.ai_provider = nil
+    Setting.openrouter_api_key = nil
+    Setting.openai_access_token = nil
+    Setting.ollama_base_url = nil
+  end
```

app/views/settings/ai/show.html.erb (1)
Lines 76-85: Controller already ignores the "****" placeholder.

The Settings::AiController update action skips any value equal to "********" when iterating over `permitted_params` (around line 97), so the existing `openrouter_api_key` is preserved.
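The guard itself isn't quoted in this review; a hypothetical reconstruction of its shape:

```ruby
# Hypothetical reconstruction; the real update action may differ.
permitted_params.each do |key, value|
  next if value == "********" # masked value submitted back: keep the stored secret

  Setting.public_send("#{key}=", value)
end
```

app/javascript/controllers/ai_settings_controller.js (1)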
Lines 4-5: Controller stays within target budget, good separation. Targets <= 7 and single-responsibility; nice.
app/models/provider/ollama/auto_merchant_detector.rb (1)
Lines 102-105: LGTM — robust normalization. `to_s.downcase` and `to_s.strip` avoid type errors and mirror our OpenRouter fix.
```ruby
class Settings::AiController < ApplicationController
  layout "settings"
```
Enforce self-hosted guard for all actions.

`require_self_hosted` is defined but never used. Gate the controller actions.

```diff
 class Settings::AiController < ApplicationController
   layout "settings"
+  before_action :require_self_hosted
```
🤖 Prompt for AI Agents
In app/controllers/settings/ai_controller.rb around lines 1 to 3, the controller
defines require_self_hosted but never uses it; add a before_action to enforce
the self-hosted guard for all controller actions (e.g., insert a before_action
:require_self_hosted at the top of the class) so every action is gated by the
self-hosted check and remove any unused method or comment if applicable.
when "openrouter" | ||
api_key = params[:api_key].presence || Setting.openrouter_api_key | ||
raise "API key required" if api_key.blank? | ||
|
||
provider_instance = Provider::Openrouter.new(api_key) | ||
# Test with a simple model check | ||
if provider_instance.supports_model?("openai/gpt-4o-mini") | ||
render json: { success: true, message: "OpenRouter connection successful!" } | ||
else | ||
render json: { success: false, message: "OpenRouter connection failed" } | ||
end |
Fix: OpenRouter "Test Connection" never hits the network; false positives.

`supports_model?` only checks a constant; it won't validate the API key or connectivity.

```diff
- # Test with a simple model check
- if provider_instance.supports_model?("openai/gpt-4o-mini")
-   render json: { success: true, message: "OpenRouter connection successful!" }
- else
-   render json: { success: false, message: "OpenRouter connection failed" }
- end
+ # Perform a minimal real request
+ resp = provider_instance.chat_response("ping", model: "openai/gpt-4o-mini", instructions: "healthcheck")
+ if resp.success?
+   render json: { success: true, message: "OpenRouter connection successful!" }
+ else
+   render json: { success: false, message: "OpenRouter connection failed: #{resp.error.message}" }, status: :unprocessable_entity
+ end
```
when "ollama" | ||
base_url = params[:base_url].presence || Setting.ollama_base_url | ||
provider_instance = Provider::Ollama.new(base_url) | ||
|
||
# Test connection by trying to get models list | ||
models = provider_instance.send(:available_models) | ||
if models.any? | ||
render json: { success: true, message: "Ollama connection successful! Found #{models.size} models." } | ||
else | ||
render json: { success: false, message: "Ollama connection failed or no models found. Make sure Ollama is running and has models installed." } | ||
end | ||
|
🛠️ Refactor suggestion

Validate the Ollama base URL and call the public API; add a failure status.

Avoid `send(:available_models)`, require presence, and set a proper status on failure.

```diff
 when "ollama"
-  base_url = params[:base_url].presence || Setting.ollama_base_url
-  provider_instance = Provider::Ollama.new(base_url)
-
-  # Test connection by trying to get models list
-  models = provider_instance.send(:available_models)
-  if models.any?
-    render json: { success: true, message: "Ollama connection successful! Found #{models.size} models." }
-  else
-    render json: { success: false, message: "Ollama connection failed or no models found. Make sure Ollama is running and has models installed." }
-  end
+  base_url = params[:base_url].presence || Setting.ollama_base_url
+  raise "Base URL required" if base_url.blank?
+  provider_instance = Provider::Ollama.new(base_url)
+  models = provider_instance.available_models
+  if models.any?
+    render json: { success: true, message: "Ollama connection successful! Found #{models.size} models." }
+  else
+    render json: { success: false, message: "Ollama connection failed or no models found. Make sure Ollama is running and has models installed." }, status: :unprocessable_entity
+  end
```
```ruby
# Clear provider-specific settings when switching providers
if params[:setting][:ai_provider] != Setting.ai_provider
  case params[:setting][:ai_provider]
  when "openrouter"
    permitted_params[:ollama_base_url] = "http://host.docker.internal:11434"
    permitted_params[:ai_model] = "openai/gpt-4o-mini" if params[:setting][:ai_model].blank?
  when "ollama"
    permitted_params[:openrouter_api_key] = nil
    permitted_params[:ai_model] = "llama3.2:3b" if params[:setting][:ai_model].blank?
  end
end
```
🛠️ Refactor suggestion

Provider switch handling contradicts the comment; don't set unrelated defaults, and validate that the model belongs to the provider.

The code currently sets the Ollama base URL when selecting OpenRouter. Clear opposite-provider fields and coerce the model to a valid default if incompatible.

```diff
 # Clear provider-specific settings when switching providers
 if params[:setting][:ai_provider] != Setting.ai_provider
   case params[:setting][:ai_provider]
   when "openrouter"
-    permitted_params[:ollama_base_url] = "http://host.docker.internal:11434"
-    permitted_params[:ai_model] = "openai/gpt-4o-mini" if params[:setting][:ai_model].blank?
+    permitted_params[:ollama_base_url] = nil
+    permitted_params[:ai_model] = "openai/gpt-4o-mini" if params[:setting][:ai_model].blank?
+    if permitted_params[:ai_model].present? && !Provider::Openrouter::MODELS.include?(permitted_params[:ai_model])
+      permitted_params[:ai_model] = "openai/gpt-4o-mini"
+    end
   when "ollama"
     permitted_params[:openrouter_api_key] = nil
-    permitted_params[:ai_model] = "llama3.2:3b" if params[:setting][:ai_model].blank?
+    permitted_params[:ai_model] = "llama3.2:3b" if params[:setting][:ai_model].blank?
+    if permitted_params[:ai_model].present? && !Provider::Ollama::MODELS.include?(permitted_params[:ai_model])
+      permitted_params[:ai_model] = "llama3.2:3b"
+    end
   end
 end
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Clear provider-specific settings when switching providers
if params[:setting][:ai_provider] != Setting.ai_provider
  case params[:setting][:ai_provider]
  when "openrouter"
    permitted_params[:ollama_base_url] = nil
    permitted_params[:ai_model] = "openai/gpt-4o-mini" if params[:setting][:ai_model].blank?
    if permitted_params[:ai_model].present? && !Provider::Openrouter::MODELS.include?(permitted_params[:ai_model])
      permitted_params[:ai_model] = "openai/gpt-4o-mini"
    end
  when "ollama"
    permitted_params[:openrouter_api_key] = nil
    permitted_params[:ai_model] = "llama3.2:3b" if params[:setting][:ai_model].blank?
    if permitted_params[:ai_model].present? && !Provider::Ollama::MODELS.include?(permitted_params[:ai_model])
      permitted_params[:ai_model] = "llama3.2:3b"
    end
  end
end
🤖 Prompt for AI Agents
In app/controllers/settings/ai_controller.rb around lines 83–94, the
provider-switch block is setting unrelated defaults (e.g., setting
ollama_base_url when OpenRouter is selected) and does not validate that ai_model
belongs to the chosen provider; change it to clear the other-provider-specific
keys (nil out openrouter keys when switching to ollama and nil out ollama keys
when switching to openrouter), remove/stop setting unrelated defaults like
ollama_base_url for OpenRouter, and enforce model validation by: if
params[:setting][:ai_model].blank? set a provider-appropriate default, otherwise
check the supplied ai_model is valid for the selected provider and replace or
reject it if not (set to provider default or add a validation error), keeping
all provider-specific fields scoped only to their provider.
if (data.success) {
  resultElement.innerHTML = `
    <div class="flex items-center gap-2 text-sm text-green-600 dark:text-green-400">
      <svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
        <path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clip-rule="evenodd"></path>
      </svg>
      ${data.message}
    </div>
  `
} else {
  resultElement.innerHTML = `
    <div class="flex items-center gap-2 text-sm text-red-600 dark:text-red-400">
      <svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
        <path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7 4a1 1 0 11-2 0 1 1 0 012 0zm-1-9a1 1 0 00-1 1v4a1 1 0 102 0V6a1 1 0 00-1-1z" clip-rule="evenodd"></path>
      </svg>
      ${data.message}
    </div>
  `
Fix XSS: don't inject the server-provided message via innerHTML. data.message can contain unsanitized input; use text nodes.
- if (data.success) {
- resultElement.innerHTML = `
- <div class="flex items-center gap-2 text-sm text-green-600 dark:text-green-400">
- <svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
- <path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clip-rule="evenodd"></path>
- </svg>
- ${data.message}
- </div>
- `
- } else {
- resultElement.innerHTML = `
- <div class="flex items-center gap-2 text-sm text-red-600 dark:text-red-400">
- <svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
- <path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7 4a1 1 0 11-2 0 1 1 0 012 0zm-1-9a1 1 0 00-1 1v4a1 1 0 102 0V6a1 1 0 00-1-1z" clip-rule="evenodd"></path>
- </svg>
- ${data.message}
- </div>
- `
- }
+ const wrapper = document.createElement("div")
+ wrapper.className = `flex items-center gap-2 text-sm ${data.success ? "text-green-600 dark:text-green-400" : "text-red-600 dark:text-red-400"}`
+ wrapper.innerHTML = `<svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
+ <path fill-rule="evenodd" d="${data.success ? "M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" : "M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7 4a1 1 0 11-2 0 1 1 0 012 0zm-1-9a1 1 0 00-1 1v4a1 1 0 102 0V6a1 1 0 00-1-1z"}" clip-rule="evenodd"></path>
+ </svg>`
+ const text = document.createTextNode(data.message)
+ wrapper.appendChild(text)
+ resultElement.replaceChildren(wrapper)
@@
- const resultElement = button.parentElement.querySelector('[data-ai-settings-target="connectionResult"]')
- resultElement.innerHTML = `
- <div class="flex items-center gap-2 text-sm text-red-600 dark:text-red-400">
- <svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
- <path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7 4a1 1 0 11-2 0 1 1 0 012 0zm-1-9a1 1 0 00-1 1v4a1 1 0 102 0V6a1 1 0 00-1-1z" clip-rule="evenodd"></path>
- </svg>
- Connection failed: ${error.message}
- </div>
- `
+ const resultElement = button.parentElement.querySelector('[data-ai-settings-target="connectionResult"]')
+ const wrapper = document.createElement("div")
+ wrapper.className = "flex items-center gap-2 text-sm text-red-600 dark:text-red-400"
+ wrapper.innerHTML = `<svg class="w-4 h-4" fill="currentColor" viewBox="0 0 20 20">
+ <path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7 4a1 1 0 11-2 0 1 1 0 012 0zm-1-9a1 1 0 00-1 1v4a1 1 0 102 0V6a1 1 0 00-1-1z" clip-rule="evenodd"></path>
+ </svg>`
+ wrapper.appendChild(document.createTextNode(`Connection failed: ${error.message}`))
+ resultElement.replaceChildren(wrapper)
Also applies to: 102-110
🤖 Prompt for AI Agents
In app/javascript/controllers/ai_settings_controller.js around lines 82-99 (and
also 102-110), the code injects server-provided data.message directly into
innerHTML, which risks XSS; instead build the result node programmatically:
create the wrapper div and SVG (or reuse the existing static SVG string for the
markup), create a separate span/text node for the message and set its
textContent (or append a Text node) to data.message, then append the span to the
wrapper and replace the resultElement contents; do this for both the success and
error branches so no untrusted input is inserted via innerHTML.
spellcheck: "false",
inputmode: "text",
data: { "ai-settings-target": "openrouterApiKey" },
help_text: "Get your API key from <a href='https://openrouter.ai/keys' target='_blank' class='underline'>OpenRouter</a>".html_safe %> |
🛠️ Refactor suggestion
Add rel="noopener noreferrer" to external links
Security best practice when using target="_blank".
Apply:
- help_text: "Get your API key from <a href='https://openrouter.ai/keys' target='_blank' class='underline'>OpenRouter</a>".html_safe %>
+ help_text: "Get your API key from <a href='https://openrouter.ai/keys' target='_blank' rel='noopener noreferrer' class='underline'>OpenRouter</a>".html_safe %>
- help_text: "URL of your local Ollama instance. Install from <a href='https://ollama.ai' target='_blank' class='underline'>ollama.ai</a>".html_safe %>
+ help_text: "URL of your local Ollama instance. Install from <a href='https://ollama.ai' target='_blank' rel='noopener noreferrer' class='underline'>ollama.ai</a>".html_safe %>
Also applies to: 127-127
🤖 Prompt for AI Agents
In app/views/settings/ai/show.html.erb around lines 85 and 127, the external
anchor tags using target="_blank" are missing rel="noopener noreferrer"; update
those help_text strings to include rel="noopener noreferrer" on the <a> tags
(keeping target="_blank" and html_safe) to mitigate reverse tabnabbing and
ensure safe external linking.
class: "w-full px-3 py-2 border border-secondary rounded-lg bg-white dark:bg-gray-800 text-primary dark:text-gray-100 focus:ring-2 focus:ring-blue-500 focus:border-blue-500", | ||
data: { "ai-settings-target": "modelSelect" } | ||
} %> |
🛠️ Refactor suggestion
Replace raw background/text colors on selects with tokens
Follow the “maybe-design-system.css” tokens (bg-container, text-primary, border-secondary).
Apply:
- class: "w-full px-3 py-2 border border-secondary rounded-lg bg-white dark:bg-gray-800 text-primary dark:text-gray-100 focus:ring-2 focus:ring-blue-500 focus:border-blue-500",
+ class: "w-full px-3 py-2 border border-secondary rounded-lg bg-container text-primary focus:ring-2 focus:ring-primary focus:border-primary",
Also applies to: 136-137
🤖 Prompt for AI Agents
app/views/settings/ai/show.html.erb around lines 94-96 (and also 136-137): the
select uses raw color utility classes (bg-white dark:bg-gray-800 and
dark:text-gray-100) which should be replaced with design system tokens; remove
the explicit light/dark color utilities and replace bg-white dark:bg-gray-800
with the bg-container token, keep text-primary and border-secondary tokens
(remove dark:text-gray-100), so the select classes use bg-container text-primary
border-secondary (relying on the token to handle dark mode) at both locations.
<div class="pt-4 mt-4 border-t border-gray-200 dark:border-gray-700"> | ||
<button type="button" | ||
class="inline-flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 dark:bg-blue-500 dark:hover:bg-blue-600 transition-colors disabled:opacity-50 disabled:cursor-not-allowed" | ||
data-action="click->ai-settings#testConnection" | ||
data-provider="openrouter"> | ||
<%= icon("wifi", class: "w-4 h-4") %> | ||
Test Connection | ||
</button> | ||
<div data-ai-settings-target="connectionResult" class="mt-3"></div> |
🛠️ Refactor suggestion
Use design-system tokens instead of raw Tailwind colors on buttons
Replace bg-blue-600/text-white with bg-primary/text-primary-foreground to follow the Tailwind v4 token guidance.
Apply:
- <button type="button"
- class="inline-flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 dark:bg-blue-500 dark:hover:bg-blue-600 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
+ <button type="button"
+ class="inline-flex items-center gap-2 px-4 py-2 bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
data-action="click->ai-settings#testConnection"
data-provider="openrouter">
- <button type="button"
- class="inline-flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 dark:bg-blue-500 dark:hover:bg-blue-600 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
+ <button type="button"
+ class="inline-flex items-center gap-2 px-4 py-2 bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
data-action="click->ai-settings#testConnection"
data-provider="ollama">
Also applies to: 143-151
🤖 Prompt for AI Agents
In app/views/settings/ai/show.html.erb around lines 101-109 (and similarly at
143-151), the button uses raw Tailwind color classes (bg-blue-600, text-white,
hover:bg-blue-700, dark:bg-blue-500, dark:hover:bg-blue-600); replace these with
the design-system tokens: use bg-primary and text-primary-foreground for base
colors, use a token-aware hover state such as hover:bg-primary/90 or
hover:bg-primary (matching your design system hover convention), and replace
dark: variants with their token equivalents (e.g. dark:bg-primary and
dark:hover:bg-primary/90) so the button consistently uses the v4 token classes
instead of hard-coded blue/white colors.
test "supports ollama models" do | ||
assert @subject.supports_model?("llama3.2") | ||
assert @subject.supports_model?("mistral") | ||
assert_not @subject.supports_model?("invalid-model") | ||
end |
💡 Verification agent
🧩 Analysis chain
Stub available_models to avoid real HTTP in unit tests
supports_model? falls back to available_models for unknown models; your third assertion will hit the network. Stub it to keep tests hermetic and to comply with “Use VCR for external API testing” (we don’t need VCR here if we stub).
Apply:
test "supports ollama models" do
+ # Avoid network call to /api/tags
+ @subject.stubs(:available_models).returns([])
assert @subject.supports_model?("llama3.2")
assert @subject.supports_model?("mistral")
assert_not @subject.supports_model?("invalid-model")
end
🏁 Script executed:
#!/bin/bash
set -eu
# Locate the Ollama provider implementation
file=$(rg -l --type=rb 'class\s+Provider::Ollama' -g 'app/models/provider/*.rb')
echo "Found file: $file"
# Show supports_model? implementation context
rg -nP 'def\s+supports_model\?' -C3 "$file"
# Show available_models implementation context
rg -nP 'def\s+available_models' -C3 "$file"
Length of output: 120
🏁 Script executed:
#!/bin/bash
set -eu
# Locate the Ollama provider implementation without restricting by type
file=$(rg -l 'class\s\+Provider::Ollama' -g 'app/models/provider/*.rb')
if [[ -z "$file" ]]; then
echo "Provider::Ollama class not found in app/models/provider"
exit 0
fi
echo "Found file: $file"
# Inspect supports_model? implementation
rg -n 'def supports_model\?' -C3 "$file" || true
# Inspect available_models implementation
rg -n 'def available_models' -C3 "$file" || true
Length of output: 84
🏁 Script executed:
#!/bin/bash
set -eu
# Find any Ruby files related to Ollama
echo "Files matching '*ollama*.rb':"
find . -type f -iname '*ollama*.rb'
echo
# Show implementations of available_models across the codebase
echo "Definitions of available_models:"
rg -n 'def available_models' -C3 -g '*.rb'
echo
# Show implementations of supports_model? across the codebase
echo "Definitions of supports_model?:"
rg -n 'def supports_model\?' -C3 -g '*.rb'
Length of output: 1925
🏁 Script executed:
#!/bin/bash
set -eu
echo "Searching for .stubs in test directory"
rg -n '\.stubs' -g 'test/**/*.rb' || true
echo
echo "Displaying first 50 lines of ollama_test.rb"
sed -n '1,50p' test/models/provider/ollama_test.rb
Length of output: 8554
Stub available_models in the “supports ollama models” test to prevent real HTTP calls
The test’s third assertion invokes available_models, resulting in an external request. Keep the test hermetic by stubbing it:
test "supports ollama models" do
+ @subject.stubs(:available_models).returns([])
assert @subject.supports_model?("llama3.2")
assert @subject.supports_model?("mistral")
assert_not @subject.supports_model?("invalid-model")
end
Committable suggestion skipped: line range outside the PR's diff.
require "test_helper" | ||
|
||
class Provider::RegistryTest < ActiveSupport::TestCase |
🛠️ Refactor suggestion
Isolate ENV/Setting side effects in tests
ENV values can override Setting.* (see Registry: ENV.fetch(...)), making tests flaky on CI. Backup/clear ENV and reset Settings per test.
class Provider::RegistryTest < ActiveSupport::TestCase
+ setup do
+ @env_backup = {
+ "OPENAI_ACCESS_TOKEN" => ENV["OPENAI_ACCESS_TOKEN"],
+ "OPENROUTER_API_KEY" => ENV["OPENROUTER_API_KEY"],
+ "OLLAMA_BASE_URL" => ENV["OLLAMA_BASE_URL"]
+ }
+ %w[OPENAI_ACCESS_TOKEN OPENROUTER_API_KEY OLLAMA_BASE_URL].each { |k| ENV.delete(k) }
+
+ @settings_backup = {
+ ai_provider: Setting.ai_provider,
+ ai_model: Setting.ai_model,
+ openai_access_token: Setting.openai_access_token,
+ openrouter_api_key: Setting.openrouter_api_key,
+ ollama_base_url: Setting.ollama_base_url
+ }
+ end
+
+ teardown do
+ @env_backup.each { |k, v| v.nil? ? ENV.delete(k) : ENV[k] = v }
+ @settings_backup.each { |k, v| Setting.send("#{k}=", v) }
+ end
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In test/models/provider/registry_test.rb around lines 1 to 3, tests can be
affected by ENV values that override Settings (Registry uses ENV.fetch), so add
setup/teardown (or before/after) to backup and clear the relevant ENV keys (or
snapshot ENV) before each test and restore them after, and also reload/reset the
Settings object after restoring ENV (e.g. Settings.reload! or the app-specific
method) to ensure settings are deterministic across CI runs.
@jjmata I managed to make the LLM talk, but every time it sends something like:
Does it look familiar to you? Tell me if you can reproduce.
Will let you know when I test this PR, but that JSON response body doesn't look familiar ... no. When you say "make the LLM talk" which prompt are you talking about? What code-path are you exploring first?
One thing that would really help you debug here is setting up the Langfuse observability I merged via #86, @matthieuEv! Really excited to get to this, but traveling the next couple of days ...
Well, just found out that's because the Ollama model I was using didn't support tools, so it was outputting some nonsense...
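For anyone hitting the same thing, a pre-flight check can catch this before chatting. Here's a minimal sketch, assuming a recent Ollama build whose /api/show response includes a capabilities array (older builds may omit it); the helper name is mine, not anything in this PR:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical helper, not part of this PR: asks Ollama whether a model
# advertises tool support. Assumes a recent Ollama build that returns a
# "capabilities" array (e.g. ["completion", "tools"]) from POST /api/show;
# when the field is missing, this conservatively returns false.
def ollama_supports_tools?(model, base_url: "http://localhost:11434")
  uri = URI.join(base_url, "/api/show")
  response = Net::HTTP.post(uri, { name: model }.to_json, "Content-Type" => "application/json")
  return false unless response.is_a?(Net::HTTPSuccess)

  (JSON.parse(response.body)["capabilities"] || []).include?("tools")
end

# ollama_supports_tools?("llama3.2:3b") # => true on tool-capable models
```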
I tried to use Langfuse this afternoon but couldn't manage to make it work :/
I also think we should review the pre-prompt, because the LLM traps itself in a "thinking" loop.
@jjmata if you have the time to look this up. Also, I was thinking of doing some benchmarking to find the best Ollama model for our use case, what do you think?
Yeah, I've been meaning to return to this but my travels got in the way. Having a "benchmark" (in LLM parlance they call them evals) would be amazing. It's basically a large data set of ground-truth answers that we can run not just across models but also every time we change/revise prompts. Would be amazing to have a starting point, yes. 🙏
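As a strawman of what that could look like here, the smallest possible harness is roughly this; the dataset file, the ask callable, and the exact-match scoring are all assumptions of mine, not anything that exists in the repo:

```ruby
require "json"

# Strawman eval harness sketch: "ask" is any callable that sends a prompt to
# the provider under test; eval_cases.jsonl holds one ground-truth row per
# line, e.g. {"prompt": "...", "expected": "Groceries"}.
def run_evals(ask, dataset_path: "eval_cases.jsonl")
  cases = File.readlines(dataset_path).map { |line| JSON.parse(line) }
  passed = cases.count do |c|
    # Exact-match scoring keeps the sketch simple; real evals would want
    # fuzzier scoring (category equality, numeric tolerance, LLM-as-judge).
    ask.call(c["prompt"]).to_s.strip.casecmp?(c["expected"].to_s.strip)
  end
  puts format("%d/%d passed (%.1f%%)", passed, cases.size, 100.0 * passed / cases.size)
end

# Re-run the same dataset per candidate model or prompt revision, e.g.:
# run_evals(->(prompt) { provider.chat_response(prompt) })
```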
@matthieuEv @jjmata I gave this PR a try, as I wanted to play a bit with OpenRouter models, but it was not working for some reason. 😕 So I decided to try implementing the OpenRouter support on my branch, and I managed to make it work; here's my attempt if you want to take a look. I tried to keep it simple: I've added some options in the settings page to let users define the endpoint, API key, and model; this way, theoretically, you should be able to use any endpoint/model that supports the OpenAI SDK. I've also replaced the OpenAI gem with the official one, to ensure long-term support. I've been testing it for a few days now, and the chat seems to be working fine (except on mobile, where for some reason I sometimes don't see the response unless I close and re-open the page), as well as the categories/merchants determination. I hope it might be useful for your PR as well. 😊
Hey, nice work 👍
@matthieuEv a couple of notes from my side on the PR. Adding OpenRouter is nice for versatility, but it means users now have to register with another provider, which I feel is a step back in features, since a lot more people probably already have an OpenAI API key. I feel we should keep support for pure OpenAI. The Ollama API integration feels like an odd choice:
I think if we want to implement this PR, next to OpenRouter we need to implement an OpenAI API layer. I've discussed this with @jjmata, and exposing an OpenAI-compatible layer with URI, model, and API key will open user choices to a lot more (a rough sketch follows below): Minor nitpicks:
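To make that concrete, here's a rough sketch of what a single OpenAI-compatible layer could look like; the class and method names are illustrative, not this PR's actual code. OpenAI, OpenRouter, and Ollama all speak the same /v1/chat/completions protocol, so one configurable client covers any such backend:

```ruby
require "net/http"
require "json"
require "uri"

# Illustrative sketch only. Any backend speaking the OpenAI
# /v1/chat/completions protocol works:
#   OpenAI     -> https://api.openai.com/v1     (api_key required)
#   OpenRouter -> https://openrouter.ai/api/v1  (api_key required)
#   Ollama     -> http://localhost:11434/v1     (ships an OpenAI-compatible shim)
class OpenAICompatibleProvider
  def initialize(uri_base:, model:, api_key: nil)
    @uri_base = uri_base.chomp("/")
    @model = model
    @api_key = api_key
  end

  def chat(prompt)
    uri = URI("#{@uri_base}/chat/completions")
    request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
    request["Authorization"] = "Bearer #{@api_key}" if @api_key
    request.body = { model: @model, messages: [ { role: "user", content: prompt } ] }.to_json

    response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
      http.request(request)
    end
    JSON.parse(response.body).dig("choices", 0, "message", "content")
  end
end

# One settings form (URI, model, API key) then serves every backend, e.g.:
# OpenAICompatibleProvider.new(uri_base: "https://openrouter.ai/api/v1",
#                              model: "openai/gpt-4o-mini", api_key: key).chat("hi")
```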
|