
Conversation


@gallettilance gallettilance commented Aug 12, 2025

Description

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features
    • Added a ready-to-use multi-service stack configuration with default providers and a pre-registered OpenAI LLM; services now bind to 0.0.0.0 and reference the Llama stack by container name for easier networking.
  • Documentation
    • Added “Building Container Images” guidance covering Server and Library modes, with examples and verification steps.
  • Chores
    • Updated container build guidance and test images to include common C/C++ build tools.


coderabbitai bot commented Aug 12, 2025

Walkthrough

Updated docs and container configurations: added "Building Container Images" docs describing Server vs Library deployment modes, adjusted container images (added C/C++ tools in test image), changed service bind/address and llama_stack URL, and added a new run.yaml with sqlite-backed stores and provider configurations.

Changes

Cohort / File(s) Summary
Container build files
Containerfile, test.containerfile
test.containerfile adds C/C++ build tools (gcc, gcc-c++, make) to the package install step. Containerfile shows no functional USER/ENTRYPOINT changes (formatting artifact only).
Documentation
README.md
Adds "Building Container Images" section describing Server Mode and Library Mode, example configs, podman build/run commands, mounting notes (macOS variant), and verification curl.
Service configuration
lightspeed-stack.yaml, run.yaml
lightspeed-stack.yaml: service.host changed to 0.0.0.0, llama_stack.url set to http://llama-stack:8321. New run.yaml: v2 stack example with sqlite-backed stores, multiple inline/remote providers (openai, huggingface, llama-guard, faiss, etc.), model registration, and service settings; secrets referenced from env where shown.
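For orientation, a minimal sketch of the run.yaml shape described above — the provider and model-registration keys follow llama-stack's config schema, but the model name and abridged fields here are illustrative, not the PR's actual file:

```yaml
version: '2'
providers:
  inference:
    - provider_id: openai
      provider_type: remote::openai
      config:
        openai_api_key: ${env.OPENAI_API_KEY}   # secret pulled from the environment
models:
  - model_id: gpt-4o-mini   # placeholder; the PR registers its own OpenAI model
    model_type: llm
    provider_id: openai
```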

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Dev as Developer
  participant Builder as Builder Stage
  participant Final as Final Image

  Dev->>Builder: define Containerfile / test.containerfile
  Builder->>Builder: install runtime + build tools (gcc, make)
  Builder-->>Final: copy runtime artifacts (build-only tools excluded)
```

```mermaid
sequenceDiagram
  participant Client
  participant Core as lightspeed-core
  participant Stack as llama-stack
  participant OpenAI

  rect rgb(240,240,255)
  note over Core,Stack: Server Mode
  Client->>Core: HTTP request
  Core->>Stack: HTTP call (http://llama-stack:8321)
  Stack->>OpenAI: Provider inference request
  OpenAI-->>Stack: Response
  Stack-->>Core: Result
  Core-->>Client: Response
  end

  rect rgb(240,255,240)
  note over Core: Library Mode
  Client->>Core: HTTP request
  Core->>OpenAI: Inference via embedded llama-stack library
  OpenAI-->>Core: Response
  Core-->>Client: Response
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • tisnik

Poem

I hopped through layers with a tiny build hat,
Two doors now open — server or library chat.
Ports and sqlite crumbs tucked safe in my sack,
I curl and I curl through docs on my back.
A rabbit applauds the configs on track 🐇


@gallettilance gallettilance changed the title (docs) how to build the lightspeed core image LCORE-431: how to build the lightspeed core image Aug 12, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🔭 Outside diff range comments (2)
test.containerfile (1)

27-43: Out-of-lock uv pip installs undermine reproducibility

Installing a large set of packages with uv pip install after uv sync --locked bypasses uv.lock, leading to non-reproducible images and potential supply-chain drift. Prefer declaring these dependencies in pyproject.toml and re-locking so uv sync --locked can reproduce the environment deterministically. If you need optional tooling (e.g., test-only), consider extras or separate lock files.

I can generate a PR to move these to pyproject and refresh uv.lock. Want me to proceed?
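For reference, a minimal sketch of that approach, assuming a PEP 735 dependency group (the group name is hypothetical; the packages are taken from the existing install list):

```toml
# pyproject.toml — declare the extra tooling so uv.lock captures it
[dependency-groups]
container-extras = [
    "opentelemetry-sdk",
    "aiosqlite",
    "faiss-cpu",
    "torch",
    # ...remaining packages from the uv pip install list above
]
# Then re-lock and sync deterministically:
#   uv lock
#   uv sync --locked --group container-extras
```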

lightspeed-stack.yaml (1)

3-7: Secure service exposure risk in lightspeed-stack.yaml

Binding the service to all interfaces with authentication disabled allows unrestricted access:

File: lightspeed-stack.yaml
Lines: 3–7

Current config:

  host: 0.0.0.0
  port: 8080
  auth_enabled: false
  workers: 1
  color_log: true

Recommendations:

  • If the service must remain internal only, restrict binding:
      host: 127.0.0.1
  • Otherwise, enable basic auth:
      auth_enabled: true
      auth:
        username: myuser
        password: ${LS_PASSWORD}
  • Document running behind a reverse proxy (e.g., Nginx) with TLS termination and IP whitelisting; a sketch follows after this list.
  • Add a minimal example in your README or docs to guide secure deployment.
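A minimal reverse-proxy sketch for that point, assuming Nginx with TLS termination and an internal allowlist (hostname, cert paths, and CIDR are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name lightspeed.example.com;          # placeholder hostname
    ssl_certificate     /etc/nginx/tls/cert.pem; # placeholder cert paths
    ssl_certificate_key /etc/nginx/tls/key.pem;

    location / {
        allow 10.0.0.0/8;                        # internal clients only
        deny  all;
        proxy_pass http://127.0.0.1:8080;        # lightspeed-core bound to loopback
        proxy_set_header Host $host;
    }
}
```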
🧹 Nitpick comments (5)
test.containerfile (1)

22-23: Remove redundant uv -h layer

This layer only prints help and adds no value to the image; drop it to speed up builds.

-RUN uv -h
README.md (2)

312-323: Prefer ‘docker compose’ over legacy ‘docker-compose’

The plugin-based docker compose is the default going forward. Update the command for consistency with modern Docker installations.

-docker-compose up --build
+docker compose up --build

355-356: Double-check the /v1/models endpoint exists

If the service doesn’t expose /v1/models, the verification step will fail. Use readiness/liveness as an alternative if models aren’t listed.

Would you like me to adjust the README to curl /v1/readiness instead?

run.yaml (1)

87-89: Avoid committing placeholder secrets in config

Even though obfuscated, openai_api_key: '********' can cause confusion. Prefer referencing an environment variable consistently, or omit the field if unused.

-  - config:
-      openai_api_key: '********'
+  - config:
+      openai_api_key: ${env.OPENAI_API_KEY}
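A hedged sketch of the runtime side of this, reusing the image name from the examples later in this thread (the key value is a placeholder):

```bash
# Keep the secret out of run.yaml and inject it via the environment instead
export OPENAI_API_KEY=sk-your-real-key
podman run -p 8080:8080 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  my-lightspeed-core:latest
```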
Containerfile (1)

60-66: PATH precedence ensures venv binaries; consider explicit python path for clarity

ENV PATH="/app-root/.venv/bin:$PATH" is correct. Optionally, set ENTRYPOINT to the venv’s python to avoid relying on PATH resolution.

-ENTRYPOINT ["python3.12", "src/lightspeed_stack.py"]
+ENTRYPOINT ["/app-root/.venv/bin/python", "src/lightspeed_stack.py"]
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e02198e and 90885f1.

📒 Files selected for processing (5)
  • Containerfile (1 hunks)
  • README.md (1 hunks)
  • lightspeed-stack.yaml (2 hunks)
  • run.yaml (1 hunks)
  • test.containerfile (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: e2e_tests
🔇 Additional comments (3)
lightspeed-stack.yaml (1)

16-16: Ensure Compose service ‘llama-stack’ exists

There isn’t a Docker Compose file in this repo, but lightspeed-stack.yaml (line 16) points to http://llama-stack:8321. Please verify that your external Compose setup defines a service or network alias named llama-stack, or update this URL to match your actual service name.

• lightspeed-stack.yaml:16 – url: http://llama-stack:8321

run.yaml (2)

63-67: Environment variable expansion for OpenAI key: confirm supported syntax

Confirm that ${env.OPENAI_API_KEY} is the correct interpolation syntax expected by your llama-stack version. Some configs prefer ${env:OPENAI_API_KEY}.

I can search for examples in this repo or upstream configs to validate and adjust accordingly. Do you want a script to scan for similar patterns?


20-26: Relative DB paths under .llama directory: ensure write permissions and directory creation

These SQLite paths are relative (e.g., .llama/distributions/ollama/...). Make sure the working directory is writeable by the runtime user (1001) and that parent directories are created on startup or via entrypoint logic.

I can add a tiny init script to create these directories at container start if you’d like.

Also applies to: 31-37, 42-45, 49-52, 57-61, 93-96, 103-107
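A minimal sketch of such an init wrapper (only the directory quoted above is shown; the store paths from the other listed ranges would be added the same way):

```bash
#!/bin/sh
# Hypothetical entrypoint wrapper: create the SQLite parent directories so the
# runtime user (1001) can write to them, then hand off to the original command.
set -eu
mkdir -p .llama/distributions/ollama
exec "$@"
```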

Containerfile Outdated
Comment on lines 15 to 21
RUN microdnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs \
python3.12 python3.12-devel python3.12-pip git tar \
gcc gcc-c++ make


🛠️ Refactor suggestion

⚠️ Potential issue

Install curl and CA bundles; clean cached metadata

curl is used later (Line 24), and HTTPS requires CA bundles. Also, clean DNF metadata to reduce image size.

-RUN microdnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs \
-    python3.12 python3.12-devel python3.12-pip git tar \
-    gcc gcc-c++ make
+RUN microdnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs \
+    python3.12 python3.12-devel python3.12-pip git tar curl ca-certificates \
+    gcc gcc-c++ make && microdnf clean all
🤖 Prompt for AI Agents
In Containerfile around lines 15 to 18, the image installs Python and build
tools but omits curl and CA bundles and does not clean DNF metadata; update the
RUN install line to include curl and an appropriate CA package (e.g.,
ca-certificates) so HTTPS calls work, and after installation run microdnf
cleanup to remove cached metadata (e.g., clean all/tmp metadata) to reduce image
size.

Containerfile Outdated
Comment on lines 24 to 25
RUN curl -LsSf https://astral.sh/uv/install.sh | sh


🛠️ Refactor suggestion

⚠️ Potential issue

Harden the uv install and pin the version

Same as test.containerfile: guard against partial downloads and pin uv to ensure reproducible builds.

-RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+RUN set -euxo pipefail; curl -fsSL https://astral.sh/uv/install.sh | sh -s -- --version ${UV_VERSION:-latest}

Add near top of this file:

ARG UV_VERSION=0.4.29
🤖 Prompt for AI Agents
In Containerfile around lines 24-25, the RUN that pipes curl to sh needs to be
hardened and UV pinned: add an ARG UV_VERSION=0.4.29 near the top of the
Dockerfile, replace the direct pipe with a safe download that uses the pinned
UV_VERSION (e.g., curl -fSL to write to a temporary file), verify the download
(checksum or signature) before executing, and invoke the installer explicitly
from that trusted file rather than piping curl to sh to prevent partial-download
execution.

Containerfile Outdated
Comment on lines 26 to 42
RUN uv sync --locked --no-dev && \
uv pip install \
opentelemetry-sdk \
opentelemetry-exporter-otlp \
opentelemetry-instrumentation \
aiosqlite \
litellm \
blobfile \
datasets \
sqlalchemy \
faiss-cpu \
mcp \
autoevals \
psutil \
torch \
peft \
trl

🛠️ Refactor suggestion

Out-of-lock installs reduce reproducibility; move deps into pyproject/uv.lock

uv pip install of many packages after uv sync --locked --no-dev side-steps lock guarantees. Prefer declaring these in pyproject.toml and re-locking so uv sync --locked fully reproduces the environment. This also makes dependency provenance auditable.

I can update pyproject.toml and uv.lock to include these packages and drop the uv pip install block. Want me to open a follow-up PR?

🤖 Prompt for AI Agents
In Containerfile around lines 26 to 42, the Dockerfile runs `uv pip install`
after `uv sync --locked --no-dev`, which bypasses the lockfile and undermines
reproducibility; instead add these packages to pyproject.toml (or the project's
uv configuration), run `uv lock` (or the project's equivalent) to update
uv.lock, commit the updated pyproject.toml and uv.lock, and remove the `uv pip
install` block from the Containerfile so `uv sync --locked --no-dev` alone
installs all declared dependencies from the lockfile.

RUN curl -LsSf https://astral.sh/uv/install.sh | sh


🛠️ Refactor suggestion

⚠️ Potential issue

Harden the curl | sh step and pin uv version

Piping curl to sh without pipefail can mask download failures; also pin uv for reproducible builds.

-RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+RUN set -euxo pipefail; curl -fsSL https://astral.sh/uv/install.sh | sh -s -- --version ${UV_VERSION:-latest}

Add this ARG near the top of the file (outside the selected range):

ARG UV_VERSION=0.4.29
🤖 Prompt for AI Agents
In test.containerfile around lines 20-21, the RUN that pipes curl to sh is
unsafe and unpinned; add an ARG near the top of the file: UV_VERSION=0.4.29, and
update the RUN to enable strict failure handling and pass the pinned version to
the installer (use set -o pipefail -e and curl with --fail --show-error
--location, piping to sh -s -- "$UV_VERSION") so the build fails on download
errors and the uv version is reproducible.

@@ -1,25 +1,45 @@
# vim: set filetype=dockerfile

Please drop the changes in this file; I'll update my PR to include this kind of change. #292

COPY uv.lock ./
COPY LICENSE ./
COPY README.md ./
COPY src/ ./src/

Same here: please drop the changes in this file; I'll update my PR to include this kind of change.
#292

README.md Outdated
**Build and run**:
```bash
# Build lightspeed-core with embedded llama-stack
docker build -f Containerfile -t my-lightspeed-core:latest .
```

Please use podman here instead of docker (my opinion).

```bash
-v ./run.yaml:/app-root/run.yaml:ro \
-e OPENAI_API_KEY=your-api-key \
my-lightspeed-core:latest
```

This command is more for macOS. Please also include this Linux command:

podman run -p 8080:8080 -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:Z -v ./run.yaml:/app-root/run.yaml:Z -e OPENAI_API_KEY=$OPENAI_API_KEY my-lightspeed-core:latest


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
README.md (1)

576-582: Add a simple readiness wait before hitting /v1/models

Avoids flaky failures if the service isn’t ready yet.

 ### Verify it's running properly
 
 A simple sanity check:
 
 ```bash
+until curl -sf http://localhost:8080/v1/liveness > /dev/null; do
+  echo "Waiting for lightspeed-core to become ready..."
+  sleep 1
+done
+
 curl -H "Accept: application/json" http://localhost:8080/v1/models

```

🧹 Nitpick comments (5)
README.md (5)

`516-520`: **Unify naming and tighten wording (Llama Stack, Lightspeed Core; sentence clarity)**

Minor editorial fixes: remove the hyphen in “Llama-Stack”, capitalize proper names, and clarify the sentence.


```diff
-### Llama-Stack as Separate Service (Server Mode)
+### Llama Stack as Separate Service (Server Mode)

-When using llama-stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. This builds two containers for lightspeed core and llama stack.
+When using Llama Stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. It builds two containers: Lightspeed Core and Llama Stack.

541-551: Fix minor wording and naming; use consistent “Llama Stack”

Small grammar/style nits; also remove duplicate “directly”.

-### Llama-Stack as Library (Library Mode)
+### Llama Stack as Library (Library Mode)

-When embedding llama-stack directly in the container, use the existing `Containerfile` directly (this will not build the llama stack service in a separate container). First modify the `lightspeed-stack.yaml` config to use llama stack in library mode.
+When embedding Llama Stack in the container, use the existing `Containerfile` (this will not build the Llama Stack service in a separate container). First modify the `lightspeed-stack.yaml` config to use library mode.

552-565: Linux SELinux note: add :Z label for podman volumes (keep RO mount)

On Linux with SELinux, add the :Z label to avoid permission issues. You can combine :ro,Z.

 podman run \
   -p 8080:8080 \
-  -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:ro \
-  -v ./run.yaml:/app-root/run.yaml:ro \
+  -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:ro,Z \
+  -v ./run.yaml:/app-root/run.yaml:ro,Z \
   -e OPENAI_API_KEY=your-api-key \
   my-lightspeed-core:latest

566-574: Fix spelling (macOS) and remove SELinux label from macOS example

macOS should be capitalized “macOS”. The :Z SELinux label isn’t needed on macOS; keep read-only mounts instead.

-For macosx users:
+For macOS users:
 ```bash
 podman run \
   -p 8080:8080 \
-  -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:Z \
-  -v ./run.yaml:/app-root/run.yaml:Z \
+  -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:ro \
+  -v ./run.yaml:/app-root/run.yaml:ro \
   -e OPENAI_API_KEY=your-api-key \
   my-lightspeed-core:latest

```

528-539: Ensure Compose filename consistency

The repository includes a single `docker-compose.yaml`, but the README’s example simply runs:

```bash
podman compose up --build
```

While Podman Compose will pick up docker-compose.yaml by default, it isn’t immediately obvious. To avoid confusion, consider one of the following:

  • Explicitly reference the file in the command:
    podman compose -f docker-compose.yaml up --build
  • Or rename docker-compose.yaml to compose.yaml and update the README accordingly.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9fa1485 and 8d9ae4d.

📒 Files selected for processing (5)
  • Containerfile (1 hunks)
  • README.md (1 hunks)
  • lightspeed-stack.yaml (2 hunks)
  • run.yaml (1 hunks)
  • test.containerfile (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • Containerfile
🚧 Files skipped from review as they are similar to previous changes (3)
  • lightspeed-stack.yaml
  • test.containerfile
  • run.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-18T10:56:55.302Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.302Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.

Applied to files:

  • README.md
🪛 LanguageTool
README.md

[grammar] ~516-~516: There might be a mistake here.
Context: ...within lightspeed-core ### Llama-Stack as Separate Service (Server Mode) When us...

(QB_NEW_EN)


[grammar] ~566-~566: Ensure spelling is correct
Context: ... \ my-lightspeed-core:latest For macosx users:bash podman run \ -p 8080:8...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (2)
README.md (2)

509-515: Nice addition: clear section framing for container builds

The new “Building Container Images” section neatly separates the two deployment modes. Good structure.


520-526: No documentation change needed: llama_stack.api_key is supported

The api_key field under llama_stack is part of the configuration model (validated in tests/unit/test_configuration.py) and is used when instantiating AsyncLlamaStackClient in src/client.py. The example in README.md is correct and should remain as is.
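For readers skimming, the referenced README fragment looks roughly like this (the key value is a placeholder):

```yaml
llama_stack:
  url: http://llama-stack:8321
  api_key: xyzzy   # placeholder
```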


@tisnik tisnik left a comment


LGTM

-v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:Z \
-v ./run.yaml:/app-root/run.yaml:Z \
-e OPENAI_API_KEY=your-api-key \
my-lightspeed-core:latest

Just a nitpick: this command is for Linux. Please swap those commands.

Command for macOS:

podman run -p 8080:8080 -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:ro \
  -v ./run.yaml:/app-root/run.yaml:ro \
  -e OPENAI_API_KEY=your-api-key \
  my-lightspeed-core:latest

Command for Linux:

podman run -p 8080:8080 -v ./lightspeed-stack.yaml:/app-root/lightspeed-stack.yaml:Z -v ./run.yaml:/app-root/run.yaml:Z -e OPENAI_API_KEY=$OPENAI_API_KEY my-lightspeed-core:latest


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
README.md (1)

552-564: Linux example: pass exported env var (per prior feedback)

The line uses a literal placeholder; for Linux we should pass the env var value as suggested previously. This keeps it consistent with the earlier export OPENAI_API_KEY=... step.

-  -e OPENAI_API_KEY=your-api-key \
+  -e OPENAI_API_KEY=$OPENAI_API_KEY \
🧹 Nitpick comments (6)
README.md (6)

516-516: Naming consistency: prefer “Llama Stack” (no hyphen) in headings

Elsewhere in the README you use “Llama Stack”. Align this header for consistency.

-### Llama-Stack as Separate Service (Server Mode)
+### Llama Stack as Separate Service (Server Mode)

518-519: Wording and product naming tweaks

  • “docker-compose.yaml” is fine, but the sentence reads a bit awkwardly and “builds two containers” may be misleading (compose might pull images). Suggest “starts two containers” and keep product names consistent.
-When using llama-stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. This builds two containers for lightspeed core and llama stack.
+When using Llama Stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. This starts two containers: lightspeed-core and Llama Stack.

541-541: Naming consistency: use “Llama Stack” (no hyphen) in this heading too

Align with earlier sections that use “Llama Stack”.

-### Llama-Stack as Library (Library Mode)
+### Llama Stack as Library (Library Mode)

543-544: Remove duplicated “directly” and align naming

Minor readability fix and naming consistency.

-When embedding llama-stack directly in the container, use the existing `Containerfile` directly (this will not build the llama stack service in a separate container). First modify the `lightspeed-stack.yaml` config to use llama stack in library mode.
+When embedding Llama Stack in the container, use the existing `Containerfile` (this will not build the Llama Stack service in a separate container). First modify the `lightspeed-stack.yaml` config to use Llama Stack in library mode.

566-574: Fix platform name capitalization: “macOS”

Use “macOS” instead of “macosx”.

-For macosx users:
+For macOS users:

576-582: Optional: add a short readiness loop before querying models

Avoids a race where the service isn’t ready when curl runs.

-```bash
-curl -H "Accept: application/json" http://localhost:8080/v1/models
-```
+```bash
+# Wait for the API to be ready, then list available models
+until curl -sf http://localhost:8080/v1/readiness >/dev/null; do
+  echo "Waiting for lightspeed-core to become ready…"
+  sleep 1
+done
+curl -H "Accept: application/json" http://localhost:8080/v1/models
+```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8d9ae4d and 7ee8ae2.

📒 Files selected for processing (5)
  • Containerfile (1 hunks)
  • README.md (1 hunks)
  • lightspeed-stack.yaml (2 hunks)
  • run.yaml (1 hunks)
  • test.containerfile (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • Containerfile
  • run.yaml
  • lightspeed-stack.yaml
  • test.containerfile
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-18T10:56:55.302Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.302Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.

Applied to files:

  • README.md
🪛 LanguageTool
README.md

[grammar] ~516-~516: There might be a mistake here.
Context: ...within lightspeed-core ### Llama-Stack as Separate Service (Server Mode) When us...

(QB_NEW_EN)


[grammar] ~566-~566: Ensure spelling is correct
Context: ... \ my-lightspeed-core:latest For macosx users:bash podman run \ -p 8080:8...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (3)
README.md (3)

509-515: Good addition: clear two-mode overview for container builds

The new “Building Container Images” section cleanly separates Server vs Library modes and fits well with the rest of the README.


522-526: No changes needed: api_key is a valid LlamaStackConfiguration field

The api_key property is defined on LlamaStackConfiguration (src/models/config.py:158–160) and is exercised in both integration (tests/integration/test_configuration.py:60–61) and unit tests (e.g., tests/unit/test_configuration.py:86). The example in README.md correctly reflects the schema.


529-539: Verified compose filename and ports are correct

The repository contains docker-compose.yaml at the root, and it exposes:

  • llama-stack on port 8321
  • lightspeed-stack on port 8080

No updates to the README are needed.
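A quick, assumed smoke test against those ports (the /v1/models endpoints are the ones used elsewhere in this thread):

```bash
curl -sf http://localhost:8321/v1/models >/dev/null && echo "llama-stack OK"
curl -sf http://localhost:8080/v1/models >/dev/null && echo "lightspeed-stack OK"
```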


@matysek matysek left a comment


/lgtm


@umago umago left a comment


Thanks, it LGTM! Just a few nitpicks. But since this is docs, I think we need to have a consistent naming for things.


# no-root user is checked in Konflux
USER 1001
USER 1001

Unrelated change

### Llama-Stack as Separate Service (Server Mode)
When using llama-stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. This builds two containers for lightspeed core and llama stack.

nit:

llama-stack (with hyphen). To keep things consistent, I'm seeing Llama-stack, llama-stack, and now llama stack. Let's pick one.


### Llama-Stack as Library (Library Mode)

When embedding llama-stack directly in the container, use the existing `Containerfile` directly (this will not build the llama stack service in a separate container). First modify the `lightspeed-stack.yaml` config to use llama stack in library mode.

ditto, llama-stack (with hyphen)

@matysek matysek merged commit ea8141c into lightspeed-core:main Aug 19, 2025
18 of 19 checks passed
