Commit 6598362

📝 minor updates in readme files
Signed-off-by: Krishna Murti <[email protected]>
1 parent 9b12618 commit 6598362

2 files changed: +8 -8 lines changed

helm-charts/chatqna/README.md

Lines changed: 2 additions & 2 deletions
@@ -22,7 +22,7 @@ For LLM inference, two more microservices will be required. We can either use [T
 - [llm-ctrl-uservice](../common/llm-ctrl-uservice/README.md)
 - [vllm](../common/vllm/README.md)
 
-> **Note:** We shouldn't have both inference engine in our setup. We have to setup either of them. For this, conditional flags are added in the chart dependency. We will be switching off flag corresponding to one service and switching on the other, in order to have a proper setup of all ChatQnA dependencies.
+> **_NOTE :_** We shouldn't have both inference engine deployed. It is required to only setup either of them. To achieve this, conditional flags are added in the chart dependency. We will be switching off flag corresponding to one service and switching on the other, in order to have a proper setup of all ChatQnA dependencies.
 
 ## Installing the Chart
 

@@ -76,7 +76,7 @@ helm install chatqna chatqna --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --
 helm install chatqna chatqna --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --set global.modelUseHostPath=${MODELDIR} -f chatqna/guardrails-gaudi-values.yaml
 ```
 
-> **_NOTE:_** Default installation will use [TGI (Text Generation Inference)](https://github.com/huggingface/text-generation-inference) as inference engine. To use vLLM as inference engine, please see below.
+> **_NOTE :_** Default installation will use [TGI (Text Generation Inference)](https://github.com/huggingface/text-generation-inference) as inference engine. To use vLLM as inference engine, please see below.
 
 ```bash
 # To use vLLM inference engine on XEON device
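
The updated note says TGI is the default engine and that conditional flags in the chart dependencies are used to switch to vLLM. A minimal sketch of what that switch could look like, assuming hypothetical `tgi.enabled` and `vllm.enabled` value keys (the real toggle names are defined in the chart's values files and may differ):

```bash
# Hypothetical sketch -- the actual toggle keys live in chatqna/values.yaml
# and may not match tgi.enabled / vllm.enabled used here.
export HFTOKEN="insert-your-huggingface-token-here"   # placeholder token
export MODELDIR="/mnt/opea-models"                    # example model cache path

# Disable the default TGI dependency and enable the vLLM one instead
helm install chatqna chatqna \
  --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} \
  --set global.modelUseHostPath=${MODELDIR} \
  --set tgi.enabled=false \
  --set vllm.enabled=true
```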

helm-charts/common/llm-ctrl-uservice/README.md

Lines changed: 6 additions & 6 deletions
@@ -1,17 +1,17 @@
 # llm-ctrl Microservice
 
-Helm chart for deploying a microservice which facilitates connections and handles responses from OpenVINO vLLM microservice.
+Helm chart for deploying LLM controller microservice which facilitates connections and handles responses from OpenVINO vLLM microservice.
 
-`llm-ctrl-uservice` depends on OpenVINO vLLM. You should properly set `vLLM_ENDPOINT` as the HOST URI of vLLM microservice. If not set, it will consider the default value : `http://<helm-release-name>-vllm-openvino:80`
+`llm-ctrl-uservice` depends on vLLM microservice. You should properly set `vLLM_ENDPOINT` as the HOST URI of vLLM microservice. If not set, it will consider the default value : `http://<helm-release-name>-vllm-openvino:80`
 
 As this service depends on vLLM microservice, we can proceed in either of 2 ways:
 
-- Install both microservices separately one after another.
-- Install the vLLM microservice as dependency for the our main `llm-ctrl-uservice` microservice.
+- Install both microservices individually.
+- Install the vLLM microservice as dependency for `llm-ctrl-uservice` microservice.
 
-## (Option 1): Installing the chart separately:
+## (Option 1): Installing the charts individually:
 
-First, you need to install the `vllm-openvino` chart, please refer to the [vllm](../vllm) chart for more information.
+First, you need to install the `vllm` chart, please refer to the [vllm](../vllm) chart for more information.
 
 After you've deployed the `vllm` chart successfully, please run `kubectl get svc` to get the vLLM service name with port. We need to provide this to `llm-ctrl-uservice` as a value for vLLM_ENDPOINT for letting it discover and connect to the vLLM microservice.
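
As a rough illustration of the wiring this README describes, the service name and port reported by `kubectl get svc` can be passed to the chart at install time. This is only a sketch under assumptions: the release and service names below are placeholders, and the exact value key for the endpoint is whatever the `llm-ctrl-uservice` chart defines in its values file.

```bash
# Look up the vLLM service name and port created by the vllm chart
kubectl get svc

# Placeholder service name -- substitute the one reported by `kubectl get svc`.
# If unset, the chart falls back to http://<helm-release-name>-vllm-openvino:80
export vLLM_ENDPOINT="http://myrelease-vllm-openvino:80"

# Hypothetical install command; check the chart's values.yaml for the real endpoint key
helm install llm-ctrl-uservice ./llm-ctrl-uservice --set vLLM_ENDPOINT=${vLLM_ENDPOINT}
```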

0 commit comments