Replies: 1 comment
The default is `settings.yaml`, but it shouldn't really matter if you provide a different name on the command line.
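For example, assuming the `poe index` task forwards its arguments through to the `graphrag index` CLI (I haven't checked the task definition), either invocation should resolve the config:

```powershell
# Picks up the default settings.yaml under --root
uv run poe index --root .

# Points at a differently named config file explicitly
uv run poe index --root . --config ./graphrag.yml
```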
---
Hello GraphRAG Team,
I'm trying to run the indexing engine for the first time on a fresh clone of the repository, following the development guide. I'm running into a ValidationError and would appreciate some help.
My Environment:
- Operating System: Windows 11
- Terminal: PowerShell
- Setup Method: Local development setup using `uv` (not Docker)
Steps I've Taken:
1. Cloned the latest version of the repository.
2. Installed dependencies successfully using `uv sync --extra dev` inside a virtual environment.
3. Created an `input` folder with several `.txt` files.
4. After encountering a `FileNotFoundError`, I created a `graphrag.yml` file in the root directory (`D:\graphrag`).
My `graphrag.yml` file content:

```yaml
# This top-level block defines my API key
llm:
  type: openai
  api_key: "sk-..." # (My actual key is here)

# This new block defines the roles for my models
models:
  - id: "default_chat_model"
    type: llm
    model: "gpt-4o-mini"
  - id: "default_embedding_model"
    type: text_embedding
    model: "text-embedding-3-small"

# These sections remain the same
input:
  type: text
  base_dir: "./input"
output:
  base_dir: "./output"
```
Commands I am running:
I am in the root directory (`D:\graphrag`) with the virtual environment activated. I've tried both of the following:

```powershell
# Attempt 1
uv run poe index --root .

# Attempt 2 (explicitly pointing to the config file)
uv run poe index --root . --config ./graphrag.yml
```
The Error:
Both commands result in the same `ValidationError`. It seems the configuration is being read, but the `models` definition for `default_chat_model` isn't being recognized correctly.

Here is the traceback:

```
ValidationError: 1 validation error for GraphRagConfig
  Value error, A default_chat_model model configuration is required. Please rerun `graphrag init` and set models["default_chat_model"] in settings.yaml. [type=value_error, input_value={'llm': {'type': 'openai'...ot_dir': 'D:\graphrag'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/value_error
```
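Reading the error literally, it sounds like the validator expects `models` to be a mapping keyed by the model id (so it can look up `models["default_chat_model"]`), rather than the list of `id` entries I wrote. If so, I'd guess the intended shape is something like the sketch below; the `type` values are my assumption from skimming the docs, not something I've verified:

```yaml
# Guess at the expected shape: models keyed by id, not a list.
# The type values (openai_chat / openai_embedding) are assumptions.
models:
  default_chat_model:
    type: openai_chat
    model: "gpt-4o-mini"
    api_key: "sk-..."
  default_embedding_model:
    type: openai_embedding
    model: "text-embedding-3-small"
    api_key: "sk-..."
```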
It's also strange that the error message mentions `settings.yaml`, while the documentation seems to point towards `graphrag.yml`.
Could you please advise on what might be wrong with my configuration or the steps I'm taking?
Thank you for your help!