KG-452 Add documentation about LLMParams #991
base: develop
Conversation
Qodana for JVM
17342 new problems were found
@@ Code coverage @@
+ 70% total lines covered
14810 lines analyzed, 10439 lines covered
# Calculated according to the filters of your coverage tool
☁️ View the detailed Qodana report
Contact Qodana team: contact us at [email protected]
| -->
| ```kotlin
| val prompt = prompt(
|     "dev-assistant",
parameter names?
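For illustration, a hedged sketch of what named arguments could look like at this call site; the parameter names `id` and `params` are assumptions, not taken from the diff:

```kotlin
// Hedged sketch: the prompt() parameter names (id, params) and the builder body
// are assumed, not taken from the diff under review.
val prompt = prompt(
    id = "dev-assistant",
    params = LLMParams(temperature = 0.7)
) {
    system("You are a helpful development assistant.")
}
```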
| ## LLM parameter reference
|
| The following table provides a reference of LLM parameters included in the `LLMParams` class and supported by all LLM providers that are available in Koog out of the box.
Maybe the table should mention default values, wdyt?
This is a good idea, but it might be difficult to maintain in the future, since each provider can change its default values. It might be better to refer to their documentation instead.
| The `ToolChoice` class controls how the language model uses tools. It provides the following options:
|
| 1. **Named** (`LLMParams.ToolChoice.Named`): the language model calls the specified tool.
I don't think there is a need to name the subclass and then type it out from the top-level class in parentheses. Just `LLMParams.ToolChoice.Named` as the list item name is fine. Also, this is not a sequence, so it should be unordered. Maybe even use a definition list? IMO that's what def lists are for. But not everyone likes how they look compared to lists with bullet points, and it requires adding the def_list extension: https://squidfunk.github.io/mkdocs-material/reference/lists/
| -->
| ```kotlin
| // Use a specific tool
| val specificToolParams = LLMParams(
Having both the list of possible tool choices and these examples seems redundant. I'd leave just the list without examples in this case, because they are very simple and pretty much the same. Maybe add only one example for the named tool choice, because you need to pass the name of the tool there. If you think the examples are useful, then maybe drop the list and have only examples with descriptions in comments?
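For reference, a hedged sketch of the single named-tool-choice example suggested above; it assumes `LLMParams` takes a `toolChoice` argument and that `Named` takes the tool name, and the tool name `searchWeb` is made up:

```kotlin
// Hedged sketch: LLMParams(toolChoice = ...) and Named(name = ...) are assumed
// from the diff; the tool name "searchWeb" is hypothetical.
val namedToolParams = LLMParams(
    toolChoice = LLMParams.ToolChoice.Named(name = "searchWeb")
)
```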
| - DeepSeek: `DeepSeekParams`
| - OpenRouter: `OpenRouterParams`
| - The supported parameters for OpenAI may differ between the underlying OpenAI APIs:
Adding this nesting level seems unnecessary. And this may also be a case for a def list or maybe just a flat bulleted list. Something like this:
DeepSeekParams
: Parameters specific to DeepSeek models.

OpenRouterParams
: Parameters specific to OpenRouter models.

OpenAIChatParams
: Parameters specific to the OpenAI Chat Completions API.

OpenAIResponsesParams
: Parameters specific to the OpenAI Responses API.
|     - OpenAI Chat: `OpenAIChatParams`. Parameters specific to the OpenAI Chat Completions API.
|     - OpenAI Responses: `OpenAIResponsesParams`. Parameters specific to the OpenAI Responses API.
| Here is the complete reference of provider-specific parameters in Koog:
Also, maybe it would be nice to have defaults mentioned?
| | `topLogprobs` | OpenAI Chat, OpenAI Responses, DeepSeek, OpenRouter | Integer | Number of top most likely tokens per position. Takes a value in the range of 0–20. Requires the `logprobs` parameter to be set to `true`. |
| | `frequencyPenalty` | OpenAI Chat, DeepSeek, OpenRouter | Double | Penalizes frequent tokens to reduce repetition. Higher `frequencyPenalty` values result in larger variations of phrasing and reduced repetition. Takes a value in the range of -2.0 to 2.0. |
| | `presencePenalty` | OpenAI Chat, DeepSeek, OpenRouter | Double | Prevents the model from reusing tokens that have already been included in the output. Higher values encourage the introduction of new tokens and topics. Takes a value in the range of -2.0 to 2.0. |
| | `stop` | OpenAI Chat, DeepSeek, OpenRouter | List<String> | Stop sequences. The model stops generating content when it encounters any of the sequences in the list. A list of strings containing maximum 4 items. |
This seems like the least obvious one in terms of syntax. What are "items" in a string? Like words? An example would be helpful here.
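A hedged sketch of the kind of example being requested, assuming `OpenAIChatParams` exposes `stop` as a constructor parameter as the table implies; the stop sequences themselves are illustrative:

```kotlin
// Hedged sketch: assumes OpenAIChatParams exposes `stop` as a constructor
// parameter, per the table. Each item is a full string; generation halts as
// soon as the model emits any of them.
val params = OpenAIChatParams(
    stop = listOf("Observation:", "\n\nUser:")
)
```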
| | `parallelToolCalls` | OpenAI Chat, OpenAI Responses | Boolean | If `true`, multiple tool calls can run in parallel. |
This comment is more from the user's perspective, not related to content but to the API itself. Maybe I don't get something, but my understanding was that by default an LLM request node can return a tool call, because a tool also returns one specific result. If you want to expect parallel tool calls, you need to use a different node, because parallel tool calls will return a list (or a map?) of tool results. So I don't understand what we control here if this is controlled by the type of node. But I didn't look at the API myself, this was just my first thought when I read this)
| | `repetitionPenalty` | OpenRouter | Double | Penalizes token repetition. Next-token probabilities for tokens that already appeared in the output are divided by the value of `repetitionPenalty`, which makes them less likely to appear again if `repetitionPenalty > 1`. Takes a value greater than 0.0 and lower than or equal to 2.0. |
| | `minP` | OpenRouter | Double | Filters out tokens whose relative probability to the most likely token is below the defined `minP` value. Takes a value in the range of 0.0–0.1. |
| | `topA` | OpenRouter | Double | Dynamically adjusts the sampling window based on model confidence. If the model is confident (there are dominant high-probability next tokens), it keeps the sampling window limited to a few top tokens. If the confidence is low (there are many tokens with similar probabilities), keeps more tokens in the sampling window. Takes a value in the range of 0.0–0.1 (inclusive). Higher value means greater dynamic adaptation. |
| | `transforms` | OpenRouter | List<String> | List of context transforms. Defines how context is transformed when it exceeds the model's token limit. The default transformation is `middle-out`, which truncates from the middle of the prompt. Use an empty list for no transformations. |
An example would also be helpful here, or a link to the relevant OpenRouter docs?
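A hedged sketch of such an example, assuming `OpenRouterParams` exposes `transforms` as a constructor parameter as the table implies:

```kotlin
// Hedged sketch: assumes OpenRouterParams exposes `transforms` as a
// constructor parameter, per the table.
val withDefaultTransform = OpenRouterParams(
    transforms = listOf("middle-out") // truncate from the middle of the prompt
)
val withoutTransforms = OpenRouterParams(
    transforms = emptyList()          // disable context transformations
)
```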
| ```
| <!--- KNIT example-llm-parameters-12.kt -->
| ### Setting and overriding default parameters
Is this also how I'd combine generic LLM params with provider-specific ones?
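One hedged reading of how that combination could look, assuming the provider-specific classes accept the common `LLMParams` fields alongside their own; all values are illustrative:

```kotlin
// Hedged sketch: assumes OpenAIChatParams accepts the common LLMParams fields
// (such as temperature) alongside OpenAI Chat-specific ones. Values are
// illustrative only.
val params = OpenAIChatParams(
    temperature = 0.3,     // common LLMParams parameter
    frequencyPenalty = 0.5 // OpenAI Chat-specific parameter
)
```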
| <!--- KNIT example-llm-parameters-01.kt -->
|
| For more information about prompt creation, see [Prompt API](prompt-api.md).
It seems to me there shouldn’t be any indentation here
| | Parameter | Type | Description |
| |-----------|------|-------------|
| | `temperature` | Double | Controls randomness in the output. Higher values, such as 0.7–1.0, produce more diverse and creative responses, while lower values produce more deterministic and focused responses. Takes a value in the range of 0.0–2.0. |
The valid ranges differ between providers: some use [0, 1], some use [0, 2].
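For context, a hedged sketch of setting this common parameter, assuming `LLMParams` exposes `temperature` as a constructor parameter; the value is illustrative:

```kotlin
// Hedged sketch: assumes LLMParams exposes `temperature` as a constructor
// parameter, per the table row above. 0.7 leans toward more creative output.
val creativeParams = LLMParams(temperature = 0.7)
```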
Add documentation for LLM parameters (LLMParams + provider-specific parameters).
Type of the changes

Checklist
- `develop` as the base branch

Additional steps for pull requests adding a new feature