182 changes: 181 additions & 1 deletion registry/ggml.llamacpp/extension.json
@@ -7,8 +7,188 @@
"display_vendor": "ggml-org",
"license": "open-source"
},
"latest": "2.0.1",
"latest": "2.0.2",
"versions": {
"2.0.2": {
"sources": [
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Windows-x64.7z",
"sha256": "631ad4a64577723c4678b4f0c884f27878e1e03ab7b238b8c589f8abcd7f4dac",
"platform": {
"name": "Windows",
"architecture": "x86_64"
}
},
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Windows-arm64.7z",
"sha256": "f7b06f8a3bd9383a32dc34b98b06273551cb136ff7aba44c0f9fae7445e4cd60",
"platform": {
"name": "Windows",
"architecture": "arm64"
}
},
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Linux-x64.7z",
"sha256": "b2b77f215893626415a44fbb0bd140a6b6f20a4c87aa058f6ea5c95f41a057a8",
"platform": {
"name": "Linux",
"architecture": "x86_64"
}
},
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Linux-arm64.7z",
"sha256": "57da59b06409e6b7c1990bf7969ff84f214774a46e5e81da06aa8fc08f45382f",
"platform": {
"name": "Linux",
"architecture": "arm64"
}
},
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-macOS-universal.7z",
"sha256": "3eb4a32d0fb08287c7d6a690dc85ef9aa073d25a8a6566715342d08cdba6e66c",
"platform": {
"name": "macOS",
"architecture": "x86_64"
}
},
{
"url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-macOS-universal.7z",
"sha256": "3eb4a32d0fb08287c7d6a690dc85ef9aa073d25a8a6566715342d08cdba6e66c",
"platform": {
"name": "macOS",
"architecture": "arm64"
}
}
],
"metadata": {
"Id": "llamacpp",
"Name": "llama.qtcreator",
"Version": "2.0.2",
"CompatVersion": "2.0.2",
"Vendor": "ggml-org",
"VendorId": "ggml",
"Copyright": "(C) 2025 The llama.qtcreator Contributors, Copyright (C) The Qt Company Ltd. and other contributors.",
"License": "MIT",
"Description": "llama.cpp infill completion plugin for Qt Creator",
"LongDescription": [
"# llama.qtcreator",
"",
"Local LLM-assisted text completion for Qt Creator.",
"",
"![Qt Creator llama.cpp Text](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
"",
"---",
"",
"![Qt Creator llama.cpp Qt Widgets](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
"",
"",
"## Features",
"",
"- Auto-suggest on cursor movement. Toggle enable / disable with `Ctrl+Shift+G`",
"- Trigger the suggestion manually by pressing `Ctrl+G`",
"- Accept a suggestion with `Tab`",
"- Accept the first line of a suggestion with `Shift+Tab`",
"- Control max text generation time",
"- Configure scope of context around the cursor",
"- Ring context with chunks from open and edited files and yanked text",
"- [Supports very large contexts even on low-end hardware via smart context reuse](https://github.com/ggml-org/llama.cpp/pull/9787)",
"- Speculative FIM support",
"- Speculative Decoding support",
"- Display performance stats",
"- Chat support",
"- Source and Image drag & drop support",
"- Current editor selection predefined and custom LLM prompts",
"",
"",
"### llama.cpp setup",
"",
"The plugin requires a [llama.cpp](https://github.com/ggml-org/llama.cpp) server instance to be running at:",
"",
"![Qt Creator llama.cpp Settings](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
"",
"",
"#### Mac OS",
"",
"```bash",
"brew install llama.cpp",
"```",
"",
"#### Windows",
"",
"```bash",
"winget install llama.cpp",
"```",
"",
"#### Any other OS",
"",
"Either build from source or use the latest binaries: https://github.com/ggml-org/llama.cpp/releases",
"",
"### llama.cpp settings",
"",
"Here are recommended settings, depending on the amount of VRAM that you have:",
"",
"- More than 16GB VRAM:",
"",
" ```bash",
" llama-server --fim-qwen-7b-default",
" ```",
"",
"- Less than 16GB VRAM:",
"",
" ```bash",
" llama-server --fim-qwen-3b-default",
" ```",
"",
"- Less than 8GB VRAM:",
"",
" ```bash",
" llama-server --fim-qwen-1.5b-default",
" ```",
"",
"Use `llama-server --help` for more details.",
"",
"",
"### Recommended LLMs",
"",
"The plugin requires FIM-compatible models: [HF collection](https://huggingface.co/collections/ggml-org/llamavim-6720fece33898ac10544ecf9)",
"",
"## Examples",
"",
"### A Qt Quick example on MacBook Pro M3 `Qwen2.5-Coder 3B Q8_0`:",
"",
"![Qt Creator llama.cpp Qt Quick](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
"",
"### Chat on a Mac Studio M2 with `gpt-oss 20B`:",
"",
"![Qt Creator llama.cpp Chat](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/qtcreator-llamacpp-chat.webp)",
"",
"## Implementation details",
"",
"The plugin aims to be very simple and lightweight and at the same time to provide high-quality and performant local FIM completions, even on consumer-grade hardware. ",
"",
"## Other IDEs",
"",
"- Vim/Neovim: https://github.com/ggml-org/llama.vim",
"- VS Code: https://github.com/ggml-org/llama.vscode"
],
"Url": "https://github.com/ggml-org/llama.qtcreator",
"DocumentationUrl": "",
"Dependencies": [
{
"Id": "core",
"Version": "18.0.0"
},
{
"Id": "projectexplorer",
"Version": "18.0.0"
},
{
"Id": "texteditor",
"Version": "18.0.0"
}
]
}
},
"2.0.1": {
"sources": [
{