Commit 03aba57: Update ggml.llamacpp to v2.0.2 (#43)
2 parents: 491c094 + aad788e
File tree: 1 file changed, +181 −1 lines

registry/ggml.llamacpp/extension.json (181 additions, 1 deletion)
@@ -7,8 +7,188 @@
     "display_vendor": "ggml-org",
     "license": "open-source"
   },
-  "latest": "2.0.1",
+  "latest": "2.0.2",
   "versions": {
+    "2.0.2": {
+      "sources": [
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Windows-x64.7z",
+          "sha256": "631ad4a64577723c4678b4f0c884f27878e1e03ab7b238b8c589f8abcd7f4dac",
+          "platform": {
+            "name": "Windows",
+            "architecture": "x86_64"
+          }
+        },
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Windows-arm64.7z",
+          "sha256": "f7b06f8a3bd9383a32dc34b98b06273551cb136ff7aba44c0f9fae7445e4cd60",
+          "platform": {
+            "name": "Windows",
+            "architecture": "arm64"
+          }
+        },
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Linux-x64.7z",
+          "sha256": "b2b77f215893626415a44fbb0bd140a6b6f20a4c87aa058f6ea5c95f41a057a8",
+          "platform": {
+            "name": "Linux",
+            "architecture": "x86_64"
+          }
+        },
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-Linux-arm64.7z",
+          "sha256": "57da59b06409e6b7c1990bf7969ff84f214774a46e5e81da06aa8fc08f45382f",
+          "platform": {
+            "name": "Linux",
+            "architecture": "arm64"
+          }
+        },
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-macOS-universal.7z",
+          "sha256": "3eb4a32d0fb08287c7d6a690dc85ef9aa073d25a8a6566715342d08cdba6e66c",
+          "platform": {
+            "name": "macOS",
+            "architecture": "x86_64"
+          }
+        },
+        {
+          "url": "https://github.com/cristianadam/llama.qtcreator/releases/download/v2.0.2/LlamaCpp-2.0.2-macOS-universal.7z",
+          "sha256": "3eb4a32d0fb08287c7d6a690dc85ef9aa073d25a8a6566715342d08cdba6e66c",
+          "platform": {
+            "name": "macOS",
+            "architecture": "arm64"
+          }
+        }
+      ],
+      "metadata": {
+        "Id": "llamacpp",
+        "Name": "llama.qtcreator",
+        "Version": "2.0.2",
+        "CompatVersion": "2.0.2",
+        "Vendor": "ggml-org",
+        "VendorId": "ggml",
+        "Copyright": "(C) 2025 The llama.qtcreator Contributors, Copyright (C) The Qt Company Ltd. and other contributors.",
+        "License": "MIT",
+        "Description": "llama.cpp infill completion plugin for Qt Creator",
+        "LongDescription": [
+          "# llama.qtcreator",
+          "",
+          "Local LLM-assisted text completion for Qt Creator.",
+          "",
+          "![Qt Creator llama.cpp Text](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
+          "",
+          "---",
+          "",
+          "![Qt Creator llama.cpp Qt Widgets](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
+          "",
+          "",
+          "## Features",
+          "",
+          "- Auto-suggest on cursor movement. Toggle enable / disable with `Ctrl+Shift+G`",
+          "- Trigger the suggestion manually by pressing `Ctrl+G`",
+          "- Accept a suggestion with `Tab`",
+          "- Accept the first line of a suggestion with `Shift+Tab`",
+          "- Control max text generation time",
+          "- Configure scope of context around the cursor",
+          "- Ring context with chunks from open and edited files and yanked text",
+          "- [Supports very large contexts even on low-end hardware via smart context reuse](https://github.com/ggml-org/llama.cpp/pull/9787)",
+          "- Speculative FIM support",
+          "- Speculative Decoding support",
+          "- Display performance stats",
+          "- Chat support",
+          "- Source and Image drag & drop support",
+          "- Current editor selection predefined and custom LLM prompts",
+          "",
+          "",
+          "### llama.cpp setup",
+          "",
+          "The plugin requires a [llama.cpp](https://github.com/ggml-org/llama.cpp) server instance to be running at:",
+          "",
+          "![Qt Creator llama.cpp Settings](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
+          "",
+          "",
+          "#### Mac OS",
+          "",
+          "```bash",
+          "brew install llama.cpp",
+          "```",
+          "",
+          "#### Windows",
+          "",
+          "```bash",
+          "winget install llama.cpp",
+          "```",
+          "",
+          "#### Any other OS",
+          "",
+          "Either build from source or use the latest binaries: https://github.com/ggml-org/llama.cpp/releases",
+          "",
+          "### llama.cpp settings",
+          "",
+          "Here are recommended settings, depending on the amount of VRAM that you have:",
+          "",
+          "- More than 16GB VRAM:",
+          "",
+          "  ```bash",
+          "  llama-server --fim-qwen-7b-default",
+          "  ```",
+          "",
+          "- Less than 16GB VRAM:",
+          "",
+          "  ```bash",
+          "  llama-server --fim-qwen-3b-default",
+          "  ```",
+          "",
+          "- Less than 8GB VRAM:",
+          "",
+          "  ```bash",
+          "  llama-server --fim-qwen-1.5b-default",
+          "  ```",
+          "",
+          "Use `llama-server --help` for more details.",
+          "",
+          "",
+          "### Recommended LLMs",
+          "",
+          "The plugin requires FIM-compatible models: [HF collection](https://huggingface.co/collections/ggml-org/llamavim-6720fece33898ac10544ecf9)",
+          "",
+          "## Examples",
+          "",
+          "### A Qt Quick example on MacBook Pro M3 `Qwen2.5-Coder 3B Q8_0`:",
+          "",
+          "![Qt Creator llama.cpp Qt Quick](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/[email protected])",
+          "",
+          "### Chat on a Mac Studio M2 with `gpt-oss 20B`:",
+          "",
+          "![Qt Creator llama.cpp Chat](https://raw.githubusercontent.com/cristianadam/llama.qtcreator/refs/heads/main/screenshots/qtcreator-llamacpp-chat.webp)",
+          "",
+          "## Implementation details",
+          "",
+          "The plugin aims to be very simple and lightweight and at the same time to provide high-quality and performant local FIM completions, even on consumer-grade hardware.",
+          "",
+          "## Other IDEs",
+          "",
+          "- Vim/Neovim: https://github.com/ggml-org/llama.vim",
+          "- VS Code: https://github.com/ggml-org/llama.vscode"
+        ],
+        "Url": "https://github.com/ggml-org/llama.qtcreator",
+        "DocumentationUrl": "",
+        "Dependencies": [
+          {
+            "Id": "core",
+            "Version": "18.0.0"
+          },
+          {
+            "Id": "projectexplorer",
+            "Version": "18.0.0"
+          },
+          {
+            "Id": "texteditor",
+            "Version": "18.0.0"
+          }
+        ]
+      }
+    },
     "2.0.1": {
       "sources": [
         {
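Each entry in the `sources` array above pairs a download `url` with a `sha256` checksum and a `platform` selector (`name` plus `architecture`). A consumer of this registry would pick the source matching the local platform and verify the archive's hash before installing. The sketch below illustrates that flow; `pick_source` and `verify_archive` are hypothetical helper names for this example, not part of any actual registry tooling.

```python
import hashlib

def pick_source(sources, os_name, arch):
    # Return the first source whose platform matches the given OS name
    # and architecture, or None if no entry matches.
    for src in sources:
        plat = src.get("platform", {})
        if plat.get("name") == os_name and plat.get("architecture") == arch:
            return src
    return None

def verify_archive(path, expected_sha256):
    # Hash the downloaded archive in chunks and compare against the
    # registry's sha256 field (lowercase hex, as in the entries above).
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Entries shaped like the "2.0.2" sources above (URLs and hashes shortened):
sources = [
    {"url": "LlamaCpp-2.0.2-Windows-x64.7z", "sha256": "631a...",
     "platform": {"name": "Windows", "architecture": "x86_64"}},
    {"url": "LlamaCpp-2.0.2-Linux-arm64.7z", "sha256": "57da...",
     "platform": {"name": "Linux", "architecture": "arm64"}},
]
match = pick_source(sources, "Linux", "arm64")
print(match["url"])  # LlamaCpp-2.0.2-Linux-arm64.7z
```

Note that the two macOS entries in the diff intentionally share one URL and checksum: the archive is a universal binary, listed once per architecture so lookups by `(name, architecture)` succeed for both `x86_64` and `arm64`.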
