Commit db4e5c7

Readme: Add Sambanova documentation (#803)
* README.org: Add Sambanova documentation.
1 parent 118431e commit db4e5c7

File tree

1 file changed: +54 -25 lines changed

README.org

@@ -8,31 +8,32 @@
gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.

#+html: <div align="center">
| LLM Backend          | Supports | Requires |
|----------------------+----------+----------------------------|
| ChatGPT              | ✓ | [[https://platform.openai.com/account/api-keys][API key]] |
| Anthropic (Claude)   | ✓ | [[https://www.anthropic.com/api][API key]] |
| Gemini               | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
| Ollama               | ✓ | [[https://ollama.ai/][Ollama running locally]] |
| Llama.cpp            | ✓ | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
| Llamafile            | ✓ | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
| GPT4All              | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
| Kagi FastGPT         | ✓ | [[https://kagi.com/settings?p=api][API key]] |
| Kagi Summarizer      | ✓ | [[https://kagi.com/settings?p=api][API key]] |
| Azure                | ✓ | Deployment and API key |
| Groq                 | ✓ | [[https://console.groq.com/keys][API key]] |
| Mistral Le Chat      | ✓ | [[https://console.mistral.ai/api-keys][API key]] |
| Perplexity           | ✓ | [[https://docs.perplexity.ai/docs/getting-started][API key]] |
| OpenRouter           | ✓ | [[https://openrouter.ai/keys][API key]] |
| together.ai          | ✓ | [[https://api.together.xyz/settings/api-keys][API key]] |
| Anyscale             | ✓ | [[https://docs.endpoints.anyscale.com/][API key]] |
| PrivateGPT           | ✓ | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
| DeepSeek             | ✓ | [[https://platform.deepseek.com/api_keys][API key]] |
| Sambanova (DeepSeek) | ✓ | [[https://cloud.sambanova.ai/apis][API key]] |
| Cerebras             | ✓ | [[https://cloud.cerebras.ai/][API key]] |
| Github Models        | ✓ | [[https://github.com/settings/tokens][Token]] |
| Novita AI            | ✓ | [[https://novita.ai/model-api/product/llm-api?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][Token]] |
| xAI                  | ✓ | [[https://console.x.ai?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][API key]] |
| Github CopilotChat   | ✓ | Github account |
#+html: </div>

*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -110,6 +111,7 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
- [[#openrouter][OpenRouter]]
- [[#privategpt][PrivateGPT]]
- [[#deepseek][DeepSeek]]
- [[#sambanova-deepseek][Sambanova (DeepSeek)]]
- [[#cerebras][Cerebras]]
- [[#github-models][Github Models]]
- [[#novita-ai][Novita AI]]
@@ -748,6 +750,33 @@ The above code makes the backend available to select. If you want it to be the

#+html: </details>
#+html: <details><summary>
**** Sambanova (DeepSeek)
#+html: </summary>

SambaNova offers several LLMs through its SambaNova Cloud service, DeepSeek-R1 among them. Token throughput for DeepSeek-R1 via SambaNova is about six times higher than when it is accessed through deepseek.com.

Register a backend with
#+begin_src emacs-lisp
(gptel-make-openai "Sambanova"     ;Any name you want
  :host "api.sambanova.ai"
  :endpoint "/v1/chat/completions"
  :stream t                        ;for streaming responses
  :key "your-api-key"              ;can be a function that returns the key
  :models '(DeepSeek-R1))
#+end_src

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
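Since =:key= accepts a function, you can avoid hard-coding the key in your configuration. A minimal sketch (not part of this commit), assuming the key is stored in =~/.authinfo= under the host =api.sambanova.ai=:

#+begin_src emacs-lisp
;; Sketch: read the API key from auth-source instead of hard-coding it.
;; Assumes a line like the following in ~/.authinfo or ~/.authinfo.gpg:
;;   machine api.sambanova.ai login apikey password <your-key>
(gptel-make-openai "Sambanova"
  :host "api.sambanova.ai"
  :endpoint "/v1/chat/completions"
  :stream t
  ;; Called each time the key is needed, so the key never sits in your init file
  :key (lambda () (auth-source-pick-first-password :host "api.sambanova.ai"))
  :models '(DeepSeek-R1))
#+end_src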

***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, set it as the value of =gptel-backend=. Add these two lines to your configuration:
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'DeepSeek-R1)
(setq gptel-backend (gptel-get-backend "Sambanova"))
#+end_src
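You can also use the backend programmatically for one-off requests without changing the default, via gptel's =gptel-request= entry point. A hedged sketch, assuming the "Sambanova" backend has been registered as above:

#+begin_src emacs-lisp
;; Sketch: dynamically bind the backend and model for a single request.
;; `gptel-backend' and `gptel-model' are dynamic variables, so a `let'
;; binding scopes the change to this call only.
(let ((gptel-backend (gptel-get-backend "Sambanova"))
      (gptel-model 'DeepSeek-R1))
  (gptel-request "Say hello in one word."
    :callback (lambda (response info)
                ;; RESPONSE is a string on success, nil on failure
                (if (stringp response)
                    (message "Sambanova replied: %s" response)
                  (message "Request failed: %s" (plist-get info :status))))))
#+end_src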
#+html: </details>
#+html: <details><summary>

**** Cerebras
#+html: </summary>
