
Commit f78be9b: Update README.md
Parent: 01a4150

File tree: 1 file changed (+10, −7 lines)


README.md

```diff
@@ -51,13 +51,16 @@ https://huggingface.co/tiiuae/falcon-7b/
 https://huggingface.co/tiiuae/falcon-40b-instruct
 https://huggingface.co/tiiuae/falcon-7b-instruct
 
-**OpenAssistant here:**
-https://huggingface.co/OpenAssistant
-https://huggingface.co/OpenAssistant/falcon-7b-sft-mix-2000
-https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226
-_The sft-mix variants appear more capable than the top variants._
-_Download the 7B or 40B Falcon version, use falcon_convert.py (latest version) in 32 bit mode, then falcon_quantize to convert it to ggcc-v10_
-
+**OpenAssistant here:**
+https://huggingface.co/OpenAssistant
+https://huggingface.co/OpenAssistant/falcon-7b-sft-mix-2000
+https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226
+_The sft-mix variants appear more capable than the top variants._
+_Download the 7B or 40B Falcon version, use falcon_convert.py (latest version) in 32 bit mode, then falcon_quantize to convert it to ggcc-v10_
+
+**OpenBuddy**
+https://huggingface.co/OpenBuddy
+_update ggllm.cpp, download the HF directory into openbuddy-7b, then `python falcon_convert.py openbuddy-7b openbuddy-7b 1`, then `falcon_quantize.exe openbuddy-7b/ggml.... openbuddy-7b/q5_1 q5_1 8`_
 
 **Conversion of HF models and quantization:**
 1) use falcon_convert.py to produce a GGML v1 binary from HF - not recommended to be used directly
```
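The two-step flow the diff describes (falcon_convert.py to produce a GGML binary, then falcon_quantize to quantize it) can be sketched as a dry-run shell script. This is illustrative only: `MODEL_DIR` is an assumption, and the GGML filename is left as the placeholder `ggml....` exactly as the README truncates it, so the commands are printed rather than executed.

```shell
# Dry-run sketch of the conversion pipeline, assuming a ggllm.cpp checkout.
# MODEL_DIR is illustrative; the GGML filename is truncated in the README
# as "ggml...." and kept as a placeholder here.
MODEL_DIR=openbuddy-7b

# Step 1: convert the HF checkpoint to a GGML binary (mode 1 = 32-bit).
CONVERT_CMD="python falcon_convert.py $MODEL_DIR $MODEL_DIR 1"

# Step 2: quantize the GGML binary to q5_1 using 8 threads.
QUANTIZE_CMD="falcon_quantize $MODEL_DIR/ggml.... $MODEL_DIR/q5_1 q5_1 8"

# Print the commands instead of running them; remove the echoes to execute.
echo "$CONVERT_CMD"
echo "$QUANTIZE_CMD"
```

Run the printed commands from the ggllm.cpp directory once the HF files are downloaded into the model directory.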
