loading (partial) prompt from files
When testing different settings or loading more complex input, you usually will not want to type everything into the `-p "PROMPT"` argument.
ggllm.cpp lets you load a prompt from a text file with `-f /path/to/file`
and combine that with additional features such as:
- `-enc` : Your entire prompt will be encapsulated in the correct tokens/syntax for your model type.
So for OpenAssistant V2.5 it's `<|prompter|><|assistant|>`,
for Wizard it's `### Response:`,
etc.
This feature relies on the model having the special vocabulary (OA 2.5), on a recognizable model path or filename (oasst1), or on using `--alias`.
- `-sys` and `-sysraw` : These put a system prompt before all other prompts. `-sys` follows the `-enc` syntax, `-sysraw` does not add anything to it.
The system prompt differs from a normal prompt in that it stays at the top of the context window even if your generated output exceeds `-c` ctx.
Example: `-n 32000 -enc -sys "Write a story about a woman in the forest, searching for her lost daughter but getting harassed by wild rabbits and bugs" --ignore-eos`
This example will write 32000 tokens of a story about that woman searching for her kid; it should stay focused on that plotline even when the context is exceeded.
- `-e` : This turns every literal "\n" in your prompts into a real new line, making it easy to add line breaks.
- -p "PROMPT" : This allows to add custom strings before or after the file you specified.
Example-e -p "Summarize this code:\n" -f "sourcecode.txt" -p "\nSummarize it now please, focus on xxx." -enc
This example will construct a 3 part prompt with new lines and encapsulte the entire combination into a finetune instruction.
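
Putting it together, here is a minimal command-line sketch of the features above. The binary name (`falcon_main`), the model file, and all paths are placeholder assumptions for illustration; substitute your own build and model:

```sh
# 1. Load the whole prompt from a file instead of passing it via -p.
#    (Model path and binary name are placeholders.)
./falcon_main -m models/falcon-40b-oasst.bin -f prompts/story.txt

# 2. The -sys example above as a full command: -enc wraps the prompt in the
#    model's instruct syntax, -sys pins the instruction to the top of the
#    context window, and --ignore-eos keeps generation going for all -n tokens.
./falcon_main -m models/falcon-40b-oasst.bin -n 32000 -enc --ignore-eos \
  -sys "Write a story about a woman in the forest, searching for her lost daughter but getting harassed by wild rabbits and bugs"

# 3. The 3-part prompt example above: -e turns the literal \n into real new
#    lines, the two -p strings sandwich the file contents, and -enc wraps
#    the combined prompt into the finetune instruction syntax.
./falcon_main -m models/falcon-40b-oasst.bin \
  -e -p "Summarize this code:\n" -f sourcecode.txt \
  -p "\nSummarize it now please, focus on xxx." -enc
```

As the third example shows, the order of the `-p` and `-f` arguments determines the order of the prompt parts in the combined prompt.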