LLMlean integrates LLMs and Lean for tactic suggestions, proof completion, and more.
- 06/2025: Introduced iterative refinement mode for proof generation
- 06/2025: Added support for Kimina Prover models via Ollama (thanks to @BoltonBailey)
Here's an example of using LLMLean on problems from Mathematics in Lean (demo video: `llmlean_example.mp4`):
You can use an LLM running on your laptop, or an LLM from the OpenAI API or Together.ai API:
1. Get an OpenAI API key.

2. Modify `~/.config/llmlean/config.toml` (or `C:\Users\<Username>\AppData\Roaming\llmlean\config.toml` on Windows), and enter the following:

   ```toml
   api = "openai"
   model = "gpt-4o"
   apiKey = "<your-openai-api-key>"
   ```

   (Alternatively, you may set the API key using the environment variable `LLMLEAN_API_KEY` or using `set_option llmlean.apiKey "<your-api-key>"`.)
You can also use other providers such as Anthropic, Together.ai, or any provider adhering to the OpenAI API. See the other providers document.
3. Add `llmlean` to your lakefile:

   ```
   require llmlean from git
     "https://github.com/cmu-l3/llmlean.git"
   ```

4. Import:

   ```lean
   import LLMlean
   ```

Now use a tactic described below.
1. Install ollama.

2. Pull a language model:

   ```
   ollama pull wellecks/ntpctx-llama3-8b
   ```

3. Set two configuration variables in `~/.config/llmlean/config.toml`:

   ```toml
   api = "ollama"
   model = "wellecks/ntpctx-llama3-8b" # model name from above
   ```

Then do steps (3) and (4) above. Now use a tactic described below.
Note that there are multiple Lean models available for download from Ollama, including Kimina Prover models which use chain-of-thought reasoning. You can find a list of these models and how to configure them in the Ollama Models document.
Next-tactic suggestions via `llmstep "{prefix}"`. Examples:
The suggestions are checked in Lean.
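As a minimal sketch of how `llmstep` is invoked (the theorem here is illustrative, not from the repository):

```lean
import LLMlean

-- Call llmstep mid-proof with a tactic prefix; an empty string ""
-- asks for unconstrained suggestions. The tactic is interactive:
-- it displays checked suggestions (e.g. `exact h.symm`) in the
-- infoview, and you click one to insert it into the proof.
example (a b : Nat) (h : a = b) : b = a := by
  llmstep "exact"
```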
Complete the current proof via `llmqed`. Examples:
The suggestions are checked in Lean.
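A minimal sketch of invoking `llmqed` (again with an illustrative goal):

```lean
import LLMlean

-- llmqed attempts to close the current goal entirely, and only
-- surfaces candidate proofs that Lean accepts (for this goal, a
-- candidate might be `exact Nat.add_comm a b`).
example (a b : Nat) : a + b = b + a := by
  llmqed
```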
LLMLean supports two modes for proof generation with `llmqed`:
- Parallel mode: Generates multiple proof attempts in parallel
- Iterative refinement mode: Generates one proof attempt, analyzes any errors, and refines the proof based on feedback
To configure:

```toml
# In ~/.config/llmlean/config.toml
mode = "iterative" # or "parallel"
```
Enable verbose output to see the refinement process:

```lean
set_option llmlean.verbose true
```
For the best performance, especially for the `llmqed` tactic, we recommend using Anthropic Claude with iterative refinement mode.
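A sketch of what that configuration might look like, following the same `config.toml` pattern as the OpenAI example above (the `api` value and model name here are assumptions; consult the providers document for the exact settings):

```toml
# ~/.config/llmlean/config.toml -- illustrative, not verified
api = "anthropic"                    # assumed provider key
model = "claude-3-5-sonnet-latest"   # illustrative model name
apiKey = "<your-anthropic-api-key>"
mode = "iterative"
```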
Demo in PFR
Here is an example of proving a lemma with `llmqed` (OpenAI GPT-4o):

And using `llmqed` to make part of an existing proof simpler:
Please see the following:
Rebuild LLMLean with the config `test=on`:

```
lake -R -Ktest=on update
lake build
```
Then manually check `llmlean` on the files under `LLMleanTest`.