
LLMLean

LLMlean integrates LLMs and Lean for tactic suggestions, proof completion, and more.

News

  • 06/2025: Introduced iterative refinement mode for proof generation
  • 06/2025: Added support for Kimina Prover models via Ollama (thanks to @BoltonBailey)

Here's an example of using LLMLean on problems from Mathematics in Lean:

[Video demo: llmlean_example.mp4]

You can use an LLM running on your laptop, or an LLM from the OpenAI API or Together.ai API:

Option 1: LLM in the cloud (default):

  1. Get an OpenAI API key.

  2. Modify ~/.config/llmlean/config.toml (or C:\Users\<Username>\AppData\Roaming\llmlean\config.toml on Windows), and enter the following:

api = "openai"
model = "gpt-4o"
apiKey = "<your-openai-api-key>"

(Alternatively, you may set the API key using the environment variable LLMLEAN_API_KEY or using set_option llmlean.apiKey "<your-api-key>".)
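For instance, the set_option form can go at the top of a Lean file; the key below is a placeholder, not a real key, and real keys should be kept out of version control:

```lean
import LLMlean

-- Placeholder API key for illustration only.
set_option llmlean.apiKey "<your-api-key>"
```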

You can also use other providers such as Anthropic, Together.AI, or any provider adhering to the OpenAI API. See other providers.

  3. Add llmlean to the lakefile:

require llmlean from git
  "https://github.com/cmu-l3/llmlean.git"

  4. Import it:

import LLMlean

Now use a tactic described below.

Option 2: LLM on your laptop:

  1. Install ollama.

  2. Pull a language model:

ollama pull wellecks/ntpctx-llama3-8b

  3. Set two configuration variables in ~/.config/llmlean/config.toml:

api = "ollama"
model = "wellecks/ntpctx-llama3-8b" # model name from above

Then do steps (3) and (4) above. Now use a tactic described below.

Note that there are multiple Lean models available for download from Ollama, including Kimina Prover models which use chain-of-thought reasoning. You can find a list of these models and how to configure them in the Ollama Models document.

Tactics

llmstep tactic

Next-tactic suggestions via llmstep "{prefix}". Examples:

  • llmstep ""

  • llmstep "apply "

The suggestions are checked in Lean.
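A minimal sketch of llmstep inside a proof (the goal is illustrative; the suggestions you get depend on the configured model):

```lean
import LLMlean

example (m n : Nat) : m + n = n + m := by
  -- Ask LLMLean for next-tactic suggestions from an empty prefix;
  -- checked suggestions appear in the infoview.
  llmstep ""
```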

llmqed tactic

Complete the current proof via llmqed.

The suggestions are checked in Lean.
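A minimal sketch of llmqed (illustrative goal; the model attempts a full proof, and only attempts that check in Lean are suggested):

```lean
import LLMlean

example (m n : Nat) : m + n = n + m := by
  -- Ask LLMLean to complete the entire proof from here.
  llmqed
```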

Proof Generation Modes

LLMLean supports two modes for proof generation with llmqed:

  • Parallel mode: Generates multiple proof attempts in parallel
  • Iterative refinement mode: Generates one proof attempt, analyzes any errors, and refines the proof based on feedback

To configure:

# In ~/.config/llmlean/config.toml
mode = "iterative"  # or "parallel"

Enable verbose output to see the refinement process:

set_option llmlean.verbose true

For the best performance, especially for the llmqed tactic, we recommend using Anthropic Claude with iterative refinement mode.
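Putting the recommendation together, a config.toml sketch for Claude with iterative refinement might look as follows; the "anthropic" api value and the model name are assumptions here, so check the providers documentation for the exact values:

```toml
# ~/.config/llmlean/config.toml -- a sketch; api value and model name
# are assumptions, see the providers documentation for exact values.
api = "anthropic"
model = "claude-3-5-sonnet-latest"
apiKey = "<your-anthropic-api-key>"
mode = "iterative"
```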

Demo in PFR

Here is an example of proving a lemma with llmqed (OpenAI GPT-4o), and of using llmqed to make part of an existing proof simpler.

Customization

Please see the customization documentation in the repository.

Testing

Rebuild LLMLean with the config test=on:

lake -R -Ktest=on update
lake build

Then manually check llmlean on the files under LLMleanTest.
