Absolute Zero Paradigm

🔗 Links


πŸ“ Roadmap


✅ Release training code
⏳ Release evaluation code
⏳ Update veRL
⏳ Upgrade Python executor

βš™οΈ Algorithm Flow


Our approach centers on repeatedly iterating two steps:

  1. PROPOSE: The model generates reasoning tasks of three types: abduction, deduction, and induction. Proposed tasks are validated with Python execution and assigned a learnability reward.

  2. SOLVE: The model then attempts to solve these self-generated tasks. Solutions are verified through Python execution, receiving an accuracy reward.

The model is trained on both phases with Task-Relative REINFORCE++ (TRR++), creating a self-evolving loop that strengthens reasoning without any external training data.
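The sketch below shows the shape of one such self-play step. It is a minimal illustration: the function names, the faked model calls, and the reward details are placeholders, not this repository's actual API.

# Minimal, illustrative sketch of one propose/solve self-play step.
# Function names and reward shapes are placeholders, not this repo's actual API.
import random

def run_python(program: str, x):
    """Execute a proposed program and return f(x), or None if it errors."""
    env = {}
    try:
        exec(program, env)  # research-only; the real executor is similarly unsandboxed
        return env["f"](x)
    except Exception:
        return None

def propose(model):
    """PROPOSE: the model writes a (program, input) task; faked here."""
    return "def f(x):\n    return x * 2 + 1", random.randint(0, 9)

def solve(model, program, x):
    """SOLVE: the model predicts f(x) (a deduction task); faked here."""
    return random.choice([x * 2 + 1, 0])

def self_play_step(model, n_rollouts=4):
    program, x = propose(model)
    gold = run_python(program, x)
    if gold is None:  # task failed validation with the Python executor: no reward
        return 0.0, 0.0
    solve_rate = sum(solve(model, program, x) == gold for _ in range(n_rollouts)) / n_rollouts
    r_propose = 1.0 - solve_rate if 0.0 < solve_rate < 1.0 else 0.0  # learnability reward
    r_solve = float(solve(model, program, x) == gold)                # accuracy reward
    # Both rewards would then drive a TRR++ policy update (omitted here).
    return r_propose, r_solve

print(self_play_step(model=None))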

Absolute Zero Reasoner (overview figure)

📊 Results


Main Results

Our approach achieves strong performance across both code and math reasoning benchmarks without using any external data:

| Model | Base | #data | Code Avg | Math Avg | Total Avg |
|---|---|---|---|---|---|
| Base Models | | | | | |
| Qwen2.5-7B | - | - | 52.0 | 27.5 | 39.8 |
| Qwen2.5-7B-Ins | - | - | 56.3 | 37.0 | 46.7 |
| Qwen2.5-7B-Coder | - | - | 56.6 | 23.9 | 40.2 |
| Reasoners Trained on Curated Code Data | | | | | |
| AceCoder-RM | Ins | 22k | 58.3 | 37.4 | 47.9 |
| AceCoder-RM | Coder | 22k | 57.3 | 27.5 | 42.4 |
| AceCoder-Rule | Ins | 22k | 55.4 | 36.9 | 46.2 |
| AceCoder-Rule | Coder | 22k | 60.0 | 28.5 | 44.3 |
| CodeR1-LC2k | Ins | 2k | 60.5 | 35.6 | 48.0 |
| CodeR1-12k | Ins | 10k | 61.3 | 33.5 | 47.4 |
| Reasoners Trained on Curated Math Data | | | | | |
| PRIME-Zero | Coder | 484k | 37.2 | 45.8 | 41.5 |
| SimpleRL-Zoo | Base | 8.5k | 54.0 | 38.5 | 46.3 |
| Oat-Zero | Math | 8.5k | 45.4 | 44.3 | 44.9 |
| ORZ | Base | 57k | 55.6 | 41.6 | 48.6 |
| Absolute Zero Training w/ No Curated Data (Ours) | | | | | |
| AZR (Ours) | Base | 0 | 55.2 (+3.2) | 38.4 (+10.9) | 46.8 (+7.0) |
| AZR (Ours) | Coder | 0 | 61.6 (+5.0) | 39.1 (+15.2) | 50.4 (+10.2) |

Scaling Results

AZR shows consistent improvements across model sizes and types:

| Model Family | Variant | Code Avg | Math Avg | Total Avg |
|---|---|---|---|---|
| Llama3.1-8b | - | 28.5 | 3.4 | 16.0 |
| Llama3.1-8b | + AZR (Ours) | 31.6 (+3.1) | 6.8 (+3.4) | 19.2 (+3.2) |
| Qwen2.5-3B | Coder | 51.2 | 18.8 | 35.0 |
| Qwen2.5-3B | Coder + AZR (Ours) | 54.9 (+3.7) | 26.5 (+7.7) | 40.7 (+5.7) |
| Qwen2.5-7B | Coder | 56.6 | 23.9 | 40.2 |
| Qwen2.5-7B | Coder + AZR (Ours) | 61.6 (+5.0) | 39.1 (+15.2) | 50.4 (+10.2) |
| Qwen2.5-14B | Coder | 60.0 | 20.2 | 40.1 |
| Qwen2.5-14B | Coder + AZR (Ours) | 63.6 (+3.6) | 43.0 (+22.8) | 53.3 (+13.2) |

✨ Getting Started


🎄 Environment Setup

conda create -n azr python=3.10
conda activate azr
conda install nvidia/label/cuda-12.4.1::cuda-toolkit
cd verl
pip install -e .
cd ..
pip install wheel
pip install flash-attn --no-build-isolation
pip install -r requirements.txt
pip uninstall vllm
pip install vllm==0.7.3
pip install transformers==4.47.1
pip install "math-verify[antlr4_9_3]"
pip install debugpy
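
Optionally, you can sanity-check the environment with a quick import test. This snippet is our own addition, not part of the official setup:

# Optional sanity check for the azr environment (not part of the official setup).
import torch, flash_attn, transformers, vllm
print("torch CUDA:", torch.version.cuda)            # built against CUDA 12.4
print("flash-attn:", flash_attn.__version__)
print("transformers:", transformers.__version__)    # expect 4.47.1
print("vllm:", vllm.__version__)                    # expect 0.7.3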

💾 Data Processing

Process the CruxEval / LiveCodeBench Execution evaluation data used during AZR self-play:

python -m absolute_zero_reasoner.data_construction.process_code_reasoning_data

πŸ‹οΈ Training


⚠️WARNING⚠️: The Python executor in this repository is very raw and intended for research purposes only. It is not secure for production environments. We plan to update our executor to more secure implementations in the future. Your use of our code is at your own discretion and risk.

🫛 Seeding (Optional)

The seed datasets we collected by prompting each model are provided in data/. If you want to create your own seed data, use the following script:

export OUTPUT_SEED_PATH=data/<new_ded_abd_seed_data_name>.jsonl
export OUTPUT_CODE_F_SEED_PATH=data/<new_ind_seed_data_name>.jsonl
bash scripts/seeding/<7b|14b|coder3b|coder7b|coder14b|llama>.sh
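
The seed files are plain JSONL (one JSON object per line). If you want to peek at what a seeding run produced, something like the snippet below works; the fallback path is only an example, and since the exact fields depend on the dataset we just print the keys:

# Inspect a generated seed dataset; reads the path exported as OUTPUT_SEED_PATH above.
import json, os
path = os.environ.get("OUTPUT_SEED_PATH", "data/example_ded_abd_seed_data.jsonl")  # example fallback
with open(path) as f:
    rows = [json.loads(line) for line in f if line.strip()]
print(len(rows), "seed entries; first entry keys:", sorted(rows[0].keys()))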

β™ŸοΈ Self-play

3B models need 2× 80GB GPUs, 7/8B models need 4× 80GB, and 14B models need 8× 80GB.

bash scripts/selfplay/<7b|14b|coder3b|coder7b|coder14b|llama>.sh

If you want to use your own ded/abd or ind seed dataset:

export OUTPUT_SEED_PATH=data/<your_ded_abd_seed_data_name>.jsonl
export OUTPUT_CODE_F_SEED_PATH=data/<your_ind_seed_data_name>.jsonl
bash scripts/selfplay/<7b|14b|coder3b|coder7b|coder14b|llama>.sh

🌚 Resuming Runs

When resuming a run, pass the original run's wandb id into the script, i.e., trainer.wandb_run_id=<run_id>.

🤗 Converting veRL checkpoints to HF format

python -m absolute_zero_reasoner.utils.convert2hf \
  <veRL_ckpt_path>/actor \
  <veRL_ckpt_path>/actor/huggingface/ \
  <hf_ckpt_path>
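
After conversion, the output directory is a standard Hugging Face checkpoint and loads as usual; the path placeholder below is the same as in the command above:

# Load the converted checkpoint like any Hugging Face causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer
hf_ckpt_path = "<hf_ckpt_path>"  # same placeholder as in the conversion command
tokenizer = AutoTokenizer.from_pretrained(hf_ckpt_path)
model = AutoModelForCausalLM.from_pretrained(hf_ckpt_path, torch_dtype="auto")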

📈 Design Your Own Intrinsic Rewards!

In the configs, add your own rewards to azr.reward.generation_reward_config; for reference, check the ones already implemented, such as the diversity and complexity rewards. Be creative!

🔧 Usage


We use the DeepSeek R1 <think> and <answer> tags in our prompt template:

A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>. User: {question}\nAssistant: <think>
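
For example, filling the template before generation looks like this; a minimal illustration, and the repo's own prompt-construction code may differ:

# Fill the R1-style template with a question; the model continues after "<think>".
TEMPLATE = (
    "A conversation between User and Assistant. The user asks a question, and the Assistant "
    "solves it. The assistant first thinks about the reasoning process in the mind and then "
    "provides the user with the answer. The reasoning process and answer are enclosed within "
    "<think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process "
    "here </think> <answer> answer here </answer>. User: {question}\nAssistant: <think>"
)
prompt = TEMPLATE.format(question="What does f(x) = x * 2 + 1 return for x = 3?")
print(prompt)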

📃 Evaluation Code


TODO

🎈 Citation


If you find Absolute Zero Reasoner helpful, please cite us.

@misc{zhao2025absolutezeroreinforcedselfplay,
      title={Absolute Zero: Reinforced Self-play Reasoning with Zero Data}, 
      author={Andrew Zhao and Yiran Wu and Yang Yue and Tong Wu and Quentin Xu and Yang Yue and Matthieu Lin and Shenzhi Wang and Qingyun Wu and Zilong Zheng and Gao Huang},
      year={2025},
      eprint={2505.03335},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.03335}, 
}

🌻 Acknowledgement


Our reinforcement learning training codebase is a fork of the veRL framework. For rollouts, we used vLLM. The Python executor components are adapted from the QwQ Repository. Additionally, we borrowed our README structure from PRIME. Many thanks to the authors of these projects for their excellent contributions!

📧 Contact


Feel free to contact Andrew Zhao via email: [email protected]

📈 Star History


Star History Chart
