Simulation of peer-prediction incentives for crowd labels. Agents are truthful, random, or biased. A peer-truth-style scoring rule pays each agent for agreement with a randomly chosen peer, weighted by the inverse frequency of the reported label. The simulation shows the payoff ordering across agent types and the downstream label accuracy with and without the mechanism.
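The scoring rule described above can be sketched in a few lines. This is a minimal illustration of peer-truth-style scoring, not the repo's actual implementation; the function name `peer_truth_payoffs` and the exact payoff of `1/frequency` on agreement (and 0 otherwise) are assumptions for the sketch.

```python
import random
from collections import Counter

def peer_truth_payoffs(reports, rng=None):
    """Sketch of a peer-truth-style score for one labeling task.

    reports: dict mapping agent_id -> reported label.
    Each agent is paired with a random peer; on agreement the agent
    earns 1 / (empirical frequency of the reported label), else 0.
    Rare answers are thus rewarded more when matched, which is what
    makes uninformative herding unprofitable.
    """
    rng = rng or random.Random()
    counts = Counter(reports.values())
    n = len(reports)
    freq = {label: c / n for label, c in counts.items()}
    payoffs = {}
    for agent, label in reports.items():
        # pick a random peer other than the agent itself
        peer = rng.choice([a for a in reports if a != agent])
        payoffs[agent] = (1.0 / freq[label]) if reports[peer] == label else 0.0
    return payoffs
```

For example, if every agent reports the same label, each payoff is exactly 1.0 (agreement is certain but the label's frequency is 1), while a minority label that happens to match a peer pays more than 1.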
python -m venv .venv
. .venv/bin/activate    # Windows: .\.venv\Scripts\activate
pip install -r requirements.txt
make reproduce
make plot
make test

- Average payoff by agent type under peer-prediction
- Majority-vote accuracy vs weighted-vote accuracy
- Sensitivity to truthful accuracy and class prior via Streamlit
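The majority-vote vs weighted-vote comparison in the list above can be sketched as follows. This is an illustrative sketch only: the helper names `majority_vote` and `weighted_vote` are assumptions, and the weights stand in for per-agent scores from the mechanism (e.g. average payoffs), which is one plausible choice rather than the repo's confirmed design.

```python
from collections import Counter, defaultdict

def majority_vote(labels):
    # plain majority: the most frequently reported label wins
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, weights):
    # weight each report by the agent's mechanism-derived score,
    # so high-payoff (presumably truthful) agents count for more
    scores = defaultdict(float)
    for label, w in zip(labels, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

With labels `["x", "x", "y"]` and weights `[0.1, 0.2, 1.0]`, the majority vote returns `"x"` while the weighted vote returns `"y"`: a single well-scored agent can outvote a biased majority.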