LLM Unlearning is an open-source project focused on unlearning techniques for Large Language Models (LLMs). With the rise of AI applications and growing concerns about data privacy, this project provides exact and approximate unlearning methods designed to make AI models privacy-preserving, trustworthy, and ethical.
By enabling models to "forget" unwanted or sensitive data, LLM Unlearning helps ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and fosters ethical AI development. It is a crucial step toward creating AI models that are both transparent and fair while maintaining high performance.
- Exact and Approximate Unlearning: Methods for forgetting specific data efficiently while preserving model performance (a minimal sketch of one common approximate approach follows this list).
- Privacy-Preserving AI: Secure mechanisms to allow models to forget sensitive data and protect user privacy.
- Trustworthy AI: Promotes building ethical models that offer transparency and fairness in data processing.
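To make the distinction concrete, below is a minimal sketch of one widely used *approximate* unlearning recipe: gradient ascent on a "forget" set, interleaved with gradient descent on a "retain" set so that general capability is preserved. This is an illustrative assumption, not this project's API; the model name, example data, and hyperparameters are all placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model you study
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def to_batch(texts):
    """Tokenize raw strings into a causal-LM batch with labels."""
    enc = tokenizer(texts, return_tensors="pt", padding=True,
                    truncation=True, max_length=128)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc

forget_texts = ["<sensitive example the model should forget>"]  # placeholder data
retain_texts = ["<ordinary example the model should keep>"]     # placeholder data

model.train()
for step in range(10):  # a few illustrative steps, not a tuned schedule
    # Ascend on the forget set (maximize its loss) so the model unlearns it,
    # while descending on the retain set to keep general capability intact.
    loss_forget = model(**to_batch(forget_texts)).loss
    loss_retain = model(**to_batch(retain_texts)).loss
    loss = -loss_forget + loss_retain
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Exact unlearning, by contrast, retrains the model (or the affected part of it) from scratch without the forgotten data, which guarantees removal but is far more expensive.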
Project 1: DP2Unlearning [GitHub]
Paper: DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs
The DP2Unlearning project focuses on advanced unlearning techniques for LLMs, offering an efficient and guaranteed framework for preserving data privacy. Navigate to the DP2Unlearning project directory to explore the code and reproduce the results, or adapt the methods to your own ideas and research needs.
- Navigate to the DP2Unlearning directory.
- Follow the setup instructions in the repository to replicate the results from the paper.
- Experiment with unlearning methods, apply them to your use cases, and share your insights!
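As a starting point for such experiments, one simple sanity check compares the model's perplexity on forgotten versus retained text before and after unlearning: effective forgetting should raise the former while leaving the latter roughly unchanged. The sketch below assumes a standard Hugging Face causal LM; the model name and example strings are placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; point this at your unlearned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of the model on a single string (lower = better modeled)."""
    enc = tokenizer(text, return_tensors="pt")
    loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

# After unlearning, perplexity on forgotten data should rise noticeably,
# while perplexity on retained data should stay roughly unchanged.
print("forget set:", perplexity("<text the model should have forgotten>"))
print("retain set:", perplexity("<text the model should still handle>"))
```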
We welcome contributions from the AI and machine learning community! Whether you are working on improving unlearning techniques, privacy-preserving methods, or AI ethics, we encourage you to join this open-source effort.