LLM Unlearning: Privacy-Preserving Unlearning for Trustworthy AI

LLM Unlearning is an open-source project focused on enabling unlearning techniques in Large Language Models (LLMs). With the rise of AI applications and increasing concerns about data privacy, this project introduces exact and approximate unlearning methods designed to make AI models privacy-preserving, trustworthy, and ethical.

By enabling models to "forget" unwanted or sensitive data, LLM Unlearning helps ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and fosters ethical AI development. It is a crucial step toward creating AI models that are both transparent and fair, while maintaining high performance.

Key Features:

  • Exact and Approximate Unlearning: Methods for efficiently forgetting specific data while preserving model performance (a minimal sketch of one such method follows this list).
  • Privacy-Preserving AI: Secure mechanisms to allow models to forget sensitive data and protect user privacy.
  • Trustworthy AI: Promotes building ethical models that offer transparency and fairness in data processing.
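
As a rough illustration of what approximate unlearning can look like in practice, the sketch below applies gradient ascent on a "forget" set while continuing ordinary training on a "retain" set, a common baseline in the unlearning literature. This is a minimal sketch only, not the method implemented in this repository: the model name ("gpt2"), the placeholder texts, and the hyperparameters are all illustrative assumptions.

```python
# Illustrative approximate unlearning sketch (NOT this repository's method):
# gradient *ascent* on a forget set, combined with standard gradient descent
# on a retain set so the model keeps its general capabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def lm_loss(texts):
    """Causal-LM loss for a list of strings.

    Note: for a real run, pad positions should be masked out of the labels;
    they are left in here to keep the sketch short.
    """
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    return model(**enc, labels=enc["input_ids"]).loss

forget_texts = ["<sensitive text the model should forget>"]   # placeholder data
retain_texts = ["<representative text the model should keep>"]  # placeholder data

model.train()
for step in range(10):  # a handful of steps; tune per use case
    optimizer.zero_grad()
    # Ascend on the forget set (negated loss), descend on the retain set.
    loss = -lm_loss(forget_texts) + lm_loss(retain_texts)
    loss.backward()
    optimizer.step()
```

In practice, the balance between the ascent and descent terms (often controlled by a weighting coefficient) determines how aggressively the model forgets versus how well it retains general performance.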

Project 1: DP2Unlearning [GitHub]

Paper: DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs

The DP2Unlearning project focuses on advanced unlearning techniques for LLMs, offering an efficient framework with formal unlearning guarantees for data privacy preservation. Navigate to the DP2Unlearning project directory to explore and reproduce the results; you can also adapt the methods to your own ideas and research needs.
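
As background for the "DP" in the name: differential privacy is a common ingredient in unlearning frameworks that offer formal guarantees. The snippet below is a generic DP-SGD training loop using the Opacus library; it is an illustration of differentially private training in general, not the DP2Unlearning pipeline itself. The toy linear model, synthetic data, and privacy parameters are all illustrative assumptions, so consult the paper and the project directory for the actual method.

```python
# Generic DP-SGD training sketch with Opacus (illustration only; not the
# DP2Unlearning pipeline). Per-sample gradients are clipped and noised.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)                       # toy stand-in for an LLM
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # scale of Gaussian noise added to gradients
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
```

The two knobs shown, noise_multiplier and max_grad_norm, jointly determine the privacy budget: stronger clipping and more noise yield tighter privacy guarantees at some cost in utility.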

How to Get Started:

  • Navigate to the DP2Unlearning directory.
  • Follow the setup instructions in the repository to replicate the results from the paper.
  • Experiment with unlearning methods, apply them to your use cases, and share your insights!

Contribute

We welcome contributions from the AI and machine learning community! Whether you are working on improving unlearning techniques, privacy-preserving methods, or AI ethics, we encourage you to join this open-source effort.
