LLM Unlearning for Trustworthy and Privacy-Preserving AI

LLM Unlearning is an open-source project focused on unlearning techniques for Large Language Models (LLMs). With the rise of AI applications and growing concerns about data privacy, this project provides exact and approximate unlearning methods designed to make AI models privacy-preserving, trustworthy, and ethical.
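
To make the distinction concrete: exact unlearning retrains the model from scratch without the data to be forgotten, while approximate unlearning adjusts an already-trained model to remove that data's influence. Below is a minimal sketch of one common approximate technique, gradient ascent on a "forget" set. The model name and texts are placeholders, and this is a generic illustration rather than this project's specific method.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = AdamW(model.parameters(), lr=1e-5)

forget_texts = ["Example sensitive record to be forgotten."]  # hypothetical data

model.train()
for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Negate the loss: gradient *ascent* on the forget set increases the
    # model's loss on this data, degrading its ability to reproduce it.
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice such updates are interleaved with training on retained data so the model forgets the targeted records without losing general capability.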

Project Overview

By enabling models to "forget" unwanted or sensitive data, LLM Unlearning helps ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and fosters ethical AI development. It is a crucial step toward creating AI models that are both transparent and fair, while maintaining high performance.

Key Features

Project 1: DP2Unlearning [GitHub]

Paper: DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs

The DP2Unlearning project provides an efficient framework for LLM unlearning with formal guarantees. Navigate to the DP2Unlearning project directory to explore and reproduce the paper's results, or adapt the methods to your own ideas and research needs.
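
The "DP" in the name refers to differential privacy, which underpins the framework's guarantees. As a rough illustration of the per-example clipping and noising step that differentially private training (DP-SGD) relies on (a simplified sketch on a toy linear model, not the paper's actual procedure; all hyperparameters are hypothetical):

```python
import torch

torch.manual_seed(0)
clip_norm = 1.0          # hypothetical clipping bound C
noise_multiplier = 1.0   # hypothetical noise scale sigma
lr = 0.1                 # hypothetical learning rate

# Toy linear model and a hypothetical mini-batch of (feature, target) pairs.
weights = torch.zeros(4, requires_grad=True)
xs = torch.randn(8, 4)
ys = torch.randn(8)

per_example_grads = []
for x, y in zip(xs, ys):
    loss = (weights @ x - y) ** 2
    (grad,) = torch.autograd.grad(loss, weights)
    # Clip each example's gradient so no single record dominates the update
    # (this bounds the sensitivity the privacy guarantee depends on).
    grad = grad * min(1.0, clip_norm / (grad.norm().item() + 1e-12))
    per_example_grads.append(grad)

# Average the clipped gradients and add Gaussian noise calibrated to the clip bound.
noisy_grad = torch.stack(per_example_grads).mean(dim=0)
noisy_grad += torch.randn_like(noisy_grad) * (noise_multiplier * clip_norm / len(xs))

with torch.no_grad():
    weights -= lr * noisy_grad
```

Bounding each record's influence in this way is what makes later removal of that record cheap to certify, compared with full retraining.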

How to Get Started

  1. Navigate to the DP2Unlearning directory.
  2. Follow the setup instructions in the repository to replicate the results from the paper.
  3. Experiment with unlearning methods, apply them to your own use cases, and share your insights (see the evaluation sketch below)!
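
When you experiment, one quick sanity check is to compare the model's loss on forgotten versus retained samples: successful unlearning should raise the former without hurting the latter much. A minimal sketch, assuming a Hugging Face causal LM; the model name and texts are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute your unlearned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def mean_loss(text: str) -> float:
    """Average per-token cross-entropy of the model on `text`."""
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, labels=batch["input_ids"])
    return out.loss.item()

forget_loss = mean_loss("A sensitive record that was unlearned.")     # hypothetical
retain_loss = mean_loss("Ordinary text the model should still fit.")  # hypothetical
print(f"forget-set loss: {forget_loss:.3f}, retain-set loss: {retain_loss:.3f}")
```

Running the same comparison before and after unlearning gives a simple, reproducible signal to include when you share results.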