LLM Unlearning is an open-source project focused on enabling unlearning techniques in Large Language Models (LLMs). With the rise of AI applications and increasing concerns about data privacy, this project introduces exact and approximate unlearning methods designed to make AI models privacy-preserving, trustworthy, and ethical.
By enabling models to "forget" unwanted or sensitive data, LLM Unlearning helps ensure compliance with data privacy regulations (e.g., the GDPR and CCPA) and fosters ethical AI development. It is a crucial step toward creating AI models that are transparent and fair while maintaining high performance.
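To make the idea of approximate unlearning concrete, below is a minimal, generic sketch on a toy linear model: train on the combined retain and forget data, then run a few gradient-ascent steps on the forget set (to raise its loss) followed by fine-tuning on the retain set (to preserve utility). This is only an illustration of the general gradient-ascent unlearning recipe, not the specific method implemented in this repository; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: retain data follows w_true; the forget set has flipped
# labels, so a model that truly "forgets" it should fit it poorly.
w_true = np.array([1.0, -2.0, 0.5])
X_retain = rng.normal(size=(80, 3))
y_retain = X_retain @ w_true + 0.01 * rng.normal(size=80)
X_forget = rng.normal(size=(20, 3))
y_forget = -(X_forget @ w_true) + 0.01 * rng.normal(size=20)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

# 1) Train on everything (retain + forget) by gradient descent.
X_all = np.vstack([X_retain, X_forget])
y_all = np.concatenate([y_retain, y_forget])
w = np.zeros(3)
for _ in range(500):
    w -= 0.05 * grad(w, X_all, y_all)

loss_forget_before = mse(w, X_forget, y_forget)

# 2) Approximate unlearning: gradient ASCENT on the forget set
#    pushes the model away from the data to be forgotten...
for _ in range(10):
    w += 0.02 * grad(w, X_forget, y_forget)

# 3) ...then fine-tune on the retain set to recover utility.
for _ in range(300):
    w -= 0.05 * grad(w, X_retain, y_retain)

loss_forget_after = mse(w, X_forget, y_forget)
loss_retain_after = mse(w, X_retain, y_retain)
```

After these steps, the loss on the forget set rises while the retain-set loss stays low, which is the behavior unlearning methods aim for; real LLM unlearning applies the same pattern to cross-entropy loss over token sequences.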
The DP2Unlearning project provides an efficient unlearning framework for LLMs with formal guarantees. Navigate to the DP2Unlearning project directory to explore the code and reproduce the results, or adapt the methods to your own ideas and research needs.