News
- [2024-10] Our technical report Skywork-Reward: Bag of Tricks for Reward Modeling in LLMs has been released on arXiv!
- [2024-09] Our paper Large Language Model Unlearning via Embedding-Corrupted Prompts has been accepted to NeurIPS 2024!
- [2024-09] We released Skywork-Reward-Gemma-2-27B and Skywork-Reward-Llama-3.1-8B, two state-of-the-art reward models ranking 1st and 3rd on the RewardBench leaderboard!
- [2024-06] I joined Skywork AI as a Research Intern, where I will be working on alignment for large language models.
- [2024-06] We present Embedding-COrrupted (ECO) Prompts, a lightweight unlearning framework for large language models. The paper is available on arXiv, and the project page is available here.
- [2024-03] I compiled two lists of papers on machine unlearning for large language models and representation engineering.
- [2023-12] My paper on understanding the role of optimization in double descent has been accepted to the NeurIPS 2023 Optimization for Machine Learning Workshop. The paper is available on arXiv, and the poster is available here.