Hi! I am Tianyi :)
I conduct research on AI alignment, with a focus on how it interacts with human truth-seeking and moral progress - what I believe to be the most important problem. Methodologically, I draw on experimental, formal, and social-science methods alike.
Two of the projects I led on this topic were awarded a Spotlight (NeurIPS'24) and a Best Paper Award (NeurIPS'24 Pluralistic Alignment Workshop), respectively. I also co-led a Best Paper Award (ACL'25) project on how theories of data compression predict certain empirical fragilities of alignment - it's almost as if language models resist alignment.
I am an Anthropic AI Safety Fellow, based in London. I was previously a research intern at the Center for Human-Compatible AI, UC Berkeley. I have also been a member of the PKU Alignment Team. I mentor at the Supervised Program for Alignment Research and the Algoverse AI Safety Fellowship - please also feel free to cold-email me if you'd like to informally work with me.
I am seeking research positions! Here is my CV.
You may head to my Google Scholar profile to view my other works!
Project: Prevent lock-in, facilitate progress
Here are some silly xkcd-style comics on what we call the lock-in hypothesis.
The Lock-in Hypothesis: Stagnation by Algorithm (accepted to ICML 2025)
Tianyi Qiu*, Zhonghao He*, Tejasveer Chugh, Max Kleiman-Weiner (2025) (*Equal contribution)
Risk (premature value lock-in): Frontier AI systems hold increasing influence over the epistemology of their human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
Solution (progress alignment): We introduce progress alignment as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots.
Infrastructure (ProgressGym): To empower research in progress alignment, we introduce ProgressGym, an experimental framework for learning the mechanics of moral progress from history, in order to facilitate future progress in real-world moral decisions. Leveraging 9 centuries of historical text and 18 historical LLMs, ProgressGym enables the codification of real-world progress alignment challenges into concrete benchmarks. (Hugging Face, GitHub, Leaderboard)
ProgressGym: Alignment with a Millennium of Moral Progress (NeurIPS'24 Spotlight, Dataset & Benchmark Track)
Tianyi Qiu†*, Yang Zhang*, Xuchuan Huang, Jasmine Xinze Li, Jiaming Ji, Yaodong Yang (2024) (†Project lead, *Equal technical contribution)
Project: Theoretical deconfusion
Why is alignment training so easily undone by finetuning? Working under a compression theory-based model of multi-stage training, this work proves that further finetuning degrades alignment performance far faster than it degrades pretraining performance, because the alignment training data is much smaller than the pretraining data (see the toy sketch after the citation below).
Language Models Resist Alignment: Evidence From Data Compression (Best Paper Award, ACL 2025 Main)
Jiaming Ji*, Kaile Wang*, Tianyi Qiu*, Boyuan Chen*, Jiayi Zhou*, Changye Li, Hantao Lou, Juntao Dai, Yunhuai Liu, Yaodong Yang (2024) (*Equal contribution)
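For intuition only, here is a back-of-the-envelope sketch of the data-size asymmetry. It is not the paper's compression-theoretic argument, and all token counts are made-up, order-of-magnitude placeholders; it only illustrates why the same amount of further finetuning displaces a far larger fraction of the alignment data mixture than of the pretraining mixture.

```python
# Toy illustration (NOT the paper's formal argument): the alignment stage is
# "supported" by vastly less data than pretraining, so the same number of new
# finetuning tokens accounts for a much larger share of the alignment mixture.
# All numbers are hypothetical placeholders.

PRETRAIN_TOKENS = 1e12   # hypothetical pretraining corpus size
ALIGN_TOKENS = 1e7       # hypothetical alignment (SFT/RLHF) corpus size

def displaced_fraction(stage_tokens: float, new_tokens: float) -> float:
    """Fraction of a stage's data mixture accounted for by the new finetuning tokens."""
    return new_tokens / (stage_tokens + new_tokens)

for new_tokens in (1e4, 1e5, 1e6):
    print(
        f"finetune on {new_tokens:.0e} tokens -> "
        f"pretraining mixture shift {displaced_fraction(PRETRAIN_TOKENS, new_tokens):.2e}, "
        f"alignment mixture shift {displaced_fraction(ALIGN_TOKENS, new_tokens):.2e}"
    )
```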
Classical social choice theory assumes complete information about the preferences of all stakeholders. That assumption fails for AI alignment, as it does for legislation, indirect elections, and similar settings. Dropping it, this work designs the representative social choice formalism, which models social choice decisions made from only a finite sample of preferences (see the toy sketch after the citation below). Its analytical tractability is established with statistical learning theory, while at the same time Arrow-like impossibility theorems are proved.
Representative Social Choice: From Learning Theory to AI Alignment (Best Paper Award, NeurIPS 2024 Pluralistic Alignment Workshop)
Tianyi Qiu (2024)
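As a toy illustration of the finite-sample setting (not the paper's formalism), the snippet below polls a random sample of stakeholder preferences between two options and decides by sample majority; a standard Hoeffding-style argument gives a sample size sufficient to estimate population support within a given error. All concrete numbers are invented.

```python
# Toy finite-sample social choice: decide between options A and B from a random
# sample of stakeholder preferences rather than from the full population.
# By a Hoeffding-style bound, ~ln(2/delta)/(2*eps^2) samples suffice to estimate
# the population support rate within eps, with probability at least 1 - delta.
import math
import random

random.seed(0)

# Hypothetical population: 56% of stakeholders prefer option A (coded as 1).
population = [1] * 560_000 + [0] * 440_000

eps, delta = 0.03, 0.01
sample_size = math.ceil(math.log(2 / delta) / (2 * eps ** 2))

sample = random.sample(population, sample_size)
estimate = sum(sample) / sample_size

print(f"sample size: {sample_size}")
print(f"estimated support for A: {estimate:.3f} (true value 0.560)")
print("sample-majority decision:", "A" if estimate > 0.5 else "B")
```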
It is well known that classical generalization analysis does not apply to deep neural nets without prohibitively strong assumptions. This work develops an alternative: an empirically grounded model of reward generalization in RLHF that yields formal generalization bounds while accounting for fine-grained information topologies.
Reward Generalization in RLHF: A Topological Perspective (accepted to ACL 2025 Findings)
Tianyi Qiu†*, Fanzhi Zeng*, Jiaming Ji*, Dong Yan*, Kaile Wang, Jiayi Zhou, Han Yang, Juntao Dai, Xuehai Pan, Yaodong Yang (2025) (†Project lead, *Equal technical contribution)
Project: Surveying the AI safety & alignment field
Although the alignment field began a period of rapid growth in early 2023, no comprehensive review article had yet surveyed it. We therefore conducted a review that aims to be as comprehensive as possible, while also constructing a unified framework (the alignment cycle). We cover the alignment of both contemporary AI systems and more advanced systems that pose more serious risks. Since publication, the survey has been cited by important AI safety works from Dalrymple, Skalse, Bengio, Russell et al. and by NIST, and has been featured in various high-profile venues in China and Singapore.
I co-led this project.
AI Alignment: A Comprehensive Survey (preprint)
Jiaming Ji*, Tianyi Qiu*, Boyuan Chen*, Borong Zhang*, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, Wen Gao (2023) (*Equal contribution)
Aug 2020: Won a gold medal in the Chinese National Olympiad in Informatics 2020
Mar 2021: Started as a visiting student at Peking University
Nov 2021: Started reading and thinking a lot about AI safety/alignment
Sep 2022: Officially started at Peking University, as a member of the Turing Class
May 2023: Started working with the PKU Alignment Group, advised by Prof. Yaodong Yang
Jun 2024: Started as a research intern at Center for Human-Compatible AI, UC Berkeley, co-advised by Micah and Cam
Sep 2024: Started as an exchange student at the University of California, via the UCEAP reciprocity program with PKU
Jun 2025: Started as an AI Safety Fellow at Anthropic, working on frontier AI safety and alignment
[Talk] The Lock-in Hypothesis and Truth-Seeking AI (Jul 2025)
[Talk] Belief Entrenchment and How to Remove it (Jun 2025)
[Talk] The Lock-in Hypothesis: Stagnation by Algorithm (May 2025)
[Talk] Representative Social Choice: From Learning Theory to AI Alignment (Dec 2024)
[Talk] Value Alignment: History, Frontiers, and Open Problems (Jun 2024)
[Talk] ProgressGym: Alignment with a Millennium of Moral Progress (Jun 2024)
[Talk] Towards Moral Progress Algorithms Implementable in the Next GPT (May 2024)
Plus a few that are not (yet) public.
Please feel free to reach out! If you are on the fence about getting in touch, consider yourself encouraged to do so :)
I can be reached at the email address qiutianyi.qty@gmail.com, or on Twitter via the handle @Tianyi_Alex_Qiu.