Tianyi Alex Qiu

Hi! I am Tianyi :)

I conduct research on AI alignment, with a focus on how it interacts with human truth-seeking and moral progress - what I believe to be the most important problem. Methodologically, I aim to employ experimental, formal, and social science methods alike.

Two of the projects I led on this topic received a Spotlight (NeurIPS'24) and the Best Paper Award (NeurIPS'24 Pluralistic Alignment Workshop), respectively. I also co-led a project (Best Paper Award, ACL'25) on how theories of data compression predict certain empirical fragilities of alignment - it's almost as if language models resist alignment.

I am an Anthropic AI Safety Fellow, based in London. I was previously a research intern at the Center for Human-Compatible AI, UC Berkeley. I have also been a member of the PKU Alignment Team. I mentor at the Supervised Program for Alignment Research and the Algoverse AI Safety Fellowship - please also feel free to cold-email me if you'd like to work with me informally.

I am seeking research positions! Here is my CV.


Selected Works

You can head to my Google Scholar profile to view my other works!

Project: Prevent lock-in, facilitate progress

Project: Theoretical deconfusion

Project: Surveying the AI safety & alignment field


Trajectory


Talks & Reports


Get in touch!

Please feel free to reach out! If you are on the fence about getting in touch, consider yourself encouraged to do so :)

I can be reached by email at qiutianyi.qty@gmail.com, or on Twitter at @Tianyi_Alex_Qiu.