Tianyi (Alex) Qiu

Hi! I am Tianyi :)

I conduct research on AI alignment, focusing on how it interacts with human epistemology and moral progress, which I believe to be the most important problem. Methodologically, I aim to employ experimental, formal, and social science methods alike.

Two of the projects I led on this topic were awarded a Spotlight (NeurIPS'24) and the Best Paper Award (NeurIPS'24 Pluralistic Alignment Workshop), respectively. I also coauthored an Oral paper (NeurIPS'24) on the idea of alignment by correction.

I was recently a research intern at the Center for Human-Compatible AI, UC Berkeley. I will graduate from Peking University in 2026, where I have been a member of the PKU Alignment Team. I am currently seeking research & PhD positions! (My CV)

 

Selected Works

You can head to my Google Scholar profile to view my other publications and citation stats!

Project: Progress alignment (to prevent premature value lock-in)

Project: Theoretical deconfusion

Project: Surveying the AI safety & alignment field

 

Trajectory

 

Talks & Reports

 

Get in touch!

Please feel free to reach out! If you are on the fence about getting in touch, consider yourself encouraged to do so :)

I can be reached at qiutianyi.qty@gmail.com, or on Twitter at @Tianyi_Alex_Qiu. You can also book a quick call with me via this Calendly link, which takes just a click!