I am the head of AI safety at the US AI Safety Institute. I previously ran the Alignment Research Center and the language model alignment team at OpenAI. Before that, I received my PhD from the theory group at UC Berkeley.
You may be interested in my writing about alignment, my blog, my academic publications, or fun and games.
You can reach me at paulfchristiano@gmail.com.