By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
5%: get enough sleep
0.6%: write a note file (.txt file, Google Doc, or other single document) describing what you hope to get out of each experiment, before you run it
0.2%: seek worst-case bounds, but distrust them
9%: stress less about general image and status
0.2%: Mark's post on how to do theoretical research: https://www.lesswrong.com/posts/zbFGRRWPwxBwjwknY/how-to-do-theoretical-research-a-personal-perspective-1
0.1%: do more experiments; develop research intimacy and intuition
40%: intentional bodily care (sleep, water, and light exercise)
24%: careful tracking of burnout, and allocating time and resources to explicit relaxation
22%
This is a free-response market. You should vote on existing answers, submit meaningful advice of your own, or both.
At the end of 2025, I will select a piece of advice from these responses that I feel has had the most positive impact on me becoming an effective AI alignment researcher.
The advice does not have to be original, but originality helps.
This question is managed and resolved by Manifold.
The worst case bounds answer led me to this post by Buck: https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment#My_list_of_reasons_to_maybe_use_worst_case_thinking
Related questions
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community? (59% chance)
Will Dan Hendrycks believe xAI has had a meaningful positive impact on AI alignment at the end of 2024? (28% chance)
I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026 (59% chance)
Will I focus on the AI alignment problem for the rest of my life? (63% chance)
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved? (35% chance)
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research? (81% chance)
Will I (co)write an AI safety research paper by the end of 2024? (45% chance)
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved? (51% chance)
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (33% chance)
Will I have a career as an alignment researcher by the end of 2024? (38% chance)