Which of these AI Safety Research Futarchy projects will get Conf Accepted, if chosen?
Salient features of self-models: 41%
Exploring more metacognitive capabilities of LLMs: 39%
Post-training order and CoT Monitorability: 37%
Goal Crystallisation: 36%
Detection game: 34%
Research sabotage dataset: 26%
Model Emulation: 26%
Model organisms resisting generalisation: 24%
Online Learning for Research Sabotage Mitigation: 23%
This is a derivative market of the markets linked to from this post.
For projects that are not chosen by the futarchy, the corresponding option here will resolve N/A. Otherwise, each option resolves YES if both the project's "uploaded to arXiv" and "accepted to a top ML conference" markets resolve YES, and NO if either of them resolves NO or N/A.
Be aware that resolving markets N/A isn't always straightforward; I will do my best to ask for mod assistance if there is trouble.
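To make the rule above concrete, here is a minimal sketch of the resolution logic in Python. The function name and Outcome enum are hypothetical, purely for illustration; actual resolution will be done manually on Manifold.

```python
from enum import Enum

class Outcome(Enum):
    YES = "YES"
    NO = "NO"
    NA = "N/A"

def resolve_option(chosen_by_futarchy: bool,
                   arxiv: Outcome,
                   conference: Outcome) -> Outcome:
    # Project not chosen by the futarchy: this market's option resolves N/A.
    if not chosen_by_futarchy:
        return Outcome.NA
    # Both underlying markets resolved YES: resolve YES.
    if arxiv is Outcome.YES and conference is Outcome.YES:
        return Outcome.YES
    # Either underlying market resolved NO or N/A: resolve NO.
    return Outcome.NO
```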
This question is managed and resolved by Manifold.
Related questions
Will there be serious AI safety drama at Meta AI before 2026? (15% chance)
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (93% chance)
Will I be accepted to these AI safety fellowship programs for Winter 2026?
Is RLHF good for AI safety? [resolves to poll] (40% chance)
Major AI research organization announces autonomous AI researcher? (13% chance)
Will Destiny discuss AI Safety before 2026? (37% chance)
Will "Safety consultations for AI lab employees" make the top fifty posts in LessWrong's 2024 Annual Review? (7% chance)
Frontier labs ~trusted by the AI safety community at the end of 2026?
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026 (40% chance)
Will a Turing Award be given out for work on AI alignment or existential safety by 2040? (79% chance)