Which of these AI Safety Research Futarchy projects will get Conf Accepted, if chosen?
4 · Ṁ178 · 2026
41% — Salient features of self-models
39% — Exploring more metacognitive capabilities of LLMs
37% — Post-training order and CoT Monitorability
36% — Goal Crystallisation
34% — Detection game
26% — Research sabotage dataset
26% — Model Emulation
24% — Model organisms resisting generalisation
23% — Online Learning for Research Sabotage Mitigation

This is a derivative market of the markets linked from this post.

For projects that do not get chosen by the futarchy, the corresponding market here will resolve N/A. Otherwise, each market resolves according to whether both the "uploaded to arXiv" and "accepted to a top ML conference" markets resolve YES (if either resolves NO or N/A, it resolves NO).
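The resolution rule above can be sketched as a small function. This is only an illustration of the stated logic; the function name and the string values for resolutions are hypothetical, not part of Manifold's API.

```python
def resolve(chosen: bool, arxiv: str, conference: str) -> str:
    """Illustrative sketch of one project's resolution rule.

    `arxiv` and `conference` are the resolutions of the underlying
    "uploaded to arXiv" and "accepted to a top ML conference"
    markets: "YES", "NO", or "N/A".
    """
    if not chosen:
        # Project not selected by the futarchy: market resolves N/A.
        return "N/A"
    if arxiv == "YES" and conference == "YES":
        return "YES"
    # Either underlying market resolved NO or N/A.
    return "NO"
```

For example, a chosen project whose arXiv market resolves YES but whose conference market resolves N/A would resolve NO under this rule.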

Be aware that resolving markets N/A isn't always easy; I will do my best to ask for mod assistance if there is trouble.
