Eliezer Yudkowsky is impressed by a machine learning model, and believes that the model may be very helpful for alignment research, by the end of 2026
27% chance

EDIT


Let's operationalize it this way:

"If Eliezer either expresses that the model may be very helpful for alignment research, or Eliezer strongly implies that he feels this way (eg. by indicating that it is more useful than an additional MIRI-level researcher), then we consider this market resolved to YES."


@RobertCousineau He's encouraged that people at OpenAI are trying. That's different from thinking their work or the models they are using are effective.

@Mira I agree with Mira here. Insufficient for resolution.

@RobertCousineau I think "encouraged" is a far cry from "impressed".

Related market, which, unlike this one, is not "muddied" by any possible biases that Eliezer Yudkowsky may have:

Would doubling our chance of survival from 0% to 0% count as being very helpful, or does it need to actually make a difference?

predicted NO

@MartinRandall He does not behave as if the chances were 0%. Are you rounding? I have trouble imagining Yudkowsky assigning a pure 100% or 0% even to a mathematical theorem being correct/incorrect.

predicted NO

@askdf This is a quote from the "dying with dignity" post.

predicted NO

@MikhailDoroshenko OK, lol. What percentage of the nonsense on this site is a joke and what is serious?

@askdf My read of the linked article was and is that it is serious, and I continue to be amazed that others read it differently.

predicted NO

@MartinRandall Yeah, lots of people have really bad reading comprehension.

predicted NO

@MartinRandall The date gave him an opportunity to say some things he knew would not be received well. It is good to mix those with silly jokes, so that the whole text can be interpreted as humor.

predicted NO

@EzraSchott Or even bad comprehension of reading comprehension.

predicted NO

@askdf Sorry for being rude. You have a point.

predicted NO

@EzraSchott Thank you. I apologize for being rude back. I may be wrong about why he did it, but given what he preached in the Sequences, he is unlikely to claim he is infallible.

@askdf That aside, I'm still interested in the answer to my original question.

predicted NO

Depends on Alana, but I guess doubling our current hope should resolve YES. It would be impressive, given that alignment research is not new, and in the Bankless video he named a lab he considered interesting and might join after his sabbatical.

@askdf If a specific large language model doubles our hopes of solving alignment according to Eliezer, then this market would almost certainly resolve Yes.

That’s definitely not necessary, though.

I'd buy NO on 2023, though.

Differentially helpful, as in he believes it helps alignment more than capabilities, or just helpful? And how helpful does it have to be? Would Copilot/ChatGPT helping researchers run experiments or communicate results faster, etc. count?

@agentofuser Just helpful.

Would Copilot/ChatGPT helping researchers run experiments or communicate results faster, etc. count?

Only if the speedup is significant.

Let's operationalize it this way:

"If Eliezer either expresses that the model may be very helpful for alignment research, or Eliezer strongly implies that he feels this way (eg. by indicating that it is more useful than an additional MIRI-level researcher), then we consider this market resolved to YES."

@Alana Imho this operationalization should go in the market description.

@Primer Done

