By 2028, will I think MIRI has been net-good for the world?
85% chance · Ṁ3844 · closes 2028

Resolves according to my subjective judgement, but I'll take opinions of those I respect at the time into account. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.

As of market creation, I believe AI safety is important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I'll resolve according to my beliefs at the time.

I will take into account their outputs (e.g. papers, blog posts, people who've trained at them) as well as their inputs (e.g. money and time). I consider counterfactuals valid, like "okay, MIRI did X, but maybe someone else would have done X anyway"; but currently I think those considerations tend to be weak and hard to evaluate.

If I'm unconfident I may resolve the market PROB.

If MIRI rebrands, the question will pass to them. If MIRI stops existing I'll leave the market open.

I don't currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.

Resolution criteria subject to change. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea, feel free to try to convince me to delete it.

Similar markets:

https://manifold.markets/philh/by-2028-will-i-think-deepmind-has-b

https://manifold.markets/philh/by-2028-will-i-think-openai-has-bee

https://manifold.markets/philh/by-2028-will-i-think-conjecture-has

https://manifold.markets/philh/by-2028-will-i-think-anthropic-has

https://manifold.markets/philh/by-2028-will-i-think-redwood-resear


Net good as in, worth more than their donation spend? I'm interested in cases where this market would resolve NO but "did AI safety turn out to be approximately as important as MIRI said it was" turns out to be true. I suspect that MIRI will turn out to have been a colossal waste of money under most metrics, but that their advocacy will have turned out to have made a significant difference in getting others to pay attention to the problem, which, under some credit-assignment schemes, could warrant considering the spend on MIRI to have been worth it. But my impression is that MIRI has been very expensive for how much advocacy they've actually done, and that their papers have been mediocre most of the time.

@L
> net good as in, worth more than their donation spend? I'm interested in times where this market would resolve no but "did ai safety turn out to be approximately as important as miri said it was" turns out to be true.
They also have to be worth the time people have sunk into them. But roughly that, yeah. I sure don't expect the combination "AI safety is super important but MIRI is net-bad", but I guess it could happen, e.g. if they pivot to AI capabilities for some reason.

I would count advocacy under their outputs; so if (which I currently expect) I think they've been successfully advocating for something good, that would be a positive for them; and if I think they've been successfully advocating for something bad, that would be a negative for them.


@PhilipHazelden But the question is not just whether AI safety is important, but whether a marginal dollar to MIRI is a good spend. And it seems to me that MIRI is a trash-grade AI safety group, and that it would be better for all their researchers to quit and join literally any other research group where they'll be regularly exposed to less broken ideas in the course of their own work. It's not that the ideas are unsalvageable; it's that it seems to me MIRI is a very bad org for actually doing the work. If advocacy is a worthwhile output, why would you choose MIRI for it? Their advocacy is almost entirely terrible, and it was kind of dumb luck that a few specific things they did early in their existence convinced other researchers to make other groups take the problem seriously. Maybe also Yudkowsky's alarmism. But again, it seems to me that just funding Yudkowsky as an activist writer would have done almost all of MIRI's job over the past 15 years; what ended up mattering was convincing the folks at UC Berkeley to make progress on the problem, and they are now one of the key places where work is occurring (fuckin love the Simons Institute's talk series). At this point, it seems like MIRI wouldn't know a solution to AI safety if they were slapped in the face with it. Remember, these are the same people who were surprised by AlphaGo, and several have placed bets against humans being beaten at competitive programming by the end of 2023; anyone who was deeply surprised by AlphaGo doesn't know what the fuck they're talking about.


@L (Like, not to say everyone knew AlphaGo was coming, but that if you had your finger on the research pulse like a real-ass deep learning researcher, you'd have known there was fast progress being made on Go in late 2015, and you would have been able to predict that it wouldn't take much more to beat humans a significant portion of the time. I was surprised it destroyed Sedol as badly as it did; I thought he'd beat it several times.)

@L A few things that seem relevant here:
- "a marginal dollar going to MIRI would have been better going to (other org)" does not currently seem like strong evidence that MIRI is net-negative. If there's good reason to think "had MIRI not existed, that org would have had significantly more funding", that would be more convincing. (Similar re researchers, it matters where they'd work if they weren't at MIRI.)
- "a much cheaper version of MIRI could have accomplished just as much" also doesn't mean they're net-negative. If AI safety is as important as I currently think, it doesn't take all that much accomplishment to be worth a lot of money. It does make it harder to be confident.
- "MIRI is exposing people to broken ideas" sounds more compelling. I don't currently believe it (what ideas are you referring to?), but that could change.
- I don't know who the UC Berkeley people are or what progress they're making.


@PhilipHazelden
> - "MIRI is exposing people to broken ideas" sounds more compelling. I don't currently believe it (what ideas are you referring to?), but that could change.

Not exposing them newly, but stuck with a bunch of broken ideas and not moving on a well-connected ideas manifold that will lead to new ones. I'm most worried about the MIRI researchers being in a project of trying to fix broken ideas in a local culture that primarily has exposure to the same broken ideas. I'm hopeful this can change by simply connecting the alignment community better, but imo MIRI's job is relatively easy; they should be done by now, and the fact that they aren't is embarrassing and demonstrates a low-quality research culture. DeepMind had most of the parts to build AGI in 2017, whether they acknowledged that at the time or not, and now they definitely have all of them. So, similarly, I suspect that the DeepMind safety and multiagent teams will, between their various contributions, end up solving safety while each individual team still thinks it's unsolved.


@L
> - "a much cheaper version of MIRI could have accomplished just as much" also doesn't mean they're net-negative. If AI safety is as important as I currently think, it doesn't take all that much accomplishment to be worth a lot of money. It does make it harder to be confident.

But in a tightly connected system, which the current economy is, a marginal donation to MIRI probably takes away from other projects, especially because it also attracts talent that could have joined a more promising project.

