Will a large-scale, Eliezer-Yudkowsky-approved AI alignment project be funded before 2025?
52 · Ṁ21k · Jan 2 · 7% chance

If Eliezer thinks the project is hopelessly misguided, that's no good. If he thinks the project isn't going to work but was at least a noble effort, that's sufficient to resolve this to YES. In other words, the project should be something relatively similar to what he would have done with those resources if he were put in charge. (Or perhaps he says it's a better idea than what he would have thought of himself.)

The total amount of funding must be at least $3 billion. In the event of gradual funding over time, this market can resolve YES if the project ever meets all three criteria at any point in its life.


I don't seriously believe that Eliezer will approve of anything that is or could lead to AGI


In 2018, Yudkowsky wrote a long post, Challenges to Christiano’s capability amplification proposal, in which he outlined why he thought Paul's proposed research agenda, as it stood at the time, didn't hold together.

He sums up:

> I restate that these objections seem to me to collectively sum up to “This is fundamentally just not a way you can get an aligned powerful AGI unless you already have an aligned superintelligence”, rather than “Some further insights are required for this to work in practice.” But who knows what further insights may really bring? Movement in thoughtspace consists of better understanding, not cleverer tools.

But he also says:

> I can’t point to any MIRI paper that works to align an AGI. Other people seem to think that they ought to currently be in a state of having a pretty much workable scheme for aligning an AGI, which I would consider to be an odd expectation. I would think that a sane point of view consisted in having ideas for addressing some problems that created further difficulties that needed to be fixed and didn’t address most other problems at all; a map with what you think are the big unsolved areas clearly marked. Being able to have a thought which genuinely squarely attacks any alignment difficulty at all despite any other difficulties it implies, is already in my view a large and unusual accomplishment. The insight “trustworthy imitation of human external behavior would avert many default dooms as they manifest in external behavior unlike human behavior” may prove vital at some point. I continue to recommend throwing as much money at Paul as he says he can use, and I wish he said he knew how to use larger amounts of money.

This seems to me like an instance of "the project isn't going to work, but it is at least a noble effort", but it does not meet the bar of "the project [is] something relatively similar to what he would have done with those resources if he were put in charge."

So if this research project had been allocated 3 billion dollars, would this market count that as a "yes" or a "no"?


@EliTyre Hmm, good question. I think that's a NO. It sounds more like he's saying "we need more people making serious attempts" than "this is a research direction I see succeeding with non-negligible probability".

3 billion? What an absurdly high number.


@Lewton What is the intended meaning of this comment?


@IsaacKing What's the last big project that was funded to the tune of $3 billion that would match your criteria, setting aside the AI alignment and Eliezer requirements?


Also, I think it's hilariously ridiculous that this would still resolve NO if someone literally handed Eliezer a billion dollars to solve alignment.

@IsaacKing I just noticed you're off in the LK-99 market complaining about misleading titles; quite something, when this market is the king of misleading titles with its ridiculous $3 billion cutoff.


@Lewton You still haven't clarified how it's ridiculous and are just being insulting, so you've earned yourself a block. I also think there's a pretty clear difference between a title that is vague and a title that is wrong. The first is a necessity given the length limit, the second is not. Feel free to message me on Discord if you disagree.

OpenAI is worth $22 billion and their superalignment project was allocated 20% of their compute, so its resources are plausibly >$3 billion.
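For concreteness, here is that back-of-envelope arithmetic as a small Python sketch. The $22 billion valuation and the 20% compute share are the figures asserted in the comment above, not verified numbers, and equating 20% of compute with 20% of company value is the comment's own loose approximation.

```python
# Rough back-of-envelope estimate of the Superalignment project's implied resources.
# Inputs are the figures asserted in the comment above (not independently verified),
# and treating 20% of compute as 20% of company value is a deliberately loose proxy.
openai_valuation_usd = 22e9   # claimed OpenAI valuation
compute_share = 0.20          # share of compute said to be dedicated to the project
funding_bar_usd = 3e9         # the market's $3 billion threshold

implied_resources_usd = openai_valuation_usd * compute_share
print(f"Implied resources: ${implied_resources_usd / 1e9:.1f}B")           # $4.4B
print(f"Exceeds the $3B bar: {implied_resources_usd >= funding_bar_usd}")  # True
```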

@Lewton It is misleading only if:
- You think "large-scale" means the market will resolve purely on the author's subjective judgment,
- The author means something smaller than $3 billion by that, or
- You don't read the description (even though "large-scale" on its own is admittedly vague).

Saying it is the king of misleading titles is a vast exaggeration. I don’t think anybody will be misled by that.

@Lewton Compare with the budget of the Large Hadron Collider (https://en.m.wikipedia.org/wiki/Large_Hadron_Collider). It’s clearly unrealistic as of now to spend that kind of money on “alignment”, but predicting the future is hard.
