Will anyone make a YouTube video seriously claiming to "debunk" Eliezer Yudkowsky on AI risk?
Ṁ798 · 2028 · 85% chance

A "debunking" might consist in arguing that Yudkowsky's doom scenarios are impossible - or it might just consist in arguing that they're much more improbable than he thinks they are.

Arguments exclusively about the PR value, moral correctness, or political advisability of Yudkowsky's arguments do not count for the purposes of this market, even if they purport to be debunkings. For example, someone who does not value the continued existence of the human species is not "debunking" Yudkowsky by saying so; they are merely expressing different core values.

Generic assertions that AI risk is a non-issue, which are common, do not count, because:

  • they are assertions, not arguments, and therefore cannot be debunkings

  • if they do not mention Yudkowsky by name, it is unlikely that their authors have even read Yudkowsky's arguments, let alone seriously considered whether they are correct

The counter-argument to Yudkowsky does not have to be correct. It just has to be apparently serious, and in the form of a YouTube video.


Can't literally any YES holder just, like... make a YouTube video like this and win?

What do you mean by "Yudkowsky's claims"? What are his claims? You provided a comment below, but I'm not sure that AI risk is the same as AI doom. Is it?

@PatrickDelaney Also, what is the difference between an assertion and an argument, in your mind?

predictedNO

@PatrickDelaney His claims are many and various. But that doesn't matter too much for the purposes of this market, I am just trying to meme someone into engaging seriously with at least some of his arguments for AI doom on YouTube. I treat "AI risk" and "AI doom" somewhat interchangeably for the purposes of this market, because the risk of AI killing us all (AI doom) is the kind of risk that Eliezer is concerned with. Given the enormous magnitude of this threat, he doesn't spend a lot of time talking about other kinds of AI risk, such as the risk that AI might be racist. That is a near-term risk that many people are concerned about, but it's not in scope for this market because it's not Eliezer's core focus when he talks about AI risk.

An assertion means claiming something without providing an argument for it. For example, "Eliezer is wrong, we're not all going to die."

An argument means claiming something and providing a reason for it. For example, "Eliezer is wrong, we're not all going to die, because GPT-4 doesn't have a body so I don't see how it would kill us."

In case it wasn't clear, that's a really dumb argument that doesn't engage with Eliezer's actual arguments, because Eliezer has argued that an AI could send a message to a biomed lab to get them to synthesize proteins, which could then be used to build nanobots to carry out its malevolent plan. He's also said that that isn't the only way an AI could kill us, and that GPT-4 specifically probably isn't going to kill us, but that something more advanced than GPT-4 probably will in the future.

But it is an argument, and if made seriously and genuinely in a YouTube video, it would count for the purposes of this market!

@RobinGreen I have been creating an extensive video attacking AI doomers since January (because making videos takes forever and I get distracted with other things). I wasn't aware of Yudkowsky; I was more highlighting some general hype sentiment about Bing at the time that was released. If you have a specific tweet or quote from Yudkowsky, I will add it in there. It's not a full debunking of AI doom, it's just an argument as to why it's extremely far out. I don't say it in the video, but I think it's probably 200+ years out, if it's possible at all, and I think we humans are perfectly likely to eliminate ourselves without the use of AI and robots, thank you very much.

However, I do want to be careful about dismissing all AI concerns and risk. I feel compelled to create another video later this year about AI risk, because I do not think this video will age well: I think it's extremely evident that AI already enables humans to perform extremely evil tasks on the reg, such as tracking and genociding / forcibly removing minority populations, efficiently shifting dangerous goods to areas where local people can't do anything about it, Nazi-style over-policing, etc. That being said, a lot of us normal web consumers live very cushy lives far away from these problems, and we're very likely to say "meh" to a lot of this stuff and wave it off in favor of science-fiction-inspired stories set in the distant future, which a lot of us can relate to much more easily.

Another complicating, dark thought is, of course, that a lot of the deadly AI out there may benefit us by lowering the cost of goods, whereas ChatGPT puts a lot of us on alert because it threatens our income. John Stuart Mill thought that people were inherently good and that utilitarianism was a great solution, but not everyone agrees with that.

predictedNO

@PatrickDelaney I'm not going to help you fulfill this market in a way that makes me lose Mana.

I agree with you that the use of AI by malevolent actors is also a major risk, and it's a risk that Yudkowsky doesn't spend a lot of time talking about, because he thinks even that risk is not as major as the risk of misaligned AIs killing everyone!

predictedYES

@RobinGreen OK, I think I understand your point and I think I can do my own research on this topic to try to achieve what you are looking for, thank you. If I don't end up fitting your criteria, no hard feelings because I have done most of the work on the video already.

predictedNO

@IsaacKing Do any of them specifically mention Yudkowsky by name? That's a requirement.

Does it need to specifically call out Yudkowsky or can it just be generally anti-near-term-doom?

predictedNO

@PatrickDelaney Needs to specifically diss Yudkowsky.
