In 2028, will an AI be able to play randomly selected computer games at human level without getting to practice?
2028 · 60% chance

Resolves positively if there is an AI which can succeed at a wide variety of computer games (e.g. shooters, strategy games, flight simulators). Its programmers can have a short amount of time (days, not months) to connect it to the game. It doesn't get a chance to practice, and has to play at least as well as an amateur human who also hasn't gotten a chance to practice (this might be very badly) and improve at a rate not too far off from the rate at which the amateur human improves (one OOM is fine, just not millions of times slower).

As long as it can do this over 50% of the time, it's okay if there are a few games it can't learn.
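As a rough illustration of how that resolution rule reads, here is a minimal sketch in Python; the data structure, field names, and the exact comparisons are assumptions of mine, not part of the market text.

```python
# Hypothetical sketch of the resolution logic; nothing here is official.
from dataclasses import dataclass

@dataclass
class GameTrial:
    ai_plays_at_amateur_level: bool   # matched or beat an unpracticed amateur
    ai_improvement_rate: float        # e.g. score gained per in-game hour
    human_improvement_rate: float     # same metric for the amateur human

def trial_passes(t: GameTrial) -> bool:
    # "one OOM is fine, just not millions of times slower"
    within_one_oom = t.ai_improvement_rate >= t.human_improvement_rate / 10
    return t.ai_plays_at_amateur_level and within_one_oom

def market_resolves_yes(trials: list[GameTrial]) -> bool:
    # "As long as it can do this over 50% of the time"
    passed = sum(trial_passes(t) for t in trials)
    return passed > len(trials) / 2
```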


A major market uncertainty for me is whether the "rate at which the amateur human improves" is measured in # of games played / in-game time, vs # of hours. Like, with highly parallel setups, the AI can play in an RL loop and plausibly get better quickly, but that would represent much, much more in-game time. I don't think it's plausible for this market to resolve YES if it requires in-game-time learning efficiency equivalent to that of a human for long-term video game learning.

@Bayesian my read of the market description (which admittedly is very bad for my position as a YES holder, but my honest interpretation regardless) is that it should be one agent vs one human running in similar time, no dilation or parallelisation. The developers get some time to build a harness if needed, then it just starts playing the same as a human would - 1 hour of game time is 1 hour of game time, effectively measured by the game engine and not by wall-clock time.

If the game engine timers are sped up to run 60 times as fast (so you can do 1 hour of regular gameplay in 1 minute), that's compared to what a human would do after 1 hour, not 1 minute.

If you have 60 agents running in parallel and updating each other, the same.

Just my take though.
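A tiny sketch of the accounting described in the comment above; the formula and the function name are mine (an assumption about how the comparison would be made), not the market's.

```python
# Assumed accounting: improvement is compared at equal *in-game* time, so
# engine speed-ups and parallel copies all count toward the AI's hours.
def ai_in_game_hours(wall_clock_hours: float, engine_speedup: float, parallel_agents: int) -> float:
    """Total in-game experience accumulated across all copies of the agent."""
    return wall_clock_hours * engine_speedup * parallel_agents

# One wall-clock minute at 60x speed on a single agent is one in-game hour,
# so it gets compared against a human who has played for an hour.
print(ai_in_game_hours(wall_clock_hours=1 / 60, engine_speedup=60, parallel_agents=1))  # 1.0
```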

@TomCohen yeah, that would make sense. Then I guess the issue is that it may well be as fast for 1 hour, or however long it takes before its context window fills up to practical limits, and thereafter hits a wall the human doesn't hit. Hmmmm.

My best guess is that this has trended upwards due to the IMO market resolving YES, leaving AI bulls flush with cash? I'm not really aware of any developments in the last 3 years that are bullish for this market, but I'd love to be wrong!

@DanW I didn't make any new bets, but a couple of important developments are probably:

  • Regular games are starting to be used as benchmarks by the big labs (the so-called 'Pokemon benchmark').

  • The new ARC benchmark has an emphasis on interactivity.

  • Google DeepMind are explicitly saying they're using Genie 3 to train AI models.

If I were to make new bets in this direction it would probably be from a "the trend continuing to hold is itself an update" point of view, but I'm comfortable between 50% and 70% at the moment.

bought Ṁ50 NO

Why is this trading up, has some progress been made?

@benjaminIkuta

insider trading... hopefully?

Gemini beat Pokemon, but that should have been priced in since it was making steady progress for a while.

The fact that this was trading below 50% seems surprising, considering "play video games" is a concrete external reward (the kind reasoning models excel at) and multiple major labs are clearly focused on this. Also, 50% of games and amateur-human level are highly achievable targets.

edit:
I didn't even notice the additional "Its programmers can have a short amount of time (days, not months) to connect it to the game", in which case the scaffolding for Gemini Plays Pokemon might not even be "cheating".

@LoganZoellner if you're surprised it's below 50%, what solution do you expect to exist for real time games?

@ProjectVictory

A multimodal transformer trained with reinforcement learning on a few thousand video games. It would surprise me if Google and OpenAI weren't both already working on this internally.
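For concreteness, a very rough skeleton of what that kind of training setup could look like; every class and function below is a stand-in made up for illustration, not anything Google or OpenAI has described.

```python
# Hypothetical sketch: a policy model trained with RL across many games.
import random

class GameEnv:
    """Stub environment standing in for one game."""
    def reset(self):
        return [[0.0] * 64 for _ in range(64)]            # placeholder "frame"
    def step(self, action):
        frame = [[random.random()] * 64 for _ in range(64)]
        reward = random.random()                          # stand-in reward signal
        done = random.random() < 0.01
        return frame, reward, done

class PolicyModel:
    """Stand-in for a multimodal transformer policy."""
    def act(self, frame):
        return random.randrange(16)                       # pick one of 16 "buttons"
    def update(self, trajectory):
        pass                                              # RL update would go here

def train(policy, games, episodes_per_game=10):
    for env in games:
        for _ in range(episodes_per_game):
            frame, done, trajectory = env.reset(), False, []
            while not done:
                action = policy.act(frame)
                frame, reward, done = env.step(action)
                trajectory.append((frame, action, reward))
            policy.update(trajectory)

train(PolicyModel(), games=[GameEnv() for _ in range(3)])
```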

@LoganZoellner this solution is currently about two orders of magnitude too slow for anything real-time. To play a first-person shooter somewhat competently you need latency of about 300ms at the very minimum, while transformers like Claude and Gemini take tens of seconds to make a move when playing Pokemon. Keep in mind that Pokemon is on the easiest end in terms of how hard it is to parse visually, so you can't just throw a super lightweight model at the problem.
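A back-of-the-envelope version of that latency point; the 60 fps figure and the 20-second example latency are assumptions, while the 300 ms budget and "tens of seconds" come from the comment above.

```python
# Rough latency-budget arithmetic (illustrative numbers only).
game_fps = 60
reaction_budget_s = 0.3    # ~300 ms minimum reaction budget for an FPS
llm_move_latency_s = 20.0  # "tens of seconds" per move, Pokemon-style scaffolding

frames_in_budget = game_fps * reaction_budget_s        # 18 frames
frames_per_llm_move = game_fps * llm_move_latency_s    # 1200 frames

print(frames_per_llm_move / frames_in_budget)  # ~67x, i.e. roughly two orders of magnitude
```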

Is it currently possible for me to get an AI to play with me at all, let alone well?

opened a Ṁ1,000 YES at 60% order

@benjaminIkuta Wait a few months for better computer use.

opened a Ṁ100 YES at 40% order

@AdamK Have you seen Claude Plays Pokemon? It's far worse than an amateur human, and the problem is planning, not interfacing with the game.

Also, the LLM approach is useless for real-time games where speed/reaction time is required; you can't exactly feed it screenshots and wait for a reply if it's a competitive shooter.

@ProjectVictory I think latency issues are one of the most plausible paths to AIs failing to meet the resolution criteria for certain classes of games. I'm not worried that AIs in 2028 will fail to plan well.

Above, I was mostly referring to the fact that hooking an AI up to a game requires custom scaffolding, so it is not something people can easily do currently. Better general computer use in the next 3-6 months might get us to the point where everyday people can have it actually try to play games, however badly.

@AdamK six months is not a very long time. I'd bet I'm not playing Inflection Point with an AI by then.

@AdamK so once a few months passes, you'll update down?

@benjaminIkuta Yes, my short timelines largely depend on seeing impressive returns to scaling RL.

@ProjectVictory the LLM can code a simple bot that plays a first-person shooter, and iterate on the code in the background.

@MartinRandall can it, though? According to the resolution criteria, "It doesn't get a chance to practice". And writing a bot that uses screen capture to play a shooter is anything but simple.

@ProjectVictory it realizes it's an FPS during the intro cutscene and deploys a standard FPS bot. After that it just needs to adjust what the program shoots at.

@MartinRandall what's "a standard FPS bot"? Can you give me an example of such a program? Or do you expect an LLM to write one from scratch in the time it takes for a game to load?

@benjaminIkuta

> Is it currently possible for me to get an AI to play with me at all, let alone well?


The best model currently accessible is probably UI-TARS. Claude and Gemini have both been making progress playing Pokemon, but I would argue neither of them really counts, because Pokemon is a turn-based game and the AIs get significant scaffolding in order to be able to play.

@LoganZoellner it's really cool that this is possible at all with a local model, but it takes minutes to do simple tasks in the browser, so it's probably not a viable Minecraft buddy yet.

https://www.reddit.com/r/LocalLLaMA/comments/1k665cg/anyone_try_uitars157b_new_model_from_bytedance/

@AdamK "Wait a few months for better computer use." Okay, it's been a few months now.

@benjaminIkuta Speaks to the inefficiency of Manifold operationalizations that there weren’t better computer use questions for me to lose mana on
