When will AI be better than humans at AI research? (Transformative AI)
Before 2030: 81%
Before 2035: 79%
Before 2040: 79%
Before 2050: 89%
Before 2070: 91%
Before 2100: 93%

When will there be an AI which is better at doing AI research than the average human AI researcher not using AI?

The AI must be capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

This question is meant to be another version of "When will we get text AGI / transformative AI".

All answers which are true resolve Yes.

A question which is conditional on this one:


I think requiring AIs to do brainstorming is a bit pointless, since brainstorming is a uniquely human way of coming up with ideas. Maybe it would be better to just judge them on their output.

I.e., you tell an AI "Please generate a better AI algorithm", it thinks for a while and spits out an implementation and a paper that are better than the state of the art. I would definitely call this "better than humans at AI research", but it wouldn't fit the detailed criteria of the question.

Related:

Due to the subjective resolution criteria, I've sold my positions and will not bet further on this market.

Can you operationalize AI research? For example, does it suffice to have a task-specific model that can improve language models faster than humans can, or does this include all the different types of AI research? Is interpretability part of AI research?

@NoaNabeshima I mean a model that is capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

It's basically equivalent to "When will we get text AGI / transformative AI".
