Will a Psychology paper containing false data generated by a LLM tool be published in an accredited journal in 2024?
62% chance
LLM assistants and similar tools are notorious for outputting bad data and false citations ("hallucinating"). There has already been a highly public case of this leading to legal malpractice (https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html). Will we see a similar case or cases in the arena of Psychology during 2024?
I'll consider all journals with an average impact factor >10 over the last 10 years (2024 inclusive) that self-describe as being primarily concerned with the field of Psychology.
This question is managed and resolved by Manifold.
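For concreteness, here is a minimal sketch of how the journal-eligibility check above could be expressed, assuming yearly impact factors for a journal are already available. The function name and data are hypothetical illustrations, not part of the market's resolution process.

```python
def journal_qualifies(yearly_impact_factors, primarily_psychology):
    """Return True if a journal meets this market's criteria:
    an average impact factor > 10 over the last 10 years (2024 inclusive)
    and a self-description as primarily a Psychology journal."""
    if not primarily_psychology:
        return False
    last_ten = yearly_impact_factors[-10:]  # the 10 most recent years, ending 2024
    return sum(last_ten) / len(last_ten) > 10

# Hypothetical example: a journal averaging roughly 12.4 over the decade qualifies.
print(journal_qualifies(
    [11.0, 11.5, 12.0, 12.2, 12.5, 12.8, 13.0, 12.6, 12.4, 12.9],
    primarily_psychology=True,
))
```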
@Calvin6b82 That's the rub, yeah. There's a 0% chance this won't happen. Whether or not it's caught, you know...
Related questions
Will a Biology paper containing false data generated by a LLM tool be published in an accredited journal in 2024?
49% chance
Will a paper falsified (or containing false data generated) by a LLM tool be published in an accredited journal in 2024?
80% chance
Will OpenAI release an LLM moderation tool in 2024?
61% chance
Will a LLM/elicit be able to do proper causal modeling (identifying papers that didn't control for covariates) in 2024?
41% chance
Will I write an academic paper using an LLM by 2030?
65% chance
At the beginning of 2028, will LLMs still make egregious common-sensical errors?
42% chance
LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?
55% chance
Will a LLM beat human experts on GPQA by Jan 1, 2025?
85% chance
Will we have a popular LLM fine-tuned on people's personal texts by June 1, 2026?
52% chance
Will we see improvements in the TruthfulQA LLM benchmark in 2024?
74% chance