Will there be a very reliable way of reading human thoughts by the end of 2024?🧠🕵️
501 traders · Ṁ130k · closes Dec 31 · 4% chance

The method used could be anything, for example scanning the brain and decoding brain waves into language and/or images using machine learning.

Experimentally, we should be able to do things like predict someone's internal monologue with only minor errors, or predict what answers someone will give to questions, with only minor errors.

For example, if someone thinks of a number between 1 and 100 and we can consistently guess which number they're thinking of, this would be strong evidence for YES.
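For a sense of scale, here is a quick back-of-the-envelope check (a minimal sketch; the trial counts are illustrative) of how unlikely consistent correct guesses would be by pure chance:

```python
from scipy.stats import binom

# Chance of guessing a uniformly chosen number between 1 and 100 is 1/100.
p_chance = 1 / 100

# Probability of getting at least k correct out of n trials by luck alone.
n_trials, k_correct = 20, 18
p_value = binom.sf(k_correct - 1, n_trials, p_chance)
print(f"P(>= {k_correct}/{n_trials} correct by chance) = {p_value:.3e}")
```

Even a modest run of correct guesses at 1-in-100 odds leaves a vanishingly small probability under the chance hypothesis.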

Reminder to read comments for additional clarifications.

bought Ṁ10 NO

The back of your tongue unconsciously moves to match your thoughts; what's the name for that phenomenon? Does reading that movement to read thoughts count?

Does the P300 signal count?

predictedNO

Does the setup matter? If we get a very reliable way to read someone's thoughts but it is super invasive, does that count too?

predictedYES

MindEye takes human brain activity as input and outputs reconstructed images like these.

bought Ṁ50 YES from 16% to 19%

How close to real time does this have to be? Does this resolve YES if a human thought can be reliably predicted from a scan, but it takes days/weeks/months of work and multiple people working full time?

predictedYES

https://ai.meta.com/blog/brain-ai-image-decoding-meg-magnetoencephalography/

Today, Meta is announcing an important milestone in the pursuit of that fundamental question. Using magnetoencephalography (MEG), a non-invasive neuroimaging technique in which thousands of brain activity measurements are taken per second, we showcase an AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.

This AI system can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant. This opens up an important avenue to help the scientific community understand how images are represented in the brain, and then used as foundations of human intelligence. Longer term, it may also provide a stepping stone toward non-invasive brain-computer interfaces in a clinical setting that could help people who, after suffering a brain lesion, have lost their ability to speak.
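Meta's actual system is more sophisticated than this, but a minimal sketch of the general recipe such decoders tend to follow (regress brain recordings onto image embeddings, then retrieve the nearest known image; every name, shape, and data array below is an illustrative stand-in, not Meta's pipeline) might look like:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: one MEG window (sensors x timepoints, flattened)
# per viewed image, plus a precomputed embedding of each image.
n_images, n_meg_features, embed_dim = 1000, 272 * 50, 512
X = rng.standard_normal((n_images, n_meg_features))  # MEG windows
Y = rng.standard_normal((n_images, embed_dim))       # image embeddings

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Linear map from brain activity into the image-embedding space.
decoder = Ridge(alpha=1e4).fit(X_tr, Y_tr)
Y_pred = decoder.predict(X_te)

# Retrieval: for each held-out window, find the closest candidate embedding.
def unit(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

sims = unit(Y_pred) @ unit(Y_te).T
top1 = (sims.argmax(axis=1) == np.arange(len(Y_te))).mean()
print(f"top-1 retrieval accuracy: {top1:.2%}")
```

Retrieval from a fixed candidate set is a much easier problem than generating an arbitrary image from scratch, which matters for how strong this evidence is.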

bought Ṁ130 YES
bought Ṁ100 NO from 28% to 26%

@firstuserhere

AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.

This is very much in line with the resolution criteria. The market should update a bit on this. The title is fairly wild, but the description allows the market to resolve under much weaker conditions.

Warning to new users: Levi Finkelstein has abused unclear resolution criteria/technicalities/loopholes/odd interpretations to resolve markets (usually in their favor) many times before. So I’d be wary of this market.

predictedYES

@ShadowyZephyr Since then, Levi was fined, and I don't see him holding a position in this market either. I don't think he has misresolved any markets since getting fined by SirSalty, so

@firstuserhere I don't think it's likely that this will be resolved misleadingly, but it is a possibility. Like 20%, maybe? HMYS is holding quite a bit of YES, and Levi has done other shady things post-fine.

predictedYES

@ShadowyZephyr Alright, thanks for the warning. My position here is because of a project I myself did not too long ago: creating a dictionary of patterns that the brain's visual cortex exhibits when shown different images, and then reconstructing the images from those patterns. This was back in 2018, and we were able to take an fMRI pattern, say "Hey, that person was looking at a tiger and a tree!", and be right 70% of the time.
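For illustration, a toy version of that dictionary-matching idea (every name, shape, and number below is hypothetical, not the 2018 project's actual code) could be as simple as:

```python
import numpy as np

def build_dictionary(patterns, labels):
    """Average fMRI pattern recorded per image category."""
    return {lab: patterns[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def classify(pattern, dictionary):
    # Pick the category whose template correlates best with the new scan.
    return max(dictionary,
               key=lambda lab: np.corrcoef(pattern, dictionary[lab])[0, 1])

rng = np.random.default_rng(1)
train = rng.standard_normal((60, 5000))             # 60 scans, 5000 voxels
labels = np.repeat(["tiger", "tree", "house"], 20)  # category per scan
d = build_dictionary(train, labels)
print(classify(rng.standard_normal(5000), d))
```

Template matching like this only works for categories seen in training, which is exactly the limitation raised further down the thread.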

predictedNO

@firstuserhere I think the problem here is that there is an astronomical number of possible thoughts.
What we seem to have is some ability to extract general ideas from brain patterns, within a given set of stimuli, but we still seem far from getting any details.
If we look at something like this https://www.youtube.com/watch?v=z-OBapDD340, it seems clear the AI is just hallucinating its own content from a few general concepts.

If I think something like "I will go get my money in my [actual visual image of my room]", and we can read basically just that, without it being in a list of given possibilities and with my room actually recognizable as my room, then we need to be able to read the details of the thought. That is probably very specific to each person, and it requires reading the fine detail of the actual brain pattern.
And I think we are far from being able to do that.

predictedNO

@ShadowyZephyr It seems to me that here he is trying to be clear, whereas in the markets he sort of mis-resolved he was fuzzy, and those markets were less serious.

I think he will try to resolve this as correctly as possible (I think he would have even before the fine).

But if not, then I pre-commit to not betting in any of his markets again (I say it now, because otherwise I know I will still think "oh, maybe this time it is OK" and get bitten again x) ).

@firstuserhere He has now (e.g., the poop market).

I would guess he plans to resolve this market YES on the grounds that he has a reliable method to read his own thoughts.

@DavidBolin The loophole could also just be that "asking them a question" is a very reliable way of reading human thoughts.

So far, to my knowledge, most experiments have focused on categorising brain activity that was also matched in training data. Is reliable generalisation beyond training data a necessity?

Similarly, participants need to consciously focus on "being readable". Is that good enough, or should it work in a more naturalistic setting?
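One way to make that distinction concrete is to hold out entire participants during evaluation. A minimal sketch (toy data; all shapes and names illustrative) using leave-subjects-out cross-validation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(2)

# Toy data: trials from 10 subjects, binary stimulus class per trial.
n_trials, n_features = 500, 100
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)
subject = rng.integers(0, 10, n_trials)

# Every test fold contains only unseen people, so the score reflects
# generalisation across persons rather than memorisation of the cohort.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=GroupKFold(n_splits=5), groups=subject)
print(f"cross-subject accuracy: {scores.mean():.2%}")
```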

@AdamTreat the comments already have several other examples. All of these, I think, should count cumulatively as resolving YES; otherwise what is happening here is a No True Scotsman fallacy as a market.

predictedNO

@AdamTreat this is the first one that comes close to meeting either of the criteria in the OP:
("Experimentally, we should be able to do things like predict someone's internal monologue with only minor errors, or predict what answers someone will give to questions, with only minor errors." It would meet at least one of those if the question is which phrase of that particular song they're listening to, or if their internal monologue is remembering/experiencing that song. But it fails to achieve generality, and it's not actually reconstructing the melody + lyrics from a vacuum; it's essentially picking the correct timestamp of the song and reconstructing a best guess at what should fill that gap in its training data. The details are more technical, and the method involved training on dozens of patients with hundreds of surgically implanted electrodes each and agglomerating their collective data.)
Other attempts cited have similar flaws in terms of overfitting to a small and specific population (with no evidence that the model can be generalized across persons, and some evidence against, although the training method obviously could be, which may or may not make that a moot point as far as the "95% of the population" criterion is concerned).

Sadly, it's far from meeting the clarifications in a comment:
"However, in broad strokes what I want is something that captures the "mind reading" trope, meaning it should be possible to decode most verbal thoughts, decipher clear mental images precisely enough that we can tell what's in being imagined, and recover answers to "think of one of the items in this set" type questions."
unless, again, the set is "chunks of this particular song" (which it actually passes, by the way), or the verbal thoughts are limited to that song's lyrics, or the thing being imagined is that song.
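To see why "picking the correct timestamp" is a weaker feat than it sounds, consider a toy closed-set version of the task (all numbers hypothetical): given a stored neural fingerprint for every chunk of one known song, identification reduces to nearest-neighbour lookup.

```python
import numpy as np

rng = np.random.default_rng(3)
n_chunks, n_features = 200, 300

# One stored neural "fingerprint" per song chunk, from training playback.
fingerprints = rng.standard_normal((n_chunks, n_features))

# A fresh, noisy recording while the brain processes chunk 42.
observation = fingerprints[42] + 0.5 * rng.standard_normal(n_features)

# Closed-set identification: nearest neighbour over the known chunks.
best = int(np.argmin(np.linalg.norm(fingerprints - observation, axis=1)))
print(f"identified chunk: {best}")  # recovers 42 with high probability
```

Open-domain mind reading has no such finite candidate list to lean on.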

predictedNO

@AdamTreat No, this is "what music are you listening to right now?"

It has nothing to do with decoding thoughts.

predictedNO

@NicoTerry It is a song which is currently being listened to, not just imagined.

predictedYES

@DavidBolin Eh? The song was reconstructed from the brainwaves of the patients. That is "decoding thoughts"

predictedNO

@AdamTreat People have been doing variants of this for literally decades. If you know what you're looking for, and have trained the brain decoding model to identify that same restricted set of content, it's relatively easy. The problem is when you go open-domain, where the person could be thinking/feeling/seeing/hearing anything. That's the sci-fi trope, and that we do not have, and will not have for many, many years.

predictedYES

@jonsimon Ahh, so it has to be reconstructed in a double-blind setting … e.g. without knowing what the user was experiencing.

What if the researchers knew they were listening to a song, but not which song, and could reconstruct it?

predictedNO

@AdamTreat I think what happened here was that these were epilepsy patients that had electrodes already in their brains. This allowed the researchers to collect a lot of data about "when the person hears this sound, this is what their brain waves look like" to the point that they can reconstruct sound from electrode readings.

Then they played a Pink Floyd song to them and reconstructed the audio from the resulting readings.

Note they were *not* reading the person's "thoughts"; they were reading the brain's real-time response to strong external stimulation. Decoding internal monologue would be much harder to collect training data for, and also much harder to detect at all because of how much physiologically subtler it is.

To answer your question, it sounds like they could reconstruct any song in this case, but it's not actually impressive because the person literally *needs to be hearing the song*, so there's no secret internal information being leaked out of their mind here.
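For concreteness, a minimal sketch of that kind of stimulus-reconstruction training, assuming time-aligned electrode features and audio spectrogram frames (every shape and name here is illustrative, not the study's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Stand-in data: per time frame, high-gamma features from implanted
# electrodes (X) and the mel-spectrogram frame of the audio being
# played at that same moment (Y).
n_frames, n_electrodes, n_mel_bins = 5000, 128, 80
X = rng.standard_normal((n_frames, n_electrodes))
Y = rng.standard_normal((n_frames, n_mel_bins))

# Train on most of the recording, hold out a stretch to "reconstruct".
split = int(0.8 * n_frames)
model = Ridge(alpha=10.0).fit(X[:split], Y[:split])
reconstructed = model.predict(X[split:])  # spectrogram frames; a vocoder
                                          # would turn these back into audio
print(reconstructed.shape)
```

The model only ever maps concurrent auditory responses back to the sound that caused them, which is why it says nothing about unprompted inner speech.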

predictedNO

@AdamTreat It is not decoding thoughts; they are not thinking about it, they are listening to it. It is recognizing the activation of the auditory parts of their brain. It has nothing to do with thinking. E.g. it says nothing about whether they think it is bad or good, which is a possible example of a thought, but not a possible example of something that could be detected by that method.

predictedNO

@AdamTreat No it is not. It is decoding audio activations in the brain resulting from direct stimulation on the ears. That is not what we mean by "thoughts."

To be precise, it is detecting what they are listening to at the moment, not what they are thinking about it or anything else.
