Curious early results of AI-powered mind-reading
- Ruwan Rajapakse
- Jul 28, 2024
- 5 min read
Updated: Jul 29, 2024
Before I get into the recent developments in fMRI + AI-based mind reading, please allow me to briefly set the background, as it will help you grasp things faster.
The idea that the physical activity detectable in someone’s brain – be it electrical, chemical or mechanical – might echo the thoughts that person is having is as old as the hills. Or at least, it is as old as the advent of EEG systems in the early twentieth century.
In the 1940s, the neural network model of the brain began to take root. It was theorised – based on significant evidence – that the salient workings of a person’s brain took the form of electrical and chemical firings in neural networks. These firings were thought to carry out a logical analysis of incoming sensory data, weigh it against certain evolved goals, and trigger intelligent behaviour.
Chemical influences, observable locally at the synapses and globally at the level of the entire brain, and the physical reorganisation of synaptic connections through the growth of nerve fibres and cells, were thought to support and enhance the function of these neural networks in various ways.
For example, when one is struck by an infection, the neural networks in one’s body would detect this threat and release a global chemical influence in the form of neurotransmitters to (let us say, for simplicity’s sake) tone down most neural activity and cause lethargy, the logic being that such a lethargic state would help fight the infection by conserving energy and channelling nutrition to the most essential bodily processes.
Anyhow, researchers and technologists worked frenziedly towards inventing more accurate systems to detect and map neural activity without performing invasive procedures on the brain. fMRI was born in the 1990s through the work of Seiji Ogawa and others, and this real-time scanning technology was further developed this century to reveal relatively detailed maps of brain oxygenation, which corresponds to neural activity.
This opened up the possibility of seeing whether the thoughts of a person could be read from these fMRI maps. The problem, however, was how to decode these maps of changing brain activity. How does one match the highly complex, convoluted patterns visible on an fMRI screen with the thoughts reported by the subjects, untangle them, and discover universal signatures of specific thoughts or ideas? Researchers struggled with this problem for decades; now, finally, it seems that machine learning techniques have advanced sufficiently to do the job.
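For readers who like to see the shape of such a pipeline, here is a deliberately simplified Python sketch of one common framing of the decoding problem: fit an "encoding model" that predicts voxel activity from semantic features of text, then, at decoding time, prefer whichever candidate sentence best predicts the observed scan. Everything below – the toy featurizer, the synthetic data, the function names – is an illustrative placeholder, not the actual method of the paper discussed next.

```python
# A simplified sketch (not the authors' pipeline) of the "encoding model" idea:
# learn a mapping from semantic features of text to brain activity, then pick
# the candidate sentence whose predicted activity best matches the observation.
import numpy as np

def text_features(sentence: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a real semantic embedding (e.g. from a language model)."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(dim)

def fit_encoding_model(sentences, scans, alpha=1.0):
    """Ridge regression mapping semantic features -> voxel responses."""
    X = np.stack([text_features(s) for s in sentences])          # (n, dim)
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ scans)
    return W                                                      # (dim, voxels)

def decode(observed_scan, candidates, W):
    """Return the candidate whose predicted scan is closest to the observation."""
    return min(candidates,
               key=lambda s: np.linalg.norm(text_features(s) @ W - observed_scan))

# Toy usage with synthetic "scans", purely to show the shape of the procedure.
rng = np.random.default_rng(0)
train_sentences = [f"training sentence number {i}" for i in range(200)]
projection = rng.standard_normal((64, 1000))                      # fake feature->voxel map
train_scans = np.stack([text_features(s) for s in train_sentences]) @ projection
W = fit_encoding_model(train_sentences, train_scans)
print(decode(train_scans[3],
             ["training sentence number 3", "a completely unrelated sentence"], W))
```

The real systems are vastly more sophisticated (they model the sluggish blood-oxygenation response and search over candidate continuations with a language model), but the basic logic of comparing predicted against observed activity is the same in spirit.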
Jerry Tang and colleagues from the Department of Computer Science at The University of Texas at Austin initially published their work on an fMRI-based noninvasive language decoder powered by advanced machine learning techniques (Ref 1) in 2022. They refined their experiments over the past couple of years and recently went public with their findings (Ref 2) at the World Science Festival, in a discussion moderated by the celebrated physicist and public intellectual Brian Greene. The science of mind reading is finally here for real, it seems!
Essentially, an fMRI + machine learning system is trained on a person, and then the person is asked to have a private thought (which is subsequently revealed and compared with the system's reading). The purpose of our short meditation here is not to elucidate the details of these experiments and judge their soundness; the reader is welcome to read through the paper, watch the discussion at leisure, and judge for themselves.
What we’d like to do here is to contemplate the preliminary results, and to consider a rather curious property they have, which seems to have a bearing on hypotheses on consciousness.
Here are two example results from Tang's experiments (many more have been published, all of a similar nature).
1) Thought: “We start to trade stories about our lives, we’re both from up north.”
System reading: “We started talking about our experiences, in the area he was born in I.”
2) Thought: “Marko leaned over to me and whispered, ‘You are the bravest girl I know.’”
System reading: “He runs up to me and hugs me tight, and whispers, ‘You saved me.’”
If you think carefully about this result, you will realise that the system seems to be homing in on meaning, or even conscious emotions, as opposed to language and syntax – i.e. words and phrases. Consider the following equivalent meanings seemingly identified by the system.
“Trade stories” and “talking about our experiences”.
“We’re both from up north” and “the area he was born in I”.
“Leaned over to me” and “runs up to me and hugs me tight”.
“Whispered” and “whispered”.
“You are the bravest girl I know” and “You saved me”.
The patterns revealed by these ML algorithms seem to be units of emotion or feeling – like the feelings triggered by one’s birthplace or home, or by someone being intimately close to you – rather than units of the English language such as words and sentence constructs. Syntactical brain processes seem to have flown under the radar at the resolution of these experiments.
Is it not remarkable that the physical activity in the brain seems to correlate with an aggregation of commonly understood meanings or triggered feelings, rather than with an aggregation of words and phrases?
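As a back-of-the-envelope illustration (not part of Tang's study), one can check this intuition with any off-the-shelf sentence-embedding model: the actual and decoded sentences from example 1 share almost no vocabulary, yet their embeddings sit close together. The snippet below assumes the sentence-transformers package and the all-MiniLM-L6-v2 model, both chosen purely for illustration.

```python
# Hedged illustration of "meaning match" vs "word match": the two sentences
# share few words (low Jaccard overlap) yet a general-purpose sentence
# embedding rates them as semantically similar.
from sentence_transformers import SentenceTransformer, util

thought = "We start to trade stories about our lives, we're both from up north."
decoded = "We started talking about our experiences, in the area he was born in."

def word_overlap(a: str, b: str) -> float:
    """Crude lexical similarity: shared words over total distinct words (Jaccard)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

model = SentenceTransformer("all-MiniLM-L6-v2")
semantic = util.cos_sim(model.encode(thought), model.encode(decoded)).item()

print(f"word overlap (Jaccard): {word_overlap(thought, decoded):.2f}")  # low
print(f"embedding cosine:       {semantic:.2f}")                        # noticeably higher
```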
This point might not seem like anything new or special to many laymen, who might in any case instinctively think that the brain is home to rich, emotional thoughts that are self-evident (i.e. conscious) and have little to do with the language we express them in. But we philosophical types have lived in a paradigm where, for at least the last fifty years or so, linguistic theories of mind ruled the roost. Babies who can’t speak are basically unconscious, we were famously told (by the late Dan Dennett and others).
Language underpinned our so-called subjective thoughts, we were strongly advised. But this new evidence seems to hint that things might be the other way around after all. It seems the neural networks in the brain operate in a language of feelings, or of deep meaning.
“Emotionalists” like Antonio Damasio would be much pleased. It seems the activity in human neural networks is far more correlated with "vulgar" thoughts and feelings than we thought (pun not intended). We might not be able to easily dismiss thoughts as mere epiphenomena if they prove to be super-correlated with brain activity and turn out to be the empirical "language" of the brain.
Of course, research of this nature – mind reading via observation of brain activity, powered by ML – is in its very earliest days, and it is possible that the present systems are too noisy and lack resolution at the synaptic and chemical level. Once refined, the major correlations may yet turn out to be syntactical. But that is not the present observation.
References:
1. Tang, J. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. bioRxiv preprint, 2022. https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1.full.pdf
2. Can AI read your mind? World Science Festival, 26 June 2024. https://youtu.be/laZ7ym4NBQc?si=Ri1vIg_CxSkGU8AD