A new step toward practical contactless brain monitoring

21:12, 06 Apr 2026

Can remote cortical monitoring become a common tool in medical screening?

From inner speech to cross-regional cortical decoding

A year ago, the idea sounded almost futuristic: could a person’s brain responses be decoded remotely, without surgery, without electrodes, and without any physical contact at all?


Today, that vision appears a little closer to reality. In a new preprint (https://www.researchgate.net/publication/403167585_Contactless_optical_decoding_of_cortical_language_responses_via_region-transferable_speckle_dynamics), researchers from Bar-Ilan University and collaborators report that AI models trained on optical signals from one language-related brain region can help decode responses from another. The finding may reduce calibration time and bring contactless cortical monitoring a meaningful step closer to practical use.

We previously covered the team’s earlier breakthrough (https://lenta.ua/ru/can-we-read-your-inner-speech-from-a-distance-188326/), and that story drew substantial public interest, with more than 20,000 views. We are therefore continuing to follow this line of research as it develops into what may become an important new direction in contactless brain monitoring.

Fig. 1: Setup (figure from the preprint)

The technology is based on a striking combination of physics and artificial intelligence. A laser illuminates the scalp, while a high-speed camera records tiny changes in the resulting speckle pattern — a granular optical pattern formed when coherent light reflects from a rough biological surface. Instead of relying on electrodes, caps, or invasive implants, the system analyzes subtle optical dynamics from outside the head. In the team’s earlier preprint, this approach was used to decode binary inner speech — “yes” versus “no” — from signals recorded over Broca’s area, achieving a mean AUC of 0.97 and accuracy of 95.7% using 40-millisecond inputs and minimal calibration.
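For technically minded readers, here is a rough sense of the kind of signal the camera captures. One standard way to quantify speckle dynamics is to measure how quickly consecutive frames decorrelate: the faster the underlying tissue activity, the sooner the speckle grain rearranges. The sketch below is purely illustrative, written in Python with made-up data; it is not the team’s actual pipeline, which feeds the optical dynamics to a neural network.

```python
# Illustrative sketch only, not the authors' pipeline: one common way
# to quantify speckle dynamics is how fast consecutive frames decorrelate.
import numpy as np

def speckle_decorrelation(frames: np.ndarray) -> np.ndarray:
    """Per-step Pearson correlation between consecutive speckle frames.

    frames: (T, H, W) intensity stack covering one short window
            (e.g. ~40 ms at a high frame rate). Faster tissue dynamics
            scramble the speckle grain sooner, so correlations drop faster.
    """
    flat = frames.reshape(frames.shape[0], -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)   # zero-mean each frame
    norms = np.linalg.norm(flat, axis=1)
    return (flat[:-1] * flat[1:]).sum(axis=1) / (norms[:-1] * norms[1:])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((40, 64, 64))  # stand-in for a real 40 ms recording
    print(speckle_decorrelation(demo).round(3)[:5])
```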

The new study pushes that idea further. Rather than training a model from scratch for every cortical region and every task, the researchers used a self-supervised long-video masked autoencoder to learn compact representations from one setting — inner speech in Broca’s area — and then transferred those representations to another: speech-comprehension responses in Wernicke’s area. In the new preprint, the model distinguished intelligible from incomprehensible speech responses with a mean accuracy of 95.7% and an AUC of 0.98, using 40-millisecond segments, corresponding to approximately one-quarter of the duration of a typical English spoken syllable, and less than 1 minute of labeled calibration data per category. The authors interpret this as evidence that the learned optical representations capture structure that can transfer across related language tasks and anatomically distinct cortical areas.
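In reduced form, the transfer recipe looks like this: keep the representation learned on one region frozen and fit only a small classifier on the new region’s brief calibration set. The sketch below illustrates the idea with a stand-in random projection in place of the real self-supervised encoder and synthetic data throughout; none of it is the authors’ code.

```python
# Illustrative transfer sketch (not the authors' code): freeze a
# representation learned on region A and fit only a small classifier
# on the new region's short labeled calibration set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for the frozen self-supervised encoder described in the
# preprint: here just a fixed random projection to 64-d features.
W = rng.standard_normal((40 * 32 * 32, 64))

def encode(clips: np.ndarray) -> np.ndarray:
    """(N, T, H, W) speckle clips -> (N, 64) feature vectors."""
    return clips.reshape(len(clips), -1) @ W

# Synthetic calibration data from the *new* region: 120 short clips,
# labeled intelligible (1) vs. unintelligible (0) speech responses.
clips = rng.random((120, 40, 32, 32))
labels = rng.integers(0, 2, size=120)

feats = encode(clips)
clf = LogisticRegression(max_iter=1000).fit(feats[:80], labels[:80])
probs = clf.predict_proba(feats[80:])[:, 1]
print("held-out AUC:", round(roc_auc_score(labels[80:], probs), 3))
```

Because only the lightweight classifier is retrained, the labeled data requirement stays tiny; that is the practical payoff the preprint reports with under a minute of calibration per category.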

Why does that matter? Because one of the biggest obstacles in brain-computer interface research is practicality. Many powerful systems still depend on surgery, physical contact, lengthy setup, or task-specific retraining. Even non-invasive methods often require careful placement, long calibration, or cumbersome hardware. A system that uses only a laser, a camera, and lightweight AI — and can reuse what it learned in one setting to work in another — points toward something far more scalable.

The long-term potential is considerable. If future studies confirm its robustness outside the laboratory, contactless cortical monitoring could become useful for bedside neurological screening, for communication support for patients who cannot tolerate head-mounted systems, and perhaps eventually in compact wearable formats. The earlier inner-speech preprint even noted the possibility of integrating this technology into wearables, such as smart glasses or portable bedside devices, for patients with paralysis or contact sensitivity. That future is not here yet, but it is becoming easier to imagine.

What makes this line of research especially compelling is its deep interdisciplinary nature: the project brings together optics, AI, neuroscience, and translational thinking. The work reflects the strength of Prof. Zeev Zalevsky’s lab in combining optical sensing with advanced machine learning, together with the collaboration of the Gonda Multidisciplinary Brain Research Center and Prof. Moshe Bar’s cognitive neuroscience perspective. This kind of teamwork is precisely what allows complex experiments to become influential research. The team’s latest preprints and publications list Natalya Segal, Prof. Moshe Bar, Daniel Rubinstein, Yehor Krapovnytskyi, Sergey Agdarov, Dr. Yevgeny Beiderman, Dr. Zeev Kalyuzhner, Dr. Yafim Beiderman, and Prof. Zeev Zalevsky as co-authors.

Fig. 2: The autoencoder, together with a simple PCA projection, allows visual separation of the categories
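A figure of this kind is typically produced by projecting the encoder’s feature vectors onto their first two principal components and coloring each point by category. A self-contained illustration with synthetic clusters (not the study’s data):

```python
# Illustrative only: PCA projection of feature vectors, colored by label.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic stand-in features: two loose clusters, one per category.
feats = np.vstack([rng.normal(0.0, 1.0, (60, 64)),
                   rng.normal(1.5, 1.0, (60, 64))])
labels = np.array([0] * 60 + [1] * 60)

xy = PCA(n_components=2).fit_transform(feats)  # 64-d -> 2-d
plt.scatter(xy[:, 0], xy[:, 1], c=labels, cmap="coolwarm", s=14)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Illustrative PCA of encoder features")
plt.show()
```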

When this group of researchers publishes a new study, it tends to signal that something genuinely interesting is underway. That is hardly surprising given the team’s profile: Prof. Zeev Zalevsky’s long-standing expertise in optics, Natalya Segal’s novel AI approaches, Prof. Moshe Bar’s brain-science perspective, and Daniel Rubinstein’s contribution in data analysis. Together, they represent the kind of interdisciplinary collaboration from which important advances often emerge. The team’s previous preprint remains among the top 2% of 2026 publications on ResearchGate by interest score (Fig. 3).

Among the collaborators is Daniel Rubinstein, whose contribution highlights the importance of data analysis in modern interdisciplinary science. In emerging research areas, it is not enough to generate complex experimental data: the data must also be carefully analyzed, organized, and presented so that the underlying patterns become visible and scientifically meaningful, in a form the wider community can interpret, evaluate, and build upon.

Within that collaboration, Rubinstein’s role stands out as notable for an early-career researcher. Beyond contributing to the data analysis itself, colleagues credit him with substantive intellectual input: refining the interpretation of the models, pushing for stronger controls, and redesigning analyses in ways that clarified what the system was actually measuring and made the findings more robust and easier to evaluate. That combination of analytical skill, computational understanding, and scientific judgment is an important part of how complex datasets are translated into influential research.

The broader significance of this study lies in its translational potential. Advances of this kind suggest a future in which non-contact optical systems may become more practical for neurological assessment and communication-support technologies. The study reflects the kind of ambitious, interdisciplinary, high-upside research environment in which young scientists with computational and analytical strengths can make meaningful contributions.

Fig. 3: The previous preprint describing the autoencoders is among the top 2% of 2026 publications on ResearchGate by interest score

Иван Сергиенко

Читайте также:

World news

A new step toward practical contactless brain monitoring

21:12 06 апр 2026.  6769

Самое читаемое