
Can We Read Your Inner Speech from a Distance?

14:09, 12 Feb 2026

The project integrates optics, neuroscience, and artificial intelligence.

Earlier this year, coverage on Lenta.ua of a new generation of brain–computer interfaces drew significant public attention, attracting more than 20,000 readers across its English, Ukrainian, and Russian versions and sparking widespread curiosity about the possibility of contactless neural decoding. In light of newly released scientific results, we return to the topic.

Fig. 1 Setup


A research team from Bar-Ilan University is exploring a question that once belonged purely to science fiction: can inner speech — the silent “yes” or “no” we say to ourselves — be decoded without touching the head, without electrodes, and without surgery? Using laser-based optical sensing combined with artificial intelligence, the researchers distinguished silent “yes” from “no” responses originating in Broca's area, the brain region where speech is produced, with 95.7% accuracy, requiring only about one minute of subject-specific calibration per class.

Unlike invasive brain-computer interfaces that require implants or traditional non-invasive systems that rely on physical contact via EEG caps, this approach operates remotely. A low-power laser and a high-speed camera capture subtle speckle dynamics reflected from the scalp. These fluctuations encode tiny vibrations associated with neural activity. Deep learning models then analyze these patterns to identify the internal cognitive state.
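To make the general idea concrete, here is a minimal, purely illustrative sketch in Python: a stack of high-speed speckle frames is reduced to a temporal vibration signature, and a small learned classifier separates silent “yes” from “no” trials. The frame-differencing feature, the MLP classifier, and the synthetic data are our own assumptions for illustration only; the team's actual preprocessing and deep-learning models are described in the preprint linked at the end of this article.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def speckle_features(frames):
    # Reduce a (T, H, W) stack of speckle frames to a temporal feature vector.
    # Frame-to-frame decorrelation grows when the illuminated surface vibrates,
    # so the mean absolute difference between consecutive frames serves as a
    # crude proxy for vibration amplitude over time.
    diffs = np.abs(np.diff(frames, axis=0))      # (T-1, H, W)
    return diffs.mean(axis=(1, 2))               # (T-1,) time series

def synthetic_trial(label, T=65, H=32, W=32):
    # Fake speckle stack whose temporal dynamics depend on the (hypothetical) label.
    t = np.linspace(0.0, 1.0, T)[:, None, None]
    modulation = (0.15 + 0.10 * label) * np.sin(2 * np.pi * (3 + label) * t)
    return rng.random((T, H, W)) + modulation

labels = np.array([0, 1] * 100)                  # silent "no" = 0, silent "yes" = 1
X = np.array([speckle_features(synthetic_trial(y)) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on synthetic data:", accuracy_score(y_te, clf.predict(X_te)))

On real recordings the feature extraction and model would of course be far more elaborate; the sketch only shows how vibration-driven speckle dynamics can, in principle, feed a trained classifier.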

The preprint describing the study has already attracted 205 reads on ResearchGate, placing it in the top 3% by research interest among 2026 publications and indicating strong early engagement within the scientific community (Fig. 2).

A central challenge was not only detecting meaningful neural signals but also establishing their origin. A significant part of the recent work focuses on ensuring that the optical patterns captured by the system correspond to cortical activity rather than to jaw- or tongue-movement artifacts. This refinement required revisiting assumptions, stress-testing interpretations, and carefully defining the methods used to analyze complex signals, as well as additional effort in visualizing results for explainability. Daniel Rubinstein, a co-author previously highlighted for his emphasis on critical scientific questioning, contributed to this phase by helping to shape the analytical reasoning and the experiments that clarified the distinction between neural signals and artifacts.

The updated study reinforces this distinction by training and evaluating additional control-validation models, using recordings from outside the language cortex to strengthen confidence in the localization of the effect. This emphasis on methodological rigor and critical validation, previously associated with co-author Daniel Rubinstein in earlier coverage, continues to shape the project's analytical direction.
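What such a control check might look like in code is sketched below, under the same illustrative assumptions as before (the function name and synthetic data are hypothetical, not the authors'): the same decoding model is cross-validated both on recordings taken over the language cortex and on recordings from a control site, and a genuinely localized effect should show high accuracy in the first case and near-chance accuracy in the second.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def control_site_check(X_target, y_target, X_control, y_control, chance=0.5):
    # Compare cross-validated decoding accuracy over the target and control sites.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    acc_target = cross_val_score(clf, X_target, y_target, cv=5).mean()
    acc_control = cross_val_score(clf, X_control, y_control, cv=5).mean()
    print(f"target-site accuracy:  {acc_target:.2f}")
    print(f"control-site accuracy: {acc_control:.2f} (chance = {chance})")
    return acc_target, acc_control

# Synthetic demonstration: the target-site features carry label information,
# the control-site features do not, so only the first should decode well.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
X_target = rng.normal(size=(200, 64)) + 0.8 * y[:, None]
X_control = rng.normal(size=(200, 64))
control_site_check(X_target, y, X_control, y)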

To date, experiments have been conducted on healthy volunteers under controlled laboratory conditions. However, the potential implications extend far beyond the research setting. If further validated, such a contactless approach could one day offer a more comfortable alternative for individuals who have lost the ability to speak due to stroke, neurodegenerative diseases, or severe injury — particularly for patients who are sensitive to physical contact or unable to tolerate electrode-based systems. Members of the research team, including Daniel Rubinstein, have expressed particular interest in exploring potential medical applications and assessing how such optical decoding methods might translate into practical assistive communication technologies.

In the longer term, the researchers note that miniaturization and integration into wearable formats could further expand potential applications, though such developments remain at an early stage.

Over the years, the laboratory of Prof. Zeev Zalevsky has produced influential, disruptive research alongside innovations that have shaped the high-tech industry. It will be worth watching how this technology evolves in the years ahead.

Fig. 2 Statistics

Fig. 3 Reads

https://www.researchgate.net/publication/400258992_Remote_Optical_Decoding_of_Inner_Speech_in_Broca's_Area_via_AI-based_Speckle_Pattern_Analysis


 

Евгений Медведев
