About

Advancing Non-Invasive BCI Research

This project is part of Japan's Moonshot R&D Program Goal 1: "Realization of a society in which human beings can be free from limitations of body, brain, space, and time by 2050."

Project Overview

Brain-computer interfaces that can decode internal speech hold transformative potential for restoring communication in individuals with conditions such as amyotrophic lateral sclerosis (ALS) or post-laryngectomy status. While invasive intracortical approaches have achieved remarkable accuracy, they require neurosurgery and remain inaccessible to most patients. Non-invasive EEG offers a practical alternative, but it has historically suffered from insufficient data and a lack of standardization.

This project, funded under Goal 1 of Japan's Moonshot R&D Program, builds the infrastructure needed to change that. By constructing the largest open EEG/EMG speech dataset to date (650+ hours, three recording devices) and demonstrating a clear scaling law in neural decoding, we establish a data-driven path toward practical, non-invasive silent speech interfaces.

01. Open Science

All data and code are freely available via OpenNeuro and GitHub

02. Data-driven Scaling

Decoding accuracy follows a scaling law: more data yields better performance, across all electrode configurations

03. Clinical Relevance

54.5% accuracy on a 64-word silent speech task for a patient unable to vocalize
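
To make the scaling-law idea concrete, here is a toy sketch of how such a relationship is typically quantified: fitting a power law, accuracy ≈ a · hoursᵇ, by linear regression in log-log space. All numbers below are hypothetical placeholders, not the project's measurements.

```python
# Illustrative only: fit a power law acc ≈ a * hours^b to hypothetical
# (training hours, decoding accuracy) points via a log-log linear fit.
# None of these values are the project's actual results.
import math

hours = [10, 50, 100, 300, 650]
accuracy = [0.18, 0.27, 0.32, 0.41, 0.47]  # hypothetical accuracies

# Least-squares fit of log(acc) = log(a) + b * log(hours)
xs = [math.log(h) for h in hours]
ys = [math.log(a) for a in accuracy]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my - b * mx

def predict(h):
    """Predicted accuracy for h hours of data under the fitted power law."""
    return math.exp(log_a) * h ** b

print(f"fitted exponent b = {b:.3f}")
print(f"extrapolated accuracy at 1000 h = {predict(1000):.3f}")
```

A positive fitted exponent b is what "more data, better performance" means quantitatively: accuracy keeps rising as dataset size grows, which is the motivation for collecting 650+ hours of recordings.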

Highlighted Applications


Real-time Gmail Control via EEG

Using 128-channel EEG and a real-time decoder trained on five color words, we demonstrated the first EEG-based Gmail interface: participants navigated their inbox, opened emails, and triggered ChatGPT-generated reply candidates, using only vocalized color words decoded from brain activity.



Silent Speech Decoding for ALS Patients

A patient with a progressive neuromuscular disease, unable to vocalize or make substantial mouth movements due to ventilator dependence, achieved 54.5% accuracy on a 64-word silent speech task using models pretrained on healthy participants. This is roughly a 4× improvement over the single-subject baseline of 13.2%.


Team

Ryota Kanai

Ph.D. – Project Manager

Shuntaro Sasai

Ph.D. – Sub Project Manager

Kan Akutsu

Principal Investigator

Eren Doğuş Ateş

Engineering Manager

Masakazu Inoue

Co-Investigator

Motoshige Sato

Ph.D. – Co-Investigator

Ilya Horiguchi

Co-Investigator

Funding

This work was supported by the Japan Science and Technology Agency (JST) under the Moonshot R&D Program (Grant Number: JPMJMS2012).