EEG-Based Discrimination of Imagined Speech Phonemes
Authors
Abstract
This paper reports positive results for classifying imagined phonemes from EEG signals. Subjects imagined producing five types of phonemes that differ in their primary manner of vocal articulation during overt speech production (jaw, tongue, nasal, lips, and fricative). Naive Bayes and linear discriminant analysis classifiers were applied to EEG signals recorded during imagined phoneme production. Results show that signals from these classes can be differentiated from those generated during periods of no imagined speech, and that the signals are discriminable among the classes, particularly in data collected on a single day. These simple linear classification methods are well suited to online use in BCI applications.
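As a minimal sketch of the classification pipeline the abstract describes, the snippet below trains a linear discriminant analysis classifier to separate several imagined-phoneme classes. The EEG feature vectors here are synthetic stand-ins (the paper's actual features and preprocessing are not given on this page); class count and dimensions are illustrative assumptions.

```python
# Hedged sketch: LDA on per-trial EEG feature vectors, one of the two linear
# classifiers named in the abstract. Data below are SYNTHETIC placeholders,
# not the paper's recordings or features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 200, 16, 5  # 5 imagined-phoneme classes

# Synthetic features: each class mean is shifted by a random offset vector.
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, n_classes, size=n_trials)
X += 0.8 * np.eye(n_classes)[y] @ rng.normal(size=(n_classes, n_features))

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```

Because LDA fits only a linear decision boundary per class pair, prediction is a single matrix multiply at test time, which is what makes it attractive for the online BCI use the abstract mentions.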
Similar articles
Classification of EEG Signals for Discrimination of Two Imagined Words
In this study, a Brain-Computer Interface (BCI) for a silent-talk application was implemented. The goal was an electroencephalogram (EEG) classifier for three classes: two imagined words (Man and Red) and silence. During the experiment, subjects were asked to silently repeat one of the two words, or do nothing, in a pre-selected random order. EEG signals were recorded by ...
Simulation Experiment of BCI Based on Imagined Speech EEG Decoding
A Brain-Computer Interface (BCI) can help patients with neuromuscular diseases restore some of the movement and communication abilities they have lost. Most BCIs rely on mapping brain activities to device instructions, but the limited number of distinguishable brain activities limits the abilities of BCIs. To address this limitation, this paper verified the feasibility of con...
Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram
The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a f...
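The segment-wise feature extraction described above can be sketched as follows. This assumes a single-channel trial stored as a 1-D array; the segment count (30) and the four statistics (mean, variance, standard deviation, skewness) follow the abstract, while the trial length and data are illustrative.

```python
# Hedged sketch of the per-segment statistics described in the abstract:
# split one trial into 30 segments, compute 4 statistics per segment.
import numpy as np
from scipy.stats import skew

def segment_features(trial, n_segments=30):
    """Return a flat feature vector: mean, var, std, skewness per segment."""
    feats = []
    for seg in np.array_split(trial, n_segments):
        feats.extend([seg.mean(), seg.var(), seg.std(), skew(seg)])
    return np.asarray(feats)

trial = np.random.default_rng(1).normal(size=3000)  # one synthetic EEG trial
fv = segment_features(trial)
print(fv.shape)  # 30 segments x 4 statistics = 120 features
```

The resulting 120-dimensional vector is what the study then reduces further before classification.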
Effects of sound pillow in the treatment of stuttering and cognitive phonemes impairment in children
Introduction: Verbal language is a fundamental component of expressing ideas, social interaction, and understanding educational materials. Effective communication requires verbal language skills. Sound pillows may partly help children with behavior problems. The purpose of this study was to assess the effect of an educational sound pillow in the treatment of stuttering and cognitive phoneme i...
Neural networks based EEG-Speech Models
In this paper, we describe three neural network (NN) based EEG-Speech (NES) models that map the unspoken EEG signals to the corresponding phonemes. Instead of using conventional feature extraction techniques, the proposed NES models rely on graphic learning to project both EEG and speech signals into deep representation feature spaces. This NN based linear projection helps to realize multimodal...
Journal:
Volume, issue
Pages -
Publication date: 2011