
Towards asynchronous speech decoding

Frontiers in Neuroscience (2018)

Abstract
Benjamin Wittevrongel1*, Elvira Khachatryan1, Mansoureh Fahimi Hnazaee1, Evelien Carrette2, Ine Dauwe2, Stefanie Gadeyne2, Alfred Meurs2, Paul Boon2, Dirk Van Roost2 and Marc M. Van Hulle1
1 KU Leuven, Belgium
2 Ghent University Hospital, Belgium

Introduction
A number of neurological diseases can cause verbal communication deficits. Recent advances in brain-computer interfacing have shown great promise in decoding performed and inner speech from cortical signals, with the aim of predicting the intended words (or audio output) and establishing an intuitive language-based communication channel (Herff and Schultz, 2016; Martin et al., 2018). While many speech decoding studies adopt a cued design, focussing on the time windows during which the subject is known to perform a vocal task (performed or imagined), an asynchronous system would allow the subject to control when to use the decoder, which requires the analysis of continuous signals (Lotte et al., 2008). With current decoding models, continuous decoding is likely to produce unintended outputs, as natural fluctuations in the brain’s signal (e.g., during the inter-trial period) would also result in word predictions. One possible solution is to detect when the subject intends to use the BCI system (i.e., when the subject prepares to speak), after which a speech decoding algorithm could be activated. In the current study, we investigated whether fluctuations in the alpha band can be used to detect speech intention, using direct electrophysiological recordings from electrodes implanted in the brain’s tissue.

Methods
We measured the temporal dynamics of alpha power (8-12 Hz) during a performed and imagined speech paradigm by means of one intracranial depth electrode with six contacts in the right insula of one patient with refractory epilepsy. The experiment was set up such that the planning (preparation) and execution of the performed or imagined speech were separated (figure 1). Previous studies have shown the involvement of the insula in speech production (Oh et al., 2014), but they were mainly based on lesion studies (Dronkers, 1996) or indirect metabolic measurements using fMRI (Ackermann and Riecker, 2004). In this work, the direct electrophysiological signals allowed us to investigate the temporal and spectral characteristics involved in speech planning and production. We focused on the temporal dynamics of the alpha band by conducting an event-related (de)synchronisation (ERD/ERS) analysis (Pfurtscheller and Da Silva, 1999; Graimann and Pfurtscheller, 2006), as a decrease in alpha power is widely believed to reflect neural recruitment (a minimal sketch of this analysis is given below).

Results and Discussion
During the task window, we found significant desynchronisation in the alpha band during the performed speech task, while no alpha change was found for the imagined speech task (figure 2a). However, following the presentation of the word on the screen, when the subject was planning/preparing for the task, the alpha power showed desynchronisation for both speech conditions (figure 2a). In the control condition, in which the subject was asked to read the same words without imagining or actually pronouncing them, no changes in alpha power were observed (figure 2b). These results suggest that the insula not only contributes to the vocalisation of words, but also to speech planning (preparation), even in the case of imagined speech. These findings also offer new insight for the further development of speech decoding algorithms, which could exploit the observed desynchronisation at the single-trial level as an indicator that the user is preparing to communicate, after which the decoding algorithm could be activated. Further work is needed to confirm these findings in multiple subjects.
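The ERD/ERS analysis outlined in the Methods reduces to a few steps: band-pass the recorded signal in the alpha band (8-12 Hz), estimate instantaneous power, and express power changes as a percentage of a pre-stimulus baseline (Pfurtscheller and Da Silva, 1999). The Python sketch below is purely illustrative and not the authors' pipeline; the sampling rate, filter order, baseline window and the intention-detection threshold are assumptions introduced for this example.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000                 # sampling rate in Hz (assumed, not from the study)
ALPHA_BAND = (8.0, 12.0)  # alpha band as defined in the abstract
BASELINE = slice(0, FS)   # first second of each epoch as baseline (assumed)

def alpha_erd(epochs: np.ndarray) -> np.ndarray:
    """Trial-averaged ERD/ERS time course in percent for epochs of shape
    (n_trials, n_samples). Negative values indicate desynchronisation.
    For a single trial, pass an array of shape (1, n_samples)."""
    # Zero-phase Butterworth band-pass in the alpha band.
    b, a = butter(4, ALPHA_BAND, btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, epochs, axis=-1)
    # Instantaneous alpha power from the analytic-signal envelope.
    power = np.abs(hilbert(filtered, axis=-1)) ** 2
    # Average over trials, then express as % change from the baseline power.
    mean_power = power.mean(axis=0)
    baseline_power = mean_power[BASELINE].mean()
    return 100.0 * (mean_power - baseline_power) / baseline_power

def intention_detected(erd_timecourse: np.ndarray, threshold: float = -20.0) -> bool:
    """Hypothetical gate for a self-paced system: flag speech intention when a
    single-trial ERD estimate drops below a subject-specific threshold (assumed)."""
    return bool(np.any(erd_timecourse < threshold))

In an online, self-paced setting the baseline would have to be tracked from ongoing inter-trial activity rather than taken from fixed epoch onsets, and the threshold would need per-subject calibration; both are left out of this sketch.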
Layman's summary
Technology to decode speech directly from the brain can serve as an alternative communication channel for people with impaired verbal communication. Existing studies mainly rely on a cued design, which limits the user’s ability to use the system in a self-paced manner. If the user’s intention to communicate could be detected, unintentional decoding could be avoided. This work identified alpha power reductions in the insula that appear to correlate with the intention to perform or imagine speech. If this pattern can be replicated in other subjects, current speech decoding systems could be extended from a cued to a self-paced design.

Figure 1
Figure 2

Acknowledgements
BW is supported by a Strategic Basic Research grant, funded by VLAIO. MMVH is supported by research grants from the Financing program (PFV/10/008), an interdisciplinary research project (IDO/12/007), and an industrial research fund project (IOF/HB/12/021) of KU Leuven, the Belgian Fund for Scientific Research–Flanders (G088314N, G0A0914N), the Interuniversity Attraction Poles Programme (IUAP P7/11), the Flemish Regional Ministry of Education (GOA 10/019), and the Hercules Foundation (AKUL 043).

References
Herff, C., & Schultz, T. (2016). Automatic speech recognition from neural signals: a focused review. Frontiers in Neuroscience, 10, 429.
Martin, S., Iturrate, I., Millán, J. d. R., Knight, R. T., & Pasley, B. N. (2018). Decoding inner speech using electrocorticography: progress and challenges toward a speech prosthesis. Frontiers in Neuroscience, 12, 422.
Lotte, F., Mouchere, H., & Lécuyer, A. (2008). Pattern rejection strategies for the design of self-paced EEG-based brain-computer interfaces. In Pattern Recognition, 2008. ICPR 2008. 19th International Conference on (pp. 1-5). IEEE.
Oh, A., Duerden, E. G., & Pang, E. W. (2014). The role of the insula in speech and language processing. Brain and Language, 135, 96-103.
Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384(6605), 159.
Ackermann, H., & Riecker, A. (2004). The contribution of the insula to motor aspects of speech production: a review and a hypothesis. Brain and Language, 89(2), 320-328.
Pfurtscheller, G., & Da Silva, F. L. (1999). Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110(11), 1842-1857.
Graimann, B., & Pfurtscheller, G. (2006). Quantification and visualization of event-related changes in oscillatory brain activity in the time–frequency domain. Progress in Brain Research, 159, 79-97.

Keywords: Alpha desynchronization, intracranial EEG (iEEG), speech planning, Brain computer interface (BCI), inner speech

Conference: Belgian Brain Congress 2018 — Belgian Brain Council, Liège, Belgium, 19 Oct - 19 Oct, 2018.
Presentation Type: e-posters
Topic: NOVEL STRATEGIES FOR NEUROLOGICAL AND MENTAL DISORDERS: SCIENTIFIC BASIS AND VALUE FOR PATIENT-CENTERED CARE
Citation: Wittevrongel B, Khachatryan E, Fahimi Hnazaee M, Carrette E, Dauwe I, Gadeyne S, Meurs A, Boon P, Van Roost D and Van Hulle MM (2019). Towards asynchronous speech decoding. Front. Neurosci.
Conference Abstract: Belgian Brain Congress 2018 — Belgian Brain Council. doi: 10.3389/conf.fnins.2018.95.00085
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 30 Aug 2018; Published Online: 17 Jan 2019.
* Correspondence: Mr. Benjamin Wittevrongel, KU Leuven, Leuven, Belgium, benjamin.wittevrongel@gmail.com