
SPRINGER BRIEFS IN

ELECTRICAL AND COMPUTER ENGINEERING

Christoph Guger
Brendan Allison
Mikhail Lebedev Editors

Brain-Computer
Interface
Research
A State-of-the-Art
Summary 6

SpringerBriefs in Electrical and Computer
Engineering

Series editors
Woon-Seng Gan, Nanyang Technological University, Singapore, Singapore
C.-C. Jay Kuo, University of Southern California, Los Angeles, CA, USA
Thomas Fang Zheng, Tsinghua University, Beijing, China
Mauro Barni, University of Siena, Siena, Italy
More information about this series at http://www.springer.com/series/10059
Christoph Guger · Brendan Allison · Mikhail Lebedev
Editors

Brain-Computer Interface
Research
A State-of-the-Art Summary 6

Editors
Christoph Guger
g.tec Guger Technologies OG
Schiedlberg, Austria

Brendan Allison
g.tec Guger Technologies OG
Schiedlberg, Austria

Mikhail Lebedev
Department of Neurobiology
Duke University
Durham, NC, USA

ISSN 2191-8112 ISSN 2191-8120 (electronic)


SpringerBriefs in Electrical and Computer Engineering
ISBN 978-3-319-64372-4 ISBN 978-3-319-64373-1 (eBook)
DOI 10.1007/978-3-319-64373-1
Library of Congress Control Number: 2017938537

© The Author(s) 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Christoph Guger, Brendan Z. Allison and Mikhail A. Lebedev
Advances in BCI: A Neural Bypass Technology to Reconnect
the Brain to the Body . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Gaurav Sharma, Nicholas Annetta, David A. Friedenberg
and Marcia Bockbrader
Precise and Reliable Activation of Cortex with Micro-coils . . . . . . . . . . . 21
Seung Woo Lee and Shelley I. Fried
Re(con)volution: Accurate Response Prediction for Broad-Band
Evoked Potentials-Based Brain Computer Interfaces . . . . . . . . . . . . . . . . 35
J. Thielen, P. Marsman, J. Farquhar and P. Desain
Intracortical Microstimulation as a Feedback Source
for Brain-Computer Interface Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Sharlene Flesher, John Downey, Jennifer Collinger, Stephen Foldes,
Jeffrey Weiss, Elizabeth Tyler-Kabara, Sliman Bensmaia,
Andrew Schwartz, Michael Boninger and Robert Gaunt
A Minimally Invasive Endovascular Stent-Electrode Array
for Chronic Recordings of Cortical Neural Activity . . . . . . . . . . . . . . . . . 55
Thomas J. Oxley, Nicholas L. Opie, Sam E. John, Gil S. Rind,
Stephen M. Ronayne, Anthony N. Burkitt, David B. Grayden,
Clive N. May and Terence J. O’Brien
Visual Cue-Guided Rat Cyborg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Yueming Wang, Minlong Lu, Zhaohui Wu, Xiaoxiang Zheng
and Gang Pan


Predicting Motor Intentions with Closed-Loop Brain-Computer
Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Matthias Schultze-Kraft, Mario Neumann, Martin Lundfall,
Patrick Wagner, Daniel Birman, John-Dylan Haynes
and Benjamin Blankertz
Towards Online Functional Brain Mapping and Monitoring
During Awake Craniotomy Surgery Using ECoG-Based
Brain-Surgeon Interface (BSI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
L. Yao, T. Xie, Z. Wu, X. Sheng, D. Zhang, N. Jiang, C. Lin, F. Negro,
L. Chen, N. Mrachacz-Kersting, X. Zhu and D. Farina
A Sixteen-Command and 40 Hz Carrier Frequency Code-Modulated
Visual Evoked Potential BCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Daiki Aminaka and Tomasz M. Rutkowski
Trends in BCI Research I: Brain-Computer Interfaces
for Assessment of Patients with Locked-in Syndrome
or Disorders of Consciousness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Christoph Guger, Damien Coyle, Donatella Mattia, Marzia De Lucia,
Leigh Hochberg, Brian L. Edlow, Betts Peters, Brandon Eddy,
Chang S. Nam, Quentin Noirhomme, Brendan Z. Allison and Jitka Annen
Recent Advances in Brain-Computer Interface Research—A
Summary of the BCI Award 2016 and BCI Research Trends . . . . . . . . . 127
Christoph Guger, Brendan Z. Allison and Mikhail A. Lebedev
Introduction

Christoph Guger, Brendan Z. Allison and Mikhail A. Lebedev

1 What Is a BCI?

Brain-computer interfaces (BCIs) are devices that directly read brain activity and
use it in a real-time, closed loop system with feedback to the user. Unlike all other
interfaces, BCIs do not require movement. Instead, the information from the brain is
translated into messages or commands without relying on the body’s natural output
pathways. Thus, BCIs can be very helpful to people with severe motor disabilities
that prevent them from speaking or using most (or even all) other devices for
communication.
BCI research has continued to provide new ways to help these types of patients.
In the last several years, BCIs have also broadened far beyond communication and
control devices for severely paralyzed users. Today, BCIs are rapidly gaining
attention for people with a wide variety of other conditions. Major entities and
individuals such as Facebook and Elon Musk have recently announced extremely
ambitious BCI-related projects. These research activities and announcements could
lead to new ways to help many more people, and new hope for future develop-
ments. On the other hand, announcements of overly ambitious and unrealistic goals
could lead to false hope and sour public opinion. It is certainly a dynamic and
eventful time for the BCI research community.

C. Guger (✉)
Schiedlberg, Austria
e-mail: guger@gtec.at
B.Z. Allison
San Diego, USA
M.A. Lebedev
Durham, USA

© The Author(s) 2017


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_1

2 The Annual BCI-Research Award

G.TEC is a leading provider of hardware, software, and complete systems for BCI
research and related directions. G.TEC is headquartered in Austria, with
branches in Spain and the USA. In 2010, G.TEC decided to create an Annual
BCI-Research Award to recognize and study top new BCI projects. The competi-
tion is open to any BCI group worldwide. There is no limitation or special con-
sideration for the type of hardware and software used in the submission. Since the
first award in 2010, we have followed more or less the same process:
• G.TEC selects a Chairperson of the Jury from a well-known BCI research
institute.
• This Chairperson forms a jury of top BCI researchers who can judge the Award
submissions.
• G.TEC publishes information about the BCI Award for that year, including
submission instructions, scoring criteria, and a deadline.
• The jury reviews the submissions and scores each one across several criteria.
The jury then determines twelve nominees and one winner.
• The nominees are announced online, asked to contribute a chapter to this annual
book series, and invited to a Gala Award Ceremony that is attached to a major
conference (such as an International BCI Meeting or Conference).
• At this Gala Award Ceremony, the twelve nominees each receive a certificate,
and the winner is announced. The winner earns $3000 USD and the prestigious
trophy. The 2nd place winner gets $2000 USD and the 3rd place gets $1000
USD.
We have made some changes over the years, such as increasing the number of
nominees from ten to twelve and adding second and third place awards. Otherwise,
the overall process has not changed. The 2016 jury was:
Mikhail A. Lebedev (chair of the jury 2016),
Alexander Kaplan,
Klaus-Robert Müller,
Ayse Gündüz,
Kyousuke Kamada,
Guy Hotson.
Consistent with tradition, the jury included the winner from the preceding year
(Guy Hotson). The chair of the jury, Dr. Mikhail A. Lebedev, is a top figure in BCI
research and leads the prestigious BCI lab at Duke University, USA. Dr. Mikhail
Lebedev said: “I was very fortunate to work with the 2016 jury. All of the jury
members that I approached chose to join the jury, and we had an outstanding team”.
How does the jury decide the nominees and winners? We have used the same
scoring criteria across different years. These are the criteria that each jury uses to
score the submissions. Earning a nomination (let alone an award) is very

challenging, given the number of submissions and the very high quality of many of
them. Submissions need to score well on several of these criteria:
• Does the project include a novel application of the BCI?
• Is there any new methodological approach used compared to earlier projects?
• Is there any new benefit for potential users of a BCI?
• Is there any improvement in terms of speed of the system (e.g. bit/min)?
• Is there any improvement in terms of accuracy of the system?
• Does the project include any results obtained from real patients or other
potential users?
• Does the approach work online/in real time?
• Is there any improvement in terms of usability?
• Does the project include any novel hardware or software developments?

3 The BCI Book Series

The annual BCI Book Series is another way that we recognize and study the top
BCI projects over time. Each year, the nominees are invited to contribute a chapter.
The authors have considerable flexibility in their chapters. Aside from their
nominated work, authors might present even newer achievements, work from
related groups, or future directions and challenges. We have also had some
flexibility across different years, such as including chapters from “honorable
mention” submissions that were not nominated but had new improvements since
their submission (Figs. 1 and 2).
In addition to providing the authors with flexibility, we also asked them to
present material in a relatively readable format. While chapters present advanced
work, the authors and editors have worked to explain some underlying concepts and
why the work is important. The chapters include numerous color figures to help
illustrate the authors’ ideas and results. Thus, we hope that the chapters herein are
of interest not only to experts in different fields, but also to non-experts. For
example, chapters might be useful for students who are enrolled in a relevant course
or are considering a new research or career direction.
Each book also contains an introduction and conclusion. Across different years,
we have used the submissions, nominees, and winners to study trends and issues
within BCI research. These chapters have already led to some conclusions about
what has and hasn’t changed. For example, the types of imaging approaches that are
described in submitted and nominated projects have been fairly consistent over the
years. EEG-based approaches are prevalent, while intracranial methods including
ECoG and depth electrodes are fairly well represented, and other approaches such
as fMRI and fNIRS are relatively less common. Most submissions, nominees, and
winners have come from the USA and Europe, with some submissions from Japan
and China, and many projects that span different groups.

Fig. 1 This picture shows the nominees at the BCI Award 2016 ceremony. Tomek Rutkowski,
Eberhard Fetz, Jaime Pereira, Benjamin Blankertz, Jordy Thielen, Shelley I. Fried, Lin Yao,
Sharlene Flesher, Gaurav Sharma, Kyousuke Kamada (jury), and Christoph Guger (organizer)

Fig. 2 Christoph Guger (organizer), Sharlene Flesher (nominee), and Kyousuke Kamada (jury)

What has changed? The types of applications and patient groups have broadened
considerably over the years. In 2010, projects were relatively focused on communi-
cation and control for persons with severe motor disabilities. Recently, many more
projects have presented achievements such as assessment of consciousness, rehabili-
tation, and functional brain mapping, which could benefit persons with disorders of
consciousness (DOC), stroke, brain injury, cerebral palsy, epilepsy, tumors, and other
conditions. These and other developments that we have noted in prior books are
consistent with, and often precede, more general consensus across other BCI publi-
cations. This year might introduce other new directions that will soon become
prominent. For example, projects focusing on sensory restoration and new directions
with intracranial BCIs were nominated, continuing directions also seen among nominees
in recent years. Two of the 2016 nominees explored autism, and another 2016 nominee
included a new game that also addresses classic issues in free will.
This year, we have decided to extend our focus on growing trends and issues
with a new type of chapter: “Trends in BCI Research”. These chapters span
different authors and research groups, including some nominees and winners along
with top outside experts. Each year, our book may include one or more of these
special chapters, each highlighting a topical research field. Our first such
chapter focuses on BCI technology for persons with disorders of consciousness
(DOCs). This direction has advanced well beyond initial research. Several groups
worldwide have published dozens of papers that include bedside assessment,
communication, and/or outcome prediction with patients in real-world settings. Our
new chapter includes recent achievements from different groups that were presented
at the BCI Meeting 2016 in Pacific Grove, CA, the same conference where the 2016
BCI Awards Ceremony occurred.

4 Projects Nominated for the BCI Award 2016

This year’s jury reviewed all of the submissions based on the scoring criteria
presented above. After tallying the scores across all reviewers, the twelve sub-
missions that were nominated for a BCI Award 2016 were:
A P300-based brain-computer interface for social attention rehabilitation in
autism
Carlos Amaral1, João Andrade1, Marco Simões1, Susana Mouga1,2, Bruno Direito1,
Miguel Castelo-Branco1,3
1 IBILI-Institute for Biomedical Imaging and Life Sciences, Faculty of Medicine—
University of Coimbra, Coimbra, Portugal
2 Unidade de Neurodesenvolvimento e Autismo do Serviço do Centro de
Desenvolvimento da Criança, Pediatric Hospital, Centro Hospitalar e
Universitário de Coimbra, Coimbra, Portugal
3 ICNAS—Brain Imaging Network of Portugal.

Sixteen Commands and 40 Hz Carrier Frequency Code-modulated Visual
Evoked Potential BCI
Daiki Aminaka, Tomasz M. Rutkowski
University of Tsukuba, Japan.
Natural movement with concurrent brain-computer interface control induces
persistent dissociation of neural activity
Luke Bashford1,2, Jing Wu3, Devapratim Sarma3, Kelly Collins4, Jeff Ojemann4,
Carsten Mehring2
1 Imperial College London, Bioengineering, UK
2 Bernstein Centre, Faculty of Biology, BrainLinks-BrainTools, Univ. of
Freiburg, Germany
3 Bioengineering, Ctr. for Sensorimotor Neural Eng.
4 Dept. of Neurolog. Surgery, Ctr. for Sensorimotor Neural Eng., Univ. of
Washington, USA.
Intracortical Microstimulation as a Feedback Source for Brain-Computer
Interface Users
Sharlene Flesher2,3, John Downey2,3, Jennifer Collinger1,2,3,4, Stephen Foldes1,3,4,
Jeffrey Weiss1,2, Elizabeth Tyler-Kabara1,2,5, Sliman Bensmaia6, Andrew
Schwartz2,3,8, Michael Boninger1,2,4, Robert Gaunt1,2,3
1, 2, 5, 8 Departments of Physical Medicine and Rehabilitation, Bioengineering,
Neurological Surgery, and Neurobiology, University of Pittsburgh, Pittsburgh, PA,
USA
3 Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
4 Department of Veterans Affairs Medical Center, Pittsburgh, PA, USA
6 Department of Organismal Biology and Anatomy, University of Chicago,
Chicago, IL, USA.
Minimally invasive endovascular stent-electrode array for high-fidelity,
chronic recordings of cortical neural activity
Thomas J. Oxley, Nicholas L. Opie, Sam E. John, Gil S. Rind, Stephen M.
Ronayne, Clive N. May, Terence J. O’Brien
Vascular Bionics Laboratory, Melbourne Brain Centre, Departments of Medicine
and Neurology, The Royal Melbourne Hospital, The University of Melbourne,
Parkville, Victoria, Australia.
Brain-Computer Interfaces based on fMRI for Volitional Control of Amygdala
and Fusiform Face Area: Applications in Autism
Jaime A. Pereira1,2, Ranganatha Sitaram1,3, Pradyumna Sepulveda2,4,5, Mohit
Rana2, Cristián Montalba5, Cristián Tejos3,4,5, Sergio Ruiz1,2,3
1 Department of Psychiatry and Interdisciplinary Center for Neuroscience, School
of Medicine, Pontificia Universidad Católica de Chile
2 Laboratory of Brain-Machine Interfaces and Neuromodulation, Pontificia
Universidad Catolica de Chile

3 Institute for Medical and Biological Engineering, Schools of Engineering,


Medicine and Biology, Pontificia Universidad Católica de Chile
4 Department of Electrical Engineering, Pontificia Universidad Católica de Chile.
5 Biomedical Imaging Center, Pontificia Universidad Católica de Chile.
Reclaiming the Free Will: A Real-Time Duel between a Human and a
Brain-Computer Interface
Matthias Schultze-Kraft, Daniel Birman, Marco Rusconi, Carsten Allefeld, Kai
Görgen, Sven Dähne, Benjamin Blankertz, John-Dylan Haynes
Neurotechnology Group, Technische Universität Berlin, Berlin, Germany.
An Implanted BCI for Real-Time Cortical Control of Functional Wrist and
Finger Movements in a Human with Quadriplegia
Gaurav Sharma1, Nick Annetta1, Dave Friedenberg1, Marcie Bockbrader2, Ammar
Shaikhouni2, W. Mysiw2, Chad Bouton1, Ali Rezai2
1 Battelle Memorial Institute, 505 King Ave, Columbus, OH 43201
2 The Ohio State University, Columbus, OH, USA 43210.
Broad-band BCI: finding structure in noisy data
Jordy Thielen, Pieter Marsman, Colleen Monaghan, Jason Farquhar and Peter
Desain
Donders Center for Cognition, Radboud University Nijmegen
Vision-Augmented Rat Cyborg
Yueming Wang1, Minlong Lu2, Zhaohui Wu2, Liwen Tian2, Kedi Xu1, Xiaoxiang
Zheng1, Gang Pan2
1 Qiushi Academy for Advanced Studies, Zhejiang University, China.
2 College of Computer Science, Zhejiang University, China
Precise and reliable activation of cortex with micro-coils
Seung Woo Lee and Shelley I. Fried
Boston VA Healthcare System, Boston, Massachusetts, USA, Department of
Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston,
MA, USA.
Towards Online Functional Brain Mapping and Monitoring during Awake
Craniotomy Surgery using ECoG-based Brain-Surgeon Interface (BSI)
L. Yao1, T. Xie2, Z. Wu3, X. Sheng2, D. Zhang2, C. Lin1, F. Negro1, L. Chen3, N.
Mrachacz-Kersting4, X. Zhu2, D. Farina1
1 Institute of Neurorehabilitation Systems, University Medical Center Goettingen,
Goettingen, Germany.
2 State Key Laboratory of Mechanical System and Vibration, Institute of
Robotics, Shanghai Jiao Tong University, Shanghai, China.
3 Department of Neurosurgery, Huashan Hospital, Fudan University, China.
4 Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark.

5 Summary

Since 2010, the annual BCI Awards and book series have recognized the top BCI
projects worldwide. Our books have also identified and highlighted major trends
and issues in BCI research. The procedures relating to jury selection, scoring cri-
teria, and the awards have been updated somewhat over the years, and this book
introduces the new Trends in BCI Research chapter. We plan to continue admin-
istering and editing the BCI Awards and book series, and look forward to next
year’s submissions!
Advances in BCI: A Neural Bypass
Technology to Reconnect the Brain
to the Body

Gaurav Sharma, Nicholas Annetta, David A. Friedenberg
and Marcia Bockbrader

1 Introduction

Millions of people worldwide suffer from diseases that lead to paralysis through
disruption of signal pathways between the brain and the muscles. Neuroprosthetic
devices aim to restore or substitute for a lost function such as motion, hearing,
vision, cognition, or memory in patients suffering from neurological disorders.
Current neuroprosthetic systems have successfully linked intracortical signals from
electrodes in the brain to external devices, including computer cursors, wheelchairs,
and robotic arms [1–11]. In non-human primates, these types of signals have also
been used to drive activation of chemically paralyzed arm muscles [12, 13].
However, technologies to link intracortical signals in real time to a neuroprosthetic
device to re-animate a paralyzed limb to perform complex, functional tasks had not
yet been demonstrated.
We recently showed, for the first time, that intracortically-recorded signals can
be linked in real-time to muscle activation to restore functional and rhythmic
movements in a paralyzed human [14, 15]. We utilized a chronically-implanted
intra-cortical microelectrode array to record multiunit activity from the motor cortex
in a study participant with quadriplegia from cervical spinal cord injury. Then,
using an innovative system of our design, signals from the cortical implant were
decoded and re-encoded continuously, in real-time, to drive a custom neuromus-
cular electrical stimulation (NMES) cuff that enabled the patient to regain lost
function. In essence, we have demonstrated an electronic ‘neural bypass technology
(NBT)’ that has the ability to circumvent disconnected neurological pathways.

G. Sharma (✉) · N. Annetta · D.A. Friedenberg
Battelle Memorial Institute, 505 King Ave, Columbus, OH 43201, USA
e-mail: sharmag@battelle.org
M. Bockbrader
The Ohio State University, Columbus, OH 43210, USA

© The Author(s) 2017


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_2

Figure 1 shows the NBT system used by the participant. The system translates the
patient’s intentions to move his wrist and fingers into evoked movements that
smoothly combine stimulated wrist and finger movements with voluntary shoulder
and elbow movements, enabling him to complete functional tasks relevant to
daily living. Clinical assessment showed that when using the system, the patient’s
motor impairment level improved from C5-C6 to a C7-T1 level unilaterally, con-
ferring on him the critical abilities to grasp, manipulate and release objects. This is
the first demonstration of successful control of muscle activation utilizing
intracortically-recorded signals in a paralyzed human. These results have significant
implications in advancing neuroprosthetic technology for people worldwide living
with the effects of paralysis.

Fig. 1 Experimental neural bypass technology (NBT) system in use with the participant (seated in
a wheelchair) in front of a table with a computer monitor

2 Methods

2.1 Study Design and Surgery

The neural bypass technology has been successfully demonstrated during a Food and
Drug Administration (FDA) and Institutional Review Board (IRB)-approved study
[14, 15]. The study participant is a 25-year-old male who sustained a C5/C6 level
spinal cord injury (SCI) from a diving accident. At baseline (without the BCI), he
retains the ability to voluntarily control shoulder and some elbow movements, but
has lost finger, hand and wrist function. A 96-channel microelectrode array (Utah
array, Blackrock Microsystems, Salt Lake City, UT) was implanted in the left pri-
mary motor cortex of the participant. As shown in Fig. 2a, the hand area of the
primary motor cortex was identified preoperatively by performing functional mag-
netic resonance imaging (fMRI) while the participant attempted to mirror videos of
hand movements. The Neuroport™ system was used to acquire neural data.

2.2 Novel Hardware and Software Development

The study required the development of novel hardware and software components.
A custom neuromuscular electrical stimulator system was developed, including a
high-definition, flexible, circumferential NMES cuff that adheres to the user’s
forearm. The cuff comprises up to 160 electrodes, allowing precise control of
individual forearm muscles (Fig. 2b). The high number of electrodes not only
allowed stimulation of isolated superficial forearm muscles but also enabled electric
field steering to activate individual deep muscles. This combination proved
essential for generating isolated finger movements as well as multiple forms of

Fig. 2 Implant locations and NMES cuffs. a Red regions are brain areas active during attempts to
mimic hand movements, where the t-values for the move-rest T1-weighted fMRI contrast are
greater than 7; The implanted microelectrode array location from post-op CT is shown in green;
The overlap of the red and green regions is shown in yellow. b Neuromuscular electrical
stimulation cuffs

functional grips. Electrical stimulation was provided intermittently in the form of
current-controlled, monophasic, rectangular pulses at a 50 Hz pulse rate and 500 µs
pulse width. Pulse amplitudes ranged from 0 to 20 mA and were updated every
100 ms.
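The pulse parameters above fix the charge delivered per pulse and the number of pulses per decoder update. The following is a minimal sketch using only the published values; the class, names, and structure are ours for illustration, not part of the actual stimulator software.

```python
from dataclasses import dataclass

# Hypothetical container for the NMES pulse parameters stated in the text.
@dataclass(frozen=True)
class NMESPulseParams:
    pulse_rate_hz: float = 50.0      # monophasic rectangular pulses at 50 Hz
    pulse_width_s: float = 500e-6    # 500 microsecond pulse width
    max_amplitude_a: float = 20e-3   # amplitudes range from 0 to 20 mA
    update_interval_s: float = 0.1   # amplitudes updated every 100 ms

    def charge_per_pulse_c(self, amplitude_a: float) -> float:
        """Charge per rectangular pulse: Q = I * t (coulombs)."""
        return amplitude_a * self.pulse_width_s

    def pulses_per_update(self) -> int:
        """Pulses delivered within one amplitude-update window."""
        return int(self.pulse_rate_hz * self.update_interval_s)

params = NMESPulseParams()
# At the 20 mA maximum, each 500 us pulse delivers 10 microcoulombs,
# and 5 pulses occur within each 100 ms amplitude-update window.
max_charge = params.charge_per_pulse_c(params.max_amplitude_a)
```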
Software development included novel machine learning-based decoding algo-
rithms that are robust to context changes (such as arm position), which allowed the
participant to perform complex tasks using a combination of both non-paralyzed
and paralyzed muscles simultaneously. Algorithms also included methods to
remove stimulation artifacts from the neural data due to electrical stimulation being
applied to the arm, giving the participant the ability to start and stop stimulation at
will. Our approach to intra-cortically recorded neural data was also innovative:
Instead of using single unit activity, which is known to decline over months, we
used a wavelet decomposition method to approximate multi-unit neural activity.
Wavelet decomposition has been shown to be an effective tool in neural decoding
applications and provides information encompassing single unit, multiunit, and
LFP, without requiring spike sorting [16]. In this study, four wavelet scales (3–6)
were used, corresponding to the multiunit frequency band spanning approximately
235–3750 Hz, to estimate a feature termed mean wavelet power (MWP). This
allowed for recording and analysis of signal features that were detectable and robust
over time, providing the potential for a system that could be used for long term
applications.
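The MWP feature described above can be sketched with a simple filter-bank wavelet transform. The Haar wavelet and the plain averaging below are our illustrative assumptions; the study's pipeline used its own wavelet family and scaling, which this book does not specify.

```python
# Illustrative sketch of mean wavelet power (MWP) over wavelet scales 3-6.

def haar_dwt_step(signal):
    """One level of a Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def mean_wavelet_power(signal, scales=(3, 4, 5, 6)):
    """Average the mean absolute detail coefficients over the chosen scales.
    At a 30 kHz sampling rate, detail scales 3-6 roughly cover the multiunit
    band (~235-3750 Hz) mentioned in the text."""
    powers = []
    approx = list(signal)
    for level in range(1, max(scales) + 1):
        approx, detail = haar_dwt_step(approx)
        if level in scales:
            powers.append(sum(abs(d) for d in detail) / len(detail))
    return sum(powers) / len(powers)

# Usage: a 100 ms window at 30 kHz would be 3000 samples; here we just
# use a short synthetic window as a stand-in for recorded voltages.
window = [float(i % 7) for i in range(512)]
mwp = mean_wavelet_power(window)
```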

2.3 Participant Sessions and Neural Decoder Training

The study sessions with the participant were typically conducted three times per
week, lasting approximately 3–4 h. Stimulation patterns were first calibrated for the
desired movements. Decoders were trained for a given movement by asking the
participant to imagine mimicking hand movements cued to him by an animated
virtual hand on a computer monitor. The neural decoders were trained in training
blocks, each consisting of multiple repetitions of each desired motion. This full set
of data was used as input for training a nonlinear Support Vector Machine
(SVM) algorithm to generate a robust set of decoders. A decoder for each motion
(against all other motions and rest) was built using a nonlinear Gaussian radial basis
function kernel [17] and a non-smooth SVM algorithm that uses sparsity
optimization to improve performance [18]. During the test
period, all decoders ran simultaneously and the decoder with the highest output
score above zero was used to drive the NMES.
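The arbitration rule described above (all one-vs-rest decoders run simultaneously, and the movement whose decoder outputs the highest score above zero drives the stimulator) can be sketched as follows. The decoders themselves (RBF-kernel SVMs in the study) are abstracted into plain numeric scores here; the function and names are ours.

```python
# Sketch of one-vs-rest decoder arbitration for driving the NMES.

def select_movement(decoder_scores):
    """decoder_scores: dict mapping movement name -> decoder output score.
    Returns the movement with the highest score above zero, or None
    (rest / stimulation off) when no decoder's score exceeds zero."""
    best_movement, best_score = None, 0.0
    for movement, score in decoder_scores.items():
        if score > best_score:
            best_movement, best_score = movement, score
    return best_movement

# Usage with hypothetical scores for three of the trained movements:
scores = {"wrist_flexion": -0.3, "hand_open": 1.2, "thumb_extension": 0.4}
winner = select_movement(scores)          # "hand_open" drives the NMES
resting = select_movement({"wrist_flexion": -0.5, "hand_open": -0.1})
```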

3 Results

The applicability of NBT was demonstrated in three different contexts, highlighting
different facets of the technology. In the first experiment, the participant was asked
to mimic a virtual hand on a computer screen in front of him. The virtual hand cued
him to perform six different movements with his right hand: thumb extension, wrist
flexion, wrist extension, middle finger flexion, thumb flexion, and hand open. Each
movement was cued five times and the presentation order was randomized so the
participant could not anticipate the next movement. The participant was able to
successfully achieve the movement on 29 of the 30 cues, although he was not
always able to maintain the movement for the duration of the cue. Overall, he was
able to match the cue 70.4% ± 1.0% (mean ± S.D., P < 0.01 by permutation test)
of the time. Examples of neural modulation, decoder output, and physical move-
ment for each of the six cues are shown in Fig. 3. This was the first demonstration
of a tetraplegic human regaining volitional control of six distinct hand and wrist
movements with an intracortical BCI system.
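A label-permutation test of the kind cited for the cue-matching result above can be sketched as follows. The data here are synthetic stand-ins, and the study's exact permutation scheme is not specified in this chapter; this is only a generic illustration of the idea (shuffle the cue labels many times and ask how often chance alignment reaches the observed matching fraction).

```python
import random

def permutation_p_value(cues, decoded, observed_accuracy,
                        n_perm=2000, seed=0):
    """Estimate how often a random pairing of cues to decoded outputs
    matches at least as well as the observed accuracy."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        shuffled = cues[:]
        rng.shuffle(shuffled)
        acc = sum(c == d for c, d in zip(shuffled, decoded)) / len(decoded)
        if acc >= observed_accuracy:
            hits += 1
    # Add-one correction so the estimate is never exactly zero.
    return (hits + 1) / (n_perm + 1)

# Toy example: a perfectly matching decoded sequence over 30 trials.
cues = ["open", "flex", "ext"] * 10
decoded = cues[:]
p = permutation_p_value(cues, decoded, observed_accuracy=1.0)
```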
The second demonstration used the Graded and Redefined Assessment of Strength,
Sensibility, and Prehension (GRASSP) test [19] to quantify the participant’s senso-
rimotor impairment level both with and without the neural bypass system. Five
domains were evaluated; strength, dorsal sensation, ventral sensation, gross grasping
ability (qualitative prehension), and prehensile skills (quantitative prehension). Since
the NBT was only expected to improve motor function and not sensory outcomes, we
focused on the strength, quantitative prehension and qualitative prehension measures.

Fig. 3 Mean wavelet power and system performance for individual hand movements. For each
movement, (top) heat maps of MWP and (bottom) neural decoder (dashed line) with physical hand
movements (solid line). The vertical dashed lines indicate the start and end of the movement cue,
while the break in the heat map indicates when the stimulation turns on. When the stimulation is
on, we introduce stimulation artifacts into the data, hence the modified color scale. These artifacts
can be partially removed as detailed in Bouton et al. [14]

Fig. 4 GRASSP performance on the three motor function domains. The brown triangle shows the
participant’s baseline score without the use of the system, and the green triangle shows his scores
while using the system. The grayscale triangles show the International Standards for Neurological
Classification of Spinal Cord Injury and the American Spinal Injury Association Impairment Scale
for comparison

Figure 4 shows that when the participant used the NBT, his Manual Muscle Test
(MMT) strength improved from C6 to C7–C8 level, his gross grasping ability
improved from C7–C8 to C8–T1 level, and his prehensile skills improved from C5 to
C6 level. Taken together, these results quantify the improvement the participant
gained while using the system, and suggest that a system that users could take home
would significantly improve their ability to live independently.
Finally, the participant demonstrated that he could use the system to complete
complex functional tasks that are relevant to tasks of daily living. The functional
task required him to pick up a bottle, pour the contents of the bottle into a jar,
replace the bottle, then pick up a stir stick and stir the contents of the jar (Fig. 5).
This task required the participant to combine his residual shoulder and elbow
movement with three hand movements using the NBT (hand open, cylindrical grip,
pinch grip). We observed differences in neural patterns when the participant was
performing shoulder and elbow movements, which necessitated including those
movements in the training process to assist in building robust decoders.
In this study, for the first time, a human with quadriplegia regained volitional,
functional movement using intracortically-recorded signals linked to neuromuscular
stimulation in real-time. Using our investigational system, our C5/C6 participant
gained wrist and hand function consistent with a C7/T1 level of injury.
Advances in BCI: A Neural Bypass … 15

Fig. 5 Grasp-pour-and-stir functional movement task. Sequential snapshots (a–f) from the
functional movement task showing the participant opening his hand (a), grasping the glass bottle
(b), pouring its contents (dice) into a jar (c), grasping a stir stick from another jar (d), transferring
the stir stick without dropping it (e), and using it to stir the dice in the jar (f)

This improvement in function is meaningful for reducing the burden of care in
patients with SCI, as most C5/C6 patients require assistance for activities of daily
living, while C7/T1 level patients can live independently. The technology also has
potential applications in the field of BCI-controlled neuroprosthetics, which could
improve patient independence through improved motor function.

4 Current Work and Outlook Towards the Future

Our current efforts are focused on adapting the NBT for home use. To make this
technology ready for home use, the system must be made smaller and easier to use,
with fewer adjustments needed from the user over long-term use. Making these
improvements to this system will require several technological hurdles to be
overcome as detailed below.
The current NBT system was designed for the research setting, where space and
mobility are not constraints. However, for home use, the technology will need to be
miniaturized. On the recording side, Blackrock Microsystems has made progress in
developing a wireless headstage that can handle the high bandwidth data, but it does
not yet have FDA approval for human use. It also uses a large receiver to interface
with the PC, which must be shrunk down. The PC used to control the system
would ideally be replaced with a small device such as a tablet or a custom designed,
small form factor device with an embedded processor. This can be challenging due
to the complexity of the algorithm and the amount of data that needs to be pro-
cessed, and it will require the algorithms to be streamlined. The NMES will also
need to be simplified and made more user friendly. The high voltage, high
channel count, and the size of the required battery make it challenging
to shrink the NMES. Shrinking the entire system will increase portability, but even
more improvements must be made to the electrode cuff before the system can be
easily used in home settings. These stimulation electrodes will need to be embedded
in a sleeve that can be donned like a garment and that keeps the
electrodes in good contact with the skin.
The decoding algorithms need to be adapted to make them more robust to any
variability in neural modulation. Currently, the user needs to go through a retraining
process every few hours to build new decoders. The decoders need to be rebuilt
because neural activity in the brain changes, even over the course of just a few
hours. Environmental conditions, the user's mental state (e.g. emotions and
focus level), sensory feedback, and other movements the user is making all
factor into how the neural activity changes. Decoders must be developed that
can account for these changes so that the user does not constantly have to go
through time-consuming retraining. Deep neural networks are one possible way
to improve decoder performance.
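One way to reduce the retraining burden is continual recalibration: updating the decoder on every sample rather than rebuilding it from scratch every few hours. The following is a minimal sketch using a simple linear decoder with a least-mean-squares update and simulated tuning drift; it is not the NBT's actual decoding algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_step(w, x, y, lr=0.05):
    """One least-mean-squares update of a linear decoder: w <- w - lr*(x.w - y)*x."""
    return w - lr * (x @ w - y) * x

n_feat = 8
true_w = rng.normal(size=n_feat)   # "true" mapping from neural features to movement
w = np.zeros(n_feat)               # decoder weights, recalibrated continually
errs = []
for t in range(3000):
    true_w += 0.001 * rng.normal(size=n_feat)  # slow drift in neural tuning
    x = rng.normal(size=n_feat)                # neural feature vector (e.g. MWP)
    y = float(x @ true_w)                      # intended movement signal
    errs.append(abs(float(x @ w) - y))
    w = lms_step(w, x, y)

early, late = float(np.mean(errs[:200])), float(np.mean(errs[-200:]))
```

Because the update runs on every sample, the decoder tracks the drifting tuning vector and the decoding error stays bounded instead of growing as the neural activity changes.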

5 Neurorehabilitation Outcomes and Need for Standardized Tests for Evaluating SCI Neuroprosthetics

As the options for BCI-neuroprosthetics expand to include a range of more or less
invasive control options (from brain implants to surface EEG to myoelectric con-
trol) and more or less cybernetic effector mechanisms (from surface electrical
stimulation, surgical implants and tendon transfers, to robotic arms), it is increas-
ingly important to be able to counsel consumers on both costs and risks—whether
financial, technological, surgical, or self-image related—and comparative device
performance. However, there is no consensus for how to evaluate device perfor-
mance. Multiple upper limb standardized tasks have been evaluated by expert
reviews [20–22] and consensus panels (the SCI EDGE task force [23, 24] and the
SCIRE project [25, 26]), with general agreement that the ideal evaluation tasks
have the following established psychometric properties:
• Ecological and construct validity, such that arm and hand functional tasks be
relevant to Activities of Daily Living (ADLs) but do not confound hand function
with other impairments, like balance;
• Sensitivity to detect small clinically significant changes important for evaluating
treatment effects and comparing interventions;
• Performance range sufficient to avoid ceiling and floor effects;
• Reliability associated with repeatable, standardized, unambiguous scoring that
(1) does not confound performance speed with degree of ability to complete the
task or level of assistance needed, (2) provides some estimate of trial-to-trial

performance variability, (3) is based on observed measurements and not on
subjective reports, and (4) is not subject to practice effects;
• Clinical relevance, such that the measurement domain falls within the arm and
hand activity domain of the International Classification of Functioning,
Disability and Health [27]; and
• Prognostic implications for functional independence, established by presence of
normative performance data for patients with SCI by ASIA Impairment Scale
level.
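The sensitivity and reliability criteria above are often operationalized through the standard error of measurement (SEM) and the minimal detectable change (MDC95). A minimal sketch of these standard psychometric formulas follows; the SD and ICC values are illustrative placeholders, not published norms for any of the tests discussed here:

```python
import math

def sem(sd_baseline, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_baseline * math.sqrt(1.0 - icc)

def mdc95(sd_baseline, icc):
    """Minimal detectable change at 95% confidence for a test-retest design:
    MDC95 = 1.96 * sqrt(2) * SEM. Changes smaller than this cannot be
    distinguished from measurement noise between two test occasions."""
    return 1.96 * math.sqrt(2.0) * sem(sd_baseline, icc)

# Illustrative numbers only: a test scored with SD = 10 points and
# test-retest ICC = 0.96 needs a change of about 5.5 points to be
# confidently attributed to the intervention rather than to noise.
change_needed = mdc95(sd_baseline=10.0, icc=0.96)
```

This is why a highly reliable instrument (high ICC) can detect smaller treatment effects: the MDC shrinks with the square root of (1 − ICC).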
A recent review from the tendon-transfer literature [21] identifies 8 measures that
fall within the ICF Arm and Hand Activity domain: The Grasp and Release Test
(GRT [28]), the Capabilities of Upper Extremity Questionnaire (CUE-Q [29, 30]),
the Van Lieshout Test (VLT, [31, 32]), the Action Research Arm Test (ARAT,
[33]), the Tetraplegia Hand Activity Questionnaire (THAQ [34]), the AuSpinal Test
[35], the Sollerman Hand Function Test (SHFT [36]) and the Graded and Redefined
Assessment of Strength, Sensibility, and Prehension (GRASSP [19, 22, 37, 38]). Of
these, only 6 are based on rater observations of performance (GRT, VLT, ARAT,
AuSpinal, SHFT, GRASSP), only 4 of which (GRT, VLT, SHFT, GRASSP) have
been endorsed for research in spinal cord injury either by the SCI EDGE task force
of the Neurology Section of the APTA [23, 24] or the Canadian SCI Research
Evidence (SCIRE) Project [25, 26]. To date, BCI-neuroprosthetic studies have
alternatively used the GRT (Freehand and/or tendon transfers [39–44]), the
ARAT (Robotic Applied Physics Laboratory arm [45, 46]), the Box and Block Test
(BBT [47], also endorsed by SCIRE; Robotic Applied Physics
Laboratory arm [45]), or the GRASSP (NBT system [14]).
Individually, these 4 measures address different aspects of the performance
profile of BCI-neuroprosthetics (see Fig. 6), so they may be best utilized as a
battery. Of these 4 measures, only the ARAT assesses fine pincer grip, which is
important for manipulating small objects. However, the ARAT incorporates only
the power grip into dynamic object manipulation (pouring a cup), and not fine
fingertip grasps. The GRASSP evaluates power and fine grips in dynamic tasks, but
has potential floor and ceiling effects. The GRASSP can provide prognostic
implications for functional independence, as its scoring is normed to AIS
Impairment Scale levels, which are widely recognized by patients and clinicians
alike. For example, the participant using our NBT system was scored on the
GRASSP as improving from C5/6 to C7/T1 level function using the device, which
would correlate to a significant improvement in functional independence if used in
the home setting. The BBT does not specify grip type, and some patients can do the
object transfer task without the neuroprosthetic, with only their baseline adaptive
grip. This task may help identify speed and efficiency limitations of BCI systems,
and when patients should not use the BCI-neuroprosthetic over their adaptive
grip. Lastly, the GRT was developed to assess hand and wrist function in isolation
from trunk and arm control for a range of light to heavy objects. It has also been
widely used to assess recovery of function after tendon transfer surgery, which is

[Fig. 6 grid not reproduced: it charts which of the four tests (GRT, BBT, ARAT,
GRASSP) cover each grip type (power/palmar, lateral key, tip-to-tip opposition,
fine pincer) in static and dynamic tasks.]

Fig. 6 Grip types featured in standardized tests of upper limb motor function for
BCI-neuroprosthetic research and between-device comparisons. No single outcome measure
adequately assesses performance across static and dynamic tasks for all four essential
grip types (power/palmar, lateral key, tip-to-tip opposition, and fine pincer). Static tasks isolate
hand and wrist function from other upper limb movements, while dynamic tasks require stable grip
through forearm pronation/supination for successful completion.

the only neuroprosthetic-like intervention that has been widely translated into
clinical practice.
In summary, research and development, clinical translation and future pre-
scriptions for upper limb BCI-neuroprosthetics depend on the rational development
of a battery of upper limb functional measures to compare devices in meaningful
ways. Together, the ARAT, BBT, GRT, and GRASSP provide complementary
measures to assess device strengths and limitations across 4 essential grip types
(power/palmar, lateral key, tip-to-tip opposition, and fine pincer) in static and
dynamic hand and wrist tasks.
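The complementarity argument can be made concrete with a small coverage check. The grip-type mapping below is a simplified, assumed reading of the static-task portion of Fig. 6 (the text states that only the ARAT assesses fine pincer grip); it is not an authoritative scoring of these instruments:

```python
# Assumed, simplified coverage of static-task grip types per test.
STATIC_COVERAGE = {
    "GRT":    {"power/palmar", "lateral key", "tip-to-tip"},
    "BBT":    {"power/palmar", "lateral key", "tip-to-tip"},
    "ARAT":   {"power/palmar", "lateral key", "tip-to-tip", "fine pincer"},
    "GRASSP": {"power/palmar", "lateral key", "tip-to-tip"},
}
ALL_GRIPS = {"power/palmar", "lateral key", "tip-to-tip", "fine pincer"}

def battery_gaps(tests, coverage=STATIC_COVERAGE):
    """Grip types left unassessed by a candidate battery of tests."""
    covered = set().union(*(coverage[t] for t in tests))
    return ALL_GRIPS - covered

gap_without_arat = battery_gaps(["GRT", "BBT", "GRASSP"])
gap_full_battery = battery_gaps(["GRT", "BBT", "ARAT", "GRASSP"])
```

Dropping the ARAT from the battery leaves fine pincer grip unassessed, while the full four-test battery covers all four grip types, which is the rationale for using them together.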

References

1. Aflalo T et al. (2015) Neurophysiology. Decoding motor imagery from the posterior parietal
cortex of a tetraplegic human. Science 348(6237):906–910
2. Bansal AK et al. (2012) Decoding 3D reach and grasp from hybrid signals in motor and
premotor cortices: spikes, multiunit activity, and local field potentials. J Neurophysiol 107(5):
1337–1355

3. Chapin JK et al. (1999) Real-time control of a robot arm using simultaneously recorded
neurons in the motor cortex. Nat Neurosci 2(7):664–670
4. Hochberg LR et al. (2012) Reach and grasp by people with tetraplegia using a neurally
controlled robotic arm. Nature 485(7398):372–375
5. Hochberg LR et al. (2006) Neuronal ensemble control of prosthetic devices by a human with
tetraplegia. Nature 442(7099):164–171
6. Kennedy PR, Bakay RAE (1998) Restoration of neural output from a paralyzed patient by a
direct brain connection. Neuroreport 9(8):1707–1711
7. Santhanam G et al. (2006) A high-performance brain-computer interface. Nature 442(7099):
195–198
8. Serruya MD et al. (2002) Instant neural control of a movement signal. Nature 416(6877):
141–142
9. Taylor DM, Tillery SI, Schwartz AB (2002) Direct cortical control of 3D neuroprosthetic
devices. Science 296(5574):1829–1832
10. Velliste M et al. (2008) Cortical control of a prosthetic arm for self-feeding. Nature 453(7198):
1098–1101
11. Wessberg J et al. (2000) Real-time prediction of hand trajectory by ensembles of cortical
neurons in primates. Nature 408(6810):361–365
12. Ethier C et al. (2012) Restoration of grasp following paralysis through brain-controlled
stimulation of muscles. Nature 485(7398):368–371
13. Moritz CT, Perlmutter SI, Fetz EE (2008) Direct control of paralysed muscles by cortical
neurons. Nature 456(7222):639–642
14. Bouton CE et al. (2016) Restoring cortical control of functional movement in a human with
quadriplegia. Nature 533(7602):247–250
15. Sharma G et al. (2016) Using an artificial neural bypass to restore cortical control of rhythmic
movements in a human with quadriplegia. Sci Rep 6:33807
16. Sharma G et al. (2015) Time stability of multi-unit, single-unit and LFP neuronal signals in
chronically implanted brain electrodes. Bioelectronic Medicine (in press)
17. Scholkopf B et al. (1997) Comparing support vector machines with Gaussian kernels to radial
basis function classifiers. IEEE Trans. Signal Process 45:2758–2765
18. Humber C, Ito K, Bouton C (2010) Nonsmooth formulation of the support vector machine for
a neural decoding problem. arXiv
19. Kalsi-Ryan S et al. (2012) Development of the Graded Redefined Assessment of Strength,
Sensibility and Prehension (GRASSP): reviewing measurement specific to the upper limb in
tetraplegia. J Neurosurg Spine 17(1 Suppl):65–76
20. Mulcahey MJ, Hutchinson D, Kozin S (2007) Assessment of upper limb in tetraplegia:
Considerations in evaluation and outcomes research. J. Rehabil Res Dev 44(1):91–102
21. Sinnott KA et al. (2016) Measurement outcomes of upper limb reconstructive surgery for
tetraplegia. Arch Phys Med Rehabil 97(6 Suppl 2):S169–81
22. Kalsi-Ryan S et al. (2016) Responsiveness, sensitivity, and minimally detectable difference of
the graded and redefined assessment of strength, sensibility, and prehension, version 1.0.
J Neurotrauma 33(3):307–314
23. Kahn J et al. (2013) SCI EDGE outcome measures for research. Neurology Section,
American Physical Therapy Association, Alexandria, VA
24. Kahn J et al. (2013) Spinal Cord Injury EDGE Task Force Outcome Measures
Recommendations, American Physical Therapy Association, Neurology Section
25. Miller WC et al. (2013) Outcome measures. In: Eng JJ et al. (eds) Spinal cord injury
rehabilitation evidence, version 4.0. Vancouver, CA, pp 28.1–28.366
26. Hsieh JTC et al. (2011) Outcome measures toolkit: implementation steps. SCIRE Project,
London, ON, Canada, pp 1–58
27. World Health Organization (2001) International Classification of Functioning, Disability
and Health: ICF. World Health Organization, Geneva
28. Wuolle KS et al. (1994) Development of a quantitative hand grasp and release test for patients
with tetraplegia using a hand neuroprosthesis. J Hand Surg 19(2):209–218

29. Oleson CV, Marino RJ (2014) Responsiveness and concurrent validity of the revised
capabilities of upper extremity-questionnaire (CUE-Q) in patients with acute tetraplegia.
Spinal Cord 52(8):625–628
30. Marino RJ et al. (2015) Reliability and validity of the capabilities of upper extremity test
(CUE-T) in subjects with chronic spinal cord injury. J Spinal Cord Med 38(4):498–504
31. Spooren A et al. (2006) Measuring change in arm hand skilled performance in persons with a
cervical spinal cord injury: responsiveness of the Van Lieshout Test. Spinal Cord 44(12):
772–779
32. Franke A et al. (2013) Arm hand skilled performance in persons with a cervical spinal cord
injury—long-term follow-up. Spinal cord 51(2):161–164
33. Yozbatiran N, Der-Yeghiaian L, Cramer SC (2008) A standardized approach to performing
the action research arm test. Neurorehabil Neural Repair 22(1):78–90
34. Land N et al. (2004) Tetraplegia Hand Activity Questionnaire (THAQ): the development,
assessment of arm–hand function-related activities in tetraplegic patients with a spinal cord
injury. Spinal Cord 42(5):294–301
35. Coates S et al. (2011) The AuSpinal: a test of hand function for people with tetraplegia. Spinal
Cord 49(2):219–229
36. Sollerman C, Ejeskär A (1995) Sollerman hand function test: a standardised method and its
use in tetraplegic patients. Scand J Plast Reconstr Surg Hand Surg 29(2):167–176
37. Kalsi-Ryan S et al. (2012) The graded redefined assessment of strength sensibility and
prehension: reliability and validity. J Neurotrauma 29(5):905–914
38. Kalsi-Ryan S et al. (2009) Assessment of the hand in tetraplegia using the Graded Redefined
Assessment of Strength, Sensibility and Prehension (GRASSP) impairment versus function.
Top Spinal Cord Inj Rehabil 14(4):34–46
39. Peckham PH et al. (2001) Efficacy of an implanted neuroprosthesis for restoring hand grasp in
tetraplegia: a multicenter study. Arch Phys Med Rehabil 82(10):1380–1388
40. Smith B, Mulcahey M, Betz R (1996) Quantitative comparison of grasp and release abilities
with and without functional neuromuscular stimulation in adolescents with tetraplegia. Spinal
Cord 34(1):16–23
41. Kilgore KL et al. (1997) An implanted upper-extremity neuroprosthesis: follow-up of five
patients. J Bone Jt Surg Am 79(4):533–541
42. Kilgore KL et al. (2008) An implanted upper-extremity neuroprosthesis using myoelectric
control. J Hand Surg 33(4):539–550
43. Mulcahey M et al. (1999) A prospective evaluation of upper extremity tendon transfers in
children with cervical spinal cord injury. J Pediatr Orthop 19(3):319–328
44. Mulcahey M, Smith B, Betz R (2003) Psychometric rigor of the Grasp and Release Test for
measuring functional limitation of persons with tetraplegia: a preliminary analysis. J Spinal
Cord Med 27(1):41–46
45. Wodlinger B et al. (2015) Ten-dimensional anthropomorphic arm control in a human
brain-machine interface: difficulties, solutions, and limitations. J Neural Eng 12(1):016011
46. Collinger JL et al. (2013) High-performance neuroprosthetic control by an individual with
tetraplegia. Lancet 381(9866):557–564
47. Mathiowetz V et al. (1985) Adult norms for the Box and Block Test of manual dexterity. Am J
Occup Ther 39(6):386–391
Precise and Reliable Activation of Cortex
with Micro-coils

Seung Woo Lee and Shelley I. Fried

1 Introduction

The optimization of brain-computer interfaces (BCIs) will require the delivery of
feedback signals to the somatosensory and/or proprioceptive cortices of the device
user. Ultimately, the precision and reliability with which such signals can be
delivered will underlie the quality and consistency of the information that can be
conveyed. Unfortunately, the use of implantable micro-electrodes to deliver elec-
trical signals directly into cortex has several inherent drawbacks that limit their
efficacy and can reduce their consistency over time. For example, implantation
triggers a series of complex biological reactions that can alter the structural and
functional properties of the surrounding neural tissue [9, 22]. In severe cases, these
changes can lead to high-impedance glial encapsulation around individual elec-
trodes, thereby disrupting the flow of current into the surrounding tissue. Even
without the loss in effectiveness that can occur over time, conventional electrode
implants are limited by their inability to selectively target (or avoid) specific types
of neurons. This is of particular concern with passing axons from distal neurons, as
their high sensitivity to stimulation can greatly expand the size of the region acti-
vated by a given electrode (Fig. 1a) and can also lead to a wide array of undesirable
side effects [1, 10, 28]. In addition to limiting the potential effectiveness of BCI
feedback, the challenges associated with implantable micro-electrodes are similarly
problematic for other types of cortically-based prostheses that require focal

S.W. Lee (✉) · S.I. Fried
Department of Neurosurgery, Massachusetts General Hospital,
Harvard Medical School, Boston, MA, USA
e-mail: lee.seungwoo@mgh.harvard.edu
S.I. Fried
e-mail: fried.shelley@mgh.harvard.edu
S.I. Fried
Boston VA Healthcare System, Boston, MA, USA

© The Author(s) 2017 21


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_3
22 S.W. Lee and S.I. Fried

Fig. 1 Enhanced control of cortical activation with micro-coil magnetic stimulation. a Schematic
illustration of a micro-electrode implanted into cortex; conventional electrodes produce electric
fields that are largely symmetric (red arrows) and therefore create uniform activating forces on all
nearby neurons and processes (red shaded region). b Similar to (a) except for implantation of a
micro-coil. The induced electric field is spatially asymmetric and creates a relatively strong
activating force on vertically oriented neurons while creating only a relatively weak activating
force on horizontally oriented processes

activation, e.g. visual prostheses that strive to restore sight to the blind by stimu-
lating primary visual cortex (V1) [6, 19, 25]. Focal and predictable activation of
cortex is also crucial for fundamental research studies in which specific regions of
cortex are targeted by electric stimulation in order to resolve details of brain
structure and function.
It is well known that magnetic stimulation can overcome many of the limitations
associated with microelectrode-based stimulation of cortex. Unlike electric fields,
magnetic fields pass readily through biological tissue. Thus, even if coils become
severely encapsulated, their ability to stimulate remains stable over time. While
magnetic fields are not thought to directly activate neurons, time-varying magnetic
fields induce the electric fields that are effective. Magnetic fields can thereby ‘carry’
the electric field beyond any region of encapsulation. The magnetic fields arising
from coils are also spatially asymmetric and can therefore produce stronger acti-
vating forces in some directions than others (Fig. 1b). Thus, in the cortex for
example, a suitably oriented coil can create a strong activating force for
vertically-oriented pyramidal neurons without simultaneously creating a strong
activating force for the horizontally-oriented passing axons that can arise from
distant regions of the brain. As a result, activation with micro-coils can be confined
to a focal region around the coil, a considerable advantage over the spatially broad
activation that arises with conventional electrode implants [10]. While most
Precise and Reliable Activation of Cortex with Micro-coils 23

previous studies with magnetic stimulation have focused on the use of large coils
for non-invasive activation of neurons, several recent efforts have shown that tiny
coils, e.g. small enough to be safely implanted into cortex, can strongly activate
surrounding neurons. Here, we describe some of this recent work and review some
of the advantages of this approach. We also discuss some of the challenges that will
need to be overcome before micro-coil based implants can be safely utilized in
clinical applications.

2 Neuronal Activation with Submillimeter-Sized Inductors

Despite the potential benefits of magnetic stimulation, the reduction in the size of the
coil needed for safe implantation into the brain greatly limits the strength of the
electric field that can be induced. Fortunately, the initial computational analyses with
micro-coils [2] suggested that field strengths in excess of the known thresholds
required for activation [17] could still be obtained, as long as the distance between
the coil and targeted neurons was limited. Much previous work with electric stim-
ulation has shown that in many cases the magnitude of the gradient of the electric
field is actually the driving force for activation [24], and so the electric field gradients
arising from small coils were also confirmed to exceed known thresholds [14, 15].
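The scale of these induced fields and gradients can be sketched by treating the micro-coil as a point magnetic dipole. The coil geometry below follows the inductor described in the text (21 turns, 500 µm diameter), but the current slew rate `dI_dt` is an assumed illustrative value, and the point-dipole approximation is only a rough stand-in for the full field computations in [2, 14, 15]:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def induced_E(r, dm_dt):
    """Quasi-static induced electric field of a point magnetic dipole:
    E(r) = -(mu0 / 4*pi) * (dm/dt x r_hat) / |r|^2."""
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    return -(MU0 / (4 * np.pi)) * np.cross(np.asarray(dm_dt, float), r / rn) / rn**2

# Coil geometry from the text; dI/dt is an assumed drive.
n_turns, radius, dI_dt = 21, 250e-6, 1.0e6          # turns, m, A/s
dm_dt = np.array([0.0, 0.0, n_turns * np.pi * radius**2 * dI_dt])  # A*m^2/s

# |E| and its spatial gradient along a line offset 100 um from the coil.
xs = np.linspace(50e-6, 500e-6, 200)
E = np.array([np.linalg.norm(induced_E([x, 100e-6, 0.0], dm_dt)) for x in xs])
dE_dx = np.gradient(E, xs)  # this gradient drives activation of straight axons
```

Under these assumptions the field peaks at roughly tens of V/m next to the coil and falls off as 1/r², so both the field strength and its gradient are only appreciable within a small neighborhood of the coil, consistent with the requirement that targeted neurons lie close by.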
Initial electrophysiological testing utilized a millimeter-sized, commercially-
available inductor (Panasonic ELJ-RFR10JFB, 21 turns, 500 µm diameter, 1 mm
length, 5 × 10 µm copper wire, 100 nH) and confirmed its ability to activate neu-
rons. Further, initial testing also showed that the asymmetric electric fields (and
electric field gradients) could be harnessed to preferentially activate specific neu-
ronal sub-populations within the region surrounding the coil. For example, if the
central axis of the coil was held parallel to the surface of a retinal explant (Fig. 2a),
the component of the electric field penetrating vertically into the retina, e.g. parallel
to the long axes of bipolar cells, was strong and therefore optimized to activate these
cells. The resulting spiking patterns that arose in ganglion cells closely matched the
characteristic patterns known to arise when bipolar cells are artificially activated
(Fig. 2c), e.g. bursts of spikes with a relatively long onset latency [7, 12, 13]. This
same orientation did not simultaneously activate ganglion cells or their axons
(Fig. 2, pink neurons), consistent with the relatively weak electric fields and gra-
dients arising in the horizontal direction. Rotation of the coil so that its central axis
was now perpendicular to the retinal surface (Fig. 2b) resulted in induced electric
fields that were now strongest in the horizontal direction, e.g. along the length of
ganglion cell axons, and resulted in single, short-latency (<1 ms) action potentials
(Fig. 2f), characteristic of the response known to arise when ganglion cells are
activated directly [7]. This new orientation did not simultaneously produce the
prolonged burst spiking associated with bipolar cell activation. Thus, these results
suggest that the spatial asymmetry of coil-based fields can be harnessed to selec-
tively target specific neuronal sub-populations within the retina, and therefore raise

Fig. 2 Submillimeter-sized inductor coils can selectively target specific neuronal sub-populations.
a Schematic of the coil arrangement optimized to activate vertically-oriented neurons. b Schematic
of the arrangement optimized to activate horizontally oriented neurons. c Typical responses arising
in retinal ganglion cells, in response to stimulation from the configuration in (a). d A single
biphasic waveform elicited in response to magnetic stimulation (solid line) has identical amplitude
and kinetics to the action potentials elicited by light stimuli (dashed line). e Peri-stimulus time
histogram for the responses in panel (c) (bin size: 10 ms). f Typical responses arising from
direct activation of ganglion cells; the expanded time-scale (right) reveals a single, short-latency
action potential within the stimulus artifact. (Figures modified from Bonmassar et al. [2],
published in Nature Communications, 2012)

the possibility that selective targeting of specific neuronal populations might be
achievable within other portions of the brain as well.

3 Enhanced Control of Cortical Pyramidal Neurons with Micromagnetic Stimulation

We therefore explored the potential of micro-coils for modulating the activity of
cortical pyramidal neurons. These neurons transmit the output of local computa-
tions within a given cortical column, and are therefore a likely target for BCIs and
other applications in which the cortex is artificially stimulated. Coronal slices from
the mouse brain were used to explore the sensitivity of layer 5 pyramidal neurons
(L5PNs) to stimulation from the coil. This in vitro preparation is highly attractive
because the long central axis of the L5PNs runs parallel to the slice surface,
allowing the effectiveness of different field strengths and orientations to be directly
compared (Fig. 3). Use of the micro-coil is further attractive because the spatial
extent of the fields it generates is considerably smaller than the length of a single
L5PN and therefore the sensitivity of individual portions of the neuron can be
compared [15].
Stimulation was delivered with the coil initially centered above the proximal
axon (Fig. 3a). The central axis of the coil was both parallel to the surface of the
slice and perpendicular to the long axis of targeted L5PNs. This alignment created a
strong electric field along the long-axis of the cell, and the specific location we used
was chosen to center the peak of the electric field over the portion of the cell
thought to have the highest sensitivity to electric stimulation [8] (Fig. 3b, top left).
Even though this same arrangement had been successful for activation of other
types of neurons in earlier experiments (e.g. retinal ganglion cells and STN neu-
rons) [2, 16], it was not effective for eliciting spikes here. This suggests that the
activation thresholds of L5PNs may be higher than those of other neurons, and is
consistent with previous studies utilizing electric stimulation [23]. In a second set of
experiments, we applied repetitive stimulation (10 per second, duration of 4 s, each
individual stimulus was a single period of a 500 Hz sinusoid) and found such trains
to be highly effective (Fig. 3b, top right). Extending the duration of the stimulus

Fig. 3 Coil orientation influences the response to stimulation. a Schematic of the experimental
setup typically used for micro-coil stimulation of the cortex. b Left panels depict the different
locations and orientations used in our experiments; red arrows indicate the orientation in which the
induced electric field is strongest. Note that the coil was positioned over the proximal axon in the
top panel and over the apical dendrite in the middle and bottom panels. Right panels depict a
typical response for each arrangement. Red horizontal bars: 4 s duration of 10 Hz stimulus. c (Top)
Schematic depicting oblique orientation of the coil relative to the long axis of the apical dendrite
(left) and perpendicular orientation of the coil (right). (Middle) Typical responses for each
orientation of the coil. (Bottom) Corresponding PSTHs for the coil orientations. (Figures
modified from Lee and Fried [15], published in IEEE TNSRE, 2016)

train to 30 s produced similarly robust spiking for the entire duration (not shown),
suggesting that prolonged stimulation of cortex does not result in desensitization or
any other form of suppression, e.g. that which was observed during prolonged
stimulation of STN neurons [16].
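The repetitive stimulus described above (a 4 s train delivered at 10 per second, each stimulus a single period of a 500 Hz sinusoid) can be generated as follows; the sampling rate and unit amplitude are assumptions for illustration, not parameters stated in the chapter:

```python
import numpy as np

FS = 100_000   # sampling rate in Hz (assumed)
RATE = 10      # stimuli per second
TRAIN_S = 4    # train duration in seconds
F_STIM = 500   # each stimulus is one period of a 500 Hz sinusoid (2 ms)

def stimulus_train(amplitude=1.0):
    """Build a 10 Hz train of single-cycle 500 Hz sinusoids lasting 4 s."""
    t = np.arange(int(FS * TRAIN_S)) / FS
    wave = np.zeros_like(t)
    period = 1.0 / F_STIM
    for k in range(RATE * TRAIN_S):          # 40 stimuli over the 4 s train
        onset = k / RATE
        mask = (t >= onset) & (t < onset + period)
        wave[mask] = amplitude * np.sin(2 * np.pi * F_STIM * (t[mask] - onset))
    return t, wave

t, w = stimulus_train()
n_pulses = RATE * TRAIN_S
```

Each 2 ms single-cycle pulse is followed by 98 ms of silence, so the train has a 2% duty cycle; extending `TRAIN_S` to 30 reproduces the longer trains mentioned in the text.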
Similar to the results with the coil over the proximal axon, translation of the coil
to a location over the apical dendrite (Fig. 3b, middle left) did not produce spiking
in response to single stimuli (not shown). Somewhat surprisingly, the same trains of
repetitive stimuli that were effective over the proximal axon did not elicit spiking
for this new coil location. Interestingly however, more prolonged stimulation did
eventually lead to a sudden onset of spiking (Fig. 3b, middle right). Once the onset
of spiking occurred, individual neurons became more sensitive to subsequent
stimuli [15]. This included a faster onset of spiking to subsequent trains of stimuli,
as well as a reduction in the thresholds for activation. These changes persisted for

the duration of our experiments (typically 30–60 min) and therefore suggest that
prolonged stimulation of the targeted neuron led to a change in its state.
The coil location that was most effective for inducing state changes was
approximately centered over the proximal axon of L2/3 pyramidal neurons, neurons
that are known to deliver strong excitatory input to L5PNs. To explore whether the
state change was mediated via activation of L2/3 neurons, we added 10 µM
6-cyano-7-nitroquinoxalene—2,3-dione (CNQX) and 50 µM D-2-amino-5-
phosphono-pentanoic acid (APV) to the perfusion bath. These pharmacological
antagonists of excitatory glutamatergic input prevented state changes from occur-
ring, suggesting such changes are mediated through repetitive activation of L2/3
pyramidal neurons. Rotation of the coil by 90º (Fig. 3b, bottom panels) resulted in a
complete loss of effectiveness (no spiking or state changes), even if the stimulus
duration was increased by a factor of 5–10 (range: 6000–12,000 pulses). We tested
other coil orientations as well and found that a 45º rotation resulted in a transient
suppression during the period of stimulation with an onset of spikes that began only
after completion of the stimulus (‘OFF’ response, left panels, Fig. 3c). When the
coil orientation was rotated back to the original ‘perpendicular’ orientation, stim-
ulation again resulted in ‘ON’ responses (right panels, Fig. 3c), suggesting the
orientation of the coil strongly influences the effectiveness of stimulation, and also
that coil orientation can be used to either activate or inactivate L5PNs.
The change in state observed in vitro may help to explain some of the variations
in efficacy reported for the different transcranial magnetic stimulation paradigms
used clinically. We replicated the timing of several common clinical paradigms in
our in vitro experiments and found that, although most could induce a change in
state, there were significant differences in the number of stimuli required to induce
the state change across paradigms [15] (not shown). The number depended on both
the pattern and rate at which stimuli were delivered. It is interesting to note that the
number of stimuli needed to induce a state change in vitro was highly similar to the
number of stimuli used by several of the more common clinical paradigms [21],
raising the possibility that clinical effectiveness of some TMS paradigms may result
from their ability to induce a state change in targeted L5PNs. The neuronal state
changes seen here may also help to explain some discrepancies in previous studies
that evaluated sensitivity as a function of the depth of penetration during stimula-
tion of the visual cortex in non-human primates. In one series of experiments
utilizing long duration trains, shallower depths of penetration were more effective
[5] while in other studies with shorter trains, deeper penetrations were the most
effective [27]. Our in vitro results described above raise the possibility that a change
in state arising from the longer stimulus trains could underlie the higher sensitivity
reported for shallower penetration depths. Finally, the changes in state seen here at
the neuronal level may contribute to or even underlie state changes previously
reported at the behavioral level [11, 20, 26]. For example, the extended period of
stimulation utilized in some previous studies led to a significant reduction in
threshold that persisted beyond the duration of stimulus [11, 26].
28 S.W. Lee and S.I. Fried

Fig. 4 Reduced-size micro-coils are highly effective. a Schematic illustration showing the
orientation of the new micro-coil relative to targeted L5PNs after implantation into cortex.
b Photograph of the microfabricated coil. c Schematic of the in vitro experimental setup—the
alignment here is identical to that which would occur after implantation; the tip of the coil was
centered over the proximal axon of L5PNs. d Typical responses from L5 pyramidal neurons to
stimulation from the new coil. e The probability of eliciting a spike plotted as a function of the
stimulus amplitude. f Onset latency of spiking during 100 Hz stimulation. g Schematic of the two
orthogonal coil orientations used in the experiments. h Typical responses of L5PNs to each coil
orientation. (These figures are adapted from Lee et al. 2016, Science Advances. [14])

4 Development of Implantable Micro-coils for Intracortical Magnetic Stimulation

Although the cross-sectional area of the commercial inductor used in the above
experiments was smaller than that of commonly used DBS leads (1.0 vs. 1.1 mm in
diameter), the size was still too large for implantation into cortex, especially if
individual coils were to be incorporated into a multi-coil array. To determine
whether even smaller coils would be effective, we re-visited the computer simu-
lations, this time exploring the contributions of single turns of the coil [14].
Interestingly, the field (and field gradient) arising from even a single loop of the coil
was found to exceed the threshold required for neuronal activation (not shown).
Consistent with electromagnetic theory, induced fields were strongest at the loca-
tions where there was a bend in the coil, suggesting that the relatively large
diameter of the loop could be replaced with a single bend in a small micro-wire and
still be comparably effective. To evaluate the model prediction, we microfabricated
a small ‘bent wire’ design (copper trace on silicon) (Fig. 4a, b) [14]; its
cross-sectional area was 50 × 100 µm, identical to that of existing NeuroNexus
cortical electrodes. Because such electrodes are used routinely for chronic
implantation into cortex, it was highly likely that the new coil design could also be
safely implanted into cortex.
Microfabricated coils were first tested in in vitro experiments using coronal
slices from the mouse brain. Similar to the previous in vitro experiments with the
commercial inductor, the coil was oriented so that the strongest component of the
induced electric field was aligned with the long axis of targeted neurons and
positioned so that the peak of the gradient was centered over the axon initial
segment (AIS, Fig. 4c). With this arrangement, stimulation consistently elicited
spiking in targeted pyramidal neurons (n = 11/11). Bath application of the synaptic
blockers (NBQX, Bicuculline, and APV) did not eliminate spiking (Fig. 4d),
suggesting responses arose from direct activation of the targeted neuron, and not
secondary to activation of one or more presynaptic neurons. Onset latencies of
elicited spikes ranged from 0.3 to 0.7 ms (n = 11, Fig. 4f); these short latencies lend
additional support to direct activation of the L5PN. Addition of tetrodotoxin
(TTX) to the bath, a blocker of voltage-gated sodium channels, eliminated the
responses, confirming they were indeed action potentials and also allowing the
isolated stimulus artifact to be visualized. Subtraction of the artifact from the raw

signal revealed the precise details of the elicited spike waveform (Fig. 4d, bottom
right, blue trace); such waveforms were nearly identical to those that arose spon-
taneously in the same cell, providing further confirmation that the responses elicited
via stimulation from the coil were indeed action potentials. Analogous to electric
stimulation, the probability of spikes increased as the amplitude of the input
waveform to the coil was increased (n = 7, Fig. 4e). The mean threshold for acti-
vation was 44.21 ± 7.31 mA (SD).
Rotation of the coil orientation by 90º led to the complete loss of activation
(Figs. 4g, h, compare top and bottom panels), suggesting that the new coils simi-
larly target only those neurons aligned in a specific orientation, e.g. the
horizontally-oriented passing axons are not activated by coil orientations that
activate vertically-oriented pyramidal neurons. We therefore expected coil-based
activation to be spatially confined to a focal region around the coil—this would be
in sharp contrast to the spatially extensive activation that arises with
electrode-based stimulation. To test this, we compared the spatial extent of acti-
vation in coronal slices from GCaMP6f mice [14]; the pyramidal neurons of these
mice express a green fluorophore that is sensitive to the level of intracellular cal-
cium [4]. As expected, magnetic stimulation produced activation that was
well-confined around the coil while the pattern of activation from stimulation with
an electrode was more spatially expansive (Fig. 5).

Fig. 5 Comparison of spatial extent of cortical excitation. a Photograph of a microelectrode situated in primary visual cortex (V1) of coronal brain slices from Thy1-GCaMP6f transgenic
mice. b The change in fluorescence in response to three different levels of stimulation from the
microelectrode. The yellow triangle and the dashed line indicate the approximate orientation of
the cortical pyramidal neurons. c Similar to (a), showing the micro-coil situated in the V1 slice.
d The change in fluorescence in response to three different levels of magnetic stimulation from the
micro-coil. (These figures are adapted from Lee et al. 2016, Science Advances. [14])

Based on the effectiveness for in vitro activation, the reduced-size micro-coils were inserted into whisker (motor) cortex of anesthetized mice (Fig. 6). Stimulation
with single pulses or trains of pulses reliably produced whisker movements
(n = 10/10). Importantly, responses to coil stimulation were highly consistent with
the findings from several previous electric stimulation studies [3, 18]. For example,
single whiskers or rows of whiskers could be activated, depending upon the specific
site at which the coil was implanted. In addition, increasing the frequency of
stimulation from 10 to 100 Hz reversed the direction of whisker movement from
protraction to retraction. Taken together, these findings suggest that micro-coil
stimulation drives cortical circuits in a manner comparable to that from electrodes.

Fig. 6 Implanted micro-coils drive cortical circuits in vivo. a Schematics of the stimulation
patterns used for in vivo testing. b Schematic illustration of the micro-coil location in whisker
cortex (left) and the corresponding whisker motion arising from stimulation (right). c Similar to
(b) except for implantation into whisker sensory cortex. d Peak angle of the whisker movements
elicited by micro-coil stimulation. e Latencies of the corresponding whisker movements. (These
figures are adapted from Lee et al. 2016, Science Advances. [14])

5 Discussion

We are trying to develop a safer and more effective neural prosthesis that can be
implanted into cortex and used for a variety of different applications, including the
delivery of a feedback signal for BCIs. As part of these efforts, we showed that the
magnetic stimulation arising from submillimeter-sized inductors was strong enough
to activate retinal neurons. In addition, specific sub-populations of retinal neurons
could be selectively targeted, and the specific population targeted depended strongly
on the orientation of the coil. Based on the success with the millimeter-sized
inductor, an even smaller micro-coil (50 × 100 µm cross-section) was developed.
The cross-sectional area of the new design was highly similar to micro-fabricated
electrodes that are commonly implanted into cortex, and therefore strongly suggests
that the new coil could be safely implanted as well. This was confirmed during
in vivo testing in anesthetized mice, whereby stimulation from the coil successfully
activated nearby cortical neural circuits and elicited behavioral responses that were
consistent with the known circuitry of the mouse whisker and barrel cortices. The
new micro-coils were designed to be implanted in a way that created strong electric
field gradients in a direction perpendicular to the brain’s surface and weak gradients
in the orthogonal directions. As a result, vertically-oriented pyramidal neurons could
be targeted without simultaneously activating the horizontally-oriented passing
axons that arise from other brain regions. This led to a relatively smaller region of
activation using coils, in contrast to the much broader region activated by electrodes.
The results of ongoing computational studies suggest that the size of the micro-coil
can be reduced even further without a significant loss in efficacy, e.g. to 25 × 50 µm
cross-section or possibly even to 12.5 × 25 µm. Additional simulations suggest that
with some advanced coil materials and more complex designs, we may be able to
shrink coil sizes down to the level of those electrodes used in conventional
multi-electrode arrays (MEAs). We believe that the combination of increased relia-
bility with no loss in performance over time will make these coils a highly attractive
alternative for artificial stimulation of the cortex and can help to optimize delivery of
important feedback signals about proprioception and/or somatosensation, informa-
tion thought to be essential for the ultimate success of BCIs.

Acknowledgments Research supported by the Veterans Administration—RR&D (1I01 RX001663), the Rappaport Foundation, and by the NIH (NEI R01-EY023651 and NINDS U01-NS099700).

References

1. Behrend MR, Ahuja AK et al. (2011) Resolution of the epiretinal prosthesis is not limited by
electrode size. IEEE Trans Neural Syst Rehabil Eng 19(4):436–442
2. Bonmassar G, Lee SW et al. (2012) Microscopic magnetic stimulation of neural tissue. Nat
Commun 3:92

3. Brecht M, Schneider M et al. (2004) Whisker movements evoked by stimulation of single pyramidal cells in rat motor cortex. Nature 427(6976):704–710
4. Dana H, Chen TW et al. (2014) Thy1-GCaMP6 transgenic mice for neuronal population
imaging in vivo. PLoS One 9(9):e108697
5. DeYoe EA, Lewine JD et al. (2005) Laminar variation in threshold for detection of electrical
excitation of striate cortex by macaques. J Neurophysiol 94(5):3443–3450
6. Fernandez E, Normann R (1995) Introduction to visual prostheses. In: Kolb H, Fernandez E,
Nelson R (eds) Webvision: the organization of the retina and visual system. Salt Lake City
(UT)
7. Fried SI, Hsueh HA et al. (2006) A method for generating precise temporal patterns of retinal
spiking using prosthetic stimulation. J Neurophysiol 95(2):970–978
8. Fried SI, Lasker AC et al. (2009) Axonal sodium-channel bands shape the response to electric
stimulation in retinal ganglion cells. J Neurophysiol 101(4):1972–1987
9. Grill WM, Norman SE et al. (2009) Implanted neural interfaces: biochallenges and engineered
solutions. Annu Rev Biomed Eng 11:1–24
10. Histed MH, Bonin V et al. (2009) Direct activation of sparse, distributed populations of
cortical neurons by electrical microstimulation. Neuron 63(4):508–522
11. Histed MH, Ni AM et al. (2013) Insights into cortical mechanisms of behavior from
microstimulation experiments. Prog Neurobiol 103:115–130
12. Jensen RJ, Ziv OR et al. (2005) Thresholds for activation of rabbit retinal ganglion cells with
relatively large, extracellular microelectrodes. Investig Ophthalmol Vis Sci 46(4):1486–1496
13. Lee SW, Eddington DK et al. (2013) Responses to pulsatile subretinal electric stimulation:
effects of amplitude and duration. J Neurophysiol 109(7):1954–1968
14. Lee SW, Fallegger F et al. (2016) Implantable microcoils for intracortical magnetic
stimulation. Sci Adv 2(12):e1600889
15. Lee SW, Fried S (2016) Enhanced control of cortical pyramidal neurons with micro-magnetic
stimulation. IEEE Trans Neural Syst Rehabil Eng
16. Lee SW, Fried SI (2015) Suppression of subthalamic nucleus activity by micromagnetic
stimulation. IEEE Trans Neural Syst Rehabil Eng 23(1):116–127
17. Maccabee PJ, Amassian VE et al. (1993) Magnetic coil stimulation of straight and bent
amphibian and mammalian peripheral nerve in vitro: locus of excitation. J Physiol 460:201–219
18. Matyas F, Sreenivasan V et al. (2010) Motor control by sensory cortex. Science 330
(6008):1240–1243
19. Normann RA, Greger B et al. (2009) Toward the development of a cortically based visual
neuroprosthesis. J Neural Eng 6(3):035001
20. Pasley BN, Allen EA et al. (2009) State-dependent variability of neuronal responses to
transcranial magnetic stimulation of the visual cortex. Neuron 62(2):291–303
21. Pell GS, Roth Y et al. (2011) Modulation of cortical excitability induced by repetitive
transcranial magnetic stimulation: influence of timing and geometrical parameters and
underlying mechanisms. Prog Neurobiol 93(1):59–98
22. Polikov VS, Tresco PA et al. (2005) Response of brain tissue to chronically implanted neural
electrodes. J Neurosci Methods 148(1):1–18
23. Ranck JB Jr (1975) Which elements are excited in electrical stimulation of mammalian central
nervous system: a review. Brain Res 98(3):417–440
24. Rattay F (1999) The basic mechanism for the electrical stimulation of the nervous system.
Neuroscience 89(2):335–346
25. Schmidt EM, Bak MJ et al. (1996) Feasibility of a visual prosthesis for the blind based on
intracortical microstimulation of the visual cortex. Brain 119(2):507–522
26. Silvanto J, Pascual-Leone A (2008) State-dependency of transcranial magnetic stimulation.
Brain Topogr 21(1):1–10
27. Tehovnik EJ, Slocum WM (2009) Depth-dependent detection of microampere currents
delivered to monkey V1. Eur J Neurosci 29(7):1477–1489
28. Weitz AC, Nanduri D et al. (2015) Improving the spatial resolution of epiretinal implants by
increasing stimulus pulse duration. Sci Transl Med 7(318):318ra203
Re(con)volution: Accurate Response
Prediction for Broad-Band Evoked
Potentials-Based Brain Computer
Interfaces

J. Thielen, P. Marsman, J. Farquhar and P. Desain

1 Introduction

Broad-band evoked potentials (BBEPs) are brain signals in response to non-periodic stimuli, as opposed to the steady-state evoked potentials (SSEPs) that are evoked by periodic stimuli. Commonly, well-designed pseudo-random noise sequences (PRNS) are used, which are binary sequences designed to have minimal
auto-correlation and cross-correlation. Because of the tight coupling between the
codes used as stimuli and the evoked brain response, BBEPs are also called
code-modulated evoked potentials (cEPs).
There are two main properties of BBEPs that are beneficial for a Brain Computer
Interface (BCI). First, a PRNS evokes a broad-band response (i.e., a BBEP) because
of its pseudo-random, non-periodic, and spread-spectrum nature. This makes the
signal of interest robust to narrow-band noise sources such as line noise, and
additionally may overcome any subject-specific sensitivity to certain frequencies.
Second, assuming that the responses are likely to be uncorrelated when the pre-
sented stimuli are uncorrelated, presenting PRNSs will create responses that are
easily discriminable by a BCI.
BBEP-based BCIs have been predominantly studied in the visual domain using
so called broad-band visually evoked potentials (BBVEPs), or code-modulated
visually evoked potentials (cVEPs). The field was initiated in 1984 and first tested
with an ALS patient who was able to spell about 10 to 12 words per minute with a
BBVEP-based speller and intracranial recordings [1, 2]. Since then, BBVEPs have
continuously been proven to be successfully decodable from EEG and to enable fast
and reliable communication [3–7].

J. Thielen · P. Marsman · J. Farquhar · P. Desain (✉)
Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen,
Nijmegen, Netherlands
e-mail: p.desain@donders.ru.nl

© The Author(s) 2017 35


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_4
36 J. Thielen et al.

A commonly used strategy in BBVEP-based BCI (but also many other BCI
approaches) is to collect a fixed number of responses from each possible output
class, e.g. target letters. Specifically, to learn the template response to a certain
stimulus sequence (i.e., the PRNS), many example responses are collected by
repeating the same stimulus many times (i.e. about a hundred examples). Then, to
obtain the template response, one would average all examples, which suppresses
any activity that is not time-locked to the presentation of the stimulus, thereby
increasing the template signal-to-noise ratio (SNR). There are two main reasons
why this approach is too conservative. Firstly, and most obviously, this approach
requires much time during training to collect examples from all output classes.
Secondly, the number of examples needed for training of one particular output class
to achieve a given template SNR is tightly coupled to that of the single-trial SNR.
Thus, using a fixed number of responses may waste the subject’s time (if the
single-trial SNR is high) or yield a poor template (if it is low).
Here, we discuss a generative method that overcomes the limitations mentioned
above. The method, reconvolution, decomposes responses to full stimulation
sequences into responses to individual events within the sequences by utilizing the
internal structure of the stimuli. After training, this method can generalize to stimuli
it had not seen during training. Additionally, it can be applied without training at
all, thereby enabling a zero-training approach for BBEP-based BCI.

2 Related Work

When synchronization of stimulation and data recording is available by cleverly


choosing the kind of PRNS, it is possible to create templates for all output classes,
using only training responses from a single class. This approach is used in most
common BBVEP-based BCIs [4, 6]. These BCIs use a single so called m-sequence
[8], which is a PRNS that exhibits an auto-correlation function that is near-zero at
all time-shifts except for the zero time-lag. Therefore, circularly shifting an m-sequence creates stimuli that are near-orthogonal. After learning one template
response, the others can be created by shifting the template accordingly. This
method works well, but requires accurate synchronization of stimulation and data
recording. Additionally, it is an implicit attempt at solving the problem of long
training sessions, and still requires a certain fixed and rather arbitrary number of
examples for one of its stimuli.
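The circular-shift shortcut rests on the two-valued autocorrelation of m-sequences. As a concrete illustration, the sketch below (Python/NumPy; the register taps and length are assumed examples, not the codes used in the cited BCIs) generates one period of an m-sequence with a linear-feedback shift register and verifies that circularly shifted copies are near-orthogonal.

```python
import numpy as np

def m_sequence(taps, length):
    """Binary m-sequence from a Fibonacci LFSR; `taps` are 1-indexed
    register stages. Taps (6, 5) give the maximal period 2**6 - 1 = 63."""
    degree = max(taps)
    state = [1] * degree              # any non-zero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])         # output the last register stage
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(out)

seq = m_sequence(taps=(6, 5), length=63)   # one full period
bipolar = 2 * seq - 1                      # map {0, 1} -> {-1, +1}

# Two-valued circular autocorrelation: 63 at lag 0 and -1 at every other
# lag, which is what makes shifted copies near-orthogonal stimuli.
acorr = np.array([bipolar @ np.roll(bipolar, k) for k in range(63)])
```

With such a code, a template learned for one shift predicts all the others, which is exactly the synchronization-dependent shortcut described above.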

3 Generative Model

Here, we discuss an approach that can ultimately run without training at all. This
approach, called reconvolution, relies on the superposition hypothesis, which posits
that the response to a sequence of events is the linear summation of the responses to
Re(con)volution: Accurate Response Prediction … 37

the individual events. Let us take a step back. The PRNS sequences are binary
sequences containing variable length runs of ones and zeros (i.e., stimulus ON and
OFF), in the visual domain translatable to flashes of variable durations. Let us define
an event as a fixed-duration flash, meaning there are several types of events that each
entail a flash with unique duration. Let us assume that such an event will always
evoke the same response (i.e., assuming linearity, shown to be sufficient for SSVEPs
[9]). Knowing the transient responses to these individual events, we can generate a
prediction of the response to a sequence of such events, by shifting the transient
responses to the onsets of the events and summing all of them (i.e., performing
convolution). The reverse is also possible. We can learn the transient responses by
decomposing the responses to the individual events from recorded data by imposing
the structure in the stimulation sequences (i.e., performing deconvolution). In the
next few sections, we describe the algorithm to learn these individual transient
responses, and to use them to predict templates in order to do classification.

3.1 Staged Approach

Reconvolution was inspired by a first attempt in the auditory domain [10], and was
further developed as a two-staged approach [7], where responses to Gold codes [11]
were predicted with an average explained variance of about 50 percent. The first stage
involves learning the temporal dynamics in order to predict responses (see Fig. 1),

Fig. 1 Reconvolution. An illustration of the optimization of (two) transient responses to (two) event types in the stimulus design, in order to optimize the correspondence between the single-channel predicted and observed response. For this optimization, there exists an analytic solution

and the second stage involves learning the spatial distribution of the responses (i.e., a
spatial filter).
Here, given recorded single-channel data $X \in \mathbb{R}^{m \times 1}$ in response to some stimulus, we first create a design matrix $M_i \in \mathbb{R}^{m \times l}$ for each $i$th event type (in [7] two event types were used, namely a short and a long flash), which in the first column lists a 1 whenever the $i$th event type occurred, and is zero elsewhere, and in each subsequent column the one shifts down a row (i.e., Toeplitz, see Fig. 1). We assume that the observed response $X$ is the discrete convolution of all $n$ design matrices $M_i$ with the individual transient responses $R_i \in \mathbb{R}^{l \times 1}$:

$$X = \sum_{i=1}^{n} M_i R_i = \left[ M_1 \ldots M_n \right] \begin{bmatrix} R_1 \\ \vdots \\ R_n \end{bmatrix} = M R$$

Here, $X$ and $M$ may contain multiple trials with either the same or different stimuli by concatenating them along the time-axis (i.e., down the rows). This means we can find $R$ by solving the linear equation, for which the solution can be found as follows:

$$R = M^{+} X$$

where $M^{+}$ is the pseudo-inverse of $M$. We can then predict the template response $T_j$ for each $j$th sequence that is built up from the same subset of event types, by constructing its design matrix $M_j$ and multiplying it with the learned transient responses $R$:

$$T_j = M_j R$$

The estimation of $R$ and generation of all $T_j$ is performed for all channels separately.
The second step is to learn the spatial distribution, which is achieved by performing Canonical Correlation Analysis (CCA). CCA finds two transformation matrices $W_X$ and $W_Y$ for two multivariate variables $X$ and $Y$ by maximizing the correlation between the projection spaces $X W_X$ and $Y W_Y$:

$$W_X, W_Y = \underset{W_X, W_Y}{\operatorname{argmax}} \; \frac{W_X' X' Y W_Y}{\sqrt{W_X' X' X W_X \cdot W_Y' Y' Y W_Y}}$$

Here, CCA is used to learn two spatial filters $W_X \in \mathbb{R}^{c \times 1}$ and $W_Y \in \mathbb{R}^{c \times 1}$, where $X \in \mathbb{R}^{m \times c}$ is the raw data, and $Y = T \in \mathbb{R}^{m \times c}$ is the template response. Again, when multiple trials are recorded either with the same or different stimuli, $X$ and $T$ represent the concatenation of all trials along the time-axis. We can then use $W_X$ to spatially filter the raw EEG data, and $W_Y$ to spatially filter the templates. In order to classify a new multi-channel single-trial $X \in \mathbb{R}^{m \times c}$, we can match $X$ with the template $T_j$ for each $j$th stimulus, and select the one with highest correlation:

$$y = \underset{j}{\operatorname{argmax}} \; \frac{W_X' X' T_j W_Y}{\sqrt{W_X' X' X W_X \cdot W_Y' T_j' T_j W_Y}}$$
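The decision rule itself reduces to an argmax over correlations between the filtered trial and the filtered templates. A toy sketch (Python/NumPy; random signals stand in for spatially filtered EEG and templates):

```python
import numpy as np

def classify(x_filt, templates):
    """Return the index of the (spatially filtered) template that
    correlates best with the filtered single-trial signal."""
    scores = [np.corrcoef(x_filt, t)[0, 1] for t in templates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(1)
templates = [rng.standard_normal(500) for _ in range(4)]
x_filt = templates[2] + 0.5 * rng.standard_normal(500)  # noisy class-2 trial
label, scores = classify(x_filt, templates)
```

The correct class wins by a wide margin here: the true-template correlation sits near 0.9 while the other classes score around chance level.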

3.2 Integrated Approach

Rather than performing the two-step approach, which optimizes the temporal and spatial dynamics sequentially, we can instead integrate reconvolution in the CCA (see Fig. 2). This is achieved by performing CCA with $Y = M$, so that in turn $W_Y = R$. Thereby we optimize a spatial filter $W_X \in \mathbb{R}^{c \times 1}$ as well as the temporal dynamics $W_Y \in \mathbb{R}^{l \times 1}$ in such a way that the correlation between $X W_X$ (i.e., spatially filtered data) and $Y W_Y = M R$ (i.e., the convolution, or the predicted responses) is maximized. Classification then becomes:

$$y = \underset{j}{\operatorname{argmax}} \; \frac{W_X' X' M_j R}{\sqrt{W_X' X' X W_X \cdot R' M_j' M_j R}}$$
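A compact end-to-end sketch of this integrated variant (Python/NumPy; the one-component CCA implementation, event onsets, Hanning-shaped transient, spatial mixing pattern, and noise level are all illustrative assumptions): running CCA between the multi-channel data and the design matrix yields a spatial filter as $W_X$ and, as $W_Y$, an estimate of the transient response $R$ itself.

```python
import numpy as np

def cca_1d(X, Y, reg=1e-6):
    """First canonical pair via the SVD of the whitened cross-covariance."""
    Cxx = X.T @ X + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y + reg * np.eye(Y.shape[1])
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, X.T @ Y) @ np.linalg.inv(Ly).T
    U, _, Vt = np.linalg.svd(K)
    wx = np.linalg.solve(Lx.T, U[:, 0])     # spatial filter W_X
    wy = np.linalg.solve(Ly.T, Vt[0])       # here plays the role of R
    return wx, wy

rng = np.random.default_rng(2)
m, c, l = 2000, 8, 40
M = np.zeros((m, l))                        # Toeplitz design matrix
for t0 in rng.choice(m - l, size=60, replace=False):
    M[t0:t0 + l] += np.eye(l)               # one event per chosen onset

r_true = np.hanning(l)                      # assumed transient response
a_true = rng.standard_normal(c)             # assumed spatial pattern
X = np.outer(M @ r_true, a_true) + 0.3 * rng.standard_normal((m, c))

wx, wy = cca_1d(X, M)
# wy recovers r_true up to an arbitrary sign and scale.
match = abs(np.corrcoef(wy, r_true)[0, 1])
```

Because $W_Y$ is tied to the design-matrix columns rather than to channels, the temporal and spatial unknowns are estimated in one joint optimization instead of two stages.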

Fig. 2 Integrated CCA-based Reconvolution. An illustration of the optimization of (two) transient responses to (two) event types in the stimulus design, and a spatial filter, in order to optimize the correspondence between the predicted and observed response. For this optimization, there exists an analytic solution

3.3 Zero-Training Approach

In the previous sections, some data was available to train the model. In a zero-training setting, no data is available. However, since we do know the set of possible stimuli, we can create the design matrices $M_j$ for all stimuli. When we want to classify a single-trial $X$, we perform the CCA-based reconvolution as above for each of the stimuli individually, and find the one that best explains the observed data. Specifically, we learn a $W_X(j)$ and $W_Y(j)$ for each $j$th stimulus, and perform template matching again:

$$y = \underset{j}{\operatorname{argmax}} \; \frac{W_X(j)' X' M_j R}{\sqrt{W_X(j)' X' X W_X(j) \cdot R' M_j' M_j R}}$$

During the first trial, this model is applied without any prior knowledge, though
obviously one could initialize a model with cross-participant information. Once
trials get classified, we assume those classifications to be correct, and use them as
training data in subsequent classifications. Specifically, in finding the $W_X(j)$ and $W_Y(j)$ with CCA, we let $X = \begin{bmatrix} X_{1:t-1} \\ X_t \end{bmatrix}$ and $Y = M = \begin{bmatrix} M_{1:t-1} \\ M_t \end{bmatrix}$. This can be done
efficiently, since for the CCA approach, only the covariance and cross-covariance
of X and Y are required, and these can be efficiently incrementally updated when
new data arrives.
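The incremental update can be sketched as follows (Python/NumPy; trial sizes and contents are arbitrary random placeholders). Only three (cross-)covariance blocks are stored, and each newly self-labeled trial adds its contribution, which reproduces exactly the covariances of the concatenated data:

```python
import numpy as np

class RunningCCAStats:
    """Covariance blocks needed for CCA, updated one trial at a time."""
    def __init__(self, c, l):
        self.Cxx = np.zeros((c, c))   # data x data
        self.Cxy = np.zeros((c, l))   # data x design matrix
        self.Cyy = np.zeros((l, l))   # design matrix x design matrix

    def update(self, X_t, M_t):
        self.Cxx += X_t.T @ X_t
        self.Cxy += X_t.T @ M_t
        self.Cyy += M_t.T @ M_t

rng = np.random.default_rng(3)
stats = RunningCCAStats(c=8, l=40)
trials = [(rng.standard_normal((250, 8)), rng.standard_normal((250, 40)))
          for _ in range(4)]
for X_t, M_t in trials:
    stats.update(X_t, M_t)            # O(trial length) per update

# Identical to the covariances computed over the concatenation X_{1:t}.
X_all = np.vstack([x for x, _ in trials])
M_all = np.vstack([d for _, d in trials])
```

Refitting the CCA from these accumulated blocks avoids storing or re-processing the full history of trials.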
In pilot studies, we typically find that the first single-trial (i.e., a single output
class selection) takes about 10–20 s before classification, which converges to a
standard 1 to 2 s within 5–10 trials at 95% accuracy. At convergence, the performance in terms of Information Transfer Rate (ITR) is not different from that of a pre-trained system. For such convergence to happen automatically, a dynamic stopping algorithm was developed that requires no prior training either, which is
discussed next.

4 Dynamic Stopping

Commonly, single-trials have a fixed duration, with classifications generated when a fixed trial length is reached. Here, instead we incrementally compute an updated
classification every 500 ms, as new data comes in. This process continues until
some confidence level is reached, after which the trial is stopped and the final
classification is emitted. This enables faster classification when the system is
confident, thereby gaining speed, or slower classification when there is little con-
fidence, gaining accuracy.
Normally, a threshold confidence level is computed based on statistics estimated
from training data. However, for the dynamic stopping method to work even in the

case of zero-training, we define an online algorithm that does not require prior
information.
For each classification, we collect all correlation values $\rho_j$ and stop the trial when the maximum correlation $\rho_{max}$ is unlikely to be a random maximum, in accordance with a preset targeted accuracy. Specifically, we normalize the correlation values to be in the interval 0 to 1 by $\hat{\rho} = \frac{\rho + 1}{2}$, take all values $\hat{\rho}_j$ except $\hat{\rho}_{max}$, and fit a beta distribution:

$$a, b = \underset{a, b}{\operatorname{argmax}} \; \mathrm{Betapdf}(\hat{\rho}_{nonmax} \mid a, b)$$

We then compute the probability that $\hat{\rho}_{max}$ is larger than the expected maximum of the distribution by $\mathrm{Betacdf}(\hat{\rho}_{max}; a, b)^q$ and verify whether it exceeds the preset accuracy threshold (i.e., 95%). Here, $q$ denotes the maximum number of segments in a single-trial, to account for multiple comparisons within a trial.
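A sketch of this stopping rule (Python with NumPy and SciPy's beta distribution; the simulated correlation scores, the number of classes, and the segment count are illustrative assumptions):

```python
import numpy as np
from scipy.stats import beta

def should_stop(corrs, n_segments, target=0.95):
    """Stop once the best normalized correlation is unlikely to be a
    random maximum under a beta fit to the non-maximum scores."""
    rho = (np.asarray(corrs) + 1.0) / 2.0          # map [-1, 1] -> [0, 1]
    best = int(np.argmax(rho))
    rest = np.delete(rho, best)
    a, b, _, _ = beta.fit(rest, floc=0, fscale=1)  # fit shape params only
    # Raise to n_segments to correct for repeated looks within a trial.
    confidence = beta.cdf(rho[best], a, b) ** n_segments
    return bool(confidence >= target), best

rng = np.random.default_rng(4)
scores = 0.1 * rng.standard_normal(36)             # 35 chance-level classes
scores[7] = 0.9                                    # ...and one clear winner
stop, label = should_stop(scores, n_segments=8)
```

With one template clearly separated from the rest, the beta test fires and the trial can end early; when all scores sit at chance level, the confidence stays low and stimulation continues.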

5 Optimized Stimuli

The response to any stimulation sequence can be predicted as long as the stimu-
lation sequence is built up from the same event types as used during training.
Because of this generative property, one can also use other stimulation sequences
during the testing phase as opposed to the training phase. Thus, it becomes possible
to optimize the stimulation sequences that are used during testing, and thereby
increase the performance of the BCI even further. Two approaches for such code
optimization are discussed in [7]. First, from the set of possible codes, only a small
subset was needed. This subset was selected so that the responses to the selected
codes contained minimized pair-wise cross-correlations. Second, the selected subset
was allocated to a 6 × 6 speller grid so that the correlation between neighboring cells was
minimized. Both optimizations significantly decreased the correlation between pairs
of stimulus responses. Such optimizations are only possible once the responses to
all stimuli are known, which is an endeavor that is not feasible without a generative
model.
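The first optimization (selecting a low-correlation subset of predicted responses) could be approximated greedily, as in the following sketch; the function name and the greedy strategy are our illustration, not necessarily the procedure of [7]:

```python
import numpy as np

def select_codes(predicted_responses, n_select):
    """Greedily pick predicted responses with low pairwise cross-correlation.

    predicted_responses: (n_codes, n_samples) array from the generative model.
    Returns the indices of the selected subset.
    """
    corr = np.abs(np.corrcoef(predicted_responses))
    np.fill_diagonal(corr, 0.0)
    # Seed with the code least correlated with all others on average.
    selected = [int(np.argmin(corr.mean(axis=1)))]
    remaining = [i for i in range(len(predicted_responses)) if i not in selected]
    while len(selected) < n_select:
        # Add the candidate whose worst-case correlation with the
        # already selected codes is smallest.
        best = min(remaining, key=lambda c: corr[c, selected].max())
        selected.append(best)
        remaining.remove(best)
    return selected
```

Allocation to the speller grid could then proceed similarly, placing strongly correlated codes far apart.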
6 Conclusion
The discussed method, reconvolution, shifts the approach to learning responses
from simple estimation by averaging to careful modeling of the underlying dynamics.
This approach makes the model generalize to novel stimuli, thereby allowing
stimulation schemes to be optimized to further improve system performance.
Additionally, the method can be applied without any prior information, enabling
zero-training approaches and plug-and-play use. Altogether, reconvolution
provides a generative and robust means of BCI-based communication and control.
References
1. Sutter EE (1984) The visual evoked response as a communication channel. In: Proceedings of
the IEEE Symposium on Biosensors, pp 95–100
2. Sutter EE (1992) The brain response interface: communication through visually-induced
electrical brain responses. J Microcomput Appl 15(1):31–45. doi:10.1016/0745-7138(92)
90045-7
3. Bin G, Gao X, Wang Y, Hong B, Gao S (2009) VEP-based brain-computer interfaces: time,
frequency, and code modulations [Research Frontier]. Comput Intell Mag IEEE 4(4):22–26.
doi:10.1109/MCI.2009.934562
4. Bin G, Gao X, Wang Y, Li Y, Hong B, Gao S (2011) A high-speed BCI based on code
modulation VEP. J Neural Eng 8(2):025015. doi:10.1088/1741-2560/8/2/025015 PMID:
21436527
5. Spüler M, Rosenstiel W, Bogdan M (2012) One class SVM and canonical correlation analysis
increase performance in a c-VEP based brain-computer interface (BCI). In: Proceedings of
20th European Symposium on Artificial Neural Networks (ESANN 2012). Bruges, Belgium,
pp 103–108
6. Spüler M, Rosenstiel W, Bogdan M (2012) Online adaptation of a c-VEP brain-computer
interface (BCI) based on error-related potentials and unsupervised learning. PLoS ONE 7(12):
e51077. doi:10.1371/journal.pone.0051077 PMID: 23236433
7. Thielen J, van den Broek P, Farquhar J, Desain P (2015) Broad-band visually evoked
potentials: re(con)volution in brain-computer interfacing. PLoS ONE 10(7):e0133797. doi:10.
1371/journal.pone.0133797
8. Golomb SW, Welch LR, Goldstein RM, Hales AW (1982) Shift register sequences. Aegean
Park Press, Laguna Hills, CA, p 78
9. Capilla A, Pazo-Alvarez P, Darriba A, Campo P, Gross J (2011) Steady-state visual evoked
potentials can be explained by temporal superposition of transient event-related responses.
PLoS ONE 6(1):e14543. doi:10.1371/journal.pone.0014543 PMID: 21267081
10. Farquhar J, Blankespoor J, Vlek R, Desain P (2008) Towards a noise-tagging auditory
BCI-paradigm. In: Proceedings of the 4th Int BCI Workshop and Training Course 2008. Graz,
Austria, pp 50–55
11. Gold R (1967) Optimal binary sequences for spread spectrum multi-plexing. IEEE Trans Inf
Theory 13:619–621. doi:10.1109/TIT.1967.1054048
Intracortical Microstimulation
as a Feedback Source for Brain-Computer
Interface Users

Sharlene Flesher, John Downey, Jennifer Collinger, Stephen Foldes,
Jeffrey Weiss, Elizabeth Tyler-Kabara, Sliman Bensmaia,
Andrew Schwartz, Michael Boninger and Robert Gaunt

1 Introduction

Intracortical brain-computer interfaces (iBCIs) now exist that enable people with
paralysis to control prosthetic arms using the decoded patterns of electrical activity
generated by hundreds of neurons in the primary motor cortex (M1) [1–3]. These
iBCI controlled devices will restore the ability of people with upper limb impair-
ments to interact with their environment and enable them to independently perform
activities of daily living. In a survey of people with tetraplegia, >75% rated
restoration of arm/hand function as very important to improving their quality of life,
making this technology a priority for the population [4].

J. Collinger · S. Foldes · J. Weiss · E. Tyler-Kabara · M. Boninger · R. Gaunt (✉)
Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh,
PA, USA
e-mail: rag53@pitt.edu

S. Flesher · J. Downey · J. Collinger · J. Weiss · E. Tyler-Kabara · A. Schwartz ·
M. Boninger · R. Gaunt
Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
e-mail: Snf12@pitt.edu

S. Flesher · J. Downey · J. Collinger · S. Foldes · A. Schwartz · R. Gaunt
Center for the Neural Basis of Cognition, Pittsburgh, PA, USA

J. Collinger · S. Foldes · M. Boninger
Department of Veterans Affairs Medical Center, Pittsburgh, PA, USA

E. Tyler-Kabara
Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA

S. Bensmaia
Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA

A. Schwartz
Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA

© The Author(s) 2017
C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_5

Our prior work with iBCIs has shown clinically significant restoration of arm
control with both seven and ten controllable degrees-of-freedom using a robotic
arm [1, 2]. In both cases, users had continuous and simultaneous control of three
translational degrees of freedom (hand location) and three degrees of orientation
(wrist rotation). In addition, whole-hand grasp, as a single dimension, was con-
trolled in the seven degree-of-freedom paradigm, while 10 degree-of-freedom
control expanded independent movements of the hand. These dimensions consisted
of combined flexion of the thumb, index and middle fingers, opposition of the
thumb, combined flexion of the ring and pinky fingers, as well as ab/adduction of
the four fingers. This increased the capabilities of the user from performing simple
grasping to include dexterous hand movements. These new capacities were inten-
ded to enable appropriate handling of a variety of objects, however, from a func-
tional perspective, there was no significant improvement in performance on object
transfer tasks [2]. Limited improvements in functionality, even with added control
dimensions in the hand, may have been due to the fact that current iBCI systems are
limited to visual feedback. Lacking any cutaneous somatosensory feedback, it is
possible that additional control degrees-of-freedom cannot be used skillfully. It is
also possible that restored sensation itself could prove more beneficial than addi-
tional controllable movements themselves.
Somatosensory feedback is necessary for skilled movement [5–9]. In healthy
subjects, the loss of cutaneous feedback alone can make even simple motor tasks
nearly impossible [5, 8]. However, state of the art iBCI paradigms do not provide
somatosensory feedback and instead rely solely on visual feedback. Therefore,
since somatosensation is a critical component of natural movement, and the loss of
sensation impairs movement, we believe that cutaneous sensations should be
restored for iBCI systems (Fig. 1). One possible method of delivering this feedback
is through intracortical microstimulation (ICMS) of primary somatosensory cortex
(S1). Indeed, ICMS in S1 can be used to guide the behavior of implanted animals
[11–14] and can be delivered safely over many months [10]. Recently, we extended
this work to a human participant and showed that ICMS in S1 is spatially selective
and evokes percepts that are naturalistic and are perceived to span a range of
intensities [15]. Here, we summarize these findings and begin to investigate the
impact of providing sensation on motor control tasks designed to benefit from
ICMS feedback.

2 Methods

Implant
This study was conducted under an Investigational Device Exemption from the Food
and Drug Administration, approved by the Institutional Review Boards at the
University of Pittsburgh (Pittsburgh, PA) and the Space and Naval Warfare Systems
Center Pacific (San Diego, CA), and registered at ClinicalTrials.gov (NCT01894802).

Fig. 1 iBCI paradigm. Neural activity, in the form of threshold crossings, was recorded from two
intracortical microelectrode arrays (Utah arrays, top left) placed in M1. These signals were
transformed into velocity commands to control the endpoint of a prosthetic limb via an optimal
linear estimator decoder. Up to 10 degrees of freedom were simultaneously controlled. The
paradigm depicted above includes the addition of somatosensory feedback, provided via ICMS
delivered to microelectrode arrays (Utah arrays, bottom left) implanted in S1 (Picture courtesy of
Timothy Betler, UPMC Media Relations)

Informed consent was obtained before any study procedures were conducted. To test
the feasibility of ICMS as a feedback source for BCI users, a twenty-eight-year-old
participant with a chronic C5 motor and C6 sensory AIS B spinal cord injury was
implanted with two, 32-channel, stimulating intracortical microelectrode arrays in S1
and two, 88-channel, recording arrays in M1. All arrays were implanted in the left
hemisphere and were placed based on pre-surgical imaging. The two recording arrays
were placed in the upper limb representation in M1 targeting the shoulder and hand
region. Neural activity from these arrays was decoded to control a robotic limb. The
stimulating arrays were targeted to the hand region of area 1 of S1. Microelectrode
arrays were placed so that ICMS would elicit cutaneous percepts that projected to the
right hand and thus relay tactile information from the sensors in the robotic limb
(Modular Prosthetic Limb, Johns Hopkins Applied Physics Lab).
Microstimulation Tasks
To assess the viability of ICMS as a feedback source for BCI users, we sought to
characterize the sensory quality, location, detection threshold, and the ability to
evoke a range of perceived intensity. Stimulus pulse trains consisted of
cathodic-first, asymmetric, charge-balanced pulses delivered at 100 Hz (Fig. 2).
Pulse amplitude was modulated by task parameters.
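For illustration, a charge-balanced cathodic-first pulse train can be synthesized as below; the phase duration and asymmetry ratio are our assumptions for the sketch, not the study's stimulation parameters:

```python
import numpy as np

def pulse_train(amp_ua, train_s=1.0, rate_hz=100, fs=100_000,
                phase_us=200, ratio=4):
    """Sketch of a cathodic-first, asymmetric, charge-balanced pulse train.

    The anodic phase is `ratio` times longer and `ratio` times weaker than
    the cathodic phase, so the net charge per pulse is zero. Phase duration
    and asymmetry ratio here are illustrative assumptions.
    """
    n = int(train_s * fs)
    x = np.zeros(n)
    cath = int(phase_us * 1e-6 * fs)   # cathodic phase length in samples
    period = fs // rate_hz             # 100 Hz pulse repetition
    for start in range(0, n - cath * (1 + ratio), period):
        x[start:start + cath] = -amp_ua                          # cathodic first
        x[start + cath:start + cath * (1 + ratio)] = amp_ua / ratio  # anodic
    return x
```

Summing the waveform confirms the charge balance of each pulse.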
To determine the location and perceptual qualities of percepts evoked via ICMS,
individual electrodes were stimulated for 1 s at a supraliminal intensity (60 µA).
Pulse trains were repeated as many times as necessary for the participant to fully
describe the evoked sensation. The subject was shown either a segmented hand or
an unlabeled schematic of a hand, which he used to describe the location of the
percepts. In the first 10 months, the participant reported which of the predefined
segments (see Fig. 3a) were closest to the location of the projected fields. In later
experiments, the participant drew the locations of the projected fields using a tablet
computer and a stylus that was placed in his hand. The quality of the evoked
sensations was described using the set of descriptors listed in Table 1. Evoked
percepts could be described using any of the suggested words, a combination
thereof, or any words the participant chose.

Fig. 2 Pulse trains and 2-alternative forced-choice ICMS tasks. a Pulse waveform. Pulses were
delivered at 100 Hz and pulse amplitude was modulated by task parameters. b Detection
thresholds and just noticeable differences (JNDs) were measured using a 2-alternative forced-choice
paradigm, as shown above. For detection thresholds (middle row), the interval that contained the
pulse train was identified. For JNDs (bottom row), the interval containing the more intense pulse
train was identified

Fig. 3 Locations of projected fields. a Segmented hand that was shown to the participant during
60 µA surveys to describe the locations of evoked sensations. b Arrays color-coded by location
based on segmentation and colors in (a). Colors indicate the participant’s reported projected field for
each electrode. Gray squares indicate electrodes that were not in use. Pink squares indicate
electrodes that had diffuse projected fields. c Adapted from Flesher et al. [15]. Pre-operative MEG
imaging showing array locations in S1 and areas of activation when the participant watched
different regions of a hand being stroked by a cotton swab. The somatotopy observed from the
MEG imaging is largely reflected in the spatial arrangement of projected fields shown in (b)
We also investigated the relationship between stimulation amplitude and the
perceived intensity of evoked percepts in a free magnitude estimation task. Pulse
trains ranging in intensity from 10 to 80 µA were presented, in random order, to the
participant. Following each pulse train, the participant was asked to report how
intense the pulse train felt, using a self-selected numerical scale. Each amplitude
was presented in a random order, and the process was repeated a total of six times.
Unbeknownst to the participant, the first presentation of each amplitude was excluded
from analysis. The participant was instructed to report a “0” if the stimulus was not
felt and to report a number twice as large for a stimulus that felt twice as intense as a
previous stimulus.

Table 1 Perceptual qualities of evoked sensations. The participant described the conscious
perception of ICMS using a variety of words that are included in this table. The percentage of
stimulus trains that elicited each type of sensation is shown. Only one option for naturalness,
depth, and pain could be selected; however, any combination of mechanical, movement,
temperature, or miscellaneous descriptions could be reported.

Naturalness        %     Depth         %     Pain         %     Mechanical  %     Movement   %     Temperature  %     Misc              %
Totally natural    0.2   Skin surface  2.3   0 (no pain)  100   Touch       1.8   Vibration  10.4  Warm         24.5  Tingle            79.2
Almost natural     13.6  Below skin    3.0   1,2,3        0     Pressure    36.9  Movement   0     Cool         0     Electric current  3.3
Possibly natural   78.0  Both          94.7  4,5,6        0     Sharp       3.3                                      Tickle            0.0
Rather unnatural   8.2                       7,8,9        0                                                          Itch              0.0
Totally unnatural  0.0                       10           0
A two-alternative forced choice (2AFC) paradigm was used to measure detection
thresholds and just noticeable differences (JNDs). To measure detection thresholds,
the participant was instructed to indicate which of two consecutive time windows
contained an ICMS train. Pulse amplitude was selected dynamically based on task
performance such that it decreased if the participant correctly identified the window
that contained the stimulus pulse on three consecutive trials at the same pulse
amplitude. Pulse amplitude was increased on any trial following the incorrect
identification of the window that contained the stimulus. Detection thresholds were
measured for each electrode at least twice in the 18-month study period, with some
electrodes measured more frequently.
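This adaptive (three-down/one-up) procedure can be simulated against a hypothetical psychometric function, as sketched below; the function names, the simulated detection curve, and the reversal-averaging threshold estimate are our illustrative choices:

```python
import numpy as np

def staircase_threshold(detect_prob, start=60.0, step=2.0, n_trials=200, seed=0):
    """3-down/1-up adaptive staircase for a 2AFC detection task (a sketch).

    detect_prob(amp) -> probability of a correct response at amplitude amp.
    Amplitude steps down after three consecutive correct trials and steps up
    after any miss; the threshold estimate is the mean of the reversal points.
    """
    rng = np.random.default_rng(seed)
    amp, correct_run, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        correct = rng.random() < detect_prob(amp)
        if correct:
            correct_run += 1
            if correct_run == 3:            # three in a row -> step down
                correct_run = 0
                if last_dir == +1:
                    reversals.append(amp)   # direction changed: reversal
                amp, last_dir = max(amp - step, 0.0), -1
        else:                                # any miss -> step up
            correct_run = 0
            if last_dir == -1:
                reversals.append(amp)
            amp, last_dir = amp + step, +1
    return float(np.mean(reversals)) if reversals else amp
```

A three-down/one-up rule converges near the ~79% point of the psychometric curve.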
The JNDs from a subset of electrodes were measured in a similar fashion. Pulse
trains were presented in both time windows and the participant was instructed to
identify which window contained the more intense stimulus. The amplitude of one
pulse train was held constant in all trials, at either 20 or 70 µA. If this standard
amplitude was 20 µA, all comparison amplitudes were greater than this value. For
the high standard amplitude of 70 µA, all comparison amplitudes were smaller.
Using this approach, we could compare the participant’s ability to distinguish
between pulse trains that had the same amplitude difference, but were presented in a
low or high amplitude regime.
Neural Decoding and Control
Neural signals were acquired at 30 kHz using the NeuroPort signal processor
(Blackrock Microsystems). Raw signals were acquired with a 0.3–7.5 kHz band-pass
filter and then further filtered with a first-order Chebyshev high-pass filter
(250 Hz cutoff, 10 dB ripple) for spike thresholding. Thresholds were set at −4.5
times the root-mean-square value of the raw signal. Threshold crossings were
counted in 20 ms bins, then smoothed with a 440 ms exponential filter.
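This recording chain can be sketched as follows; the filter parameters follow the text, while the exact exponential-filter form and the spike injection in the test are our assumptions:

```python
import numpy as np
from scipy import signal

FS = 30_000  # Hz, acquisition rate

def binned_rates(raw, bin_ms=20, tau_ms=440, thresh_mult=-4.5):
    """High-pass filter, threshold, bin, and exponentially smooth a raw trace."""
    # First-order Chebyshev type-I high-pass, 250 Hz cutoff, 10 dB ripple.
    b, a = signal.cheby1(1, 10, 250, btype='highpass', fs=FS)
    filt = signal.lfilter(b, a, raw)
    # Threshold at -4.5 x RMS of the raw signal (per the text).
    thresh = thresh_mult * np.sqrt(np.mean(np.square(raw)))
    crossings = (filt[1:] < thresh) & (filt[:-1] >= thresh)
    # Count crossings in 20 ms bins.
    bin_len = FS * bin_ms // 1000
    n_bins = crossings.size // bin_len
    counts = crossings[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1)
    # Exponential smoothing with a 440 ms time constant (assumed form).
    alpha = 1.0 - np.exp(-bin_ms / tau_ms)
    rates, acc = np.empty(n_bins), 0.0
    for i, c in enumerate(counts):
        acc = alpha * c + (1.0 - alpha) * acc
        rates[i] = acc
    return rates
```

The smoothed bin counts form the feature vector fed to the decoder.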
To train the decoder, the participant observed the robotic limb completing a
two-dimensional hand shaping task. The task consisted of 9 targets made up of all
combinations of the flexed, neutral, and extended positions for both “pinch”
(thumb/index/middle flexion-extension) and “scoop” (ring/pinky flexion/extension)
hand shapes. Each of the 9 targets had a unique name, which was presented as an
audio cue at the beginning of a trial. After the audio cue, the hand automatically
moved to achieve the appropriate target position and hold it for 1 s. The participant
was instructed to act as though he were controlling the robotic hand, attempting
to perform the cued movements himself.
The firing rates from the M1 electrodes during 27 trials of this task were then fit
to the movement velocities of the robotic limb to create an optimal linear estimator
decoder using methods described in detail elsewhere [1, 2]. Once this decoder was
trained, the participant completed the same task with orthogonal assistance, where
the computer constrained the decoded movement velocities to the ideal path [16].
Once 27 trials had been collected with orthogonal assistance, a new decoder was
trained on the most recent data. This velocity decoder was then used, without
computer assistance, to complete a two-dimensional force matching task, described
below.
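The decoder-training step amounts to a regularized least-squares fit of velocities to firing rates; the following is a minimal sketch (the ridge term and names are ours, and the actual OLE formulation in [1, 2] differs in detail):

```python
import numpy as np

def fit_ole(firing_rates, velocities, ridge=1e-3):
    """Least-squares fit of a linear velocity decoder (an OLE-style sketch).

    firing_rates: (T, n_units) array; velocities: (T, n_dof) array.
    Returns weights W and intercept b so that v ~= rates @ W + b.
    """
    # Append a constant column for the intercept.
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])
    # Ridge-regularized normal equations for numerical stability.
    A = X.T @ X + ridge * np.eye(X.shape[1])
    W_full = np.linalg.solve(A, X.T @ velocities)
    return W_full[:-1], W_full[-1]
```

At run time, decoded velocities would be computed as `rates @ W + b`.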
Real-Time ICMS Feedback Tasks
To investigate the ability of the participant to use ICMS as a feedback source, we
first performed a location discrimination task. An experimenter touched individual
fingers on the prosthetic hand and the blindfolded participant was asked to respond
with the identity of the finger. Torque sensor data derived from the D2-D5 finger
motors of a prosthetic limb were linearly mapped to groups of electrodes that
elicited percepts on the corresponding fingers. When the fingers were touched,
motor torque increased, and these torque values were used to modulate pulse train
amplitude in real-time.
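Such a linear torque-to-amplitude mapping might look like the following sketch; the amplitude bounds are illustrative assumptions, not the study's calibration values:

```python
def torque_to_amplitude(torque, torque_max, amp_min=20.0, amp_max=60.0):
    """Linearly map a finger-motor torque to an ICMS pulse amplitude in µA.

    Zero (or negative) torque produces no stimulation; above that, amplitude
    scales linearly between amp_min and amp_max and is clipped at amp_max.
    The 20-60 µA range here is a hypothetical example.
    """
    frac = min(max(torque / torque_max, 0.0), 1.0)
    if frac == 0.0:
        return 0.0
    return amp_min + frac * (amp_max - amp_min)
```

Evaluating this per electrode group, on every control cycle, turns sensor contact into graded stimulation.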
To investigate the utility of providing feedback about contact location and
intensity in a motor control task, the participant performed a continuous
two-dimensional force matching task. The participant was instructed to pinch (in-
dex and middle finger flexion), scoop (ring and little finger flexion), or grasp (all
finger flexion) a foam object either gently or firmly. ‘Gentle’ targets were defined to
be 12–36% of the maximum grasp torque, while ‘firm’ targets were specified to be
36–60% of the maximum grasp torque. The participant had to apply the instructed
torque with the specified fingers for 750 ms within 7 s of the start of a trial to be
successful. During all trials, the participant used the BCI to continuously control
both the pinch and scoop dimensions while trying to achieve instructed torque
targets. This task was performed with and without ICMS feedback. The task was
conducted in blocks of 6 trials, such that each combination of the three grasp
postures and two force targets were presented once, in random order. The success
rate per block was used as the metric for task performance.
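The success criterion can be expressed as a simple window test over the torque trace, as in this sketch; the sampling rate and names are illustrative assumptions:

```python
import numpy as np

def trial_success(torque_trace, lo, hi, fs=50, hold_s=0.75, limit_s=7.0):
    """Check a simulated force-matching trial: the normalized torque must
    stay within [lo, hi] continuously for 750 ms within the first 7 s."""
    samples = np.asarray(torque_trace)[: int(limit_s * fs)]
    in_band = (samples >= lo) & (samples <= hi)
    need = int(hold_s * fs)   # consecutive samples required
    run = 0
    for ok in in_band:
        run = run + 1 if ok else 0
        if run >= need:
            return True
    return False
```

A trace that enters the 'firm' band and holds it long enough passes; a brief excursion does not.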

3 Results

3.1 Projected Fields and Perceptual Quality

The projected fields of the electrodes (see Fig. 3a and b) were located in digits 2–5,
primarily at the base of each digit. Sensations were usually reported as originating
from a single digit, and if projected fields were reported for multiple fingers, reports
were from adjacent fingers.
Four electrodes were excluded because they exhibited an abnormally high interphase
voltage at 60 µA. During supraliminal intensity surveys of the remaining implanted
electrodes, sensations were reported from 55 of the 60 electrodes used in this task.
No painful sensations or paresthesias were ever reported. Of the
reported sensations, 36.9% were described as “pressure” and 79.2% as “tingle” with
most being described as “possibly natural” (Table 1). Percepts mostly felt as though
they contained sensory elements that occurred both at and below the skin surface.
Sensation qualities, as listed in Table 1, were not mutually exclusive, so the par-
ticipant could describe sensations using combinations of the qualities, and/or using
any other descriptors he deemed appropriate.

3.2 Psychometric Evaluation

Detection thresholds were measured for all 62 tested electrodes (Fig. 4a). The
median detection threshold was 29.5 µA with lower and upper quartiles of 19.3 µA
and 37.6 µA, respectively. Additionally, we measured JNDs on seven electrodes
(Fig. 4a), as previously shown [15]. Data were fit with cumulative normal curves,
and the JND was defined to be the difference in stimulus amplitudes where the more
intense stimulus was correctly identified 75% of the time. The JNDs were found to be
15.4 ± 3.9 µA and were the same regardless of standard amplitude (Wilcoxon
signed rank, p = 1).
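Fitting the psychometric curve and reading off the 75% point can be sketched as below; this is a simplified fit of a cumulative normal directly to response proportions, and the authors' exact fitting procedure may differ:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_jnd(delta_amps, p_correct, criterion=0.75):
    """Fit a cumulative normal to 2AFC data and return the amplitude
    difference at which the more intense stimulus is identified 75%
    of the time (the JND)."""
    f = lambda x, mu, sigma: norm.cdf(x, loc=mu, scale=sigma)
    (mu, sigma), _ = curve_fit(f, delta_amps, p_correct,
                               p0=[np.mean(delta_amps), 10.0])
    return norm.ppf(criterion, loc=mu, scale=sigma)
```

On noiseless synthetic data the fit recovers the generating curve exactly.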
Using the free magnitude estimation task, we found the relationship between
pulse amplitude and perceived intensity to be highly linear (R² = 0.996, linear
regression). The results from all ten tested electrodes are shown in Fig. 4c.
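The linearity claim corresponds to an ordinary least-squares fit; a sketch with hypothetical normalized ratings (the numbers below are illustrative, not the participant's data):

```python
import numpy as np
from scipy import stats

# Hypothetical normalized intensity ratings at each pulse amplitude (µA).
amps = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
ratings = np.array([0.00, 0.10, 0.24, 0.37, 0.49, 0.63, 0.74, 0.88])

fit = stats.linregress(amps, ratings)
r_squared = fit.rvalue ** 2  # coefficient of determination of the linear fit
```

A near-unity R² here mirrors the linear amplitude-intensity relationship reported in the text.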

Fig. 4 Psychometric evaluation of evoked sensations. a Distribution of mean detection threshold
for each electrode. b JND results fit with a cumulative normal curve. Low and high standard
amplitudes yielded similar curves and JNDs. c Perceived intensity increased linearly with
stimulation amplitude. Responses from all ten electrodes were normalized and a line fit to the raw
data points. The mean (dots) and S.E.M. (error bars) for each test amplitude are shown

3.3 Real-time ICMS Feedback Tasks

As we previously reported [15], the participant could accurately identify which
robotic fingers were being touched while blindfolded. Across 14 sessions containing
a total of 69 repetitions per finger, the correct finger was identified 84 ± 12.2%
of the time (mean accuracy across fingers ± standard deviation). Repeated
tests did not improve the accuracy, indicating that training was not a factor. The
index (D2) and little (D5) fingers were correctly identified most consistently (96.9
± 7.2% and 93.9 ± 12.1%, respectively), while the middle (D3) and ring (D4)
fingers were identified less accurately (73.5 ± 18.1% and 73.1 ± 24.6%, respec-
tively), with errors typically being reports of an adjacent finger (see Fig. 5a).
In the continuous force matching task, the participant continuously controlled
the flexion/extension of two grasp dimensions while the applied torque for each of
these dimensions was evaluated. The success rate for achieving the six possible
targets (pinch, scoop or grasp with gentle or firm forces) was significantly improved
with the addition of ICMS feedback (Fig. 5, p < 0.001, Wilcoxon signed-rank test).
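This block-level comparison can be reproduced in outline with SciPy's paired Wilcoxon test on hypothetical per-block success counts (the numbers below are illustrative only, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical successes out of 6 trials per block, paired across the two
# feedback conditions.
with_icms = np.array([6, 5, 6, 6, 5, 6, 6, 6, 5, 6]) / 6.0
without_icms = np.array([4, 3, 5, 4, 4, 3, 5, 4, 4, 3]) / 6.0

stat, p = stats.wilcoxon(with_icms, without_icms)
```

Because every block improves with feedback in this example, the test statistic is zero and the difference is significant.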
This was the case even though many of the trials could be completed successfully
without ICMS feedback, which we attribute to the simple nature of the task. On
successful trials, time to target was not significantly different between feedback
paradigms (Wilcoxon rank-sum test, p > 0.05). Interestingly, the participant could
describe why failed trials were unsuccessful with ICMS feedback, but was unable
to do so without it. For example, if the instructed target was “firm pinch”, the
pinch fingers should exert a force from 36–60% of the maximum torque while the
scoop fingers should not make contact with the object at all. Immediately following
a failed trial, the participant could report that he had not exerted enough force with
the pinch fingers and had also made contact with the scoop fingers. Reports of this nature were voluntarily

provided and demonstrated that ICMS feedback conveyed additional interpretable
information, even if the information could not be acted upon. With no object
present, the subject was very proficient at achieving position-only targets, with a
success rate of 93.5 ± 7.4% (mean ± standard deviation).

Fig. 5 Performance on real-time ICMS feedback tasks. a Confusion matrix of the participant’s
ability to correctly identify which robotic finger was being touched, using the data reported by
Flesher et al. [15]. b Proportion of trials correct in the continuous force matching iBCI task under
the two feedback paradigms. Proportion correct was calculated on a block-by-block basis, with
blocks consisting of 6 trials each; these six trials included one repetition of each combination of
hand posture and force level. ICMS feedback was either provided or not for each block

4 Discussion

ICMS delivered to area 1 of S1 has the potential to provide behaviorally relevant
somatosensory feedback to people who use iBCI systems. We found that percepts
were evoked at expected somatotopic locations and that the perceived intensity of
stimuli scaled linearly over the tested amplitude range (up to 100 µA). These
features enable us to relay both the location and intensity of object contact, two
sources of information that would be helpful for BCI users to interact with objects.
Further, we have shown that a BCI user can use ICMS feedback, generated by a
prosthetic limb in real-time, to improve performance in a simple motor control task.
There are many challenges in implementing experiments that involve both neural
recording and microstimulation. In these experiments, developing a task that was
within the motor control capabilities of the decoder, yet could also benefit from
somatosensory feedback without vision being artificially removed, proved chal-
lenging. Further, robotics control issues and sensor data stability increase the
technical complexity of even simple experiments. Perhaps the most significant
challenge is that the optimal way to encode measured sensor data into stimulus
trains across many electrodes is unknown. Here, only the simplest stimulus
encoding function was tested. Measured reaction torque values from the prosthetic
fingers were linearly scaled to the stimulus amplitude, even though it has been
shown that a non-linear function more accurately represents the relationship between
stimulus amplitude and skin indentation [17]. The actual neural activity in the
cortex in response to skin indentation is a complex pattern that represents the
activity of both slowly and rapidly adapting neurons [18]. Encoding prosthetic
sensor data using these biomimetic principles may improve the integration of
somatosensory feedback into motor control tasks. However, in these experiments,
the stimulus pulse frequency was held constant at 100 pulses per second and pulses
were delivered synchronously across all electrodes. This synchronous pulsing
scheme was used to minimize stimulus artifact. In the future, more complex encoding
schemes will be tested and it remains to be seen what aspects of the natural code
must be replicated to improve the naturalness of sensation or improve sensorimotor
integration.
Nevertheless, even with the simplistic encoding scheme used here, performance
for the two dimensional force matching task was significantly improved with the
addition of ICMS feedback. We expect that this is because in the absence of ICMS
feedback, it was difficult, if not impossible, for the participant to correct errors in
the applied grasp force. This interpretation is supported by verbal reports that the
study participant began supplying during the task itself. After trials that included
ICMS, the participant often reported why he was unsuccessful. For instance, he
might report that he was pinching too hard. Such reports occurred only when
ICMS feedback was provided. In the absence of ICMS feedback, the study par-
ticipant was unable to provide any information about what was wrong in an
incorrect trial. This suggests the feedback was accurately relaying both intensity
and location of object contact in such a way that was easily interpretable to the user
with no training. The inability to correct these errors, despite accurate knowledge of
what needed to occur, may reflect the short duration of the trials (7 s) or an issue
with the decoding performance of the controller.

5 Conclusions

We have shown that intracortical microstimulation in the hand area of primary
somatosensory cortex provides intuitive feedback about the intensity of force
applied by each finger individually. This feedback enabled a user to improve his
ability to provide metered force to an object with different finger combinations. This
proof-of-concept is an important first step in the development of bidirectional
neuroprosthetic arms for people with paralysis to allow them to have more natural
interactions with their environment and ultimately increase their independence by
enabling them to complete a wide variety of tasks without assistance.

Acknowledgements This study was funded by the Defense Advanced Research Projects
Agency’s (Arlington, VA, USA) Revolutionizing Prosthetics program (contract number
N66001-10-C-4056) and Office of Research and Development, Rehabilitation Research and
Development Service, Department of Veterans Affairs (Washington DC, USA, grant numbers
B6789C, B7143R, and RX720). S.N.F. was supported by the National Science Foundation
Graduate Research Fellowship under Grant No DGE-1247842.

References

1. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ,
McMorland AJ, Velliste M, Boninger ML, Schwartz AB (2012) High-performance
neuroprosthetic control by an individual with tetraplegia. Lancet. doi:10.1016/S0140-6736
(12)61816-9
2. Wodlinger B, Downey JE, Tyler-Kabara EC, Schwartz AB, Boninger ML, Collinger JL
(2015) Ten-dimensional anthropomorphic arm control in a human brain-machine interface:
difficulties, solutions, and limitations. J Neural Eng 12:016011
3. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A,
Chen D, Penn RD, Donoghue JP (2006) Neuronal ensemble control of prosthetic devices by a
human with tetraplegia. Nature 442(7099):164–171
4. Collinger JL, Boninger ML, Bruns TM, Curley K, Wang W, Weber DJ (2013) Functional
priorities, assistive technology, and brain-computer interfaces after spinal cord injury.
J Rehabil Res Dev 50(2):145

5. Rothwell JC, Traub MM, Day BL, Obeso JA, Thomas PK, Marsden CD (1982) Manual motor
performance in a deafferented man. Brain 105(Pt 3):515–542
6. Ghez C, Gordon J, Ghilardi MF (1995) Impairments of reaching movements in patients
without proprioception. II. Effects of visual information on accuracy. J Neurophysiol 73:
361–372
7. Sainburg RL, Poizner H, Ghez C (1993) Loss of proprioception produces deficits in interjoint
coordination. J Neurophysiol 70:2136–2147
8. Johansson RS, Hger C, Bäckström L (1992) Somatosensory control of precision grip during
unpredictable pulling loads. III. Impairments during digital anesthesia. Exp Brain Res
89:204–213
9. Jenmalm P, Johansson RS (1997) Visual and somatosensory information about object shape
control manipulative fingertip forces. J Neurosci 17:4486–4499
10. Monzée J, Lamarre Y, Smith AM (2003) The effects of digital anesthesia on force control
using a precision grip. J Neurophysiol 89:672–683
11. Chen KH, Dammann JF, Boback JL, Tenore FV, Otto KJ, Gaunt RA, Bensmaia SJ (2014)
The effect of chronic intracortical microstimulation on the electrode-tissue interface. J Neural
Eng 11:026004; Kim S, Callier T, Tabot GA, Gaunt RA, Tenore FV, Bensmaia SJ (2015)
Behavioral assessment of sensitivity to intracortical microstimulation of primate somatosen-
sory cortex. Proc Natl Acad Sci USA 201509265
12. Dadarlat MC, O’Doherty JE, Sabes PN (2014) A learning-based approach to artificial sensory
feedback leads to optimal integration. Nat Neurosci. doi:10.1038/nn.3883
13. Romo R, Hernández A, Zainos A, Salinas E (1998) Somatosensory discrimination based on
cortical microstimulation. Nature 392:387–390
14. O’Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MAL (2011)
Active tactile exploration using a brain-machine-brain interface. Nature 479:228–231
15. Flesher SN, Collinger JL, Foldes ST, Weiss JM, Downey JE, Tyler-Kabara EC, Bensmaia SJ,
Schwartz AB, Boninger ML, Gaunt, RA (2016) Intracortical microstimulation of human
somatosensory cortex. Sci Transl Med 8(361):361ra141–361ra141
16. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB (2008) Cortical control of a
prosthetic arm for self-feeding. Nature 453(7198):1098–1101
17. Tabot GA, Dammann JF, Berg JA, Tenore FV, Boback JL, Vogelstein RJ, Bensmaia SJ
(2013) Restoring the sense of touch with a prosthetic hand through a brain interface. Proc Natl
Acad Sci USA 110(45):18279–18284
18. Saal HP, Harvey MA, Bensmaia SJ (2015) Rate and timing of cortical responses driven by
separate sensory channels. Elife 4:e10450
A Minimally Invasive Endovascular
Stent-Electrode Array for Chronic
Recordings of Cortical Neural Activity

Thomas J. Oxley, Nicholas L. Opie, Sam E. John, Gil S. Rind,


Stephen M. Ronayne, Anthony N. Burkitt, David B. Grayden,
Clive N. May and Terence J. O’Brien

1 Introduction

Cross-disciplinary collaboration is often heralded as the way forward when exploring alternative means of tackling complex biological disorders. The field of neural bionics, which merges engineering and medicine, epitomizes this: it has made significant advances where conventional methods have failed to address physical disability, and it has captured the public imagination in the process. The development of chronic recording devices has driven progress across the field, notably in movement disorders [1], seizure monitoring and prediction [2, 3], willful control of prosthetics [4, 5], and restorative devices for the auditory [6] and visual [7] senses. Behind every BCI control system lies the heart of its technology: the interface. Chronic interfaces can take the form of scalp electrodes or of epidural, subdural, and penetrating arrays. Scalp arrays are desirably non-invasive, but signal quality and positional stability are their pitfalls. Penetrating arrays provide superior spatial resolution but must breach the blood-brain barrier, as can surface arrays. Opening this membrane causes chronic inflammation and glial scarring, which in turn affects the device itself, with a gradual reduction

T.J. Oxley (&)  N.L. Opie  S.E. John  G.S. Rind  S.M. Ronayne
Vascular Bionics Laboratory, Departments of Medicine and Neurology,
Melbourne Brain Centre, The Royal Melbourne Hospital,
The University of Melbourne, Parkville, VIC, Australia
e-mail: thomas.oxley@unimelb.edu.au
T.J. Oxley  N.L. Opie  S.E. John  G.S. Rind  S.M. Ronayne  C.N. May
The Florey Institute of Neuroscience and Mental Health,
The University of Melbourne, Parkville, VIC, Australia
S.E. John  A.N. Burkitt  D.B. Grayden  T.J. O’Brien
The Department of Electrical and Electronic Engineering,
The University of Melbourne, Parkville, VIC, Australia

© The Author(s) 2017 55


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_6

in the number of viable electrodes for signal acquisition. Placement of intracranial arrays carries further complexity, as it requires an intricate craniotomy. The Vascular Bionics Laboratory therefore proposed a device that could minimise the invasiveness of electrode array placement and circumvent the long-term issues associated with crossing the blood-brain barrier. The results of the feasibility study were published in Nature Biotechnology in March 2016 [8]. This chapter outlines the major outcomes of that paper.

2 Mapping Cerebral Vessels

Yanagisawa et al. have demonstrated the potential for information-rich recordings acquired from the anterior bank of the central sulcus in the motor area [9]. As such, the central sulcus became an appropriate target for the deployment of a BCI. The first question for an endovascular approach was the accessibility of suitable cerebral vessels, so a study was conducted on the vasculature in close proximity to the sensorimotor cortex. Magnetic resonance imaging (MRI) was used to identify the venous anatomy surrounding the sensorimotor cortex in human subjects (n = 50, mean age 34.5 years, range 18–73 years). Four interconnected structures were identified in the immediate surrounds: the superior sagittal sinus (SSS), the precentral sulcal vein (preCSV), the central sulcal vein (CSV), and the postcentral sulcal vein (postCSV). Veins were categorized as pre- or post-CSV by their position relative to the central sulcal vein, and vessel diameters were characterised at set points.
The study yielded the results shown in Table 1. A comparison with the venous structures of the sheep brain (n = 13, mean age 4.3 years, range 2.5–5 years) revealed the ovine superior sagittal sinus to be an apt correlate of the human vessels on which to base an animal feasibility study (Figs. 1 and 2).

Table 1 Comparison between the relevant vessel diameters of the ovine and human brain

Human vasculature   Proximal diameter (5 mm)   Mid diameter (40 mm)   Distal diameter (80 mm)
Pre CSV             4.8 mm (4.8–3.3 mm)        3.3 mm (3.6–8.5 mm)    2.3 mm (3.4–6.1 mm)
CSV                 4.9 mm (2.2–4.6 mm)        3.1 mm (2.2–4.5 mm)    2.3 mm (2.5–5.3 mm)
Post CSV            4.8 mm (1.7–3.7 mm)        3.5 mm (1.6–4.5 mm)    2.7 mm (1.8–5 mm)

Ovine vasculature   Proximal diameter (0 mm)   Mid diameter (30 mm)   Distal diameter (60 mm)
SSS                 2.4 mm (1.6–1.8 mm)        1.7 mm (1–1.5 mm)      1.1 mm (1–1.2 mm)

Fig. 1 MRI-based reconstruction of the ovine brain with the overlying superior sagittal sinus and branching vessels

Fig. 2 Device Design and Delivery. (Top) Images depicting the deployment action of the interface
from within a 1.1 mm lumen catheter. (Bottom) An X-ray image showing the implanted device

3 Device Design and Delivery

Following the determination of a suitable animal vessel in which to test the feasibility of an endovascular recording device, attention turned towards creating a system that could access it. A co-axial catheter system was used to achieve SSS entry via a vascular puncture in the external jugular vein in the neck, with X-ray angiography employed to navigate the system to the sinus. Early experiments showed that a 4F catheter was the largest catheter that could reliably access the ovine sinus without causing vessel damage. To navigate through a catheter to the desired location, the interface had to fit inside a 1.1 mm lumen catheter (044 DAC, Concentric Medical). It then needed to expand from within it with adequate radial force to create an appropriate vessel wall apposition to allow recording. To meet this design requirement, the concept of a stent-like scaffold was developed. Precedent exists for this technology, as intracranial stents are used to alleviate both arterial and venous complications [10, 11]. Initial interface prototypes were constructed using commercially available, self-expanding, Nitinol stent retrievers (Solitaire SAB, Covidien). Platinum disc electrodes were mounted to the Nitinol scaffold, with a trailing lead to transmit the acquired signal out of the vessel to a percutaneous connector (Micro plastic series, Omnetics), which exited the skin above the sternocleidomastoid. This design allowed device connection at the convenience of the researcher.

4 Sinus Endothelialisation

With the device in situ, the next logical step was to explore its interaction with the vessel wall. Incorporation has been demonstrated for arterial stenting, with endothelialisation occurring in as little as a week [12]; cerebral venous stenting has not yet been as widely characterised. Endothelialisation of the recording head removes the structure from direct interaction with the blood flow. Other potential benefits are increased proximity to the neuronal population and positional stability of the device, inhibiting migration. An exploratory study was undertaken to assess the extent of endothelial growth and its effect on the electrode interface properties. To quantify growth, Synchrotron X-ray imaging was used to measure the distances between the scaffold struts and the vessel lumen. The results show that the struts move away from the lumen and deeper into the vessel in a relatively short time, as can be seen in Table 2 and Fig. 3.
Prior to culling, impedance measurements were taken daily for 2 weeks and weekly thereafter. Significant changes (p < 0.0001) were noted at 100 Hz in both impedance magnitude and phase angle within the first 6 days, indicative of biological activity at the tissue-electrode interface. Further changes from day 8 to day 28 showed no significant effects (p > 0.619). These results reinforce the

Table 2 Scaffold strut to lumen distance showing vessel incorporation over time

Time point   Strut to lumen distance (mean ± SEM)
Day 1        21 µm ± 8 µm (n = 97 struts, 2 subjects)
3 weeks      309 µm ± 22 µm (n = 89 struts, 4 subjects)
4 months     320 µm ± 22 µm (n = 72 struts, 4 subjects)

Fig. 3 Synchrotron imaging of the superior surface of the brain and sinus showing both vessel patency and device incorporation

Synchrotron data, suggesting that incorporation of the device begins almost immediately after implantation and that interface stability is reached as early as the first week.
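The impedance trend described above, rapid change over the first days followed by a plateau, can be screened for programmatically. The following sketch is purely illustrative: the 5% tolerance and the daily impedance values are invented, not the study's data.

```python
import numpy as np

def stabilization_day(impedance_kohm, tol=0.05):
    """Return the first day after which the day-to-day relative change
    in impedance magnitude stays within `tol` (e.g. 5%)."""
    z = np.asarray(impedance_kohm, dtype=float)
    rel_change = np.abs(np.diff(z)) / z[:-1]
    for day in range(len(rel_change)):
        if np.all(rel_change[day:] < tol):
            return day + 1  # first day of the stable plateau
    return None

# Hypothetical daily 100 Hz impedance magnitudes (kOhm): a sharp early
# rise as tissue encapsulates the electrodes, then a plateau.
daily_z = [12.0, 15.0, 19.0, 24.0, 27.0, 28.0, 28.4, 28.6, 28.5, 28.7]
print(stabilization_day(daily_z))  # → 5
```

On this toy series the change settles below 5% from day 5 onward, mirroring the reported stabilisation within the first week.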

5 Chronic Vessel Patency

The assumption of vascular incorporation also brought forth the question of vas-
cular occlusion. A study was undertaken to assess chronic sinus patency to ensure
venous drainage was maintained following implantation. Implanted animals (n =
20) were administered with 100 mg Aspirin daily as antiplatelet therapy for the
duration of the study. A 3-day loading dose period was provided prior to implan-
tation. Repeated lumen diameter measurements were taken up to a 12-week time
point after which the animals were culled and ex vivo brain samples were
Synchrotron imaged Fig. 4.
• Synchrotron imaging confirmed patency of the SSS in all animals at the loci of
stentrode implantation
• Imaging analysis (n = 78 slices) on 4 subjects with implantation periods longer
than 20 weeks had an observed SSS median lumen diameter of 4.77 mm2
(2.19–6.03 mm2)
• Cortical veins entering the sinus demonstrated mild obstruction following
implantation with
– 92% (11/12) patency after 2 weeks
– 63% (5/8) patency after 3 months

Fig. 4 Plot of anaesthesia induced neural signal modulation transitioning between deep and light
states (left to right). The colour key denotes Minimum Alveolar Concentration (MAC)

• No animals presented with symptomatic behavioural indicators during the study.

6 Vascular ECoG

Recording via a cerebral blood vessel has been described previously; however, those studies lasted no more than a few hours in highly controlled environments [13–18]. We took several approaches to verify the device's capacity to record cortical signals via the vasculature.
Somatosensory evoked potentials (SSEPs) were elicited using direct tibial nerve stimulation, and recordings were taken over a 28-day period.
• SSEPs were detectable in 98% of all functional channels.
• Peak-to-peak amplitudes showed no significant change over this period (p = 0.42, n = 703, linear regression model), indicating stability during recording.
• Consistent with the endothelialisation and impedance data, SSEP detection improved in the initial days following implantation, i.e., the number of channels detecting SSEP signal increased, possibly due to interface changes:
– Day 1: 50% (25–100%), n = 62 channels, 5 subjects
– Day 2: 79% (62–96%), n = 44 channels, 5 subjects
– Day 4: 92% (77–100%), n = 34 channels, 5 subjects
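A common way to support an amplitude-stability claim of this kind is to regress peak-to-peak amplitude against time and check that the slope is statistically indistinguishable from zero. Below is a numpy-only sketch on synthetic data; the amplitudes, channel count, and noise level are assumptions for illustration, not the study's recordings.

```python
import numpy as np

def slope_with_t(x, y):
    """Ordinary least-squares slope of y on x and its t statistic."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm = x - x.mean()
    sxx = np.sum(xm ** 2)
    slope = np.sum(xm * (y - y.mean())) / sxx
    resid = y - y.mean() - slope * xm
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / sxx)
    return slope, slope / se

rng = np.random.default_rng(42)
days = np.repeat(np.arange(1, 29), 25)        # ~700 samples: 25 channels/day
amps = 40 + rng.normal(0, 5, size=days.size)  # stable amplitude + noise

slope, t = slope_with_t(days, amps)
# |t| < 1.96 corresponds to p > 0.05 for a sample this large, i.e. no
# detectable drift -- analogous to the reported p = 0.42, n = 703.
print(f"slope = {slope:.3f} uV/day, |t| = {abs(t):.2f}")
```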

6.1 Modulatory Effects of Anaesthesia

Comparisons were made between states of deep and light anaesthesia on day 0 and 1 month following implantation. Anaesthesia has been shown by Lukatch et al. to induce theta burst suppression within neural activity [19]. Duration of implantation had a significant effect (F(1,8) = 12.2, p = 0.008, n = 5, two-way ANOVA), with a larger burst-suppression ratio detected at the 1-month time point:
• Day 0: 0.12 ± 0.05 (mean ± SEM)
• Day 28: 0.51 ± 0.07
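The burst-suppression ratio used here is the fraction of recording time spent in suppression. A minimal sketch on a synthetic trace follows; the amplitude threshold, minimum suppression duration, and the signal itself are illustrative assumptions, not the study's analysis settings.

```python
import numpy as np

def burst_suppression_ratio(eeg, fs, thresh_uv=10.0, min_len_s=0.5):
    """Fraction of samples inside suppression epochs: stretches where the
    rectified signal stays below `thresh_uv` for at least `min_len_s`."""
    suppressed = np.abs(eeg) < thresh_uv
    min_len = int(min_len_s * fs)
    total = run = 0
    for s in suppressed:
        if s:
            run += 1
        else:
            if run >= min_len:
                total += run
            run = 0
    if run >= min_len:  # close a run that reaches the end of the trace
        total += run
    return total / len(eeg)

fs = 250
t = np.arange(0, 10, 1 / fs)
eeg = 50 * np.sin(2 * np.pi * 8 * t)  # 5 s of bursting activity...
eeg[5 * fs:] = 2.0                    # ...then 5 s of near-flat suppression
bsr = burst_suppression_ratio(eeg, fs)
print(round(bsr, 2))  # → 0.5: half the trace is suppressed
```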

6.2 Bandwidth and Power Spectra

Recordings were taken from freely moving animals implanted with an endovascular array together with epidural and subdural surface recording arrays. Power spectra and bandwidth were assessed in the neural signals of subjects in a physically relaxed state. All three devices showed the characteristic 1/f decrease in power typically associated with neural signals. The subdural array outperformed the endovascular device on average power in the mid-to-upper gamma bands, with no significant differences in the lower bands. There was no significant difference between the endovascular device and the epidural array across the relevant spectra. This finding suggests that any signal attenuation caused by the dura was not furthered by the vessel itself, and that the endovascular array, even in prototype form, can match the performance of an epidural surface array.
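The 1/f observation can be checked by comparing band-averaged spectral power at low versus high frequencies. The sketch below is numpy-only and runs on a synthetic 1/f-like surrogate (Brownian noise); the sampling rate, bands, and segment length are illustrative assumptions, not the study's analysis parameters.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi, nperseg=512):
    """Welch-style average periodogram power in [f_lo, f_hi),
    using non-overlapping Hann-windowed segments."""
    nseg = len(x) // nperseg
    segs = x[:nseg * nperseg].reshape(nseg, nperseg) * np.hanning(nperseg)
    psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

fs = 1000
rng = np.random.default_rng(1)
sig = np.cumsum(rng.normal(size=30 * fs))  # Brownian noise: power ~ 1/f^2

delta = band_power(sig, fs, 1, 4)
gamma = band_power(sig, fs, 30, 80)
print(delta > gamma)  # → True: power falls steeply with frequency
```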

7 Discussion and Long Term Perspectives

BCI research has shown enormous potential in a wide range of applications, and public awareness of BCIs is growing, along with curiosity and enthusiasm. As a medical aid, BCIs can provide fundamental utility to those who have lost, or were born without, an ability. Beyond medical aids, their implications are limited only by the imagination.
The key to pushing this technology forward is to combine the safest possible surgical delivery with long-term biological and functional stability. Minimising clinical complications and ensuring long-term reliability and ease of use for patients, surgeons, and researchers will be vital for the growth of a reputable technology. The Stentrode™ system in its initial form is being investigated for use in patients with severe paralysis, but it is also being considered as a platform technology with the potential to benefit numerous clinical indications. The group is presently working towards a first-in-human trial with a highly developed, fully implantable system, and successful outcomes will be used to drive future clinical translation.

Acknowledgements The Vascular Bionics Laboratory would like to acknowledge all participants
and contributors to our work thus far. In particular, we would like to recognise the input of
The University of Melbourne
• Dept. of Medicine
• Dept. of Electrical and Electronic Engineering
The Florey Institute of Neuroscience and Mental Health
The Royal Melbourne Hospital

References

1. Deuschl G, Schade-Brittinger C, Krack P et al (2006) A randomized trial of deep-brain stimulation for Parkinson's disease. N Engl J Med 355:896–908. doi:10.1056/NEJMoa060281
2. Cook MJ, O’Brien TJ, Berkovic SF et al (2013) Prediction of seizure likelihood with a
long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: A
first-in-man study. Lancet Neurol 12:563–571. doi:10.1016/S1474-4422(13)70075-9
3. Morrell MJ (2011) Responsive cortical stimulation for the treatment of medically intractable
partial epilepsy. Neurology 77:1295–1304. doi:10.1212/WNL.0b013e3182302056
4. Hochberg LR, Serruya MD, Friehs GM et al (2006) Neuronal ensemble control of prosthetic
devices by a human with tetraplegia. Nature 442:164–171. doi:10.1038/nature04970
5. Yanagisawa T, Hirata M, Saitoh Y et al (2012) Electrocorticographic control of a prosthetic
arm in paralyzed patients. Ann Neurol 71:353–361. doi:10.1002/ana.22613
6. Wilson BS, Finley CC, Lawson DT et al (1991) Better speech recognition with cochlear
implants. Nature 352:236–238. doi:10.1038/352236a0
7. Weiland JD, Cho AK, Humayun MS (2011) Retinal prostheses: current clinical results and future needs. Ophthalmology 118:2227–2237. doi:10.1016/j.ophtha.2011.08.042
8. Oxley TJ, Opie NL, John SE et al (2016) Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity. Nat Biotechnol 34:320–327. doi:10.1038/nbt.3428
9. Yanagisawa T, Hirata M, Saitoh Y et al (2009) Neural decoding using gyral and intrasulcal
electrocorticograms. Neuroimage 45:1099–1106. doi:10.1016/j.neuroimage.2008.12.069
10. Chimowitz MI, Lynn MJ, Derdeyn CP et al (2011) Stenting versus aggressive medical therapy for intracranial arterial stenosis. N Engl J Med 365:993–1003. doi:10.1056/NEJMoa1105335
11. Puffer RC, Mustafa W, Lanzino G (2013) Venous sinus stenting for idiopathic intracranial
hypertension: a review of the literature. J Neurointerv Surg 5:483–486. doi:10.1136/
neurintsurg-2012-010468
12. van der Giessen WJ, Serruys PW, van Beusekom HM et al (1991) Coronary stenting with a new, radiopaque, balloon-expandable endoprosthesis in pigs. Circulation 83:1788–1798. http://circ.ahajournals.org/content/83/5/1788.abstract
13. Watanabe H, Takahashi H, Nakao M et al (2009) Intravascular neural interface with nanowire
electrode. Electron Commun Japan 92:29–37. doi:10.1002/ecj.10058
14. Mikuni N, Ikeda A, Murao K et al (1997) ‘Cavernous Sinus EEG’: A new method for the
preoperative evaluation of temporal lobe epilepsy. Epilepsia 38:472–482. doi:10.1111/j.1528-
1157.1997.tb01738.x
15. Bower MR, Stead M, Van Gompel JJ et al (2013) Intravenous recording of intracranial,
broadband EEG. J Neurosci Methods 214:21–26. doi:10.1016/j.jneumeth.2012.12.027
16. Boniface SJ, Antoun N (1997) Endovascular electroencephalography: the technique and its
application during carotid amytal assessment. J Neurol Neurosurg Psychiatry 62:193–195

17. Penn RD, Hilal SK, Michelsen WJ et al (1973) Intravascular intracranial EEG recording -
technical note. J Neurosurg 38:239–243. doi:10.3171/jns.1973.38.2.0239
18. Driller J, Hilal SK, Michelsen WJ et al (1969) Development and use of the POD catheter in the cerebral vascular system. Med Res Eng 8:11–16. http://europepmc.org/abstract/MED/5823257
19. Lukatch HS, Kiddoo CE, Maciver MB (2005) Anesthetic-induced burst suppression EEG
activity requires glutamate-mediated excitatory synaptic transmission. Cereb Cortex 15:1322–
1331. doi:10.1093/cercor/bhi015
Visual Cue-Guided Rat Cyborg

Yueming Wang, Minlong Lu, Zhaohui Wu, Xiaoxiang Zheng


and Gang Pan

1 Introduction

An animal robot is an animal connected to a machine system, usually via a brain-computer interface (BCI) [1, 2]. The BCI is combined with a device that delivers electrical stimuli to specific brain areas via implanted electrodes, thereby driving the animal to take actions specified by humans [3]. In particular, animal robots can be controlled by humans to navigate along a specified path. Because of the distinctive motion and perceptual abilities of animals [4], animal robots have great potential for use in search and rescue applications [5].
A rat robot is a typical animal robot [6] that can navigate along a human-specified route. A major disadvantage is that a human operator must identify the arrangement of objects in the environment before giving appropriate instructions to

This is a brief version of the published article in IEEE Computational Intelligence Magazine,
2015 [14].

Y. Wang (&)  X. Zheng


Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
e-mail: ymingwang@zju.edu.cn
X. Zheng
e-mail: zxx@zju.edu.cn
M. Lu  Z. Wu  G. Pan
College of Computer Science, Zhejiang University, Hangzhou, China
e-mail: ymlml@zju.edu.cn
Z. Wu
e-mail: wzh@zju.edu.cn
G. Pan
e-mail: gpan@zju.edu.cn

© The Author(s) 2017 65


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_7

facilitate navigation in the environment. This limits the possible applications of rat robots to environments that humans can observe. In some applications, only a few objects are of interest to a rat robot, such as human faces or indication signs. If the rat robot system can find these objects, and a motion action is specified for each object, the rat robot could perform human-specified navigation automatically. In this preliminary study, we attempt to address this problem. We construct a rat robot in which the rat can accept stimuli and perform a few basic actions, such as turning left, turning right, and walking forward. Our novel system makes two major contributions, as follows.
(1) To allow the rat robot to find "human-interesting" objects, i.e., objects that easily attract human attention, such as faces and indication signs, a miniature camera is mounted on the back of the rat robot and the video captured by the camera is transferred to a computer. Interesting objects in the video, such as human faces and arrow signs, are then identified by object detection algorithms, and the detection results are used to control the rat robot.
(2) To allow the rat robot to navigate automatically while being guided by the identified objects/cues, we develop a stimulation model that drives the rat robot to perform a unique motion action in response to the detection of an object. A problem with automatic control is that a single stimulus, e.g., a stimulus for turning left, does not guarantee a successful left-turning motion. Humans usually give the rat a series of stimuli for this purpose, according to the state of the rat and the objects in front of it. Inspired by this manual control process, we develop a closed-loop stimulation model that mimics the human control procedure, issuing a stimulus sequence automatically according to the state of the rat and the objects detected, until the rat completes the motion successfully.
We refer to our system as a rat cyborg. We evaluated the key features of our rat cyborg through extensive experiments, which demonstrate that it can achieve successful visual cue-guided automatic navigation. The present study is a neuroengineering effort [7] and a preliminary study in the new field of "cyborg intelligence," i.e., combining biological intelligence and machine intelligence to obtain more powerful capabilities in a single system [8–13].

2 Overview

Figure 1 shows the three main components of our rat cyborg: implanted electrodes, a rat-mounted pack, and a computational component.
The electrodes are implanted in specific regions of the rat's brain, and electrical stimuli can be delivered to the brain through them. The stimuli control the rat cyborg's turning-left, turning-right, and moving-forward behaviors.

Fig. 1 Three main components of our rat cyborg system. The electrode picture is taken under a
microscope. The rat-mounted pack includes a miniature camera, a wireless module, and a
stimulator

The rat-mounted pack consists of the following components (Fig. 1).
• A stimulator that generates the electrical stimuli delivered to the rat's brain via the implanted electrodes.
• A miniature camera that captures real-time video of the scene in front of the rat. The camera measures 20 mm × 8 mm × 1 mm and its optical axis points in the same direction as the rat's head.
• A wireless module that receives stimulus instructions from a PC and sends video from the miniature camera to the computational component on the PC, where interesting objects are identified. Thus, the rat-mounted pack includes an instruction receiver and a video transmitter.
The computational component comprises object detection algorithms and a closed-loop stimulation model. Detection algorithms search for faces and colored objects in the video data transferred from the rat-mounted pack. Based on the results, the closed-loop stimulation model estimates the rat's motion state and determines the stimulus sequence sent to the rat-mounted pack to stimulate the rat to take the correct actions.
All of the rats used in these experiments were well cared for by the animal keepers. All of the experiments were performed in accordance with the guidelines issued by the Ethics Committee of Zhejiang University and complied with the China Ministry of Health Guide for the Care and Use of Laboratory Animals.

3 Basic Rat Robot

First, we construct a basic rat robot system, as described in [5]. The rat robot can
perform navigation tasks under manual control using the basic instructions: “left,”
“right,” and “forward.” The behavior of the rat robot is controlled by the implanted

electrodes and the electrical stimuli sent to the rat brain. In this section, we describe
the basic principles of the rat robot system, including the underlying mechanism
that connects electrical stimulation with the rat’s actions, the hardware, and the rat
robot training procedure.

3.1 Stimulation-Action Principles

Electrical stimuli can be delivered to specific brain regions as rewards [15, 16] and as steering cues [17] to control rat behavior. The medial forebrain bundle (MFB) in the rat's brain is known as a pleasure center, so electrical stimuli applied to the MFB can serve as rewards [15]. Applying a stimulus to the MFB increases the level of dopamine (a neurotransmitter with an important role in reward-motivated behavior) in the rat's brain [15, 18], thereby motivating its motion and reinforcing its behavior [6]. Stimulation of the primary somatosensory cortex (SI) can be used as a steering cue [19]. Rats use their vibrissae (whiskers) to sense object surfaces while exploring the environment, and the whisker barrel fields in the SI receive projections from the contralateral facial vibrissae. A stimulus on one side of the SI is therefore perceived as a virtual touch on the contralateral vibrissae, which makes the rat perform a turn [6]. Thus, three pairs of electrodes are implanted in the rat's brain: one pair is placed in the MFB and the other two are implanted symmetrically in the whisker barrel fields of the left and right SI.

3.2 Hardware Modules

The rat robot system contains two hardware modules/circuits: a stimulator and a wireless module.
The stimulator circuit generates stimulation pulses. The size of the circuit is minimized by using surface-mounted devices. The main processor in the stimulator is a mixed-signal ISP flash MCU (C8051F020), characterized by high speed, small size, and low power consumption, which makes it suitable for the small rat-mounted pack. This processor has two 12-bit digital-to-analog converters (DACs) that produce outputs for jitter-free waveform generation. The electrical stimulation pulses exported from the two DACs of the C8051F020 MCU control a constant-voltage driver circuit and a constant-current driver circuit, thereby producing a monopolar pulse. The constant-voltage/current pulses pass through three analog switches and are then delivered to the implanted electrodes.
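To make the pulse-generation description concrete, the sketch below builds the sample buffer for a monopolar constant-current pulse train. All parameters (amplitude, pulse width, rate, sampling rate) are invented examples, not the system's actual stimulation settings.

```python
import numpy as np

def monopolar_pulse_train(fs, amp_ua, width_us, rate_hz, duration_s):
    """Sample buffer a DAC-driven current source could output:
    `amp_ua` during each pulse, 0 uA between pulses."""
    n = int(fs * duration_s)
    wave = np.zeros(n)
    period = int(fs / rate_hz)                 # samples between pulse onsets
    width = max(1, int(fs * width_us * 1e-6))  # samples per pulse
    for start in range(0, n, period):
        wave[start:start + width] = amp_ua
    return wave

# 500 uA, 1 ms pulses at 50 Hz for 0.2 s, sampled at 20 kHz:
train = monopolar_pulse_train(fs=20_000, amp_ua=500, width_us=1000,
                              rate_hz=50, duration_s=0.2)
print(int(np.count_nonzero(train)))  # → 200 (10 pulses x 20 samples)
```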
The wireless module contains an instruction receiver, which receives the manual
control instructions sent from the computer, and a transmitter, which sends the
video from the camera to the computer to search for interesting objects (see Sect. 4
for details).

4 Rat Cyborg

We construct our rat cyborg system on top of the basic rat robot. We intend for the rat cyborg to find objects that guide its motion, so that it performs navigation automatically. Thus, we mount a miniature camera on the back of the rat and send the captured video to the computer, and we develop two object detection algorithms to search the video for interesting objects, i.e., faces and colored objects.
Furthermore, we want the rat cyborg to perform a unique motion in response to each object found, allowing it to achieve human-specified navigation automatically, guided by the objects of interest. However, it is not sufficient to simply link an object to a single instruction, i.e., "left," "right," or "forward," because the actual motion performed by the rat depends on its current state and its response delay, as well as on the instruction. In manual control, humans usually observe the state and responses of the rat before giving a group of instructions to ensure that the rat achieves the motion successfully. Thus, inspired by the human control procedure, we develop a closed-loop stimulation model to estimate the rat's motion state and determine the stimulus sequence that allows the rat cyborg to achieve the corresponding action.

4.1 Object Detection

In this study, we require the rat cyborg to find two common types of object: colored objects and human faces.
In colored object detection, a specific color is treated as a random variable c that follows a single Gaussian distribution, c ~ N(μ, Σ), where c = (R, G, B)^T is the color vector, and μ and Σ are the mean vector and the covariance matrix of the distribution, respectively. The parameters for a specific color are estimated from a group of natural training images by:

$$\mu = \frac{1}{n}\sum_{j=1}^{n} c_j, \qquad \Sigma = \frac{1}{n-1}\sum_{j=1}^{n} (c_j - \mu)(c_j - \mu)^T, \tag{1}$$

where n is the total number of color training samples c_j. The probability of a pixel with color vector x belonging to the specific color can be computed as:

$$p(x) = \frac{1}{(2\pi)^{1/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)}. \tag{2}$$

During detection, if p(x) is greater than a threshold T_c, the pixel is considered to be of this color. When the area of the bounding box of connected pixels in a frame exceeds a threshold T_a, the object is considered detected. In our experiments, T_c is set to 0.9 and T_a is set to 25 × 25 pixels, which gives satisfactory performance.
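Equations (1) and (2) translate directly into code. The sketch below fits the color model to invented "reddish" training samples and scores two pixels; note the normaliser is written as the full 3-D Gaussian constant (2π)^{3/2}|Σ|^{1/2}, a fixed-factor difference from the text's expression that the threshold T_c can absorb.

```python
import numpy as np

def fit_color_model(samples):
    """Eq. (1): mean vector and covariance matrix of RGB samples (n x 3)."""
    mu = samples.mean(axis=0)
    sigma = np.cov(samples, rowvar=False)  # applies the 1/(n-1) factor
    return mu, sigma

def color_probability(pixels, mu, sigma):
    """Eq. (2): Gaussian density of each pixel (m x 3) under the model."""
    d = pixels - mu
    maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(sigma), d)
    norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(sigma))
    return np.exp(-0.5 * maha) / norm

rng = np.random.default_rng(0)
train = rng.normal([200.0, 30.0, 30.0], 10.0, size=(500, 3))  # reddish pixels
mu, sigma = fit_color_model(train)
p = color_probability(np.array([[205.0, 28.0, 33.0],    # close to the model
                                [30.0, 200.0, 40.0]]),  # far from it
                      mu, sigma)
print(p[0] > p[1])  # → True
```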
For face detection, we develop a modified version of the fast face detection method called soft cascade [20], which employs real AdaBoost as the learning algorithm. The classifier used by soft cascade is:

$$H_T(x) = \sum_{i=1}^{T} h_i(x), \tag{3}$$

where x is a test sample and h_i(x) denotes a weak classifier. Given a set of rejection thresholds {c_1, c_2, ..., c_T}, x is accepted as a face if and only if every partial sum satisfies H_t(x) > c_t. This cascade structure makes the detector fast. Haar features are used in the detector, and a stump method is used to train the weak classifiers [20]. The detector is trained on an image set containing more than 20,000 face images and 100,000 non-face images, each measuring 10 × 10 pixels; the face images were collected from the Internet.
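The early-rejection behaviour implied by the thresholds c_t can be sketched in a few lines; the weak-classifier scores and thresholds below are invented for illustration.

```python
def soft_cascade_accept(weak_scores, rejection_thresholds):
    """Accumulate weak-classifier scores h_i(x) and accept the window only
    if every partial sum H_t(x) exceeds its threshold c_t (Eq. 3)."""
    partial = 0.0
    for h, c in zip(weak_scores, rejection_thresholds):
        partial += h
        if partial <= c:
            return False  # rejected early -- this is what makes it fast
    return True

print(soft_cascade_accept([0.9, -0.2, 0.6], [0.5, 0.4, 1.0]))  # → True
print(soft_cascade_accept([0.9, -0.6, 0.6], [0.5, 0.4, 1.0]))  # → False
```

Most non-face windows fail an early threshold, so only a handful of weak classifiers need to be evaluated for them.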

4.2 Closed-Loop Stimulation Model

We want the rat cyborg to perform a unique motion when an object is found.
However, an explicit rule that simply issues a “left,” “right,” or “forward”
instruction when finding an object does not work well. This is because the action of
the rat depends on its current motion and its response delay, as well as on the
objects detected. For example, let us suppose that the rat reaches a junction and we
want it to take a left turn. One “left” instruction may be sufficient if its head is
pointing in the same direction as its body, whereas two “left” instructions would be
necessary if its head is currently pointing to the right of its body. In addition, to
obtain a successful and continuous motion, the “left” instruction should be followed
by a “forward” instruction if the rat’s head actually turns left. The triggering time
for the “forward” instruction depends on the response and the action delays of the
rat. Thus, during manual control, humans provide a series of instructions to the rat
cyborg based on observations of the rat’s states and the objects. Therefore, to
construct an automatic instruction control system, we develop a closed-loop stim-
ulation model, which learns the manual control process and imitates the human
instruction-issuing process to steer the rat cyborg automatically.
As shown in Fig. 2, the loop comprises a rat state extraction module and a
human-like instruction model. When an instruction is given, the program checks the
rat’s motion state to determine whether it has made the correct motion indicated by
the instruction. Based on the changes in the rat’s state and the object detection
results, the program uses the human-like instruction model to issue the next
instruction, which imitates the decisions made by humans when they observe
similar states.

Visual Cue-Guided Rat Cyborg 71

Fig. 2 Closed-loop stimulation model

The human-like instruction model is learned from a training dataset that
contains instruction sequences, state change data, and detected objects
collected during the manual control procedure. Next, the loop passes to the
next round and continues until the mission is complete.
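One round of this loop can be expressed compactly. This is a minimal illustration; the state representation (orientation plus a scalar motion component) and the model's call signature are our assumptions, not the authors' implementation.

```python
def closed_loop_step(prev_state, cur_state, expectation, instruction_model):
    """One round of the closed-loop stimulation model.

    prev_state, cur_state: (theta, v) pairs -- head orientation and a scalar
    summary of head motion, as extracted from the rat-mounted video.
    expectation: motion expectation from object detection
    (-1 left, 0 forward, 1 right).
    instruction_model: the learned human-like instruction model; given the
    state change and the expectation, it returns the next instruction.
    """
    delta_state = (cur_state[0] - prev_state[0], cur_state[1] - prev_state[1])
    return instruction_model(delta_state, expectation)
```

In the full system this step would run once per instruction, with the stimulator sending the returned instruction to the rat and the next video frames providing the following state.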
Rat State Extraction The rat’s motion state provides feedback for the closed-loop
stimulation model to allow appropriate instructions to be given. We define the rat
motion state as S = (θ, V), where V is the rat head motion direction and θ is the rat
head orientation. The motion states of the rat cyborg are extracted using the video
captured by the rat-mounted camera.
The rat head motion direction is estimated based on the average motion of the
feature points in two consecutive video frames. It should be noted that when the rat
head moves in one direction, the feature points in the video move in the opposite
direction. The main steps used to estimate the motion direction of the rat’s head
comprise feature detection, feature tracking, and direction computation.
• Feature detection: In this step, we initialize a set of feature points for tracking in
consecutive frames. We use the Harris corner detection method [21] to extract
corner features from the frame I(x, y, t). The autocorrelation matrix M is
computed from the image derivatives as follows:

    M = Σ_{x,y} w(x, y) [ I_x²      I_x·I_y ]
                        [ I_x·I_y   I_y²    ],                           (4)

where w(x, y) is a window function and I_x denotes the partial derivative of
the pixel value with respect to the x direction. The corner response is
defined as R = det(M) − k·trace(M)², where k is a constant, and det(·) and
trace(·) are the determinant and the trace of a matrix, respectively. The
Harris detector finds the points where the corner response function R is
greater than a threshold, and it takes the points with the local maxima of R
as the corner feature points.
• Feature tracking: The Lucas-Kanade method [22] is applied to compute the
  optical flow between consecutive frames, I(x, y, t) and I(x, y, t+1). We
  assume that u and v are the x and y components of the velocity of the corner
  feature (x, y). Thus, we have I_x·u + I_y·v + I_t = 0. This equation is
  computed over a 5-by-5 window around the pixel (x, y), thereby yielding the
  following overconstrained system:

Fig. 3 Rat head orientation estimation

    A [u v]^T = b,  where
        A = [ I_x(p_1)   I_y(p_1)
                  ⋮           ⋮
              I_x(p_25)  I_y(p_25) ],
        b = − [ I_t(p_1)  …  I_t(p_25) ]^T.                              (5)

  The solution is [u v]^T = (A^T A)^{−1} A^T b. The corresponding pixel in
  I(x, y, t+1) of the corner (x, y) is then found using u and v.
• Direction computation: The corner feature point (x_1, y_1) in image
  I(x, y, t) and its corresponding pixel (x'_1, y'_1) in the next frame
  I(x, y, t+1) form a vector v_1 = (a_1, b_1), which indicates the motion of
  the feature point between the two frames. The rat's head motion direction V
  is calculated as the opposite direction of the average feature point motion,
  i.e., V = −(1/n) Σ_{i=1}^{n} v_i, where n denotes the number of feature
  points.
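The per-corner least-squares solve and the averaging step can be sketched with NumPy. This is an illustrative reconstruction under our assumptions about array shapes, not the authors' code; note that b = −I_t follows from the constraint I_x·u + I_y·v + I_t = 0.

```python
import numpy as np

def lucas_kanade_uv(Ix, Iy, It):
    """Solve I_x*u + I_y*v + I_t = 0 over a 5x5 window by least squares,
    i.e. [u v]^T = (A^T A)^{-1} A^T b with b = -I_t.

    Ix, Iy, It: 5x5 arrays of spatial and temporal image derivatives
    sampled around one corner feature.
    """
    A = np.column_stack([Ix.ravel(), Iy.ravel()])   # 25 x 2 design matrix
    b = -It.ravel()                                 # 25-vector
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv                                       # flow vector (u, v)

def head_motion_direction(feature_motions):
    """V is the opposite of the average feature-point motion."""
    return -np.mean(np.asarray(feature_motions), axis=0)
```

With a pure translation of the window, the recovered (u, v) equals the true shift, and averaging over all tracked corners then gives the head direction estimate.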
For the rat head orientation, the line between the rat cyborg and the current
target is considered to be the reference direction. The rat's head orientation
is defined as the angle θ between the camera's optical axis and the reference
direction, as shown in Fig. 3. Assume that the position of the target in the
frame is d pixels from the center in the x direction. The rat's head
orientation θ is computed as θ = arctan(d/f), where f is the focal length. If
the rat cyborg deviates from the reference direction by a large distance, the
target will move outside the video frame. In this case, the distance d is
estimated from the last target offset distance d_old and the rat's head
motion, d = d_old − V_x, where V_x is the x component of V.
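In code, the orientation computation and the out-of-frame fallback might look as follows. This is a sketch: the function names are ours, and the sign convention of the fallback simply mirrors the formula above.

```python
import math

def head_orientation_deg(d_pixels, focal_length_pixels):
    """theta = arctan(d / f): angle between the camera's optical axis and
    the rat-to-target reference direction, in degrees."""
    return math.degrees(math.atan(d_pixels / focal_length_pixels))

def target_offset(d_measured, d_old, v_x):
    """Use the measured pixel offset while the target is in frame;
    otherwise fall back to d = d_old - V_x."""
    return d_measured if d_measured is not None else d_old - v_x
```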
Human-like Instruction Model In the closed-loop stimulation method, the
human-like instruction model issues an instruction, given a rat state and an object.
This method operates in a similar manner to humans when they encounter a similar
situation during the manual control of a rat cyborg.
In the manual control process, after the instruction Ci1 is issued to the rat
cyborg, its posture changes from the state Si1 to the next state Si . Next, humans
observe the change in the rat state and determine the current instruction Ci to adjust
the incorrect action or to reward the correct action. Thus, the state change is an
important factor when deciding the current instruction. In addition, the current
Visual Cue-Guided Rat Cyborg 73

object affects the selection of the instruction. Different objects are related to dif-
ferent motion expectations for the rat cyborg. Thus, different instructions will be
sent even if the same state change is observed for different objects. Therefore, the
current motion expectation is treated as another factor when deciding the current
instruction.
We assume that the state change extracted by our method is
ΔS_i = {θ_i − θ_{i−1}, V_i − V_{i−1}}, the current motion expectation E_i
indicated by the object detection result is one of Forward (0), Left Turn (−1),
and Right Turn (1), and the current instruction issued by humans is C_i. We use
X_i = (ΔS_i, E_i) as input features and C_i as output labels to train a support
vector machine (SVM) classifier to construct the human-like instruction model.
Because an instruction C takes one of three values (Left (−1), Right (1), or
Forward (0)), we finally build a three-class classifier using a one-against-all
scheme.
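The training step can be sketched with scikit-learn. This is our illustration, not the authors' code: the feature layout, the toy samples, and the use of `OneVsRestClassifier` to realize the one-against-all scheme are all assumptions.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# One sample per issued instruction: X_i = (ΔS_i, E_i).
# Columns: [Δθ (tens of degrees), ΔV_x, E] -- toy values for illustration.
X = np.array([
    [ 0.0,  0.0, -1],   # left turn expected, rat not turning yet -> "Left"
    [ 0.2,  0.1, -1],
    [-3.0, -0.6, -1],   # left turn expected, rat already turning -> "Forward"
    [-3.2, -0.5, -1],
    [ 0.0,  0.0,  1],   # right turn expected, not turning yet    -> "Right"
    [-0.1,  0.0,  1],
    [ 3.0,  0.5,  1],   # right turn expected, already turning    -> "Forward"
    [ 3.2,  0.6,  1],
])
y = np.array([-1, -1, 0, 0, 1, 1, 0, 0])   # Left (-1), Forward (0), Right (1)

# One binary SVM per instruction class, i.e. a one-against-all scheme.
model = OneVsRestClassifier(SVC(kernel="rbf", C=100.0)).fit(X, y)
```

In the testing stage, the real-time (ΔS, E) pair would be passed to `model.predict` and the winning class sent to the stimulator as the next instruction.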
The training data were collected during the manually controlled navigation of the
rat cyborg. Both the control instructions and the videos from the rat-mounted
camera were recorded. The rat motion state S_i that corresponded to each
instruction C_i was extracted from the videos and used to compute the rat state
change ΔS_i. The action expectation E_i was obtained based on the object
detection results. In the testing stage, we computed the real-time state change
ΔS and motion expectation E of the rat cyborg. The instruction for the rat
cyborg was then obtained based on the
classification results produced using our model.

5 Experiments

In this section, we evaluate the accuracy of the rat state extraction procedure
and the classification performance of the human-like instruction model.
Finally, we present a video showing the rat cyborg performing full cue-guided
navigation tasks.

5.1 Evaluation of the Rat State Extraction Method

The rat states provide important information that allows the closed-loop model to
issue suitable stimulus instructions. In this section, we present an assessment of the
accuracy of our state extraction method. This experiment was conducted using a
four-armed maze (see Fig. 4a). We designed eight routes for rat state estimation.
For each arm of the maze, the rat cyborg was initially placed at the end and a
colored arrow was placed at a junction in the maze, where the rat cyborg was
required to move from the starting point to the end of the adjacent arm indicated by
the direction of the arrow, as shown in Fig. 4a. There were two possible directions
for the arrow, so each arm had two possible routes. Thus, there were eight routes in
total for the four arms. As the rat cyborg traversed the routes, we continuously

Fig. 4 a Estimation of the rat’s head orientation (viewed from the top by a bird’s eye camera).
Left Head orientation estimated using the rat-mounted video (blue arrow) and the top-mounted
video (yellow arrow) when the sign was visible. Right Estimated orientations when the sign was
not visible. b Estimation of the rat’s head motion direction (viewed from the rat-mounted camera).
Left Corner features (red dots) detected in the first frame. Right Original feature location (red dots),
the registered features (green dots) in the next frame, and the estimated rat’s head motion direction
(yellow arrow)

Table 1 Average difference and standard deviations between the rat's head
orientations estimated from the rat-mounted camera and from the top-mounted
camera, and the accuracy of the estimated rat's head motion direction

Trial      R1        R2        R3        R4        R5        R6        R7        R8
1  AD/SD*  8.9/6.6   13.1/6.1  12.7/6.9  7.7/5.1   10.2/6.7  8.4/6.3   9.1/6.2   9.7/6.7
   ACC     88.4      84.7      82.5      93.3      91.2      88.9      89.3      92.3
2  AD/SD   8.7/6.6   6.9/6.0   8.4/6.8   7.7/5.3   8.2/5.5   8.6/6.0   11.5/6.2  11.4/7.1
   ACC     91.0      89.8      90.8      91.7      93.6      84.2      85.2      90.7
3  AD/SD   10.6/7.3  9.7/6.9   11.0/7.3  10.1/7.0  8.4/6.8   11.9/6.4  10.9/7.0  11.5/7.0
   ACC     94.2      89.8      90.0      91.6      93.3      85.2      87.2      92.1
4  AD/SD   8.3/7.2   13.6/7.2  8.6/5.9   8.5/6.7   11.0/6.7  8.4/5.9   9.9/6.6   8.1/6.4
   ACC     86.1      85.1      88.2      94.2      91.7      89.6      91.7      89.4

[*] AD denotes the average difference (degrees), SD denotes the standard
deviation, and ACC denotes accuracy (%)

estimated the rat's head motion direction V and the rat's head orientation θ from the
videos recorded by the mounted camera (see Fig. 4a, b). We tested each route four
times, giving four trials and 32 runs in total.
To obtain the ground truth for V and h, we used a bird’s eye camera, which was
mounted above the scene, to record videos while the rat cyborg performed the tests.
In these stable videos, we labeled the rat’s head in the first frame and used the
Lucas-Kanade method [22] to track the head and to compute the rat's head
orientations θ. These results were compared with the results estimated from the
rat-mounted camera and we computed the average differences and standard devi-
ations, as shown in Table 1. For the rat’s motion direction V, a similar method also
obtained the rat’s head motion directions from the videos captured by the bird’s eye
camera. However, it should be noted that the motion direction is in a vector space
and the computational results obtained from the videos use different scales com-
pared with those computed from the videos recorded by the rat-mounted camera

because the two videos use different views and different cameras. Thus, we per-
formed a qualitative comparison to determine whether the two motion estimates
were in the same direction: “left” or “right.”
Table 1 shows the average difference and standard deviations between the rat’s
head orientations estimated from the rat-mounted camera and those from the bird’s
eye camera, as well as the accuracy of the estimates of the rat’s head motion
direction. Across all trials, the average differences typically lie between about
8 and 13 degrees. These differences generally have a trivial effect on determining
whether the rat's head is currently pointing left or right of its body. On average, approximately 90% of the
rat’s motion directions are estimated correctly. Because the closed-loop model
continuously estimates the rat state, one estimation error may be followed by
several correct estimations. The error action caused by an error state can be rectified
by the subsequent estimations in the model. Thus, the performance can satisfy the
requirement of the closed-loop stimulation model.
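Because the state is re-estimated continuously, a short temporal majority vote is one simple way such an isolated error could be absorbed. This is our illustration only; the chapter does not specify the rectification mechanism.

```python
from collections import deque

def make_direction_smoother(k=5):
    """Majority vote over the last k left/right direction estimates, so a
    single wrong estimate is outvoted by the correct ones around it."""
    history = deque(maxlen=k)
    def update(estimate):
        history.append(estimate)
        return max(set(history),
                   key=lambda d: sum(1 for h in history if h == d))
    return update
```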

5.2 Evaluation of the Closed-Loop Stimulation Model

During the evaluation of the closed-loop stimulation model, we manually
controlled the rat cyborg to navigate the four-armed maze and a small urban
planning model
(Fig. 5) to collect data for training the human-like instruction model. After training,
we employed the unused data to perform an off-line test of the human-like
instruction model. Next, we tested whether the closed-loop stimulation model could
automatically direct the rat cyborg to perform a single action or two successive
actions given one or two object(s) and the rat’s states. Several simple scenes were
designed for this experiment, each of which contained one or two object(s), and the
rat cyborg was required to perform a single action or two actions.
To train the human-like instruction model and to verify its performance in an
off-line test, we performed 60 trials in the small urban planning model and the
four-armed maze. In these trials, we placed the pictures of colored arrows and
human faces in different positions. The rat cyborg was manually controlled to walk
toward the arrow pictures, to turn in the directions indicated by the arrows, and it
finally reached the human face target. During navigation, we collected the
instructions issued by humans and the videos from the rat-mounted camera.

Fig. 5 Automatic cue-guided navigation. The rat cyborg was expected to follow
the signs to reach the target face

Table 2 Average confusion matrix for the off-line instruction classifications
obtained using our human-like instruction model

%         Forward   Left    Right
Forward   93.83     2.95    3.22
Left      11.81     88.19   0
Right     12.37     0       87.63

Table 3 Comparison of the success rates obtained for motions using automatic
control with our method and manual control based on four simple tasks

                    Ours            Ours            Manual          Manual
                    Success/Total   Speed (m/min)   Success/Total   Speed (m/min)
R1  Left Turn       42/50           2.88            47/50           2.84
    Right Turn      44/50           2.60            48/50           2.63
    Left → Right    15/20           2.83            17/20           2.89
    Right → Left    17/20           2.33            19/20           2.38
R2  Left Turn       45/50           3.78            49/50           3.85
    Right Turn      44/50           3.42            43/50           4.10
    Left → Right    16/20           3.26            18/20           3.74
    Right → Left    15/20           3.34            18/20           3.34

The objects in the videos were detected using the object detection methods and
the rat's
states were extracted. For each manually issued instruction, we determined the
synchronous rat state changes and the objects detected to form a dataset. There were
1171 instructions in 60 trials, thus the dataset contained 1171 samples. We selected
574 random samples as the training data to learn the human-like instruction model
and used the remaining 597 samples as testing data. The off-line test results are
shown in Table 2. There are three types of instruction: “Forward,” “Left,” and
“Right.” The average confusion matrix shows that the accuracies of classification
for the three instructions are 93.83%, 88.19%, and 87.63%, respectively.
To verify whether the closed-loop stimulation model could automatically direct
the rat cyborg to perform a specified motion, we designed four simple routes: a
single left/right turn in the four-armed maze and a left/right turn followed by a
right/left turn in the small urban planning model. The pictures of arrows were
placed at junctions to indicate the motion directions. We used two rat cyborgs and
tested each rat cyborg on the first two routes 50 times and on the other two routes
20 times. If the rat cyborg completed the motion successfully, we treated it as a
successful trial. We recorded the time costs and speed during each trial to evaluate
the efficiency of the closed-loop model. Moreover, the same experiments were
conducted using manual control for the purposes of comparison.
Table 3 compares the success rates for achieving specified motions using our
automatic control method and manual control based on four simple tasks. In all
cases, the success rates with the closed-loop stimulation model are very similar to
those with manual control. The speed of motion completion is also similar for the
two methods. In some cases, our method is even faster than manual control. This

indicates that the closed-loop stimulation model can automatically control the rat
cyborg to perform specified motions.
A video demo of the automatic navigation process can be found at the following
link: http://www.cs.zju.edu.cn/gpan/demo/RatCyborg.mp4.

6 Conclusion and Discussion

In this study, we have developed a rat robot system, called a rat cyborg, which is
able to find colored objects and faces using a miniature camera. The detection
results have been used to trigger stimuli to guide the behavior of the rat cyborg
based on a closed-loop model. Our extensive experiments demonstrate that the rat
cyborg is capable of performing visual cue-guided automatic navigation. This work
could inspire entirely new search-and-rescue applications, such as finding victims
trapped under earthquake debris.

Acknowledgements This work was supported by the grants from the National 973 Program (no.
2013CB329500), National Natural Science Foundation of China (No. 61673340) and Zhejiang
Provincial Natural Science Foundation of China (LZ17F030001, LR15F020001).

References

1. Bin G, Gao X, Wang Y, Hong B, Gao S (2009) VEP-based brain-computer interfaces: time,
frequency, and code modulations [research frontier]. IEEE Comput Intell Mag 4(4):22–26
2. Wolpaw J, Wolpaw EW (2012) Brain-computer interfaces: principles and practice. Oxford
University Press
3. Holzer R, Shimoyama I (1997) Locomotion control of a bio-robotic system via electric
stimulation. IEEE/RSJ Int Conf Intell Robots Syst 3:1514–1519
4. Paxinos G (2004) The rat nervous system. Academic Press
5. Feng Z, Chen W, Ye X, Zhang S, Zheng X, Wang P, Jiang J, Jin L, Xu Z, Liu C, Liu F, Luo J,
Zhuang Y, Zheng X (2007) A remote control training system for rat navigation in complicated
environment. J Zhejiang Univ Sci A 8(2):323–330
6. Talwar S, Xu S, Hawley E, Weiss S, Moxon K, Chapin J (2002) Behavioural neuroscience:
rat navigation guided by remote control. Nature 417(6884):37–38
7. Li Z, Hayashibe M, Fattal C, Guiraud D (2014) Muscle fatigue tracking with evoked EMG via
recurrent neural network: Toward personalized neuroprosthetics. IEEE Comput Intell Mag 9
(2):38–46
8. Wu Z, Pan G (2013) Smartshadow: models and methods for pervasive computing. Springer
9. Wu Z, Pan G, Zheng N (2013) Cyborg intelligence. IEEE Intell Syst 28(5):31–33
10. Wu Z, Pan G, Principe JC, Cichocki A (2014) Cyborg intelligence: Towards bio-machine
intelligent systems. IEEE Intell Syst 29(6):2–4
11. Wu Z, Yang Y, Xia B, Zhang Z, Pan G (2014) Speech interaction with a rat. Chin Sci Bull 59
(28):3579–3584
12. Wu Z, Zhou Y, Shi Z, Zhang C, Li G, Zheng X, Zheng N, Pan G (2016) Cyborg intelligence:
recent progresses and future directions. IEEE Intell Syst 31(6):44–50

13. Yu Y, Pan G, Gong Y, Xu K, Zheng N, Hua W, Zheng X, Wu Z (2016)
Intelligence-augmented rat cyborgs in maze solving. PLoS ONE 11(2):e0147754
14. Wang Y, Lu M, Wu Z, Tian L, Xu K, Zheng X, Pan G (2015) Visual cue-guided rat cyborg
for automatic navigation. IEEE Comput Intell Mag 10(2):42–52
15. Hermer-Vazquez L, Hermer-Vazquez R, Rybinnik I, Greebel G, Keller R, Xu S, Chapin J
(2005) Rapid learning and flexible memory in “habit” tasks in rats trained with brain
stimulation reward. Physiol Behav 84(5):753–759
16. Reynolds J, Hyland B, Wickens J (2001) A cellular mechanism of reward-related learning.
Nature 413(6851):67–70
17. Romo R, Hernández A, Zainos A, Brody C, Lemus L (2000) Sensing without touching:
psychophysical performance based on cortical microstimulation. Neuron 26(1):273–278
18. Schultz W (2002) Getting formal with dopamine and reward. Neuron 36(2):241–263
19. Wang Y, Su X, Huai R, Wang M (2006) A telemetry navigation system for animal-robots.
Robot 28(2):183–186
20. Bourdev L, Brandt J (2005) Robust object detection via soft cascade. IEEE Comput Soc Conf
Comput Vis Pattern Recognit 2:236–243
21. Harris C, Stephens M (1988) A combined corner and edge detector. In: Alvey vision
conference, vol. 15. p 50
22. Lucas BD, Kanade T (1981) An iterative image registration technique with an application to
stereo vision. IJCAI 81:674–679
Predicting Motor Intentions
with Closed-Loop Brain-Computer
Interfaces

Matthias Schultze-Kraft, Mario Neumann, Martin Lundfall,
Patrick Wagner, Daniel Birman, John-Dylan Haynes
and Benjamin Blankertz

1 Introduction

The ability of modern brain-computer interfaces (BCIs) to study the relationship
between brain processes and mental states in real-time and provide immediate feedback
to the person has put forth novel application possibilities. While BCI research has
primarily been focused on its use as an assistive technology in the medical context with
the aim to provide paralyzed patients with a direct communication and control channel
(Birbaumer et al. [4], Wolpaw and Wolpaw [28]), since the turn of the century research
has expanded towards BCI applications that go beyond control (Blankertz et al. [9],
Allison et al. [1], Brunner et al. [12], Blankertz et al. [5]). Control-directed BCIs have
been characterized by their “closed-loop” nature (Blankertz et al. [6]), because estab-
lishing communication channels relies on feeding the user’s decoded intentions back to
the user in real-time. Non-control BCIs, on the other hand, have been predominantly
open-loop systems, since their primary goal has been the monitoring and prediction of
mental states (Kohlmorgen et al. [19], Müller et al. [23], Schultze-Kraft et al. [27],
Naumann et al. [24]), without requiring a direct interaction with the user. A further
distinction is that while for control-BCIs the goal is to achieve “explicit control”,
non-control BCIs on the other hand aim to exploit “implicit information” obtained from
neurophysiological markers in the EEG.

M. Schultze-Kraft (✉) · M. Neumann · M. Lundfall · P. Wagner · B. Blankertz
Neurotechnology Group, Technische Universität Berlin, Berlin, Germany
e-mail: schultze-kraft@tu-berlin.de
M. Schultze-Kraft · J.-D. Haynes · B. Blankertz
Bernstein Focus: Neurotechnology, Berlin, Germany
M. Schultze-Kraft · D. Birman · J.-D. Haynes · B. Blankertz
Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
J.-D. Haynes
Berlin Center for Advanced Neuroimaging, Berlin, Germany

© The Author(s) 2017 79


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_8
80 M. Schultze-Kraft et al.

Here, we present findings from two recent studies. While the results from
one study are reported here for the first time, the findings from the other study have
already been published (Schultze-Kraft et al. [26]). Both employ instances of BCIs
outside the medical context that are at their very core closed-loop systems, relying
on fast responsive feedback. In both experiments, we aimed at detecting movement
intentions of subjects in real-time from the ongoing EEG and using this decoded
information in order to interact with the subjects’ behavior. The unique possibility
to intervene in an experimental paradigm based on the momentary intention or
decision state of a person opens the potential for employing such BCIs for multiple
purposes.

1.1 EEG Signals Predictive of Movement Intentions

What both studies have in common is that they both aim at detecting movement
intentions using two related and well-known motor preparatory signals in the EEG.
One is the so-called readiness potential (RP), a slow, negative cortical potential that
starts more than one second before voluntary, self-initiated movements and is
observed over motor areas in the EEG (Kornhuber and Deecke [20], Cui et al. [14]).
This potential gained particular fame in the seminal experiment by Libet et al. [21],
who found that the conscious decision to move occurs several hundred milliseconds
after onset of the RP. The early onset of the RP suggests that it is a good neuro-
physiological marker for predicting whether and when a spontaneous movement
will occur.
About 400 ms before movement onset, the RP suddenly increases its slope and
becomes asymmetrically distributed on the scalp, with a stronger negativity over the
hemisphere contralateral to the moved body part (Coles et al. [13], Eimer [15]).
This late component of the RP has been coined the lateralized readiness potential
(LRP). This property of the LRP suggests that it is a good neurophysiological
marker for predicting what kind of movement will occur, e.g. whether the left or the
right hand will be moved.

1.2 Two Studies

LRP-study In one of the two studies we present here (termed LRP-study), we
aimed at predicting a binary movement decision in real-time using the lateralized
readiness potential as a predictive signal. The ability to predict a binary movement
decision from neural signals in real time before the movement occurs was first
demonstrated by Blankertz et al. [7] and later investigated in a systematic study by
Maoz et al. [22], where epilepsy patients with implanted intracranial electrodes
played a “matching pennies” game against an opponent. In this game, the player
wins a fixed amount of money if they raise a different hand than the opponent at the

end of a countdown and loses that amount otherwise. Using low-frequency signals
from the implanted electrodes allowed them to predict which hand the patient
would move 0.5 s before the movement with good accuracy. In the LRP-study
presented here, we aimed at implementing the “matching pennies” paradigm,
however using the lateralized readiness potential in the EEG as a non-invasive
approach (as opposed to the invasive approach used by Maoz et al. [22]) for
predicting the laterality of the movement in real-time. The study was conceived as a
proof of concept to test whether an EEG potential like the LRP has enough pre-
dictive power to allow for a successful prediction of the player’s decision, thus
enabling the BCI to win the game.
RP-study In the other study (termed RP-study), the BCI was set up to predict
whether and when a movement would occur by detecting the occurrence of a readiness
potential in the ongoing EEG. Here, the experimental paradigm was designed to
address a fundamental question in cognitive neuroscience. It has to date remained
unclear whether the onset of the RP triggers a chain of events that unfolds in time
and cannot be cancelled, or whether people can still cancel movements after onset
of the RP. One intriguing way to test the underlying hypothesis is to interrupt a
person with a stop signal once a RP has started, but before they have started the
intended movement, thus potentially giving them the opportunity to stop the
movement (Haynes [18]). In a study with 10 participants, we implemented this idea
with a real-time BCI in order to test the underlying hypothesis (Schultze-Kraft et al.
[26]).

2 Methods

2.1 Experimental Task

LRP-study Six subjects participated in the experiment in which they played the
“matching pennies” game against the computer. The game consisted of single trials.
A trial began with the subject pressing down the bottom leftmost and the bottom
rightmost keys of a standard keyboard with the fingertips of the left and right hand,
respectively, while resting the hands calmly on the table. This started a three-second
countdown that was presented on a computer screen as a continuous shrinking of a
horizontal bar. Immediately at the end of the countdown, subjects were asked to
raise one of the two hands with a fast movement and high enough “as if to show the
palm to the computer”. Subjects were asked to perform the movement precisely
timed with the end of the countdown. To ensure this, there was a 100 ms window at
the end of the countdown during which the movement was required to occur. If the
movement occurred outside this time window or if both buttons were released, the
trial was considered invalid. Otherwise, the computer’s choice (left or right) was
shown on the screen. If the subject’s choice was the same as the computer’s, this
was a win trial for the BCI. Otherwise, it was a win trial for the subject.
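The trial logic of the game can be sketched as follows. This is an illustrative reconstruction; in particular, the exact placement of the 100 ms validity window relative to the end of the countdown is an assumption.

```python
def adjudicate_trial(release_offset_ms, both_released,
                     subject_choice, bci_choice, window_ms=100):
    """Decide the outcome of one matching-pennies trial.

    release_offset_ms: time of the hand raise relative to the end of the
    countdown. A trial is valid only if the movement falls inside the
    100 ms window and exactly one button was released. The BCI wins when
    its choice matched the subject's.
    """
    if both_released or not (0 <= release_offset_ms <= window_ms):
        return "invalid"
    return "bci" if subject_choice == bci_choice else "subject"
```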

The experiment was divided into three stages of 100 trials each. During stage I,
the choice of the computer was random. After stage I, the recorded data were then
used to train a classifier on the LRP in order to learn to discriminate between the
movement of either the left or the right hand. During stages II and III, the classifier
was applied to the ongoing EEG data, and the output of the classifier at the end of
the countdown was taken as the computer’s choice. While subjects were not aware
of the change after stage I, after stage II but before stage III they were informed
about the origin of the computer’s choices.
RP-study During this experiment, participants (N = 10) made spontaneous,
self-initiated movements with their right foot, which consisted of pressing a button
that was attached to the floor. Subjects were instructed to terminate their decision
and withhold any movement whenever a stop signal was elicited on a screen. The
task was designed as a “duel” against the BCI. If the subjects pressed the button
while a light on a computer screen was green, this was a win trial for the subject. If
they pressed the button after the computer had turned the light red (stop signal), this
was a lose trial. The experiment had three consecutive stages. In stage I, stop
signals were elicited at random onset times. The EEG data from stage I were then
used to train a classifier to detect the occurrence of RPs. In stages II and III,
movement predictions were made in real-time by the BCI with the aim of turning on
the stop signal in time to interrupt the subject’s movement. For details, see
Schultze-Kraft et al. [26].
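The real-time detection stage can be caricatured as a simple threshold rule on the stream of classifier outputs. This is our simplification for illustration; the published system's detection criterion is more elaborate (Schultze-Kraft et al. [26]).

```python
def first_stop_signal_time(classifier_outputs, threshold):
    """Scan a stream of (time_ms, score) classifier outputs and return the
    time at which the stop signal (red light) would be switched on, i.e.
    the first time the RP score exceeds the threshold; None if it never
    does within the trial."""
    for t, score in classifier_outputs:
        if score > threshold:
            return t
    return None
```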

2.2 Data Acquisition

In both studies, EEG was recorded at 1 kHz from 32 (LRP-study) or 64 (RP-study)
Ag/AgCl electrodes (EasyCap; Brain Products), respectively, referenced to the nose
(LRP-study) or channel FCz (RP-study) and re-referenced offline to a common
reference. In the RP-study, EMG was additionally recorded from the calf muscle of
the moved foot in order to determine the time of movement onset. The amplified
signal was converted to digital (BrainAmp: Brain Products), saved for offline
analysis, and simultaneously processed online by the Berlin Brain-Computer
Interface toolbox (BBCI¹).

2.3 Online Classifier

In both studies, before stage II, a linear classifier was trained using segments of
EEG data from trials in stage I. In the RP-study, during stages II and III, the
classifier was then applied to the ongoing EEG and its output used to determine

¹ https://github.com/bbci/bbci_public.

with above chance accuracy if a movement was being prepared, which then elicited
a stop signal. For details, see Schultze-Kraft et al. [26].
In the LRP-study, the EEG data used to train the classifier consisted of 600 ms
segments preceding the time point of the release of one of the two buttons, where
the laterality defined the two classes. The data were then downsampled to 10 Hz,
and the differences between the channel pairs FC1 − FC2, FC5 − FC6, C3 − C4,
CP1 − CP2, and CP5 − CP6 were concatenated to obtain a feature vector, which was used to
train a regularized linear discriminant analysis (LDA) classifier with automatic
shrinkage (Blankertz et al. [8]). The so-trained classifier was used during stages II
and III to predict laterality. Every 10 ms, a classifier output was generated from the
immediately preceding 600 ms of EEG data, and the classifier output at the end of
the countdown was then used as the computer’s decision in the task.
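The feature construction described above might look roughly like this. It is a sketch under our assumptions about the array layout; a real pipeline would use the BBCI toolbox and proper resampling rather than block averaging.

```python
import numpy as np

CHANNEL_PAIRS = [("FC1", "FC2"), ("FC5", "FC6"), ("C3", "C4"),
                 ("CP1", "CP2"), ("CP5", "CP6")]

def lrp_feature_vector(segment, channel_names, fs=1000, target_fs=10):
    """Build one LRP feature vector from a 600 ms EEG segment.

    segment: array of shape (n_samples, n_channels) at fs Hz.
    Downsamples to target_fs by averaging non-overlapping blocks (a crude
    stand-in for resampling), takes the bipolar difference of each channel
    pair, and concatenates the results.
    """
    step = fs // target_fs
    n_blocks = segment.shape[0] // step
    ds = segment[:n_blocks * step].reshape(n_blocks, step, -1).mean(axis=1)
    idx = {name: i for i, name in enumerate(channel_names)}
    parts = [ds[:, idx[a]] - ds[:, idx[b]] for a, b in CHANNEL_PAIRS]
    return np.concatenate(parts)   # 5 pairs x 6 samples = 30 features
```

Each such vector would then be fed to a regularized LDA with shrinkage (scikit-learn's `LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')` is a comparable off-the-shelf choice, named here as an assumption), with a new classifier output computed every 10 ms on the sliding 600 ms window.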

3 Results

3.1 Mean Event-Related Potentials

Let us first have a look at the two EEG signals relevant in each of the studies.
RP-study Figure 1a shows the mean readiness potential for spontaneous, vol-
untary foot movements. It displays the well-known shape of the RP, which starts to
become negative around 1000 ms before movement onset and has its maximum
around the time of movement onset. The RP has its highest amplitude at channel Cz
and, as expected for foot movements, there is no lateralization of the potential
(Brunia et al. [11]). Furthermore, a comparison with EEG segments where no
movement occurred shows a very high class discriminability that is already
apparent several hundred ms before movement onset.
LRP-study Participants performed movements with either the left or the right
hand, which resulted in the readiness potential becoming asymmetrical around
400 ms before button release. The difference between the two classes was strongest
in channels C3 and C4, respectively (Fig. 1b). This is confirmed by signed r2
values, which furthermore are highest during the time interval −200 to −50 ms
w.r.t. the time of button release.
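The signed r² discriminability measure used throughout these figures is the squared point-biserial correlation between the class label and the signal value, with the sign of the correlation retained. A small sketch with synthetic amplitudes (the numbers are illustrative, not data from the study):

```python
import numpy as np

def signed_r2(x_a, x_b):
    """sign(r) * r^2 of the point-biserial correlation between the
    pooled samples and a +1/-1 class-label vector."""
    x = np.concatenate([x_a, x_b])
    labels = np.concatenate([np.ones(len(x_a)), -np.ones(len(x_b))])
    r = np.corrcoef(x, labels)[0, 1]
    return np.sign(r) * r ** 2

rng = np.random.default_rng(1)
# Illustrative C3 amplitudes (µV): more negative before right-hand releases
left_trials = rng.normal(-1.0, 1.0, size=200)
right_trials = rng.normal(-3.0, 1.0, size=200)

r2 = signed_r2(left_trials, right_trials)  # positive: left > right at C3
```

Because only the sign of r is kept, the measure shows both how separable the classes are and which class has the larger amplitude at each channel and time point.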

3.2 Performance of Online Predictors

Let us next examine the performance of the real-time BCI in both studies to predict
the correct laterality of hand movements and to detect RPs and predict self-initiated
movements, respectively.
LRP-study The experimental paradigm of the LRP-study required subjects to
move either their left or right hand 100 times in each of the three stages. All
84 M. Schultze-Kraft et al.

[Fig. 1 graphics: a grand-average RP at Cz with signed r2 color bar and scalp
topographies; b LRP at C3 and C4 with signed r2 color bar and scalp topographies;
see caption below.]

Fig. 1 Mean readiness potential for foot movements and lateralized readiness potential for
movements of left and right hand. a The top panel shows the grand average RP at channel Cz,
computed by averaging the EEG signal time-locked to movement onset and baseline correcting in
the time interval −1200 to −1100 ms. The color bar on top shows the class discriminability (signed
r2 values) between movement and no-movement trials. The two bottom panels show as scalp
topographies the spatial distribution of the voltage and signed r2 values, respectively, during the
time interval −700 to −500 ms, as indicated in the top panel (dotted lines). b The top panels show
the LRP recorded over channels C3 and C4, respectively, time-locked to the button release and
baseline corrected w.r.t. time interval −600 to −550 ms. The color bar on the top shows the class
discriminability (signed r2 value) between the two classes (left, right). The bottom panels show as
scalp topographies the mean voltage (left, middle) and signed r2 value (right), averaged in the time
interval −200 to −50 ms w.r.t. time of button release, as indicated in the top panels (dotted lines)
Predicting Motor Intentions with Closed-Loop … 85

subjects moved the right and left hand with approximately equal probability, and
there was also no significant difference across stages. During stage I, the computer’s
choice was random. As expected—and given the equal probability of subjects to
raise either the left or the right hand—the accuracy of the online predictor was 48%
(Fig. 2), which was not significantly better than chance (one-sided t(5) = 1.87,
p = 0.94). During stages II and III, however, when the computer’s choice was
controlled by the real-time classifier output, the mean accuracy increased sub-
stantially to 62 and 65%, respectively, which in both cases was significantly better
than chance (one-sided t(5) = 6.52, p < 0.001, and one-sided t(5) = 3.96, p < 0.01).
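The chance-level comparison above is a one-sided, one-sample t-test across the six subjects (df = 5). A sketch of that computation; only the 62% stage-II mean is taken from the text, while the individual per-subject accuracies below are invented for illustration:

```python
import numpy as np
from scipy import stats

chance = 0.5
# Hypothetical stage-II accuracies for n = 6 subjects (mean = 0.62)
acc = np.array([0.58, 0.66, 0.60, 0.63, 0.59, 0.66])

n = acc.size
# t statistic against the chance level, df = n - 1 = 5
t_stat = (acc.mean() - chance) / (acc.std(ddof=1) / np.sqrt(n))
# One-sided p-value (H1: accuracy > chance) from the upper tail
p_one_sided = stats.t.sf(t_stat, df=n - 1)
```

A one-sided test is appropriate here because only accuracies above chance would indicate successful prediction; an accuracy below chance (as in stage I) yields a p-value near 1.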
RP-study In the RP-study, the evaluation of the real-time BCI is more complex
because the possible trial outcomes were manifold (Fig. 3a). If the button was
pressed without a preceding stop signal, the current trial ended. We refer to this as a
missed button press trial. If a stop signal was issued and the subject pressed the
button during the subsequent second, we term the trial a predicted button press trial.
If no button press but an EMG onset occurred despite there being a stop signal we
term the trial an aborted button press trial. Otherwise, if no observable movement
followed a stop signal we refer to this as an ambiguous trial that reflected either an
early cancellation or a false alarm. Furthermore, during stages II and III 40% of
trials were silent (not shown here). In these trials, the time of a planned stop signal
was recorded but the red stop signal itself was not presented. These trials always
ended when the participant eventually pressed the button. Figure 3b shows that
roughly 2 out of 3 button presses were missed during stage I, but only 1 out
of 3 button presses were missed during stages II and III. Furthermore, predicted or
aborted button presses were almost absent during stage I, while during stages II and
III, when the BCI was actively predicting subjects’ movements, they occurred in
roughly 20 and 15% of trials, respectively.

3.3 The Cancelling of Self-initiated Movements

The experimental paradigm of the RP-study furthermore allowed us to test whether
people are able to cancel self-initiated movements after onset of the RP and, if so, if
there is a point of no return. We therefore assessed how the timing of stop signals
was related to movement onset (as assessed by EMG). Figure 4 shows that subjects
mostly pressed the button if the stop signal occurred after EMG onset (failed
cancellations) but that they were able to stop the movement in time if the stop signal
occurred earlier around EMG onset (late cancellations).
Interestingly, subjects rarely moved despite seeing stop signals earlier than
200 ms before EMG, even though RP onset occurred more than 1000 ms before
EMG onset. Examining the distribution of “silent predictions” (cyan distribution)
shows that, while a majority of them occurred around movement onset, many also
occurred more than 200 ms before EMG onset.
This suggests that the BCI was indeed able to predict movements at such early
stages and that subjects were caught early enough to cancel their decision without

[Fig. 2 graphic: bar plot of online predictor accuracy (%) in stages I, II and III;
see caption below.]

Fig. 2 Mean accuracy of the online laterality predictor during the three stages. The average across
subjects is shown (error bars = SEM). The chance level accuracy at 50% is indicated with a
dashed line

[Fig. 3 graphics: a schematic of the four possible trial outcomes; b bar plot of
trial outcomes (%) per stage for missed, predicted, and aborted button presses and
early cancellations/false alarms; see caption below.]

Fig. 3 Possible trial types and outcomes. a The four possible trial categories, as detailed in the
preceding text. b Percentage of trial outcomes across stages for the four trial categories described
in a. All trial categories in one stage (bars of same color) add up to 100%. The average across
subjects is shown (error bars = SEM). In missed button press trials, the participant wins. In
predicted button press trials, the BCI wins. Aborted button press trials and the ambiguous early
cancellation/false alarm trials constitute draws because the participant’s task was to press a button
without being detected. Figure reproduced with permission from Schultze-Kraft et al. [26]

any overt sign of movement (Fig. 4, yellow). Evidence for such early cancellations
was finally obtained by means of an offline analysis that detected the occurrence of
event-related desynchronization (ERD), an EEG marker for movement preparation

[Fig. 4 graphic: histogram of the number of BCI predictions versus time relative
to EMG onset (ms), for silent predictions, failed, late, and (partly) early
cancellations; see caption below.]

Fig. 4 Distribution of BCI predictions in four different cases, relative to EMG onset. The cyan
line shows the predictions in silent trials which were only recorded but not shown to participants.
The red and orange bars show the joint distribution of failed cancellations (stop signals in
predicted button press trials) and late cancellations (stop signals in aborted button press trials),
respectively. The yellow area indicates the timing of presumed early cancellations.
Figure reproduced with permission from Schultze-Kraft et al. [26]

that is independent of the RP (Bai et al. [3]). In conclusion, our results suggest that
humans can still cancel voluntary movements even after onset of the RP. However,
this is only possible until a point of no return around 200 ms before movement
onset.

4 Discussion

The two EEG signals used in each of the studies, the readiness potential and the
lateralized readiness potential, both share a critical feature: they can predict a
specific motor intention several hundred milliseconds before the corresponding
action begins. However, they also differ in many ways. While the RP is predictive
of whether/when a self-initiated movement will occur, the LRP predicts movement
content, i.e. a what decision. This distinction fits well into what has been described
as the what, when, whether model of intentional action (Brass and Haggard [10]).
Furthermore, the predictive power of both signals occurs at different time scales.
While the early onset of the RP (Fig. 1a) allows us to make predictions as early as
several hundred milliseconds before movement onset (a key feature for finding the
point of no return in the RP-study), in the LRP-study, the relatively late lateral-
ization of the RP (Fig. 1b) shifts the time window for good prediction accuracies
closer to movement onset.
Examining the performance of the online predictors in both studies shows that
the BCI was successful in making predictions about the subjects’ intentions. During
stages II and III, when the BCI was actively predicting subjects’ movement intent,
mean prediction accuracies of 62 and 65%, respectively, were achieved in the LRP
study. In the RP-study, in those two stages the rate of predicted movements (both

completed and cancelled button presses) increased from virtually absent (1.5%)
during stage I to around 36%. Furthermore, the offline ERD analysis revealed that
the roughly 30% of trials with predictions but no overt movements were in fact in
part early cancellations and not merely false alarms. This remarkable performance
of the online predictor in the RP-study eventually allowed us to probe the coupling
between the RP as a preparatory signal and its corresponding action and identify a
point of no return in cancelling self-initiated movements (Schultze-Kraft et al. [26]).
With the two presented studies, we demonstrated both the technical feasibility
and resulting application possibilities of BCIs capable of real-time prediction and
immediate feedback of movement intentions. Early attempts with single-trial EEG
aimed at predicting the laterality of finger movements from the lateralized readiness
potential with the goal of improving the responsiveness of control-based BCIs
(Blankertz et al. [7]). This work led to the development of a system capable of
online predictions of externally evoked actions such as in an emergency braking
situation (Haufe et al. [16, 17]). Other studies have used event-related desynchro-
nization in the EEG to predict movement intentions from single trials both offline
(Salvaris and Haggard [25]) and online (Bai et al. [2]). To the best of our knowl-
edge, the two presented studies are the first studies that demonstrated the successful
real-time prediction of when and what movement intentions using the RP and the
LRP, respectively. Most importantly, however, the RP-study is the first realization
of the idea of employing a real-time, closed-loop BCI as a research tool, thereby
paving the way for future experiments that address previously unapproachable
questions from cognitive neuroscience.

Acknowledgements This work was supported by the Bernstein Focus: Neurotechnology from
the German Federal Ministry of Education and Research (BMBF grant 01GQ0850), by the
Bernstein Computational Neuroscience Program (BMBF grant 01GQ1001C), the Research
Training Group “Sensory Computation in Neural Systems” (GRK 1589/1-2), the Collaborative
Research Center “Volition and Cognitive Control: Mechanisms, Modulations, Dysfunctions” (SFB
940/1) and the German Research Foundation (DFG grants EXC 257 and KFO 247).

References

1. Allison BZ, Dunne S, Leeb R, Millán JDR, Nijholt A (2012) Towards practical
brain-computer interfaces: bridging the gap from research to real-world applications.
Springer Science & Business Media, Heidelberg
2. Bai O, Rathi V, Lin P, Huang D, Battapady H, Fei D-YY, Schneider L, Houdayer E, Chen X,
Hallett M (2011) Prediction of human voluntary movement before it occurs. Clin
Neurophysiol 122(2):364–372
3. Bai O, Vorbach S, Hallett M, Floeter MK (2006) Movement-related cortical potentials in
primary lateral sclerosis. Ann Neurol 59(4):682–690
4. Birbaumer N, Ghanayim N, Hinterberger T, Iversen I, Kotchoubey B, Kübler A,
Perelmouter J, Taub E, Flor H (1999) A spelling device for the paralysed. Nature 398
(6725):297–298

5. Blankertz B, Acqualagna L, Dähne S, Haufe S, Schultze-Kraft M, Sturm I, Ušćumlić M,
Wenzel MA, Curio G, Müller K-R (2016) The Berlin brain-computer interface: progress
beyond communication and control. Front Neurosci 10:530
6. Blankertz B, Dornhege G, Krauledat M, Müller K-R, Curio G (2007) The non-invasive Berlin
brain-computer interface: fast acquisition of effective performance in untrained subjects.
NeuroImage 37(2):539–550
7. Blankertz B, Dornhege G, Lemm S, Krauledat M, Curio G, Müller K-R (2006) The Berlin
brain-computer interface: machine learning based detection of user specific brain states.
J UCS 12(6):581–607
8. Blankertz B, Lemm S, Treder M, Haufe S, Müller K-R (2011) Single-trial analysis and
classification of ERP components - a tutorial. NeuroImage 56(2):814–825
9. Blankertz B, Tangermann M, Vidaurre C, Fazli S, Sannelli C, Haufe S, Maeder C,
Ramsey LE, Sturm I, Curio G, Müller KR (2010) The Berlin brain-computer interface:
non-medical uses of BCI technology. Front Neurosci 4:198
10. Brass M, Haggard P (2008) The what, when, whether model of intentional action.
Neuroscientist 14(4):319–325
11. Brunia CH, Voorn FJ, Berger MP (1985). Movement related slow potentials. II. A contrast
between finger and foot movements in left-handed subjects. Electroencephalogr Clin
Neurophysiol 60:135–145
12. Brunner C, Birbaumer N, Blankertz B, Guger C, Kübler A, Mattia D, del R. Millán J,
Miralles F, Nijholt A, Opisso E, Ramsey N, Salomon P, Müller-Putz GR (2015). BNCI
Horizon 2020: towards a roadmap for the BCI community. Brain Comput Interfaces 2
(1):1–10
13. Coles MG, Gratton G, Donchin E (1988) Detecting early communication: using measures of
movement-related potentials to illuminate human information processing. Biol Psychol 26
(1):69–89
14. Cui RQ, Huter D, Lang W, Deecke L (1999) Neuroimage of voluntary movement: topography
of the Bereitschaftspotential, a 64-channel DC current source density study. NeuroImage 9
(1):124–134
15. Eimer M (1998) The lateralized readiness potential as an on-line measure of central response
activation processes. Behav Res Methods Instrum Comput 30(1):146–156
16. Haufe S, Kim J-W, Kim I-H, Sonnleitner A, Schrauf M, Curio G, Blankertz B (2014)
Electrophysiology-based detection of emergency braking intention in real-world driving.
J Neural Eng 11(5):056011
17. Haufe S, Treder MS, Gugler MF, Sagebaum M, Curio G, Blankertz B (2011) EEG potentials
predict upcoming emergency brakings during simulated driving. J Neural Eng 8(5):056001
18. Haynes J-D (2011) Decoding and predicting intentions. Ann N Y Acad Sci 1224(1):9–21
19. Kohlmorgen J, Dornhege G, Braun M, Blankertz B, Müller K-R, Curio G, Hagemann K,
Bruns A, Schrauf M, Kincses W (2007) Improving human performance in a real operating
environment through real-time mental workload detection. In: Dornhege G, del R. Millán J,
Hinterberger T, McFarland D, Müller K-R (eds) Toward brain-computer interfacing. MIT
press, Cambridge, MA, pp 409–422
20. Kornhuber HH, Deecke L (1965) Hirnpotentialänderungen bei Willkürbewegungen und
passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente Potentiale.
Pflügers Arch 284:1–17
21. Libet B, Gleason CA, Wright EW, Pearl DK (1983) Time of conscious intention to act in
relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a
freely voluntary act. Brain J Neurol 106(3):623–642
22. Maoz U, Ye S, Ross IB, Mamelak AN, Koch C (2012) Predicting action content on-line and
in real time before action onset - an intracranial human study. In: Bartlett PL, Pereira FCN,
Burges CJC, Bottou L, Weinberger KQ (eds) NIPS, pp 881–889
23. Müller K-R, Tangermann M, Dornhege G, Krauledat M, Curio G, Blankertz B (2008)
Machine learning for real-time single-trial EEG-analysis: from brain-computer interfacing to
mental state monitoring. J Neurosci Methods 167(1):82–90
24. Naumann L, Schultze-Kraft M, Dähne S, Blankertz B (2017) Prediction of difficulty levels in
video games from ongoing EEG. In: Gamberini L, Spagnolli A, Jacucci G, Blankertz B,
Freeman J (eds) Symbiotic interaction, vol 9961. Springer International Publishing, Cham,
pp 125–136
25. Salvaris M, Haggard P (2014) Decoding intention at sensorimotor timescales. PLoS ONE 9
(2):e85100
26. Schultze-Kraft M, Birman D, Rusconi M, Allefeld C, Görgen K, Dähne S, Blankertz B,
Haynes J-D (2016a) The point of no return in vetoing self-initiated movements. Proc Nat
Acad Sci 113(4):1080–1085
27. Schultze-Kraft M, Dähne S, Gugler M, Curio G, Blankertz B (2016) Unsupervised
classification of operator workload from brain signals. J Neural Eng 13(3):036008
28. Wolpaw J, Wolpaw EW (2012) Brain-computer interfaces: principles and practice. OUP,
USA
Towards Online Functional Brain
Mapping and Monitoring During Awake
Craniotomy Surgery Using ECoG-Based
Brain-Surgeon Interface (BSI)

L. Yao, T. Xie, Z. Wu, X. Sheng, D. Zhang, N. Jiang, C. Lin, F. Negro,


L. Chen, N. Mrachacz-Kersting, X. Zhu and D. Farina

1 Introduction

Starting from its basic functions of brain-initiated communication and control [1, 2],
brain-computer interface (BCI) research has increasingly focused on
neurorehabilitation [3, 4, 5], in particular because of its online, real-time, and
active-involvement features [6, 7, 8]. In this project, the concept of BCI will be
further extended to bridge
the gap between the patient’s brain and the surgeon, with the clinical applications
for online functional brain mapping and monitoring during awake craniotomy brain
surgery. We have named the proposed system the “Brain-Surgeon Interface (BSI)”,
fully providing the online interaction between the patient’s brain and the surgeon,

L. Yao  N. Jiang  X. Zhu (&)


Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
T. Xie  X. Sheng  D. Zhang
State Key Lab of Mechanical System and Vibration, Shanghai Jiao Tong University,
Shanghai, China
Z. Wu  L. Chen (&)
Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
C. Lin
Shenzhen Institutes of Advanced Technology Chinese Academy of Science, Beijing, China
F. Negro
Universita Di Brescia, Brescia, Lombardy, Italy
N. Mrachacz-Kersting
Department of Health Science and Technology, Center for Sensory-Motor Interaction,
Aalborg University, 9220 Aalborg, Denmark
D. Farina (&)
Department of Bioengineering, Imperial College London, London, UK
e-mail: d.farina@imperial.ac.uk

© The Author(s) 2017 91


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_9
92 L. Yao et al.

with the purpose of improving traditional brain surgery, especially avoiding errors
in the removal of important functional brain tissues.
Brain surgery is performed to remove lesions in the brain tissue, mainly due to brain
tumor or an abnormal discharge region causing epilepsy [9, 10, 11, 12]. Besides
identifying the brain lesions, other regions responsible for important functions, such
as movement, sensation, and language, also need to be identified [13, 14, 15]. Therefore,
fMRI is routinely performed to identify these regions by pre-surgical functional brain
mapping [16, 17]. Traditionally, the off-line fMRI brain maps will be used to guide the
brain surgery when the patients are fully anaesthetized without any conscious activity.
Although this “pre-surgical fMRI” is applicable, it has several limitations due to its
low selectivity and the brain shift that occurs after craniotomy. Owing to the latest
developments in anesthesiology, awake brain surgery is now much safer. During
surgery, the surgeon
could remap the brain using cortical electrical stimulation (ES) [18], together with the
“pre-surgical fMRI”. But the cortical ES may cause seizures, which can be dangerous
for the subject and the surgery procedure. Moreover, it is usually very time-consuming
to get a full mapping. Compared to the “pre-surgical fMRI”, the “intraoperative
fMRI” technique has been developed to
provide the surgeon with a real-time mapping of the brain areas, and also to check the
brain activity when the surgery is finished [19]. In case of partial removal of the lesion,
the brain surgery can continue immediately. The development of our ECoG-based BSI
overcomes the limitations of the current techniques, fully integrating the interactive
nature of a BCI system with fast, online operation and easy preparation. The
communication between the patient’s brain and the surgeon will be established in a
novel way for the first time, with the ultimate goal of high-quality brain surgery.
In this study, we present preliminary results of an innovative BSI system. This
concept has the potential to be one of the new exciting applications developed from
the traditional BCI approach.

2 Motor Cortex Mapping with ERD/S and MRCP

During the awake craniotomy surgery, patients were required to perform simple
wrist extension tasks, while ECoG and the EMG of the extensor digitorum muscle
were concurrently recorded. The starting time of the task was identified by the
Teager-Kaiser energy operator from EMG signal. Movement related cortical
potentials (MRCP, 0.05–3 Hz band) were extracted in single trials [20, 21], as shown in
Fig. 1(1). This allowed the possibility of fast motor cortex mapping within one trial.
The MRCP signals across all channels are shown in Fig. 1(4). Moreover, the
corresponding Event related (De)synchronization (ERD/S) activation [22, 23] is
shown in Fig. 1(2), and also across channels in Fig. 1(5). These two signal
modalities provide a fast and reliable way for the motor cortex mapping.
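The Teager-Kaiser energy operator used above to find EMG onset is, in discrete time, ψ[n] = x[n]² − x[n−1]·x[n+1]; it responds jointly to the amplitude and frequency of the signal, which makes burst onsets stand out sharply. A sketch on a synthetic EMG trace (the sampling rate, smoothing window, threshold rule, and signal model are illustrative assumptions, not the chapter's parameters):

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

fs = 1000                                 # Hz, illustrative EMG sampling rate
rng = np.random.default_rng(2)
t = np.arange(2 * fs) / fs                # 2-s trace; burst begins at 1.0 s
emg = rng.normal(0.0, 0.05, size=t.size)  # baseline noise
emg[fs:] += np.sin(2 * np.pi * 120 * t[fs:]) * rng.normal(0.0, 1.0, size=fs)

# Smooth the TKEO output and threshold it at baseline mean + 5 SD,
# estimated from the first 0.5 s of the trace
energy = np.convolve(tkeo(emg), np.ones(25) / 25, mode="same")
baseline = energy[: fs // 2]
threshold = baseline.mean() + 5.0 * baseline.std()
onset_s = float(np.argmax(energy > threshold)) / fs  # first crossing
```

Because the operator squares the instantaneous amplitude and weights it by frequency content, the detected crossing sits very close to the true burst onset even with modest smoothing.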
Towards Online Functional Brain Mapping and Monitoring … 93

Fig. 1 Neural correlate of movement. 1 Movement related cortical potential (MRCP) with respect
to wrist flexion at channel No. 18. 2 Event related spectrum perturbation (ERSP) at channel
No. 18, non-significant parts were wiped out under bootstrap significance level of P = 0.01. 3
Electrode array on the cortex. 4 MRCP across all channels. 5 ERSP across all channels

3 Sensory Cortex Mapping with ERD/S and SSSEP

Mechanical stimulation was applied to the index finger, using a 175 Hz sine carrier
wave modulated by a 27 Hz sine wave. Within each trial, two seconds after the
onset of each trial, the subject was alerted with a vibration that lasted 200 ms. Then,
2 s later, a stimulation was applied for 5 s.
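The stimulation waveform, a 175 Hz carrier amplitude-modulated at 27 Hz, can be generated as below. The modulation depth, output sample rate, and exact AM form are assumptions, since the chapter does not specify them:

```python
import numpy as np

fs = 8000                        # output sample rate for the actuator (assumed)
dur = 5.0                        # 5-s stimulation period from the protocol
t = np.arange(int(fs * dur)) / fs

carrier_hz, mod_hz = 175.0, 27.0
# Full-depth amplitude modulation: envelope varies in [0, 1]
envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))
stimulus = envelope * np.sin(2.0 * np.pi * carrier_hz * t)

# AM concentrates energy at the carrier and at the two sidebands
# carrier +/- modulation frequency (here 148 Hz and 202 Hz)
spectrum = np.abs(np.fft.rfft(stimulus))
freqs = np.fft.rfftfreq(stimulus.size, d=1.0 / fs)
peak_hz = float(freqs[np.argmax(spectrum)])
```

The 27 Hz envelope is what drives the SSSEP, so the EEG response is expected at the modulation frequency (and its harmonics) rather than at the 175 Hz carrier itself.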
The power spectrum in channel 17 (localized on the sensory cortex) with respect to
27 Hz sustained stimulation is shown in Fig. 2(2). ERD/ERS across all channels are
shown in Fig. 2(3). The stimulation-evoked SSSEP response has a frequency-specific
feature [24, 25]; complementarily, the induced ERD/ERS oscillatory dynamics, which
also reflect somatosensory processing [26, 27], provide a feature that is not tied to
the stimulation frequency and is suitable for real-time decoding [28, 29, 30, 31].
Therefore, the combination of ERD/S and SSSEP provides a novel sensory cortex mapping.
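ERD/ERS is conventionally quantified as the percentage change of band power in an analysis window relative to a pre-stimulus reference interval, ERD% = (A − R)/R × 100 (Pfurtscheller and Lopes da Silva [22]). A toy computation on a synthetic 10 Hz oscillation whose amplitude halves after stimulus onset; in practice the signal would first be band-pass filtered and averaged over trials:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250
t = np.arange(4 * fs) / fs               # 4-s trial; stimulus onset at t = 2 s

amp = np.where(t < 2.0, 1.0, 0.5)        # oscillation amplitude halves
x = amp * np.sin(2 * np.pi * 10.0 * t) + 0.05 * rng.normal(size=t.size)

def band_power(segment):
    """Mean squared amplitude as a simple band-power estimate."""
    return float(np.mean(segment ** 2))

ref = band_power(x[: 2 * fs])            # reference (pre-stimulus) power
act = band_power(x[2 * fs:])             # activity (post-stimulus) power
erd_percent = (act - ref) / ref * 100.0  # negative = ERD, positive = ERS
```

Halving the amplitude quarters the power, so the expected value here is roughly −75%, a strong desynchronization.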

Fig. 2 Neural correlate of tactile sensation. 1 Electrode array on the cortex and number order. 2
Power spectrum at channel No. 17 with respect to 27 Hz tactile stimulation. 3 Event related
spectrum perturbation across all channels, non-significant parts were wiped out under bootstrap
significance level of P = 0.01. Time zero

4 Online BCI System Between the Brain and the Surgeon

Extending the traditional concept of BCI system designed for interaction between
the brain and external devices, our current BSI system establishes an interactive
channel between the brain and the surgeon, for assistance in precise brain surgery.
The system works in an online scenario, and extracts the neural activation infor-
mation with respect to the given task, and feedback as a dynamic activation map to
the surgeon. Moreover, the learning algorithm can help the surgeon decide which
lesioned brain tissue to remove. Using motor cortex mapping, with two to three
repeated motor tasks (duration of 10–15 s), the activation region can be precisely
identified together with its associated brain regions. Using sensory cortex mapping,
with sensory stimulations triggered by the surgeon, the activation region can be
identified by both the SSSEP response and oscillatory dynamics. This information
can then be provided to the surgeon. The surgeon will interact with the patient’s
brain through the BSI system online, improving the awake craniotomy surgery by
monitoring the brain activation using advanced BCI techniques.

5 Discussion and Long-Term Perspectives

We propose a novel concept of a BCI system for interaction between the brain and the
surgeon, with the ultimate purpose of assisting brain surgery, significantly reducing the
time needed during surgery mapping and reducing the medical costs. Our pilot studies
showed that MRCPs and oscillatory dynamics can be utilized for motor cortex map-
ping, at a single-trial level. Besides, the sensory cortex can be mapped by SSSEP and
oscillatory power decreases when applying a sustained rhythmic sensory stimulation
for a dozen seconds. Interestingly, we observed a phenomenon related to the ERS
response over the motor cortical area (Fig. 2(3), Channel 6), indicating that the motor
cortex is suppressed or stays in an idle state during sensation tasks. This may provide
a new way to perform motor cortex mapping using only sensory stimulation. The
concept of BCI for neurosurgical purposes will be of great value in actively engaging
the patient in the surgical procedure, unlike current methods. Our next challenge will be
the translation of this pilot experimental setting into a full clinical system.

References

1. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM (2002)


Brain-computer interfaces for communication and control. Clin Neurophysiol 113(6):767–791
2. Wolpaw JR, McFarland DJ (2004) Control of a two-dimensional movement signal by a
noninvasive brain-computer interface in humans. Proc Natl Acad Sci U. S. A. 101(51):17849–
17854
3. Daly JJ, Wolpaw JR (2008) Brain–computer interfaces in neurological rehabilitation. Lancet
Neurol 7(11):1032–1043
4. Kübler A, Nijboer F, Mellinger J, Vaughan TM, Pawelzik H, Schalk G, Mcfarland DJ,
Birbaumer N, Wolpaw JR (2005) Patients with ALS can use sensorimotor rhythms to operate a
brain-computer interface. Neurology 64(10):1775–1777
5. Pichiorri F, Morone G, Petti M, Toppi J, Pisotta I, Molinari M, Paolucci S, Inghilleri M,
Astolfi L, Cincotti F, Mattia D (2015) Brain-computer interface boosts motor imagery practice
during stroke recovery. Ann Neurol 77(5):851–865
6. Xu R, Jiang N, Mrachacz-Kersting N, Lin C (2014) A closed-loop brain-computer interface
triggering an active ankle-foot orthosis for inducing cortical neural plasticity. IEEE Trans
Biomed Eng 61(7):2092–2101
7. Jiang N, Gizzi L, Mrachacz-Kersting N, Dremstrup K, Farina D (2014) A brain-computer
interface for single-trial detection of gait initiation from movement related cortical potentials.
Clin Neurophysiol
8. Mrachacz-Kersting N, Jiang N, Stevenson AJT, Niazi IK, Kostic V, Pavlovic A,
Radovanovic S, Djuric-Jovicic M, Agosta F, Dremstrup K, Farina D (2015) Efficient
neuroplasticity induction in chronic stroke patients by an associative brain-computer interface.
J Neurophysiol. doi: 10.1152/jn.00918.2015
9. Behrens E, Schramm J, Zentner J, König R (1997) Surgical and neurological complications in
a series of 708 epilepsy surgery procedures. Neurosurgery 41(1):1–9
10. Jacobs J, Zijlmans M, Zelmann R, Chatillon C-É, Hall J, Olivier A, Dubeau F, Gotman J
(2010) High-frequency electroencephalographic oscillations correlate with outcome of
epilepsy surgery. Ann Neurol 67(2):209–220
11. Jeha LE, Najm I, Bingaman W, Dinner D, Widdess-Walsh P, Lüders H (2007) Surgical
outcome and prognostic factors of frontal lobe epilepsy surgery. Brain 130(2):574–584

12. Salanova V, Markand O, Worth R (2002) Temporal lobe epilepsy surgery: outcome,
complications, and late mortality rate in 215 patients. Epilepsia 43(2):170–174
13. Picht T, Schmidt S, Brandt S, Frey D, Hannula H, Neuvonen T, Karhu J, Vajkoczy P, Suess O
(2011) Preoperative functional mapping for rolandic brain tumor surgery: comparison of navigated
transcranial magnetic stimulation to direct cortical stimulation. Neurosurgery 69(3):581–589
14. Duffau H (2005) Lessons from brain mapping in surgery for low-grade glioma: insights into
associations between tumour and brain plasticity. Lancet Neurol 4(8):476–486
15. Schiffbauer H, Berger MS, Ferrari P, Freudenstein D, Rowley HA, Roberts TPL (2002)
Preoperative magnetic source imaging for brain tumor surgery: a quantitative comparison
with intraoperative sensory and motor mapping. J Neurosurg 97(6):1333–1342
16. Shinoura N, Yamada R, Kodama T, Suzuki Y, Takahashi M, Yagi K (2005) Preoperative
fMRI, tractography and continuous task during awake surgery for maintenance of motor
function following surgical resection of metastatic tumor spread to the primary motor area.
min-Minimally Invasive Neurosurg 48(2):85–90
17. Adcock JE, Wise RG, Oxbury JM, Oxbury SM, Matthews PM (2003) Quantitative fMRI
assessment of the differences in lateralization of language-related brain activation in patients
with temporal lobe epilepsy. Neuroimage 18(2):423–438
18. Szelényi A, Bello L, Duffau H, Fava E, Feigl GC, Galanda M, Neuloh G, Signorelli F, Sala F
(2010) Intraoperative electrical stimulation in awake craniotomy: methodological aspects of
current practice. Neurosurg Focus 28(2):E7
19. Lu J-F, Zhang H, Wu J-S, Yao C-J, Zhuang D-X, Qiu T-M, Jia W-B, Mao Y, Zhou L-F
(2013) Awake intraoperative functional MRI (ai-fMRI) for mapping the eloquent cortex: is it
possible in awake craniotomy? NeuroImage Clin 2:132–142
20. Niazi IK, Jiang N, Tiberghien O, Nielsen JF, Dremstrup K, Farina D (2011) Detection of
movement intention from single-trial movement-related cortical potentials. J Neural Eng 8(6):
66009
21. Xu R, Jiang N, Lin C, Mrachacz-Kersting N, Dremstrup K, Farina D (2014) Enhanced
low-latency detection of motor intention from EEG for closed-loop brain-computer
interface applications. IEEE Trans Biomed Eng 61(2):288–296
22. Pfurtscheller G, da Silva FH (1999) Event-related EEG/MEG synchronization and
desynchronization: basic principles. Clin Neurophysiol 110(11):1842–1857
23. Neuper C, Wörtz M, Pfurtscheller G (2006) ERD/ERS patterns reflecting sensorimotor
activation and deactivation. Prog Brain Res 159:211–222
24. Müller-Putz GR, Scherer R, Neuper C, Pfurtscheller G (2006) Steady-state somatosensory
evoked potentials: Suitable brain signals for brain-computer interfaces? IEEE Trans Neural
Syst Rehabil Eng 14(1):30–37
25. Breitwieser C, Kaiser V, Neuper C, Müller-Putz GR (2012) Stability and distribution of
steady-state somatosensory evoked potentials elicited by vibro-tactile stimulation. Med Biol
Eng Comput 50(4):347–357
26. Houdayer E, Labyt E, Cassim F, Bourriez JL, Derambure P (2006) Relationship between
event-related beta synchronization and afferent inputs: Analysis of finger movement and
peripheral nerve stimulations. Clin Neurophysiol 117(3):628–636
27. Houdayer E, Degardin A, Salleron J, Bourriez JL, Defebvre L, Cassim F, Derambure P (2012)
Movement preparation and cortical processing of afferent inputs in cortical tremor: an
event-related (de)synchronization (ERD/ERS) study. Clin Neurophysiol 123(6):1207–1215
28. Yao L, Meng J, Zhang D, Sheng X, Zhu X (2013) Selective sensation based brain-computer
interface via mechanical vibrotactile stimulation. PLoS One 8(6)
29. Yao L, Meng J, Zhang D, Sheng X, Zhu X (2014) Combining motor imagery with selective
sensation toward a hybrid-modality BCI. IEEE Trans Biomed Eng 61(8):2304–2312
30. Yao L, Meng J, Sheng X, Zhang D, Zhu X (2015) A novel calibration and task guidance
framework for motor imagery BCI via a tendon vibration induced sensation with kinesthesia
illusion. J Neural Eng 12(1):16005
31. Yao L, Sheng X, Zhang D, Jiang N, Farina D, Zhu X (2016) A BCI system based on
somatosensory attentional orientation. IEEE Trans Neural Syst Rehabil Eng 4320(c):1–1
A Sixteen-Command and 40 Hz Carrier
Frequency Code-Modulated Visual
Evoked Potential BCI

Daiki Aminaka and Tomasz M. Rutkowski

1 Introduction

A brain computer interface (BCI) is a system that utilizes brain activity to provide
direct communication between the mind and an external environment, without
involving any muscles or peripheral nervous system fibers [1]. Patients suffering
from locked-in syndrome (LIS) [2] can use BCIs to communicate with their
caretakers or complete various simple daily tasks (such as typing text messages,
controlling their environments or robotic applications, using the Internet and other
mainstream technologies, etc.) [3–6]. BCIs can provide practical real-world
communication tools for people with amyotrophic lateral sclerosis (ALS) or even
some disorders of consciousness (DOCs), since they require no movement, only
properly classified brainwaves [1, 7, 8].
We present a new BCI that can use EEG activity elicited by a code-modulated
visual evoked potential (cVEP) [4–6]. The work presented here is based on our
earlier cVEP approach, which we extend here to utilize 16 light sources. The cVEP
is a natural response that the brain generates when the user focuses attention on
a visual stimulus with specific code-modulated sequences, and several groups have
used the cVEP in BCIs [9–15]. The cVEP-based paradigm is a type of
stimulus-driven BCI, which relies on voluntary attention to specific stimuli to
produce distinct patterns of brain activity. BCIs based on cVEP and other
stimulus-driven attentional approaches typically require less training than BCIs

D. Aminaka · T.M. Rutkowski (✉)
BCI-Lab, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 308-8577, Japan
e-mail: tomek@bci-lab.info
URL: http://bci-lab.info/
D. Aminaka
Intel, Tsukuba, Japan
T.M. Rutkowski
Cogent Labs Inc, Tokyo, Japan

© The Author(s) 2017
C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_10
based on motor imagery, and work for nearly all healthy users. cVEP-based BCIs
can also provide a particularly high information transfer rate (ITR) with reduced
concerns about user annoyance and epileptic seizure induced by visual stimulation.
A problem raised in our previous research was a classification accuracy bias,
caused by the canonical correlation analysis (CCA) classifier, towards some of the
BCI commands. The training dataset for the classifier was created from cVEP
responses recorded while the user gazed at the first flashing LED pattern (the top
location in Fig. 1). The remaining training patterns were created by applying
circular shifts to the first LED's cVEP response. This method could cause an
accuracy drop due to the limited number of training examples. In this paper, we
propose a linear SVM-based classifier to improve the cVEP BCI accuracy and to
minimize any biases related to potential overfitting problems. We use RGB light-emitting
biases related to potential overfitting problems. We use the RGB light-emitting
diodes (LEDs) in order to evoke four types of cVEPs. We also utilize the higher
flashing pattern carrier frequency of 40 Hz and compare our results with the
classical setting of 30 Hz; the lower rate was chosen previously due to the limited
computer display refresh rate of 60 Hz [16]. Moreover, we propose to use the
chromatic green-blue stimulus [17] as a further extension of our project, and we
compare results with the classical monochromatic (white-black) arrangement.

Fig. 1 The cVEP BCI experimental set-up. The user wears an EEG cap with eight electrodes
covering the visual cortex. The g.USBamp connected to a g.TRIGbox captures EEG and trigger
signals, which are then preprocessed by OpenVibe on a first laptop. Segmented and filtered signals
are then streamed via UDP to a second laptop running a Python-based linear SVM classifier and a
speech-synthesis application that announces classification results as feedback
The approach presented here is an extension of our previously reported cVEP
BCI paradigms [13]. This time, we have reached the 16-command benchmark with
a very small visual angle of five degrees between the flashing light sources, as
shown in Fig. 1. Our main objective here was to explore the performance of this
system in a real-world scenario. In particular, we sought to assess accuracy across
nine healthy participants.

2 Methods

Brainwave responses in these online BCI experiments were recorded using eight
active g.LADYbird EEG electrodes connected to a g.USBamp portable amplifier
from g.tec medical instruments GmbH, Austria. Stimulus triggers from the
ARDUINO DUE that generates the 16 cVEP flashing sequences are captured by a
g.TRIGbox connected to the g.USBamp. The Ethical Committee of the Faculty of Engineering, Information and
Systems at the University of Tsukuba, Tsukuba, Japan approved the experiments.
The number-sequence spelling (1–16) experiments are conducted based on cVEP
responses [13]. The experimental paradigm is implemented in an OpenVibe
environment, which sends 5–60 Hz bandpass-filtered EEG signals (with a 48–52 Hz
notch filter also applied) via UDP to a linear support vector machine (SVM) program
implemented by our team in Python. Classification results are announced by a
synthetic voice, as shown in a YouTube video [18].

2.1 Visual Stimulus Generation

In this study, we use m–sequence encoded flashing patterns [9] to create sixteen
commands for the cVEP BCI. The m–sequence is a binary pseudorandom code,
which is generated using the following recursion:

    x(n) = x(n − p) ⊕ x(n − q),   (p > q),                     (1)

where x(n) is the nth element of the m–sequence obtained by the exclusive-or
(XOR) operation, denoted by ⊕ in Eq. (1), applied to the two preceding elements
at positions (n − p) and (n − q) in the string. In this project, p = 5 and q = 2 are
used. An initial binary sequence must be chosen to seed the recursion in Eq. (1):

    x_initial = [0, 1, 0, 0, 1].                               (2)

Finally, the 31-bit–long sequence is generated from the initial sequence shown in
Eq. (2).
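For concreteness, the recursion of Eqs. (1) and (2) can be sketched in a few lines of Python (the online classifier in this system was also implemented in Python); this is our illustration, not the authors' code:

```python
# Generate the 31-bit m-sequence of Eq. (1), x(n) = x(n - p) XOR x(n - q),
# with p = 5, q = 2, seeded by the initial register state of Eq. (2).
def m_sequence(p=5, q=2, seed=(0, 1, 0, 0, 1), length=31):
    bits = list(seed)
    for n in range(len(seed), length):
        bits.append(bits[n - p] ^ bits[n - q])
    return bits

bits = m_sequence()
print(bits)
# A full period of a degree-5 m-sequence holds 2**5 - 1 = 31 bits,
# 16 of which are ones; continuing the recursion simply repeats the period.
```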
An interesting feature of the m–sequence approach, which is very useful for the
cVEP-based BCI paradigm design, is its unique autocorrelation function, which has
only a single peak per period. If the m–sequence period is N, the normalized
autocorrelation function equals 1 at shifts 0, N, 2N, . . ., and −1/N otherwise. It is
also possible to apply a circular shift of the m–sequence, denoted by τ, to create a
set of m–sequences with correspondingly shifted autocorrelation functions. In this
study, the shift length was defined as τ = 2 bits. Fifteen additional sequences were
generated using this shifting procedure. During the online cVEP-based BCI
experiments, the sixteen LEDs flashed simultaneously, each using one of the
time-shifted m–sequences explained above.

2.2 Classification of cVEP Responses

A linear SVM classifier was used to compare accuracies with our previously and
successfully implemented CCA method [15]. In the training session, a single dataset
containing the cVEP responses to the first flashing LED was used. The remaining
fifteen cVEP responses were constructed by circularly shifting the first LED's
responses. We used the linear SVM classifier to identify the user's intended target
based on the EEG activity elicited by the flickering patterns. The cVEP response
processing and classification steps were as follows:
1. For classifier training, the EEG cVEP y1 obtained in response to the first
   m–sequence was captured. The remaining training patterns y_i, (i = 2, 3, . . ., 16),
   were constructed from the originally recorded y1 sequence as follows:

       y_i(t) = y_1(t − (i − 1)τ),                             (3)

   where τ was the circular shift and t indicated a position in the sequence.
2. The captured single-trial cVEPs y_{i,j} were averaged for each target i
   separately. The averaged responses ȳ_{i,l} were used for the linear SVM
   classifier training, with cross-validation to avoid overfitting. In this study,
   there were N = 60 training datasets and the number of averaged responses was
   M = 10. The averaging procedure was as follows:

       ȳ_{i,l} = (1/M) · Σ_{j=l}^{l+M−1} y_{i,j},              (4)

   where l = 1, 2, . . ., N − M + 1 was the dataset number.


3. For test classification, cVEPs recorded during the online BCI sessions were used.
Averaging 10 single cVEP sequences (each around 380 ms long) has so far been
necessary in the online experiments to remove non-cVEP-related EEG noise and
maintain reasonable final accuracies. Thus, each command could be generated in
about 3.8 s, with an additional one-second break added for comfortable eye-saccade
execution between flashing targets. The total single-command generation time of
4.8 s allows a communication rate of 12.5 commands/minute.
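The template construction of Eq. (3) and the sliding-window averaging of Eq. (4) can be sketched as follows; the signal `y1` here is synthetic (a sine wave plus noise), standing in for the recorded response to LED #1:

```python
import math
import random

random.seed(0)
T = 190                          # samples per code period (illustrative)
TAU = 2 * T // 31                # Eq. (3) circular shift, in samples

def delay(y, s):
    """Circular delay, y_i(t) = y_1(t - s)."""
    s %= len(y)
    return y[-s:] + y[:-s]

# Eq. (3): 16 templates as circular delays of the (synthetic) response y1.
y1 = [math.sin(2 * math.pi * 3 * t / T) for t in range(T)]
templates = [delay(y1, (i - 1) * TAU) for i in range(1, 17)]

# Eq. (4): average M consecutive noisy trials out of N recorded ones,
# yielding N - M + 1 averaged training examples per target.
N, M = 60, 10
trials = [[v + random.gauss(0, 0.5) for v in y1] for _ in range(N)]
averaged = [[sum(trials[j][t] for j in range(l, l + M)) / M for t in range(T)]
            for l in range(N - M + 1)]
print(len(templates), len(averaged))   # 16 templates, 51 averaged datasets
```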

2.3 Experimental Procedure

The users were asked to execute only micro eye saccades to gaze directly at one of
the 16 LEDs flashing the 31-bit–long m-sequences, with two-bit circular shifts
applied to differentiate the patterns (only commands #1 and #16 differ by a
single-bit shift). The target LEDs are arranged in a 4 × 4 matrix with 15 cm
spacing (a visual angle between LEDs of about 5°), 1.6 m away from the user's
eyes, as shown in Fig. 1. To avoid neighboring LEDs flashing similar patterns, the
following matrix placement is used:

    ⎛ 1   9   2  10 ⎞
    ⎜ 3  11   4  12 ⎟
    ⎜ 5  13   6  14 ⎟                                          (5)
    ⎝ 7  15   8  16 ⎠
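The point of the placement in Eq. (5) can be checked numerically: with command c assigned the shift 2·(c − 1) bits, adjacent LEDs end up at least 4 bits apart, versus only 2 bits for a naive row-major numbering. The comparison below is our illustration:

```python
# Minimum circular shift distance between the codes of any two adjacent
# (horizontally or vertically neighboring) LEDs in a 4x4 grid.
LAYOUT = [[1, 9, 2, 10],
          [3, 11, 4, 12],
          [5, 13, 6, 14],
          [7, 15, 8, 16]]

def min_neighbor_shift(grid, period=31):
    dists = []
    for r in range(4):
        for c in range(4):
            for dr, dc in ((0, 1), (1, 0)):
                if r + dr < 4 and c + dc < 4:
                    a = 2 * (grid[r][c] - 1)
                    b = 2 * (grid[r + dr][c + dc] - 1)
                    d = abs(a - b) % period
                    dists.append(min(d, period - d))
    return min(dists)

row_major = [[r * 4 + c + 1 for c in range(4)] for r in range(4)]
print(min_neighbor_shift(LAYOUT), min_neighbor_shift(row_major))  # 4 2
```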

3 Results

Results from three test sessions are presented in Fig. 2. They show that all nine
healthy participants could successfully use this sixteen-command cVEP BCI, which
was not annoying and presented flicker at the border of human perception. Only
one user, in the second trial, scored at the chance level of 6.25%. A single user in
the final session achieved 100% accuracy. The majority of the results were well
above chance level, and the grand mean accuracy over all experiments was 51%.
A video demonstrating a completely successful run (accurate spelling of all sixteen
digits) is available on YouTube [18].
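As a rough illustration (our computation, not a figure reported in the chapter), applying the standard Wolpaw information-transfer-rate formula to the grand-mean accuracy of 51% over 16 classes, at 4.8 s per command, gives roughly 1.1 bits per selection:

```python
import math

def wolpaw_bits_per_trial(n, p):
    # Standard Wolpaw ITR (bits per selection) for n classes at accuracy p,
    # assuming errors are spread evenly over the remaining n - 1 classes.
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = wolpaw_bits_per_trial(16, 0.51)
itr_per_min = bits * 60 / 4.8        # 12.5 selections per minute
print(round(bits, 2), round(itr_per_min, 1))   # 1.09 13.6
```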
[Fig. 2 plot: sixteen-command cVEP BCI user accuracies obtained with the SVM
classifier across the first, second and third trials; y-axis: accuracy (%), 0–100;
dashed line marks the 6.25% chance level; differences between trials were
non-significant]
Fig. 2 The cVEP BCI accuracy results from the three test sessions conducted by each subject in
the project. The boxplots visualize means and standard-error intervals. Additionally, all individual
results are depicted as small dots (nine subjects per trial). No significant differences were
observed among the test trials

4 Conclusions

Based on the results with healthy users, we expect that this cVEP BCI application
could also have significant potential in clinical settings, supporting different users in
need. We plan to extend the sixteen-command cVEP BCI with commands that map to
different musical performance and instrument-control applications.
During this study, we also observed that the participating users were more
motivated to practice, since the random sequence-based visual stimulation had a
very user-friendly appearance. This could be especially helpful for patients who
find some displays confusing, and could also increase appeal to healthy users. The
effects of practice have not been well explored with cVEP BCIs, and might lead to
improved performance. The approach presented here could conceivably further
incorporate an airborne ultrasonic display, such as our system that won the 2014
BCI Research Award [19]. Other promising future research directions include
additional communication and control applications, hybridization with other BCI
approaches, new ways to improve ITR, and validation with specific patient
groups.

References

1. Wolpaw J, Wolpaw EW (eds) (2012) Brain-computer interfaces: principles and practice.
Oxford University Press, New York, USA
2. Plum F, Posner JB (1966) The diagnosis of stupor and coma. FA Davis, Philadelphia, PA,
USA
3. Fazel-Rezai R, Allison BZ, Guger C, Sellers EW, Kleih S, Kuebler A (2012) P300
Brain-computer interface: current challenges and emerging trends. Front Hum Neuroeng 5:14.
doi: 10.3389/fneng.2012.00014
4. Rutkowski TM (2015) Brain-robot and speller interfaces using spatial multisensory brain-
computer interface paradigms. Front Comput Neurosci Conf Abstr 14. http://www.frontiersin.
org/10.3389/conf.fncom.2015.56.00014/event_abstract
5. Rutkowski TM, Shinoda H (2015) Airborne ultrasonic tactile display contactless brain-
computer interface paradigm. Front Hum Neuroscience 16:3–1. http://www.frontiersin.org/
human_neuroscience/10.3389/conf.fnhum.2015.218.00016/full
6. Rutkowski T (2016) Robotic and virtual reality BCIs using spatial tactile and auditory
odd-ball paradigms. Front Neurorobotics 10:20. http://journal.frontiersin.org/article/10.3389/
fnbot.2016.00020
7. Guger C, Spataro R, Allison BZ, Heilinger A, Ortner R, Cho W, La Bella V (2017) Complete
locked-in and locked-in patients: command following assessment and communication with
vibro-tactile P300 and motor imagery brain-computer interface tools. Front Neurosci 11:251.
http://journal.frontiersin.org/article/10.3389/fnins.2017.00251/full
8. Lesenfants D, Chatelle C, Saab J, Laureys S, Noirhomme Q. Chapter 6: Neurotechnological
communication with patients with disorders of consciousness. In: Neurotechnology and direct
brain communication: new insights and responsibilities concerning speechless but
communicative subjects, p 85
9. Bin G, Gao X, Wang Y, Li Y, Hong B, Gao S (2011) A high-speed BCI based on code
modulation VEP. J Neural Eng 8(2):025015
10. Waytowich N, Krusienski D (2015) Spatial decoupling of targets and flashing stimuli for visual
brain-computer interfaces. J Neural Eng 12(3):036006. doi: 10.1088/1741-2560/12/3/036006
11. Reichmann H, Finke A, Ritter H (2016) Using a cVEP-based brain-computer interface to
control a virtual agent. IEEE Trans Neural Syst Rehabil Eng 24(6):692–699. doi: 10.1109/
TNSRE.2015.2490621
12. Kapeller C, Kamada K, Ogawa H, Prueckl R, Scharinger J, Guger C (2014) An electrocor-
ticographic BCI using code-based VEP for control in video applications: a single-subject
study. Front Syst Neurosci 8:139. http://journal.frontiersin.org/article/10.3389/fnsys.2014.
00139/full
13. Aminaka D, Makino S, Rutkowski TM (2015) Classification accuracy improvement of
chromatic and high-frequency code-modulated visual evoked potential-based BCI. In: Guo Y,
Friston K, Aldo F, Hill S, Peng H (eds) Brain informatics and health. Lecture Notes in
Computer Science, vol 9250. Springer International Publishing, London, UK, pp 232–241.
http://dx.doi.org/10.1007/978-3-319-23344-4_23
14. Aminaka D, Shimizu K, Rutkowski TM (2016) Multiuser spatial cVEP BCI direct brain-robot
control. In: Proceedings of the Sixth International Brain-Computer Interface Meeting: BCI
Past, Present, and Future. Asilomar Conference Center, Pacific Grove, CA USA, Verlag der
Technischen Universitaet Graz, 2016, p 70
15. Aminaka D, Makino S, Rutkowski TM (2015) Chromatic and high-frequency cVEP-based
BCI paradigm. In: 2015 37th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC). IEEE Engineering in Medicine and Biology Society.
Milan, Italy. IEEE Press, p 1906–1909. http://arxiv.org/abs/1506.04461
16. Aminaka D, Makino S, Rutkowski TM (2014) Chromatic SSVEP BCI paradigm targeting the
higher frequency EEG responses. In: Asia-Pacific Signal and Information Processing
Association, 2014 Annual Summit and Conference (APSIPA), Angkor Wat, Cambodia, p 1–
7. http://dx.doi.org/10.1109/APSIPA.2014.7041761
17. Sakurada T, Kawase T, Komatsu T, Kansaku K (2014) Use of high-frequency visual stimuli
above the critical flicker frequency in a SSVEP-based BMI. Clin Neurophysiol
18. Rutkowski TM cVEP BCI with 16 commands and 40 Hz carrier frequency, YouTube video
available online. https://youtu.be/stS3Qz6ln9E
19. Hamada K, Mori H, Shinoda H, Rutkowski TM (2015) Airborne ultrasonic tactile display
BCI. In: Brain-computer interface research. Springer International Publishing, pp 57–65
Trends in BCI Research I:
Brain-Computer Interfaces for Assessment
of Patients with Locked-in Syndrome
or Disorders of Consciousness

Christoph Guger, Damien Coyle, Donatella Mattia, Marzia De Lucia,
Leigh Hochberg, Brian L. Edlow, Betts Peters, Brandon Eddy,
Chang S. Nam, Quentin Noirhomme, Brendan Z. Allison
and Jitka Annen

1 Introduction

Brain-computer interface (BCI) technology analyzes brain activity to control
external devices in real time. In addition to communication and control applica-
tions, BCI technology can also be used for the assessment of cognitive functions of
patients with disorders of consciousness (DOC) or locked-in syndrome (LIS) [1–3]
(Ortner et al., in press). The top-right corner of Fig. 1 reflects healthy persons
with normal motor responses and cognitive functions. In the bottom-left corner are
coma patients without these functions. Patients in the unresponsive wakefulness

C. Guger (✉) · B.Z. Allison
g.tec Guger Technologies OG, Herbersteinstrasse 60, 8020 Graz, Austria
e-mail: guger@gtec.at
D. Coyle
Faculty of Computing and Engineering, School of Computing and Intelligent Systems,
Magee Campus, Ulster University, Northland Road, Derry, Northern Ireland BT48 7JL, UK
D. Mattia
Neuroelectrical Imaging and BCI Lab, Fondazione Santa Lucia, IRCCS, Via Ardeatina, 306,
00179 Rome, Italy
M. De Lucia
Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neurosciences,
Lausanne University Hospital (CHUV) and University of Lausanne, chemin de Mont-Paisible
16, 1011 Lausanne, Switzerland
e-mail: marzia.de-lucia@chuv.ch
L. Hochberg · B.L. Edlow
Department of Neurology, Massachusetts General Hospital, 175 Cambridge Street, Boston,
MA 02114, USA

© The Author(s) 2017
C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_11
Fig. 1 Motor responses and cognitive functions for coma, unresponsive wakefulness state
(UWS), minimally conscious state (MCS), locked-in syndrome (LIS), and complete locked-in
syndrome (CLIS)

state (UWS) and minimally conscious state (MCS) may have conscious
awareness but no way to convey their awareness through any kind of movement.
These patients should be carefully assessed to make sure that physicians, families
and caregivers are aware of their cognitive functions. Cognitive assessment is also
important for individuals with LIS, particularly CLIS (complete LIS), to understand
which cognitive functions are remaining. Assessment may reveal whether patients
understand instructions and conversations, and whether they may be able to
communicate.

B. Peters · B. Eddy
Oregon Health & Science University, 707 SW Gaines St, #1290,
Portland, OR 97239, USA
C.S. Nam
Brain-Computer Interface (BCI) Lab, North Carolina State University,
Raleigh NC 27695, USA
Q. Noirhomme · J. Annen
Coma Science Group, GIGA Research and Neurology Department, University
Hospital of Liège, Liège, Belgium
People with locked-in syndrome (LIS) exhibit quadriplegia and anarthria, but
may retain some voluntary movement of the eyes, eyelids, or other body parts. LIS
is not a DOC, as persons with LIS are both conscious and aware. However, people
with CLIS have no voluntary motor function and are thus unable to communicate or
respond to behavioral testing, leading to frequent and often prolonged misdiagnosis
[2, 4]. While people with CLIS may retain relatively normal cognitive functioning,
as shown in Fig. 1, their cognitive abilities and conscious awareness may also be
impaired for various reasons. Furthermore, since people with CLIS cannot move or
communicate, they may be unable to inform doctors, family and friends that they
are in fact able to understand them and wish to play an active role in decisions
affecting their lives.
The potential of BCI technology to support more accurate and detailed differ-
ential diagnosis among DOC and LIS patients is also apparent from the strong
recent interest from the BCI community. In addition to numerous publications from
different groups (reviewed in [2]), there was considerable interest in this topic at the
Sixth International BCI Meeting in 2016, including a workshop and day-long
Satellite Event that presented the latest advances. In 2017 alone, this topic has been
or will be presented in at least a dozen major conferences to our knowledge,
including the Seventh International BCI Conference, Society for Neuroscience
annual conference, and Human-Computer Interaction International (HCII) annual
conference. This research direction was also recognized in our most recent book in
this series [3].
Thus, the use of BCI technology for improved diagnosis and related goals for
persons with DOC and LIS has become a prominent trend within the BCI research
community. The primary goal of this article is to summarize new research results
from several top groups in this field, along with commentary and future directions.
First, we describe a commonly used platform for DOC assessment and communi-
cation called mindBEAGLE.

2 DOC Assessment and Communication Platform from g.tec

Some of the results presented here used the mindBEAGLE system. mindBEAGLE
is an electro-physiological test battery for DOC and LIS patients that can use four
approaches to assess conscious awareness: (i) auditory evoked potentials (AEP);
(ii) vibro-tactile evoked potentials with 2 tactors—VT2; (iii) vibro-tactile evoked
potentials with 3 tactors—VT3 and (iv) motor imagery (MI). The system consists of
a biosignal amplifier, an EEG cap with active electrodes, the BCI software that
analyzes the data in real-time, in-ear phones for the auditory stimulation, and 3
vibro-tactile stimulators (tactors). In the AEP approach, a sequence of low
(non-target) and high (target) tones is presented to the patient and evoked potentials
are calculated. The BCI classifier attempts to identify the target tone based on EEG
data, leading to accuracies between 0 and 100%. Chance accuracy in this task is
12.5%, and the threshold for significant communication depends on the number of
trials, but high accuracy may reflect conscious awareness. In the VT2 approach, one
tactor is mounted on the right hand (target) and receives 10% of the stimuli and one
tactor is mounted on the left hand and receives 90% of the stimuli (non-target).
Then the patient has to silently count the right hand stimuli to elicit a P300 response
that the BCI system can detect. In the VT3 approach, one tactor is mounted on the
left hand (10% of stimuli), one tactor is mounted on the right hand (10% of stimuli)
and one tactor is mounted on the spine or leg (80% of the stimuli) [5, 6]. Now the
patient can count the stimuli on the left hand to say YES and can say NO by
counting right hand stimuli. The motor imagery paradigm verbally instructs the
patient to imagine either left or right hand movements and the BCI system classifies
the data [7].
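The VT2/VT3 stimulus proportions above (10% per target hand, with the remainder on the third tactor in VT3) can be sketched as a pseudorandom schedule. The proportions come from the text; the scheduling code and the helper name `vt3_sequence` are our illustration, not the mindBEAGLE implementation:

```python
import random

def vt3_sequence(n_stimuli=200, seed=1):
    # 10% of stimuli to the left hand, 10% to the right hand, and the
    # remaining 80% to the distractor tactor (spine or leg), shuffled
    # into a pseudorandom presentation order.
    n_left = n_right = n_stimuli // 10
    seq = (['left'] * n_left + ['right'] * n_right
           + ['distractor'] * (n_stimuli - n_left - n_right))
    random.Random(seed).shuffle(seq)
    return seq

seq = vt3_sequence()
print(seq.count('left'), seq.count('right'), seq.count('distractor'))  # 20 20 160
```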
The top of Fig. 2 shows results for an UWS patient with no reliably discrim-
inable Evoked Potentials (EPs) and a very low BCI accuracy for AEP and VT2
testing. Although this patient shows some differences in the EPs, the intertrial
variability was very high. The bottom of Fig. 2 shows the results for an MCS-
patient, which look like results from a healthy control. These results indicate that
this MCS- patient could follow the experimenter’s instructions, and thus is able to
understand conversations.
After a successful assessment run, mindBEAGLE can also be used for com-
munication. In this case, the patient is asked a question and can answer YES or NO
by attending to vibrations of either the left or right tactor. Similarly, the patient can
use the MI approach by imagining a left or right hand movement to say YES and
NO.
The testing battery gives important information about a patient’s cognitive
functions and ability to follow conversations. Furthermore, it can allow commu-
nication and identify fluctuations in cognitive function. Of special importance is
that mindBEAGLE provides a standardized approach for testing patients. Currently
the system is being validated in 10 centers in China, Germany, Austria, Italy,
Belgium, France, Spain and the USA.
One related direction that was very recently published extended mindBEAGLE
technology to provide communication for persons with complete locked-in syn-
drome (CLIS). We showed that two of three patients with CLIS could communicate
using the mindBEAGLE system [8]. This is an exciting development, because BCI
technology had not yet been well validated with persons with CLIS. Consistent with
the results presented above, the MI approach was not effective in the CLIS patients,
but vibrotactile approaches were. We are now working with additional patients and
considering new paradigms to improve communication.
[Fig. 2 panels: UWS patient (top), MCS patient (bottom)]

Fig. 2 AEP and VT2 results for one UWS (unresponsive wakefulness state) and one MCS
(minimally conscious state) patient. The top curves show the classification accuracy on the
y-axis and the number of target stimuli on the x-axis. The bottom curves show the EPs for target
(green) and non-target (blue) stimuli. Green shaded areas reflect a significant difference between
target and non-target EPs
3 DOC Assessment at Ulster

Initial research at Ulster [9] reported successful results with BCI-based motor
imagery (MI) training in a patient who had MCS using sensorimotor rhythm
(SMR) feedback. This result suggested that feedback could raise patients’ awareness
about the potential for BCI technology to impact their conditions, and could be
effective in a detection of awareness protocol involving motor imagery BCIs.
Subsequently, four MCS patients (3 male; age range, 27–53 yr; 1–12 yr after brain
injury) participated in multiple sessions with sensorimotor rhythm (SMR) feedback,
to determine whether BCI technology can be used to increase the discriminability of
SMR modulations [10, 11]. The study had three objectives: (1) To assess awareness
in subjects in MCS (initial assessment); (2) To determine whether these subjects may
learn to modulate SMR with visual and/or stereo auditory feedback (feedback ses-
sions) and (3) To investigate musical feedback for BCI training and as cognitive
stimulation/interaction technology in disorders of consciousness (DOC). Initial
assessment included imagined hand movement or toe wiggling to activate sensori-
motor areas and modulate SMR in 90 trials, following the protocol described in [12].
Within-subject and within-group analyses were performed to evaluate significant
brain activations. A within-subject analysis was performed involving multiple BCI
training sessions to improve the user’s ability to modulate sensorimotor rhythms
through visual and auditory feedback. The sessions took place in hospitals, homes of
subjects, and a primary care facility. Awareness detection was associated with
sensorimotor patterns that differed for each motor imagery task. BCI performance
was determined from mean classification accuracy of brain patterns using a BCI
signal processing framework with a leave-one out cross-validation [10]. All subjects
demonstrated significant and appropriate brain activation during the initial assess-
ment without feedback. SMR modulation was observed in multiple sessions with
auditory and visual feedback. Figure 3 shows results for subject E (19 sessions),
showing that accuracy improves over time with auditory but not visual feedback.
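A leave-one-out accuracy estimate of the kind described above can be sketched as follows; the synthetic band-power features and the nearest-class-mean rule are our simplified stand-ins for the full BCI signal-processing framework of [10]:

```python
import random

random.seed(3)
# Synthetic log band-power features for two imagery classes (e.g. hand vs toes).
data = ([([random.gauss(1.0, 0.5), random.gauss(-1.0, 0.5)], 0) for _ in range(45)]
        + [([random.gauss(-1.0, 0.5), random.gauss(1.0, 0.5)], 1) for _ in range(45)])

def loo_accuracy(data):
    # Leave-one-out cross-validation: hold out each trial in turn, fit class
    # means on the rest, and classify the held-out trial by nearest mean.
    correct = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        means = {}
        for label in (0, 1):
            feats = [f for f, lab in train if lab == label]
            means[label] = [sum(col) / len(feats) for col in zip(*feats)]
        pred = min((sum((a - b) ** 2 for a, b in zip(x, m)), lab)
                   for lab, m in means.items())[1]
        correct += (pred == y)
    return correct / len(data)

print(loo_accuracy(data))   # well above the 0.5 chance level
```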
In conclusion, the EEG-based assessment showed that patients who had MCS
may have the capacity to operate a simple BCI-based communication system, even
without any detectable volitional control of movement. No EEG-based awareness
detection study prior to this research provided real-time feedback to the
patient during the assessment. This research was the first to demonstrate stereo
auditory feedback of SMR in MCS patients, allowing the patient to hear the target
and feedback, which could be useful in patients who cannot use visual feedback. As
many DOC patients have limited eye gaze control and/or other visual system
impairments, visual feedback is often unsuitable for them. We used musical audi-
tory feedback in the form of a palette of different musical genres. This enabled us to
open a dialogue with the care teams/families on musical preference, discussed in the
presence of the patient, to enhance attentiveness and engagement. Anecdotal evi-
dence indicates that musical feedback could help engage DOC sufferers during BCI
training and improve BCI performance. A quote from one of the families of par-
ticipants in our study is published in a recent report [13].
Fig. 3 BCI accuracy for patient E with MCS. Top row BCI accuracy with visual feedback
(moving ball or computer game) and baseline accuracy without feedback. Middle and bottom rows
BCI accuracy with auditory feedback (pink noise, reggae, jazz, hip hop, electronic music, classical
music, rock, country, …) and baseline accuracy without feedback. The number of trials in each run
after artifact rejection is indicated after the type of feedback (pink noise, hip hop, …). Significant
differences between baseline and feedback are indicated with the following notation: ***P ≤ .005;
**P ≤ .05; *P ≤ .1

4 DOC Neurophysiological Assessment at FSL

The stability of event-related potentials (ERPs) is essential for efficient and
effective ERP-based BCI systems, especially when a BCI is applied in a challenging
clinical condition such as DOC. In this regard, several factors can limit (if not
prevent) the use of BCI technology in patients diagnosed with DOC, such as
fluctuations of vigilance, attention span and abnormal brain activity due to brain
damage (Giacino et al. [21]), to name a few. In a recent study conducted at
Fondazione Santa Lucia (Rome), Aricò and colleagues [14] showed a significant
correlation between the magnitude of the jitter in P300 latency and the performance
achieved by healthy subjects in controlling a visual covert attention P300-based
BCI. In particular, the higher the P300 latency jitter, the lower the BCI accuracy.
We speculated that the covert attention modality increases the variability of the time
needed to perceive and categorize the visual stimuli.
Currently, we are conducting a neurophysiological (EEG) screening in patients
with DOC or functional locked-in syndrome (LIS) who are consecutively admitted
to the Post-Coma Unit of the Fondazione Santa Lucia for their standard care
rehabilitation. As part of this neurophysiological screening, patients are presented
with a simple auditory P300 oddball paradigm, which consists of a binaural stream
of 420 standard high tones (440 + 880 + 1760 Hz) and 60 deviant complex low
tones (247 + 494 + 988 Hz) pseudo-randomly interspersed (50 ms stimulus
duration; 850 ms inter-stimulus interval). Stimuli are first presented in a passive
condition (just listening to the auditory stimuli) and then in an active condition (mentally
counting the deviant tones). EEG signals are recorded from 31 electrode positions
(512 Hz sample rate) with a commercial EEG system. A preliminary (retrospective) analysis of the morphological features (amplitude and latency) of the main ERP waveforms (at Cz) was performed on a convenience sample of 13 admitted DOC patients (9 males; mean age = 47 ± 16; mean time from event = 24 ± 33.5; 5 with unresponsive wakefulness syndrome, UWS; 8 in minimally conscious state, MCS) in their subacute and chronic stages. A wavelet transform method was applied to identify the P300 waveform peak in single trials and thus to assess the magnitude of the latency jitter phenomenon [14]. We found significantly higher values of P300 jitter in UWS and MCS patients compared to a control data set (12 healthy subjects; 6 males; mean age = 30.3 ± 6.5) (p < .01), for both the active and passive paradigms. Moreover, in the active condition, UWS patients showed significantly higher jitter values compared to MCS patients (p < .01) and to the control group (p < .001). The MCS data also exhibited significantly higher jitter than the control data set (p < .05). A representative case is illustrated in Fig. 4. These preliminary findings prompted us to apply this analysis to a larger cohort of DOC patients to validate this measurement as indicative of different DOC states.
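The oddball stream described above (420 standard and 60 deviant complex tones, 50 ms duration, 850 ms inter-stimulus interval) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual FSL stimulation code; the audio sample rate and the no-adjacent-deviants constraint are assumptions not stated in the text.

```python
import numpy as np

FS = 44100          # audio sample rate (assumption; not given in the text)
TONE_DUR = 0.050    # 50 ms stimulus duration
ISI = 0.850         # 850 ms inter-stimulus interval
N_STD, N_DEV = 420, 60

def complex_tone(freqs, dur=TONE_DUR, fs=FS):
    """Sum of equal-amplitude sinusoids, normalised to +/-1."""
    t = np.arange(int(dur * fs)) / fs
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return tone / np.max(np.abs(tone))

standard = complex_tone([440.0, 880.0, 1760.0])   # high complex tone
deviant = complex_tone([247.0, 494.0, 988.0])     # low complex tone

def oddball_sequence(n_std=N_STD, n_dev=N_DEV, rng=None):
    """Pseudo-random order with no two deviants back-to-back
    (a common oddball constraint; the chapter does not specify one)."""
    rng = np.random.default_rng(rng)
    while True:
        seq = np.array([0] * n_std + [1] * n_dev)
        rng.shuffle(seq)
        if not np.any(seq[1:] & seq[:-1]):  # reject adjacent deviants
            return seq

seq = oddball_sequence(rng=0)                     # 0 = standard, 1 = deviant
onsets = np.arange(len(seq)) * (TONE_DUR + ISI)   # stimulus onset times, s
```

In the passive condition the stream is simply played back; in the active condition the patient is asked to count the 60 deviants.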

Fig. 4 Data from a representative MCS patient (male, 54 years old, 11 months after a traumatic
brain injury). a Average of epochs related to deviant (red) and standard (blue) stimuli. Solid lines
and dotted lines reflect the wavelet filtered and non-filtered potentials, respectively. b Single trial
epochs associated with deviant stimuli filtered with the wavelet based method. In this case, the
P300 peaks exhibited a range of latencies between 350 and 500 ms
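The single-trial latency estimation behind Fig. 4 can be illustrated on synthetic data. Here a simple zero-phase low-pass filter stands in for the wavelet method of [14] (a deliberate simplification), and all signal parameters (bump amplitude, noise level, epoch length) are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512                      # EEG sample rate used in the screening
rng = np.random.default_rng(1)

# --- simulate single-trial deviant epochs with a jittered P300-like peak ---
n_trials, n_samp = 60, FS     # 1 s epochs (hypothetical epoch length)
t = np.arange(n_samp) / FS
true_lat = rng.uniform(0.35, 0.50, n_trials)      # latency range as in Fig. 4
trials = np.stack([
    8.0 * np.exp(-((t - lat) ** 2) / (2 * 0.04 ** 2))  # P300-like bump, uV
    + rng.normal(0.0, 2.0, n_samp)                     # background EEG noise
    for lat in true_lat
])

# --- zero-phase low-pass filter each trial (stand-in for the wavelet filter) ---
b, a = butter(4, 8 / (FS / 2), btype="low")
filtered = filtfilt(b, a, trials, axis=1)

# --- peak picking in a 300-600 ms search window; jitter = SD of latencies ---
win = (t >= 0.30) & (t <= 0.60)
lat_est = t[win][np.argmax(filtered[:, win], axis=1)]
jitter_ms = 1000 * lat_est.std()
```

A larger `jitter_ms` corresponds to a more smeared average ERP, which is why high jitter degrades P300-based BCI accuracy.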
Trends in BCI Research I: Brain-Computer Interfaces … 113

Promisingly, we also found a significant negative correlation (r = −.055, p < .05) between the jitter values observed during the active listening condition in both UWS and MCS patients and their JFK Coma Recovery Scale-Revised (CRS-R) scores; that is, patients with lower CRS-R scores had higher jitter values [15].

5 DOC Prediction at CHUV

Early prediction of comatose patients' outcome is currently based on a battery of clinical examinations that are repeatedly performed during the first days of coma (Rossetti et al., 2010). These include the evaluation of brainstem reflexes, the motor
response, and the electroencephalographic recordings while stimulating patients
with arousing stimuli. All these examinations are highly predictive of poor out-
come, i.e. death or vegetative state. In this context, the development of markers
identifying patients with good outcomes remains challenging. Recently, the neural
responses to auditory stimuli as measured by electroencephalography (EEG) over
the first days of coma provided promising results for predicting patients’ chance of
surviving (Tzovara et al., 2013). This test consists of recording EEG responses to
auditory stimuli during the first and second days of coma using a classic mismatch
negativity (MMN) paradigm, in which a sequence of identical sounds is rarely
(30% of the time) interrupted by a sound that differs from the standard stimulus in
terms of pitch, location or duration. The differential response to standard and
deviant sounds is measured via a single-trial decoding algorithm, and its performance is evaluated using the area under the Receiver Operating Characteristic curve (AUC) (Tzovara et al., 2012). The higher the value of the AUC, the more accurate
the auditory discrimination between standard and deviant sounds. The test showed
that an improvement in auditory discrimination between the first and second days of
coma is only observed in survivors. Remarkably, the auditory discrimination per se
during the first or second recording was not as predictive as the progression. The
test has been extensively validated in a cohort of postanoxic comatose patients
treated with therapeutic hypothermia, including 94 individuals (Tzovara et al.,
2016). Results (see Fig. 5) showed a positive predictive power of 93%, with 95%
confidence interval 5 0.77–0.99 when excluding comatose patients with status
epilepticus either during the first or the second day of coma.
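The decision rule — awakening predicted when single-trial auditory discrimination improves from the first to the second day of coma — can be illustrated with a rank-based AUC on synthetic decoder scores. This is a toy sketch, not CHUV's Gaussian-mixture decoder; the score distributions and trial counts are invented.

```python
import numpy as np

def auc(pos, neg):
    """Area under the ROC curve via pairwise comparison:
    P(score_deviant > score_standard), with ties counted as 1/2."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

rng = np.random.default_rng(0)
n = 200  # hypothetical number of single trials per sound class

# day 1: weak separation of deviant vs. standard decoder scores
auc_day1 = auc(rng.normal(0.2, 1.0, n), rng.normal(0.0, 1.0, n))
# day 2: clearer separation, i.e. improved neural auditory discrimination
auc_day2 = auc(rng.normal(0.8, 1.0, n), rng.normal(0.0, 1.0, n))

improved = auc_day2 > auc_day1  # the favourable sign observed in survivors
```

The point of the sketch is the comparison across days: either AUC alone is less informative than whether discrimination progresses between the two recordings.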
In addition to the prediction of awakening, recent results revealed that the
progression of auditory discrimination during coma provides early indication of
future recovery of cognitive functions in survivors (Juan et al., 2016). Current
validation is ongoing in other comatose patients treated with different therapeutic
strategies and at multiple hospital sites. This test will be further used in a longi-
tudinal study targeting patients who exhibit an improvement in auditory discrimi-
nation but do not wake up within the first days or weeks after coma onset. These
patients could first regain a minimal level of consciousness before waking up, and could be considered for further EEG-based evaluation using the mindBEAGLE system and related experimental protocols.

Fig. 5 Schematic representation of the EEG-based test for predicting comatose patients' chance of awakening. a Neural responses to standard and deviant sounds during an MMN paradigm are recorded with a clinical EEG system at the comatose patient's bedside. EEG measurements are represented as a vector of voltage values across the whole electrode montage. b Time-point by time-point voltage topographies are modeled with a mixture of Gaussians, and the corresponding posterior probabilities are used to label EEG single trials as responses to standard or deviant sounds. c The performance of the decoding algorithm is quantified using the area under the Receiver Operating Characteristic curve (AUC), computed separately for each recording and patient. The AUC value is indicative of auditory discrimination at the neural level. d Based on the decoding performance obtained from the two recordings of the first two days of coma, one can compute the progression of auditory discrimination and predict patients' chances of awakening, as an improvement is typically observed in survivors (positive predictive power 93%, 95% confidence interval 0.77–0.99)

6 Acute DOC Assessment at Massachusetts General Hospital (MGH)

Researchers at MGH have launched a pilot study to test the feasibility of using the
mindBEAGLE BCI device in the Neurosciences Intensive Care Unit (NeuroICU).
In addition to demonstrating the feasibility of implementing BCI in the acute
NeuroICU setting, this pilot study (ClinicalTrials.gov Identifier NCT02772302)
aims to determine if mindBEAGLE neurophysiological markers of cognitive
function correlate with bedside behavioral assessments of consciousness. The MGH
team recently enrolled its first patient (MGH1), a 72-year-old man with a history of
hypertension who was admitted to the NeuroICU with a cerebellar hemorrhage that
caused brainstem compression and coma. His NeuroICU course was complicated
by intraventricular hemorrhage and hydrocephalus requiring bilateral external
ventricular drains, as well as renal failure requiring hemodialysis. At the time of the
BCI study, which was performed on his 39th day in the NeuroICU, his Glasgow
Coma Scale score was 6T (Eyes = 4, Motor = 1, Verbal = 1T) and his behavioral
evaluation with the CRS-R indicated a diagnosis of UWS (Auditory = 1, Visual = 1,
Motor = 0, Oromotor/Verbal = 1, Communication = 0, Arousal = 2). EEG elec-
trodes were placed manually since the presence of a left frontal external ventricular
drain and a right frontal surgical wound from a recent endoscopic third ventricu-
lostomy prevented application of an EEG cap. During the study, which was per-
formed without complication and without any increase in intracranial pressure, the
patient remained on mechanical ventilation via tracheostomy. No sedation was
administered during or prior to the study.
The mindBEAGLE device detected P300 responses with 70% accuracy during the VT2 paradigm, an observation suggesting that the patient was able to attend to salient stimuli. Although this VT2 result may not definitively prove conscious awareness, it suggests that the patient may be capable of higher-level cognitive processing. Notably, the mindBEAGLE device detected only 30% accuracy during the AEP task, 0% during the VT3 task, and chance accuracy during a motor task, suggesting that the patient's level of responsiveness may have been fluctuating. Within one month of the mindBEAGLE NeuroICU evaluation, the patient began to track visual stimuli, indicating a transition from UWS to MCS.

7 LIS Assessment at Oregon Health and Science University (OHSU)

Researchers at OHSU conducted a small pilot study (N = 2) investigating the effects of custom MI prompts on assessment and communication accuracy with the
mindBEAGLE MI paradigm for people with LIS. It was hypothesized that custom
prompts based on well-rehearsed movements [16], using a first-person perspective,
and incorporating visual, auditory, and tactile sensations associated with the
movement [17], would improve performance compared to a generic prompt.
Patient P1 had incomplete LIS secondary to a brainstem stroke, and could
communicate using eye movements. P2 had CLIS or possible DOC secondary to
advanced amyotrophic lateral sclerosis. Both completed 12 weekly mindBEAGLE
MI sessions, each including a 60-trial assessment run and a communication trial of
10 yes/no questions with known answers (e.g. “Is your name Bob?”). In a
multiple-baseline AB design, participants were given a generic MI prompt (imagine
touching the fingers to the thumb on the left or right hand, as described in the
mindBEAGLE manual) in the first 6 or 7 sessions, and a custom prompt (e.g.
imagine picking guitar strings with the right hand and moving between chord
positions with the left) in the remaining sessions. Custom prompts were based on
activities participants had enjoyed when able-bodied, as reported by the participant
himself (P1) or a family member (P2), and consisted of a guided imagery script with
sensory elements (e.g. the feel of the guitar strings or the sound of the notes).
During assessment, participants were given auditory prompts to imagine either the
left- or right-sided movement for each trial. To answer questions, they were
instructed to imagine the left-sided movement for YES and right-sided for NO.
Results are presented in Fig. 6. Participants' assessment accuracy stayed near chance levels and was similar for the generic (P1: mean = 51.8 ± 4.15%, P2: mean = 41.7 ± 17.22%) and custom (P1: mean = 51.2 ± 4.92%, P2: mean = 50.0 ± 10.95%) prompt conditions. Neither participant demonstrated a significant assessment accuracy level (≥ 66.2%) in any session, and performance did not significantly improve with repeated practice. Accuracy in responding to YES/NO questions was more variable, perhaps due to the smaller number of trials, and again stayed near chance levels. Interestingly, one participant reached 90% accuracy in one YES/NO run, which suggests awareness during this experiment. The custom prompt did not appear to improve performance on either task, as accuracy scores under that condition remained within the expected range based on scores achieved with the generic prompt.
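The notion of a significance cut-off for a 60-trial assessment run can be derived from a one-sided binomial test against chance (50%). Note that this plain test yields a lower cut-off than the 66.2% criterion quoted above, which presumably reflects mindBEAGLE's own, stricter evaluation; the sketch below only illustrates the principle.

```python
from scipy.stats import binom

def significance_threshold(n_trials, chance=0.5, alpha=0.05):
    """Smallest number of correct trials k with P(X >= k) < alpha
    under the chance-level binomial distribution."""
    for k in range(n_trials + 1):
        if binom.sf(k - 1, n_trials, chance) < alpha:  # sf(k-1) = P(X >= k)
            return k
    return None

k_min = significance_threshold(60)   # correct trials needed in a 60-trial run
cutoff_pct = 100.0 * k_min / 60      # corresponding accuracy cut-off, percent
```

Because each run has only 60 trials, accuracies in the fifties are fully compatible with guessing, which is why near-chance session scores cannot be read as evidence of covert command following.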
The small sample size in this study precludes generalization of results to other
potential BCI candidates. The poor performance of P1, who is known to be

Fig. 6 Results for two participants with LIS using the mindBEAGLE MI paradigm for
assessment (left) and answering yes/no questions (right). Dashed lines represent calculated
trendlines within each condition. Shaded areas represent the degree of data overlap from the
generic prompt phase. Small dotted lines represent the expected range of responses using the
custom MI prompt based on performance with the generic prompt
Trends in BCI Research I: Brain-Computer Interfaces … 117

conscious and cognitively intact, reminds us that a negative result on a BCI-based


assessment is not conclusive evidence of impairment. Additional research with
larger participant samples is necessary to determine the utility and appropriateness
of MI BCI as a means of assessment and communication for individuals with LIS.

8 DOC Assessment at North Carolina State University (NCSU)

The research team at North Carolina State University (NCSU, Raleigh, USA) took a tactile-based hybrid BCI approach to assess consciousness and establish communication with behaviorally non-responsive patients. Tactile-based BCIs are a relatively new and emerging research topic in the BCI field, with the potential to help visually impaired and blind users. Steady-State Somatosensory Evoked Potentials (SSSEPs) can be elicited over the contralateral areas of the brain with vibrational stimuli [18]. Only recently have tactile-based BCIs hybridized SSSEP and tactile P300 responses to increase the number of usable classes and improve BCI classification accuracy [19, 20]. In this study, we investigate how spatial attention to different stimulation sites affects the recorded brain signals, and which spatial patterns provide better SSSEP responses.
The stimulation equipment was the same solenoid tactor setup presented in our previous study [20]. Five healthy volunteers participated in the experiment. Vibrational stimuli were presented to the fingertip, wrist, forearm, and elbow of the dominant side. One tactor presented random pulses at one of the four positions, with SSSEP stimulation presented at the other three positions (see Fig. 7a). Each subject completed 100 trials, pseudo-randomly distributed across locations and pulse patterns. To generate a random pulse, a 100 Hz sine wave was presented for 250 ms, while SSSEP stimulation was generated by modulating a 100 Hz sine wave with a 27 Hz square wave. Each trial consisted of a 5 s rest period, a 2 s reference period, and an 8 s stimulation period, during which the subjects were asked to focus only on counting the number of random pulses. EEG signals were recorded with a g.USBamp biosignal amplifier using a large Laplacian montage around sites C3 and C4. BCI2000 was used for data acquisition and stimulus presentation; EEG signals were sampled at 512 Hz, band-pass filtered between 20 and 56 Hz, and then analyzed using Canonical Correlation Analysis (CCA) from 20–29 Hz.
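The CCA step can be sketched with plain NumPy: compute the maximal canonical correlation between an EEG segment and sine/cosine reference signals at the stimulation frequency. The synthetic one-channel "Laplacian" signal below is hypothetical; the actual analysis used real C3/C4 Laplacian derivations and reference frequencies across 20–29 Hz.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    computed from orthonormal bases (the usual SSVEP/SSSEP-style CCA)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(freq, n_samp, fs, harmonics=1):
    """Sine/cosine reference matrix at freq (and optional harmonics)."""
    t = np.arange(n_samp) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

# synthetic 8 s single-channel segment containing a weak 27 Hz SSSEP
fs, dur = 512, 8
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(2)
eeg = 0.5 * np.sin(2 * np.pi * 27 * t + 0.7) + rng.normal(0.0, 1.0, t.size)
eeg = eeg[:, None]  # shape (n_samples, n_channels)

r27 = max_canonical_corr(eeg, reference(27, eeg.shape[0], fs))  # at stimulus freq
r23 = max_canonical_corr(eeg, reference(23, eeg.shape[0], fs))  # control freq
```

With real multichannel EEG, `X` would simply gain more columns; the r-values compared across stimulation sites in Fig. 7 are of exactly this kind.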
The average CCA values showed higher Pearson’s correlation (r-value) on the
contralateral brain area for 27 Hz, while there were no differences on the ipsilateral
brain area at the same SSSEP stimulus frequency (see Fig. 7b). ANOVA of dif-
ferent positions at 27 Hz on C3 for each subject showed that S1 had a significantly
higher r-value on the fingertip than other positions (p < .0001), while S2 showed a
significantly lower r-value on the fingertip than other positions (p < .0001) (see
Fig. 7c). S3 (p = .0619) and S4 (p = .0763) showed marginal significance, and the
r-value for the fingertip was lower than that for the elbow in S3. There were no significant differences for S4 in post hoc Tukey tests, and S5 showed no significant differences across positions.

Fig. 7 a Conceptual diagram of tactor positions and stimuli when presenting random pulses b Averaged r-values over the C3 and C4 areas for all subjects c Averaged r-value of each position at 27 Hz on C3 for S1 and S2
The CCA values showed that an unattended flutter sensation can still elicit SSSEPs in the contralateral brain area when the subject simply attends to random pulses presented along the same nerve pathway. Moreover, the effects of spatially selective attention on the nerve pathway differed across individuals.
We have validated a new approach that evokes SSSEPs through off-site attention, which may reduce the mental workload needed to focus on SSSEP stimulation with random pulses and could be combined with P300 stimulation in a hybrid BCI system. In addition, these results can potentially improve the performance of a tactile-based BCI system by utilizing user-specific stimulation sites that yield better SSSEP responses. These SSSEP features will be used in future research to develop a hybrid BCI for behaviorally non-responsive patients, where SSSEP BCI technology could complement other emerging BCI technologies.

9 DOC Assessment and Communication in Liege

A correct diagnosis of patients with DOC is vital for realistic perspectives on rehabilitation and outcome. The gold standard for diagnosis is still behavioral bedside assessment, preferably using the Coma Recovery Scale-Revised, as it has proven to be the most sensitive tool for detecting the smallest non-reflexive signs of consciousness [21]. During this kind of testing, patients may perform worse than their mental function permits due to motor dysfunction, aphasia, sensorimotor deficits and other causes. Metabolism as measured with glucose positron emission tomography, and functional connectivity of the default mode network as measured with resting-state functional MRI, are objective ways to assess whether consciousness remains in DOC patients (see Fig. 8). Active functional magnetic resonance imaging (fMRI) paradigms can be employed to assess command following via MI of playing tennis or navigating through a house, which, if performed successfully, can be employed for BCI-based communication.
Fig. 8 From [31]. Glucose metabolism as measured with FDG positron emission tomography, BOLD resting-state default mode network activity, and BOLD mental imagery as measured with functional MRI in different states of consciousness. The leftmost panel shows the neuroimaging of a typical UWS patient with minimal residual brain function. The next panel shows a UWS patient who was unconscious at the behavioral level but revealed signs of consciousness during the neuroimaging tests, such as command following during the active MRI paradigm. The next panel presents an MCS patient who shows residual metabolism and default mode network connectivity, albeit less than in a healthy subject. The rightmost column depicts normal brain function

Relative to MRI-based assessments, EEG-based BCIs have the advantages of being more affordable and portable, and more robust to movement artifacts and metal implants. The tests are easily repeatable, making them very suitable for use in clinical practice and during rehabilitation. Previous EEG-based BCIs have proven useful for assessing awareness and command following in this patient group; auditory P300 oddball paradigms [22, 6] and MI experiments [12] have been used successfully.
Ethical considerations play an important role in the use of BCIs in this patient population. The outcome of the BCI assessment might influence the medical team and the patient's loved ones [23]. If the test results show less cognitive function than expected, the patient's family might cope better with the decision to withdraw life-supporting treatment, or might lose hope. If the tests show more cognitive abilities than the neurological examination does, the clinical management of the patient should be improved so that the chance of recovery increases, but this outcome could also give false hope to families. If the tests show the same level of cognitive abilities as the behavioral assessments, this affirms the decision of the medical team.
The patient’s physical and mental disabilities might make it hard to believe that
patients can have a good quality of life, whereas healthcare professionals mainly
aspire to help their patients attain and maintain a good quality of life. DOC patients
cannot communicate whether they feel their life is enjoyable. LIS patients are classically able to communicate by means of eye movements, and when they compare their current well-being to the best and worst periods in their lives, the majority of patients are rather happy [24]. The feeling of well-being in LIS subjects is comparable to that of the normal population, indicating that the level of physical (and possibly mental) disability does not significantly influence quality of life.
Furthermore, a BCI could give the patient a level of autonomy that would be life
changing and most likely increase their quality of life.

10 Alternative Approaches and Directions

As alternatives to neuroimaging and electrophysiological paradigms, non-brain-based approaches, such as the measurement of subclinical electromyography signals [25, 26], pupil dilation during mental calculation [27], changes in salivary pH [28, 29] or changes in respiration patterns [30], have been proposed to identify covert voluntary cognitive processing in patients with disorders of consciousness. Recently, an electromyographic paradigm detected muscle activation in response to a 'move your left/right hand' command in 14 patients with MCS (Lesenfants et al., submitted A). Only six of them responded behaviorally to commands on the day of the recording, while all of them showed behavioral responses to commands when assessed repeatedly over multiple days. This approach could be an alternative to BCI-inspired paradigms in patients with some residual voluntary muscle control. These results open the door to the development of hybrid paradigms that jointly look for subclinical electromyography signals and voluntary brain function in response to motor commands.
Monitoring fluctuations in the level of vigilance can improve the detection of residual signs of consciousness by helping to select the best recording time and by tracking changes across a recording session. Attention itself can also serve as a marker of voluntary cognitive processing. Lesenfants and colleagues (submitted B) monitored attention during a BCI task in six patients with LIS and showed that changes in attention could be tracked with the EEG. The patients increased their attention during each trial in comparison to the resting periods between trials. While only two patients were successful with the BCI task, all six patients showed fluctuations of attention that could be distinguished from a rest period with more than 90% accuracy.

11 Discussion

The promising results with BCI technology for patients with DOC exhibit several trends. First, the results show that new paradigms are emerging that are initially promising but generally require broader validation with more patients over longer periods. The mindBEAGLE system could make such validation faster and easier while facilitating standardization: it has standardized paradigms, is used in multi-center studies, has been tested with VS, MCS, LIS, CLIS and healthy persons, is used at home, in research centers, care facilities and intensive care units, and has a standard approach to evaluation. Second, the results support the hybrid BCI concept, in which one type of BCI is combined with another BCI and/or another means of communication to provide improved performance and more flexibility for users.
Results have shown that different paradigms, including MI, MMN, and visual and auditory P300s, can be effective assessment and/or communication tools for these patients, and that SSSEPs could potentially provide another type of BCI for patients. FSL showed that the P300 latency jitter is larger for VS patients than for MCS patients and healthy control subjects, which could further facilitate new improvements.
Furthermore, the jitter was negatively correlated with the CRS-R. In some
patients, non-EEG signals based on eye, muscle, or other activity could also be
useful. Providing a suite of different assessment and communication options could
lead to more decisive and detailed assessment and more effective communication
while providing users some choice in the approach they wish to use. Third, results
with MI BCIs are mixed. MI training can be effective with MCS patients, whereas
MI training without feedback did not lead to effective communication in two ALS
patients who explored different mental strategies. Interestingly, MCS patients could
learn to modulate their SMR with auditory feedback. Fourth, persons with CLIS
resulting from ALS, and perhaps other causes, could also benefit from BCI tech-
nology that has until now been focused on DOC patients. Fifth, there is a strong
trend toward non-visual BCIs, which are often needed for this target group. Sixth,
the joint workshops at different major conferences with different groups, and the
very nature of this book chapter, show a trend toward dissemination and collabo-
ration among researchers from different regions, disciplines, and sectors.
However, this is still a new technology that requires substantial further research,
development, and validation with patients in field settings. Future systems could
improve existing approaches based on MI, P300s, and other paradigms, and add
additional EEG and non-EEG based tools. New software could improve classifier
accuracy, facilitate user interaction, and allow improved communication and control of
devices such as fans or music players. New hardware could provide better quality data
in noisy settings via more comfortable and practical electrodes. Additional background
research is needed to better interpret data from this challenging population, develop and
test new paradigms, explore improved classifier algorithms, and explore different
patient groups. Research could also explore related tools to help target patients, such as
methods to predict recovery (such as the new method from CHUV) or systems for
cognitive and motor rehabilitation.
Another important future direction is public awareness—very few medical experts
are aware of BCI-based options for DOC and other patients. Although very extensive
work is still needed to develop methods and systems that are more informative, precise,
flexible, and helpful, the results presented here show that BCI for consciousness
assessment and communication has advanced beyond laboratory demonstrations, with
successful validations of different approaches in different settings. The next several
years should see significant improvements in this technology, improved quality of life
for many patients, and more informative and reliable assessment tools that will help
provide options and crucial information to medical staff, patients, and families.

Acknowledgements The work of g.tec was supported by the H2020 grant ComaWare and
ComAlert (project number E! 9361 Com-Alert). Q. Noirhomme has received funding from the
European Community’s Seventh Framework Program under grant agreement n° 602450
(IMAGEMEND). Research at OHSU was supported by NIH grant R01DC014294 and NIDILRR
grant 90RE5017. Research at MGH was supported by NIH grant K23NS094538 and the American
Academy of Neurology/American Brain Foundation. Research at NCSU was supported by NSF
grant IIS1421948. Marzia De Lucia’s research at Lausanne University Hospital is supported by the
“EUREKA-Eurostars” grant (project number E! 9361 Com-Alert). The work was partially sup-
ported by the Italian Ministry of Healthcare and the French Speaking Community Concerted
Research Action (ARC-06/11-340). This paper reflects only the authors’ view and the funding
sources are not liable for any use that may be made of the information contained therein.

References

1. Guger C, Noirhomme Q, Naci L, Real R, Lugo Z, Veser S, Sorger B, Quitadamo L, Lesenfants D, Risetti M, Formisano R, Toppi J, Astolfi L, Emmerling T, Erlbeck H, Monti MM, Kotchoubey B, Bianchi L, Mattia D, Goebel R, Owen AM, Pellas F, Müller-Putz G, Kübler A (2014) Brain-computer interfaces for coma assessment and communication. In: Ganesh RN (ed) Emerging theory and practice in neuroprosthetics. IGI Global Press
2. Lesenfants D, Habbal D, Chatelle C, Schnakers C, Laureys S, Noirhomme Q. Electromyographic decoding of response to command in disorders of consciousness (submitted A)
3. Coyle D, Stow J, McCreadie K, Sciacca N, McElligott J, Carroll Á (2017) Motor imagery
BCI with auditory feedback as a mechanism for assessment and communication in disorders
of consciousness. In: Brain-Computer Interface Research. Springer International Publishing,
pp 51–69
4. Laureys S, Pellas F, Van Eeckhout P, Ghorbel S, Schnakers C, Perrin F, Berre J,
Feymonville ME, Pantke KH, Damas F, Lamy M, Moonen G, Goldman S (2005) The
locked-in syndrome: What is it like to be conscious but paralyzed and voiceless? Prog Brain
Res 150:495–511
5. Ortner R, Lugo Z, Prückl R, Hintermüller C, Noirhomme Q, Guger C (2013) Performance of
a tactile P300 speller for healthy people and severely disabled patients. In: Proceedings of the
35th Annual International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC 2013), Osaka, JP. 3–7 July 2013
6. Lugo ZR, Rodriguez J, Lechner A, Ortner R, Gantner IS, Laureys S, Guger C (2014) A vibrotactile P300-based brain-computer interface for consciousness detection and communication. Clin EEG Neurosci 45:14–21
7. Guger C, Kapeller C, Ortner R, Kamada K (2015) Motor imagery with brain-computer
interface neurotechnology. In: Garcia BM (ed) Motor imagery: emerging practices, role in
physical therapy and clinical implications, pp 61–79
8. Guger C, Spataro R, Allison BZ, Heilinger A, Ortner R, Cho W, La Bella V (2017) Complete
locked-in and locked-in patients: command following assessment and communication with
vibro-tactile P300 and motor imagery brain-computer interface tools. Front Neurosci 11
9. Coyle D et al (2012) Enabling control in the minimally conscious state in a single session with
a three channel BCI. In: 1st International Decoder Workshop, April, pp 1–4
10. Coyle D et al (2015) Sensorimotor modulation assessment and brain-computer interface
training in disorders of consciousness. Arch Phys Med Rehabil 96(3):62–70
11. Coyle D et al (2013) Visual and stereo audio sensorimotor rhythm feedback in the minimally
conscious state. In: Proceedings of the Fifth International Brain-Computer Interface Meeting
2013, pp 38–39
12. Cruse D, Chennu S, Chatelle C, Bekinschtein TA, Fernández-Espejo D, Pickard JD,
Owen AM (2011) Bedside detection of awareness in the vegetative state: a cohort study.
Lancet 378(9809):2088–2094
13. Nuffield Council on Bioethics Report (2013) Novel neurotechnologies : intervening in the
brain. http://nuffieldbioethics.org/wp-content/uploads/2013/06/Novel_neurotechnologies_
report_PDF_web_0.pdf
14. Aricò P, Aloise F, Schettini F, Salinari S, Mattia D, Cincotti F (2014) Influence of P300
latency jitter on event related potential-based brain-computer interface performance. J Neural
Eng 11(3):035008
15. Schettini F, Risetti M, Aricò P, Formisano R, Babiloni F, Mattia D, Cincotti F (2015) P300 latency jitter occurrence in patients with disorders of consciousness: toward a better design for brain computer interface applications. In: Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pp 6178–6181
16. Olsson C-J, Nyberg L (2010) Motor imagery: If you can’t do it, you won’t think it. Scand J
Med Sci Sports 20:711–715
17. Bovend’Eerdt TJH, Dawes H, Sackley C, Wade DT (2012) Practical research-based guidance
for motor imagery practice in neurorehabilitation. Disabil Rehabil 34(25):2192–2200
18. Snyder AZ (1992) Steady-state vibration evoked potentials: description of technique and
characterization of responses. Electroencephalogr Clin Neurophysiol Potentials Sect 84
(3):257–268
19. Severens M, Farquhar J, Duysens J, Desain P (2013) A multi-signature brain-computer
interface: use of transient and steady-state responses. J Neural Eng 10(2):026005
20. Choi I, Bond K, Krusienski D, Nam CS (2015) Comparison of stimulation patterns to elicit
steady-state somatosensory evoked potentials (SSSEPs): implications for hybrid and
SSSEP-based BCIs. In: IEEE International Conference on Systems, Man, and Cybernetics
(SMC), 2015
21. Giacino JT, Kalmar K, Whyte J (2004) The JFK Coma Recovery Scale-Revised:
Measurement characteristics and diagnostic utility. Arch Phys Med Rehabil 85(12):2020–
2029
22. Laureys S, Perrin F, Faymonville M-E, Schnakers C, Boly M (2004) Cerebral processing in
the minimally conscious state. Neurology 63:916–918
23. Jox RJ, Bernat JL, Laureys S, Racine E (2012) Disorders of consciousness: responding to
requests for novel diagnostic and therapeutic interventions. Lancet Neurol 11(8):732–738
24. Bruno MA, Bernheim JL, Ledoux D, Pellas F, Demertzi A, Laureys S (2011) A survey on self
assessed well-being in a cohort of chronic locked-in syndrome patients: happy majority,
miserable minority. BMJ Open 1(1):1–9
25. Bekinschtein TA, Coleman MR, Niklison J, Pickard JD, Manes FF (2008) Can electromyo-
graphy objectively detect voluntary movement in disorders of consciousness? J Neurol
Neurosurg Psychiatry 79(7):826–828
26. Habbal D, Gosseries O, Noirhomme Q, Renaux J, Lesenfants D, Bekinschtein TA, Majerus S,
Laureys S, Schnakers C (2014) Volitional electromyographic responses in disorders of
consciousness. Brain Inj 28(9):1171–1179
27. Stoll J, Chatelle C, Carter O, Koch C, Laureys S, Einhäuser W (2013) Pupil responses allow
communication in locked-in syndrome patients. Curr Biol 23(15):R647–R648
28. Ruf CA, DeMassari D, Wagner-Podmaniczky F, Matuz T, Birbaumer N (2013) Semantic
conditioning of salivary pH for communication. Artif Intell Med 59(2):91–98
29. Wilhelm B, Jordan M, Birbaumer N (2006) Communication in locked-in syndrome: effects of
imagery on salivary pH. Neurology 67(3):534–535
30. Charland-Verville V, Lesenfants D, Sela L, Noirhomme Q, Ziegler E, Chatelle C, Plotkin A,
Sobel N, Laureys S (2014) Detection of response to command using voluntary control of
breathing in disorders of consciousness. Front Hum Neurosci 8
31. Laureys S, Schiff ND (2012) Coma and consciousness: paradigms (re)framed by
neuroimaging. NeuroImage 61:1681–1691
124 C. Guger et al.

Author Biographies

Christoph Guger is actively running international research projects in the BCI domain, and is the
CEO of g.tec medical engineering GmbH, Guger Technologies OG, g.tec neurotechnology USA
Inc. and g.tec medical engineering Spain SL.

Damien Coyle is a professor at the University of Ulster and has been active in BCI research for
many years. He is a specialist in signal processing and has experience with DOC patients.

Donatella Mattia, MD, PhD, is a neurologist, neurophysiologist, and laboratory director. Her
main research interests are EEG-based BCI design and validation in neurorehabilitation and
advanced signal processing methods for feature extraction and outcome measures.

Marzia De Lucia is a principal investigator at the Laboratoire de Recherche en Neuroimagerie of
Lausanne University Hospital and the Faculty of Biology and Medicine at the University of
Lausanne, Switzerland. Her research focuses on consciousness detection and outcome prediction
in comatose patients.

Leigh Hochberg is a vascular and critical care neurologist and neuroscientist. His research focuses
on the development and testing of novel neurotechnologies to help people with paralysis and other
neurologic disorders, and on understanding cortical neuronal ensemble activities in neurologic
disease. Dr. Hochberg has appointments as Professor of Engineering, School of Engineering and
Institute for Brain Science, Brown University; Neurologist, Massachusetts General Hospital,
where he attends in the NeuroICU and on the Acute Stroke service; Director, VA Center for
Neurorestoration and Neurotechnology, Providence VAMC; and Senior Lecturer on Neurology at
Harvard Medical School. He also directs the Neurotechnology Trials Unit for MGH Neurology,
where he is the IDE Sponsor-Investigator and Principal Investigator of the BrainGate pilot clinical
trials (www.braingate.org) that are conducted by a close collaboration of scientists and clinicians at
Brown, Case Western Reserve University, MGH, Providence VAMC, and Stanford University.
Dr. Hochberg is a Fellow of the American Academy of Neurology and the American Neurological
Association. Dr. Hochberg’s BrainGate research, which has been published in Nature, Science
Translational Medicine, Nature Medicine, Nature Neuroscience, the Journal of Neuroscience, and
others, is supported by the Rehabilitation R&D Service of the U.S. Department of Veterans
Affairs, NCMRR/NICHD, NIDCD, and NINDS.

Brian L. Edlow received his B.A. from Princeton University and M.D. from the University of
Pennsylvania School of Medicine. He completed an internal medicine internship at Brigham and
Women’s Hospital, followed by neurology residency and neurocritical care fellowship at
Massachusetts General Hospital and Brigham and Women’s Hospital. He is currently a critical
care neurologist at Massachusetts General Hospital, where he is Associate Director of the
Neurotechnology Trials Unit and Director of the Laboratory for NeuroImaging of Coma and
Consciousness. Dr. Edlow’s research is devoted to the development of advanced imaging
techniques for detecting brain activity and predicting outcomes in patients with severe traumatic
brain injury. The goals of this research are to improve the accuracy of outcome prediction and to
facilitate new therapies that promote recovery of consciousness. Dr. Edlow receives support from
the National Institutes of Health, Department of Defense, and American Academy of
Neurology/American Brain Foundation.

Betts Peters is a speech-language pathologist specializing in augmentative and alternative
communication, and works on BCI research with REKNEW Projects at Oregon Health & Science
University.

Brandon Eddy is a clinical fellow in speech-language pathology at the Oregon Health & Science
University Child Development and Rehabilitation Center in Portland, Oregon.

Chang S. Nam is currently an associate professor in the Edward P. Fitts Department of Industrial
and Systems Engineering at North Carolina State University (NCSU), USA. He is also an
associate professor in the UNC/NCSU Joint Department of Biomedical Engineering and the
Department of Psychology, and is director of the BCI Lab at NCSU. His research interests
center on brain-computer interfaces and neurorehabilitation, smart healthcare, neuroergonomics,
and adaptive and intelligent human-computer interaction. He currently serves as Editor-in-Chief
of the journal Brain-Computer Interfaces.

Quentin Noirhomme is senior scientist at Brain Innovation BV, where he works on the
development of BCIs for clinical applications. He collaborates with the University of Maastricht
and the University of Liege.

Brendan Z. Allison, PhD, was a Senior Scientist with g.tec and is a Visiting Scholar with the
UCSD Cognitive Science Department. He has been active in BCI research for over 20 years, and
works on mindBEAGLE research with different groups.

Jitka Annen is interested in multimodal analysis of consciousness and BCI applications in
patients with DOC, and is a PhD student in the Coma Science Group headed by Steven Laureys.
Recent Advances in Brain-Computer
Interface Research—A Summary
of the BCI Award 2016 and BCI
Research Trends

Christoph Guger, Brendan Z. Allison and Mikhail A. Lebedev

1 The 2016 Winners

The previous chapters should help to show the high quality of the nominated
projects, and thus the jury had a very difficult task. With 52 projects submitted,
identifying twelve nominees and the winners was not easy, and many good submissions
were not nominated, often due to a low score on one or more criteria. After
tallying the scores across the scoring criteria from the different judges, the nominees
were posted online and invited to our Gala Awards ceremony to learn which
nominees would win first, second, and third place in 2016.
The Gala Awards ceremony was part of the largest conference for the BCI
community in 2016, which was the Sixth International BCI Meeting. This conference
was held at the Asilomar Conference Grounds in Pacific Grove, CA, like
the previous two International BCI Meetings. The ceremony was a suspenseful
event, with hundreds of BCI aficionados in the audience to watch history being
made. At the ceremony, Dr. Guger (the organizer) and Dr. Allison (the emcee)

C. Guger (✉)
Graz, Austria
e-mail: guger@gtec.at
B.Z. Allison
San Diego, USA
M.A. Lebedev
Durham, USA

© The Author(s) 2017 127


C. Guger et al. (eds.), Brain-Computer Interface Research, SpringerBriefs
in Electrical and Computer Engineering, DOI 10.1007/978-3-319-64373-1_12

reviewed the nominated projects and asked representatives from each group to
come onstage to receive a certificate. Next, the three winners were announced and
handed their prizes. Without further ado, the three winners were:
The BCI Award 2016 Winner Is
Gaurav Sharma1, Nick Annetta1, Dave Friedenberg1, Marcie Bockbrader2, Ammar
Shaikhouni2, W. Mysiw2, Chad Bouton1, Ali Rezai2
1Battelle Memorial Institute, 505 King Ave, Columbus, OH 43201, USA
2The Ohio State University, Columbus, OH 43210, USA

An Implanted BCI for Real-Time Cortical Control of Functional Wrist and
Finger Movements in a Human with Quadriplegia
Mikhail A. Lebedev, chair of the 2016 jury, called the winning idea “A fascinating
demonstration of how spinal cord injury can be bypassed by a neural prosthesis
connecting the motor cortex directly to a functional electrical stimulation device
that activates the muscles of the paralyzed hand, allowing the patient to volitionally
execute wrist and finger movements—the system that will find a broad range of
clinical applications for restoration of motor control to paralyzed people, and
rehabilitation of their neurological deficits.”
The BCI Award 2016 2nd Place Winner Is
Sharlene Flesher2,3, John Downey2,3, Jennifer Collinger1,2,3,4, Stephen Foldes1,3,4,
Jeffrey Weiss1,2, Elizabeth Tyler-Kabara1,2,5, Sliman Bensmaia6, Andrew
Schwartz2,3,8, Michael Boninger1,2,4, Robert Gaunt1,2,3
1,2,5,8Departments of Physical Medicine and Rehabilitation, Bioengineering,
Neurological Surgery, and Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA
3Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
4Department of Veterans Affairs Medical Center, Pittsburgh, PA, USA
6Department of Organismal Biology and Anatomy, University of Chicago,
Chicago, IL, USA
Intracortical Microstimulation as a Feedback Source for Brain-Computer
Interface Users
The 3rd Place Winner Is
Thomas J. Oxley, Nicholas L. Opie, Sam E. John, Gil S. Rind, Stephen M.
Ronayne, Clive N. May, Terence J. O’Brien
Vascular Bionics Laboratory, Melbourne Brain Centre, Departments of
Medicine and Neurology, The Royal Melbourne Hospital, The University of
Melbourne, Parkville, Victoria, Australia.

Minimally Invasive Endovascular Stent-Electrode Array for High-Fidelity,
Chronic Recordings of Cortical Neural Activity
The first, second, and third place winners received cash awards of $3000, $2000,
and $1000, respectively. These three winners also received a special bread knife,
and all nominees won other prizes. Interestingly, all three winners presented work
with invasive BCIs, even though the submissions were mostly non-invasive.
Whether this reflects a fluke for one year or an emerging trend remains to be seen
(Figs. 1 and 2).
At the Gala Award Ceremony, Dr. Guger also thanked the experts in the 2016
jury:
• Mikhail A. Lebedev (chair of the jury 2016),
• Alexander Kaplan,
• Klaus-Robert Müller,
• Ayse Gündüz,
• Kyousuke Kamada,
• Guy Hotson (winner 2015).

Fig. 1 Christoph Guger (left, organizer), Gaurav Sharma (First place winner, 2016), and
Kyousuke Kamada (jury member), all standing onstage during the Gala Awards Ceremony at the
BCI Meeting 2016 in Asilomar, CA

Fig. 2 Sharlene Flesher, Jennifer Collinger, Robert Gaunt, and John Downey are delighted with
their Award! The image behind them encourages people to join the BCI society. Ironically, the
BCI society recently voted to add Jennifer Collinger as a board member. (Two editors of this book
series, Drs. Guger and Allison, are also board members.) The next BCI meeting hosted by the BCI
society, scheduled for 2018, will also feature an awards ceremony for the 2018 BCI award

Table 1 Type of input signal for the BCI system

Property       2016%     2015%     2014%     2013%      2012%     2011%     2010%
               (N = 52)  (N = 63)  (N = 69)  (N = 169)  (N = 68)  (N = 64)  (N = 57)
EEG            71.2      76.1      72.5      68.0       70.6      70.3      75.4
fMRI           3.8       4.8       2.9       4.1        1.5       3.1       3.5
ECoG           11.5      9.5       13.0      9.4        13.3      4.7      3.5
NIRS           1.9       –         1.4       3.0        1.5       4.7      1.8
Spikes         7.7       4.8       8.7       7.1        10.3      12.5     –
Other signals  1.9       4.8       4.3       13.0       2.9       1.6      –
Electrodes     1.9       –         –         6.5        1.5       1.6      –

2 Directions and Trends Reflected in the Awards

The Annual BCI Award shows trends in BCI technology and allows us to identify
the most important directions.

Table 2 Real-time BCIs and off-line algorithms in projects submitted to the BCI awards

Property               2016%     2015%     2014%     2013%      2012%     2011%     2010%
                       (N = 52)  (N = 63)  (N = 69)  (N = 169)  (N = 68)  (N = 64)  (N = 57)
Real-time BCI          94.2      96.8      87.0      92.3       94.1      95.3      65.2
Off-line applications  5.8       3.2       8.7       5.3        4.4       3.1       17.5

The following four tables summarize different characteristics of submitted projects
since the award began in 2010. In each table, N reflects the number of submissions,
and the numbers in the cells give the percentage of submissions with that
characteristic. We present one table for each of the four general BCI
components presented in the introduction.
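The tallying behind these tables is simple: group submissions by year and divide each characteristic's count by that year's N. The sketch below illustrates this arithmetic only; the function name and toy data are hypothetical, not the award organizers' actual tooling.

```python
from collections import Counter

def yearly_percentages(submissions):
    """Tally submission characteristics per year, as percentages of that year's N.

    `submissions` is a list of (year, characteristic) pairs; the result maps
    each year to {characteristic: percentage of that year's submissions}.
    """
    totals = Counter(year for year, _ in submissions)   # N per year
    counts = Counter(submissions)                       # (year, characteristic) -> count
    return {
        year: {
            prop: round(100.0 * n / totals[year], 1)
            for (y, prop), n in counts.items()
            if y == year
        }
        for year in totals
    }

# Toy data: four hypothetical 2016 submissions, three EEG-based and one ECoG-based.
demo = [(2016, "EEG"), (2016, "EEG"), (2016, "EEG"), (2016, "ECoG")]
print(yearly_percentages(demo))  # {2016: {'EEG': 75.0, 'ECoG': 25.0}}
```

Under this scheme, a characteristic with no submissions in a given year simply produces no entry, which corresponds to the dashes in the tables.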
Sensors Table 1 explores the different types of input signals used in the submitted
projects. As with previous years, the 2016 submissions focused primarily on
EEG-based systems, similar to most BCI articles. The submissions also reflected
other non-invasive sensor systems, such as fMRI and NIRS, and invasive methods
like ECoG and neural spikes.
Signal Processing The second table analyzes the percentage of submissions that
presented offline vs. real-time BCI applications. Nowadays, most BCI systems
work in real-time, and only a few projects improve off-line algorithms or
hardware components (Table 2).
Output/Application The third essential component of any BCI is the output.
Table 3 summarizes the different outputs, and related applications, that have been
submitted since 2010. The applications have varied over the years, but generally
show a strong interest in control, BCI platform tools and algorithms, monitoring
and assessment, and control of robotic devices such as prosthetics, robots, and
wheelchairs.
Environment/Interaction Finally, Table 4 summarizes the type of control signal
that was used to influence BCI operation. This is a key component of the BCI’s
operating environment and its interaction with each user. Most BCI systems are
controlled with motor imagery, P300, or SSVEP paradigms. Some systems also use
paradigms such as imagining faces or navigating through houses.
Another promising trend worth mentioning is the emergence and development of
the BCI Society. At a plenary session of the 2013 BCI Meeting, the attendees
unanimously voted in favour of establishing an official society to represent the BCI
community. Since then, the BCI Society has been officially formed, with by-laws
and administrative structures in place. It has recruited several hundred members, held Board
Member elections, launched a website, organized the 2016 BCI Meeting, and
managed other activities. The Board Members and other members include many of
the most respected figures in the BCI community, as well as many top experts from

Table 3 Type of output system and application

Property                         2016%     2015%     2014%     2013%      2012%     2011%     2010%
                                 (N = 52)  (N = 63)  (N = 69)  (N = 169)  (N = 68)  (N = 64)  (N = 57)
Control                          15.4      11.1      17.4      20.1       20.6      34.4      17.5
Platform technology, algorithm   26.9      15.9      13.0      16.6       16.2      9.4       12.3
Stroke, neural plasticity        5.8       4.8       13.0      13.7       26.5      12.5      7.0
Wheelchair, robot, prosthetics   7.7       15.9      13.0      11.8       8.8       6.2       7.0
Spelling                         3.8       12.7      8.7       8.3        25.0      12.5      19.3
Internet or VR, game             1.9       4.8       2.9       5.9        2.9       3.1       8.8
Learning                         1.9       1.6       5.8       5.3        1.5       3.1       –
Monitoring, DOC                  9.6       4.8       1.4       4.7        4.4       1.6       –
Stimulation                      3.8       1.6       1.4       3.6        1.5       –         –
Authentication, speech,          3.8       4.8       13.0      3.0        –         9.4       –
assessment
Connectivity                     1.9       –         –         2.4        1.5       –         –
Music, art                       3.8       1.6       1.4       1.8        –         –         –
Sensation                        –         –         –         1.2        –         1.6       –
Vision                           1.9       3.2       1.4       1.2        1.5       –         –
Epilepsy, Parkinson’s,           3.8       3.2       2.9       1.2        –         –         –
Tourette’s, autism
Depression, fatigue, ADHD, pain  1.9       4.8       1.4       –          1.5       –         –
Neuromarketing, emotion          –         –         1.4       –          1.5       –         –
Ethics                           –         –         1.4       –          –         –         –
Mechanical ventilation           –         –         –         –          –         1.6       –
Roadmap                          1.9       –         –         –          –         –         –

related fields. The BCI Society is now widely recognized as the central entity that
represents the BCI community.
The BCI Society has kindly allowed us to organize past and upcoming BCI
Awards ceremonies at BCI Meetings, and to post announcements relating to the
BCI Award on their website. As two of the editors of this book series are BCI
Society Board members, we strongly support the BCI Society and look forward to
ongoing friendly interaction with them. We also consider the nascent success of the
BCI Society very promising in terms of reducing fragmentation and miscommunication.
These problems have long been recognized in the published BCI literature
and elsewhere, and we hope the BCI Society reflects a trend toward amity and
accord.

Table 4 Type of control signal used to interact with the BCI

Property          2016%     2015%     2014%     2013%      2012%     2011%     2010%
                  (N = 52)  (N = 63)  (N = 69)  (N = 169)  (N = 68)  (N = 64)  (N = 57)
P300/N200/ERP     11.5      28.6      11.6      11.8       30.9      25.0      29.8
SSVEP/SSSEP/cVEP  11.5      14.3      11.6      14.2       16.2      12.5      8.9
Motor imagery     32.7      36.5      37.7      25.4       30.9      29.7      40.4
ASSR              –         –         1.8       –          1.6       –         –

Fig. 3 The flyer for the 2017 BCI-research award

There do remain concerns about misrepresentation and other issues from some
relatively unskilled and unscrupulous BCI researchers, companies, and media
entities. Regrettably, there are notorious recent examples of this across all three of
these categories. Misrepresentation of what BCIs can do, especially to patients,
could be very damaging to patients and other buyers/users, the public, and the BCI
community as a whole. We hope this trend does not continue, and that the BCI
Society, BCI Awards and books, and other mechanisms can help refocus attention
where it belongs: on new, promising, top-quality BCI achievements.

3 Conclusion and Future Directions

The Annual BCI-Research Awards, along with this book series, have sought to
recognize and identify the newest and best developments in BCI research. As these
efforts continue over the years, we have more and more data we can use to explore

different trends, and we may consider a specialized chapter or other article focused
just on trends and a retrospective. In the short term, however, we are focused on
the next award. The 2017 BCI-Award flyer was posted online (see Fig. 3), and the
deadline of June 15, 2017 has passed. The jury is currently scoring the submissions,
and the awards ceremony will take place at the Seventh International BCI
Conference in Graz, Austria in September 2017.
We are proud to announce the jury for 2017:
• Natalie Mrachacz-Kersting (chair of the jury 2017),
• Gaurav Sharma (winner 2016),
• Reinhold Scherer,
• Jose Pons,
• Femke Nijboer,
• Kenji Kansaku,
• Aaron Batista,
• Jing Jin.
This is a particularly large jury, and contains even more breadth than usual. The
jury includes specialists in different imaging, signal processing, and output methods,
rehabilitation, ethics, robotics, virtual reality, and numerous other BCI-related
fields. The 2017 jury also has very good representation from BCI groups around the
world. The chair comes from a top Danish BCI institute. Mrachacz-Kersting is a
professor in the Neural Engineering and Neurophysiology lab of Aalborg
University. The jury also includes experts who work in different European coun-
tries, the USA, China, and Japan.
In summary, the 2016 BCI Awards and the resulting chapters have introduced
and recognized many of the most innovative and promising new projects in the BCI
research community. Most of the nominees come from well-known, established
groups that are currently exploring even newer directions based on their nominated
projects. We have also explored different trends in BCI research by analysing
different characteristics of the submissions. We hope and expect that the 2017 BCI
Awards will highlight another group of new and fascinating ideas, and further
recognize new and developing trends.
