Effect of biased feedback in BCI presented at BCI Meeting ’21 conference

Last week the BCI Meeting was held, the major international conference on brain-computer interfaces. As with many other conferences, this edition was the first to take place virtually (and with great results: the organizers managed to maintain the interactivity between participants, one of the main reasons for holding a conference in the first place).

Jelena Mladenović presented our work there on the effect of biased feedback during a motor imagery BCI task. During the experiment, participants had to control a racing game by imagining right-hand or left-hand movements. Depending on the recognized brain pattern, the character would go either left or right, catching fish. I must mention at this point that said character is a hungry penguin, Tux from Extreme Tux Racer.

Glimpse at a BCI Tux racing game (with an over-enthusiastic participant)

There were three conditions during the experiment: the position of the character could be either positively biased (it was easier to reach targets), negatively biased (harder to do), or not biased at all (control condition). The main outcome of the experiment is that depending on users’ profiles (e.g. whether they are prone to anxiety), the bias could be either helpful or detrimental in terms of performance, learning or flow (an optimal state we can get into while fulfilling a task).
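To give a concrete (and deliberately simplified) idea of what biasing the feedback can mean, here is a minimal sketch in Python. It is not the exact manipulation used in the study: the function names and the bias formula are assumptions made purely for illustration.

```python
# Minimal sketch: nudging the displayed control signal toward (positive bias)
# or away from (negative bias) the target. Illustrative only, not the exact
# manipulation used in the study.

def biased_feedback(decoded, target, bias):
    """Return the position actually shown to the user.

    decoded : classifier output in [-1, 1] (-1 = left, +1 = right)
    target  : -1 or +1, the side where the fish to catch is
    bias    : > 0 makes the task easier, < 0 harder, 0 is the control condition
    """
    displayed = decoded + bias * target          # push toward/away from target
    return max(-1.0, min(1.0, displayed))        # clamp to the valid range


if __name__ == "__main__":
    # A weak "right" command (0.2) toward a right-hand target (+1):
    print(biased_feedback(0.2, +1, bias=0.3))    # positive bias -> 0.5 (easier)
    print(biased_feedback(0.2, +1, bias=-0.3))   # negative bias -> -0.1 (harder)
    print(biased_feedback(0.2, +1, bias=0.0))    # no bias -> 0.2 (control)
```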

We are in the process of publishing the full paper; if you want to know more about the study, the preprint (the version of a paper before peer review) is available at: https://hal.inria.fr/hal-03233170.

You can also watch her 10-minute talk summarizing the study right here:

Jelena’s talk during the vBCI 2021 conference

With this talk Jelena was awarded a Best Presentation award in the “non-invasive” category, because this conference also features cutting-edge research on invasive techniques; see all the abstracts.

This work was done in collaboration with Jérémie Mattout from Inserm (CRNL COPHY team) and Fabien Lotte from Inria (Potioc team), and we hope to continue our investigations in the foreseeable future (this is already our second study on the topic, previous publication here).

Teaser: we also used the data gathered during this experiment to investigate whether it would be possible to automatically select the best bias over the course of a BCI application. And it looks like it is, even with a simple selection algorithm (a toy sketch of such a selection loop is given below). Check Jelena’s thesis for first insights (while the whole piece deserves to be read, this particular subject is covered in chapter 5).
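For readers wondering what “automatically selecting the best bias” could look like in practice, here is a hedged sketch of a simple online selection loop. It is not the algorithm described in the thesis: the candidate biases, the reward definition and the epsilon-greedy rule are assumptions for illustration only.

```python
# Toy sketch of online bias selection: keep a running score for each candidate
# bias and mostly pick the best one, with occasional exploration.
# Illustrative assumptions only, not the method from the thesis.
import random

BIASES = [-0.3, 0.0, 0.3]          # candidate biases: negative, none, positive

def select_bias(mean_reward, epsilon=0.1):
    """Epsilon-greedy choice among the candidate biases."""
    if random.random() < epsilon:                      # explore occasionally
        return random.choice(BIASES)
    return max(BIASES, key=lambda b: mean_reward[b])   # otherwise exploit

def update(mean_reward, counts, bias, reward):
    """Keep a running average of the reward obtained with each bias."""
    counts[bias] += 1
    mean_reward[bias] += (reward - mean_reward[bias]) / counts[bias]

if __name__ == "__main__":
    mean_reward = {b: 0.0 for b in BIASES}
    counts = {b: 0 for b in BIASES}
    for trial in range(20):
        bias = select_bias(mean_reward)
        reward = random.random()    # placeholder for a real per-trial score
        update(mean_reward, counts, bias, reward)
    print(mean_reward)
```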

Multisensory stimulation

Multisensory stimulation encompasses all the approaches, devices and exercises that allow a person to be stimulated through two or more senses at the same time.

Well, here we go. For those of you who are curious, let’s go a little deeper into the subject.

First of all, which senses are we talking about?

Generally, at least two of the five basic senses are involved (sight, touch, hearing, smell, taste), but not only these. In fact, we have more than five senses, contrary to what we were taught as children at school. To date, the scientific community agrees that we have nine senses. The four additional senses are: proprioception, which is the ability to know where our own limbs are located; equilibrioception, which is the ability to maintain our balance thanks to the vestibular system located in the inner ear; thermoception, which is the ability to feel temperatures; and finally nociception, which is the ability to recognize pain. Thus, during a multisensory stimulation, our senses of proprioception, equilibrioception and thermoception can also be stimulated. For obvious reasons, nociception is not stimulated (Milgram did not go through this…).


Right… and concretely, how does our body process this multisensory stimulation?

When we discover an object, an environment or a notion, all our senses transmit information about it to our brain. We integrate this object, environment or notion under different sensory modalities, and it is the precise combination of these different pieces of information that later allows us to characterize and recognize it more easily in its entirety. This is called sensory integration or multisensory integration. All this sensory information, transmitted to the brain via our nerve endings, then allows the brain to process perceptions in a multimodal way. “Multimodal” refers to the fact that the information about an object is transmitted through several sensory modalities (tactile, auditory, visual, etc.) (1)(2). This multisensory integration also has an impact on high-level cognitive functions such as attention (3) or memory (3b).

It should be noted that theories on sensory integration appeared in the 1960s in the United States with Ayres, an occupational therapist and doctor in psychology and neuroscience, and were then developed by her successors (4). This theory became a trademark because Ayres developed a whole therapeutic approach based on it. This approach is mainly used by occupational therapists and consists of creating sensory-motor play situations to stimulate and progressively correct previously identified sensory integration disorders (5).

For the sake of consistency, we choose to use the term multisensory integration except when we refer explicitly to the approach developed by Ayres.

Two main categories of work have made it possible to objectively demonstrate the effects of multisensory integration:

  • work on intersensory facilitation, which is “the study of the effect of the presentation of an accessory stimulus on the processing of a target stimulus of a different modality” (6),
  • and work comparing the processing of bimodal targets to that of unimodal targets (redundant target effect) (7).

It can happen that the pieces of information are not congruent; one sensory modality may then take precedence over the others, leading to perceptual illusions:

  • The McGurk effect, highlighted by McGurk and MacDonald in 1976, which is the influence of the visual perception of the articulatory movement of the lips on the auditory perception of speech (8). In this example, when we see someone pronounce “ga” while we hear “ba”, we perceive the syllable “da”.
  • Ventriloquism, where the perception of an articulatory movement of the lips can influence the judgment of the spatial localization of a sound source (Driver, 1996) (9).
  • Virtual reality, where visual perception takes precedence over vestibular perception. When we see that we are on the edge of a cliff, we may feel as if we are falling while our body is sitting or standing. This incongruence can be more or less well tolerated and may create physical discomfort (nausea).

At the neuroanatomical level, observations in the literature show that three structures of the central nervous system appear to be involved in this mechanism of multisensory integration:

  • The prefrontal cortex to maintain simultaneous activities in different brain areas (10).
  • The hippocampus in particular for the long-term encoding of links between the different sensory components.
  • The thalamus, because, except for olfactory information, all sensory information passes through the thalamus before being projected into the neocortical areas of the brain, and, conversely, information is projected back from the cortical areas to the thalamus. According to some authors, it is these reciprocal activations that play a primordial role in multisensory integration (11).

That’s a little clearer, but why do we do multisensory stimulation if our senses are already constantly engaged?

In 1958, Leiderman and his collaborators conducted a study on the consequences of sensory deprivation in participants (okay, that is not much better than Milgram). Very quickly, after only one hour, the participants showed agitation, anxiety, hallucinations and other discomforts. These manifestations disappeared as soon as they returned to a multisensory environment (12). This study highlighted the positive aspect of the multisensory environment: our sensory perceptions play a key role in our relationship with the outside world (13).

Based on this, in the field of health, several approaches have been proposed and invented in order to support people and promote their well-being. Among the most widespread are the Ayres approach mentioned above and the Snoezelen approach, which has also become a registered trademark.

The Snoezelen approach, introduced in 1974 by Jan Hulsegge, a music therapist, and Ad Verheul, an occupational therapist, takes its name from the contraction of two Dutch verbs, “snuffelen” and “doezelen”, which mean respectively “to sniff” and “to doze”. It thus combines two approaches, that of sensory stimulation and that of bodily relaxation (14). Importantly, the stimulating situation in a Snoezelen context is not seen as an attempt to teach a specific skill, nor as a basis for simply offering rest and quiet, but as an opportunity to promote a general sense of well-being by engaging in pleasant and stimulating activities that do not produce any pressure and can be fully enjoyed (15).

In France, this approach began to develop in the early 1990s.

However, sensory stimulation has long been used to improve well-being as well as to develop thinking, as Montessori did in the early 20th century. Indeed, Montessori (1915, 1918) proposed a multisensory approach to prepare for reading that engages the visual and auditory modalities as well as the haptic modality. The latter makes it possible to link spatial stimuli (perceived with sight) and temporal stimuli (perceived with hearing) (15b).

In a multisensory environment, the guiding principles of the intervention are: non-productivity, the person at the heart of the intervention, the person’s strengths, the importance of the therapeutic alliance, the importance of the climate, individual and personalized intervention, team cooperation (16).

The contributions of the multisensory environment shown in the literature are: reduction of problematic behaviors, improvement of self-awareness, increase in social interaction and communication, relaxation effect and reduction of anxiety, mood regulation and strengthening of the therapeutic alliance (17).

This approach allows caregivers to focus on patients’ interests and to legitimize the time spent with them, which has a positive effect both on the relationship with their patients and on their own morale as caregivers. Indeed, multisensory stimulation proposes a less technical vision of care, in which the caregiver accompanies the patient rather than directing him or her.

What are the fields concerned by multisensory stimulation? (non-exhaustive answer)

Among the fields concerned, there is obviously the health field, but not only that…

Currently, multisensory stimulation is increasingly used in different care sectors, such as the care of adults with intellectual disabilities, neurodegenerative diseases, children with learning disabilities, maternity, chronic pain management, psychiatry, post-coma awakening, rehabilitation, stroke or traumatic brain injury (18).

In his article reviewing the clinical research on the Snoezelen approach used in specialized residential settings with dependent persons, Martin (2015) notes that the results are varied, but with an improvement in emotional regulation, which favors relaxation and psychological appeasement. This effect was observed in persons of all ages (children, teenagers, adults, elderly persons) with an intellectual disability and associated disorders, a neurodegenerative disease, or psycho-behavioral and psychiatric disorders (19).

In the field of childhood and disability in particular, multisensory activities serve to increase the child’s level of multimodal integration, enabling him or her to overcome difficulties that might restrict the development of higher-level cognitive skills such as symbolic play, language, writing, reading and social understanding (20).

In the perinatal and early childhood field, multisensory stimulation is indicated for skin-to-skin care for premature babies in particular (20b).

Moreover, multisensory stimulation in a learning context improves cognitive abilities for both disabled students (ULIS class) and students in conventional schooling (21).

The study carried out by Baker and Jordan in 2015 shows that multisensory stimuli from the same source can support the development of cognitive abilities, here the representation of quantity in infants and young children. The authors explain the results as follows: “When several senses are stimulated, they capture the attention of infants and children, who select relevant information more efficiently and avoid external disturbances. This would increase engagement in the task” (22).

In the field of Human-Computer Interaction, multisensory stimulation is of great interest to facilitate immersion in a virtual environment by improving the localization of the virtual self (23), or to facilitate the use of an application, in particular when the person has a sensory disorder (24). Not to mention the sensory substitution devices that could be proposed to people with mental, cognitive, physical or psychic disabilities (TVSS-type devices for visual-tactile sensory substitution) (25).

Okay, and in concrete terms, how does this happen?

Multisensory stimulation, whether from an environment that engages several senses or from a single object that stimulates several senses, can take many forms:

  • partial environmental adaptation (a familiar environment enriched with multisensory elements: classroom, dentist’s office, etc.) or total adaptation (a room entirely dedicated to this use: Snoezelen trademark room, multisensory space),
  • provision of portable devices offering several sensory stimulations (blanket, multisensory cart),
  • care, activities and devices offering the stimulation of several senses through a single object (balneotherapy, music therapy, animal-assisted therapy, art therapy, therapeutic gardens, digital approaches, grapho-motor activities, etc.).

We can provide multisensory stimulation for recreational, preventive or curative purposes, in order to stimulate the person’s motor, cognitive and social skills. This approach can be used on a scheduled basis, as a “flash” session when the patient shows signs of agitation or discomfort, or at the patient’s request. Generally, the person must be accompanied by a professional trained in this approach. However, the person may sometimes use it alone, depending on his or her profile and the proposed device.

Bibliography

(1) Stein, B. E., & Meredith, M. A. (1993). The Merging of the Senses. Cognitive Neuroscience series. Cambridge, MA: The MIT Press.

(2) Meredith, M. A. (2002). On the neuronal basis for multisensory convergence: A brief overview. Cognitive Brain Research, 14, 31–40.

(3) Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14, 400–410.

(3b) Thelen, A., Matusz, P. J., & Murray, M. M. (2014). Multisensory context portends object memory. Current Biology, 24(16), R734–R735.

(4) Smith Roley, S., Mailloux, Z., Miller-Kuhaneck, H., & Glennon, T. (2007). Understanding Ayres’ sensory integration.

(5) https://www.leneurogroupe.org/integration-sensorielle

(6) Welch & Warren, 1986; Stein et al., 1996; Driver & Spence, 1998; Eimer, 2001; Spence, 2002; …

(7) Hershenson, 1962; Treisman & Gelade, 1980; Stein et al., 1989; Miller, 1982; Giard & Peronnet, 1999; …

(8) McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748.

(9) Driver, J. (1996). Enhancement of selective listening by illusory mislocation of speech sounds due to lip-reading. Nature, 381(6577), 66-68.

(10) Bechara, Tranel, Damasio, Adolphs, Rockland, & Damasio, 1995; Stuss & Alexander, 1999.

(11) Merabet, Desautels, Minville, & Casanova, 1998; Casanova, Merabet, Minville, & Desautels, 1999.

(12) Leiderman, H., Mendelson, J. H., Wexler, D., & Solomon, P. (1958). Sensory deprivation: clinical aspects. AMA Archives of Internal Medicine, 101(2), 389–396.

(13) Sublon, G., & Achard, C. (2012). La stimulation multisensorielle comme outil de prise en charge orthophonique des troubles spatio-temporels et communicationnels de la maladie d’Alzheimer (Doctoral dissertation, Université de Lorraine).

(14) Martin, P. (2015). État de la recherche clinique sur l’approche Snoezelen utilisée en milieu résidentiel spécialisé. Revue francophone de la déficience intellectuelle, 26, 161–180. https://doi.org/10.7202/1037056ar

(15) Lancioni, G. E., Cuvo, A. J., & O’reilly, M. F. (2002). Snoezelen: an overview of research with people with developmental disabilities and dementia. Disability and rehabilitation, 24(4), 175-184.

(15b) Bryant & Bradley, 1985; Gentaz, Colé & Bara, 2003. Hatwell, Y., Streri, A., & Gentaz, E. (2000). Toucher pour connaître. Psychologie cognitive de la perception tactile manuelle. Paris: PUF.

(16) https://www.ciusss-capitalenationale.gouv.qc.ca/sites/d8/files/docs/ProfSante/MissionUniversitaire/ETMISSS/intervention_environnement_multisensoriel.pdf

(17) Rhyn, M., Pelle, C., Misso, V., & Barras, L. (2020). Les apports d’un environnement multisensoriel dans l’offre en soins hospitalière des adolescents en souffrance psychique, évaluation d’un projet clinique. Revue Francophone Internationale de Recherche Infirmière, 6(1), 100194.

(18) Baillon, S., Van Diepen, E., & Prettyman, R. (2002). Multi-sensory therapy in psychiatric care. Advances in Psychiatric Treatment, 8(6), 444–450.

(19) Martin, P. (2015). État de la recherche clinique sur l’approche Snoezelen utilisée en milieu résidentiel spécialisé. Revue francophone de la déficience intellectuelle, 26, 161-180.

(20) https://autisme-espoir.org/wp-content/uploads/BMC-pediatrics-therapie-par-le-jeu.pdf

(20b) Pignol, J., Lochelongue, V., & Fléchelles, O. (2008). Peau à peau: un contact crucial pour le nouveau-né. Spirale, (2), 59-69. Feldman, R. (2002). Les programmes d’intervention pour les enfants prématurés et leur impact sur le développement: et trop et pas assez. Devenir, 14(3), 239-263.

(21) Prunier, A. (2015). L’impact de la stimulation multi-sensorielle sur la mémorisation à long terme. Education. dumas-01280883.

(22) Baker, J. M., & Jordan, K. E. (2015). Chapter 11. The influence of multisensory cues on representation of quantity in children. In Evolutionary Origins and Early Development of Number Processing (pp. 277–304). Elsevier / Academic Press.

(23) Nakul, E., Orlando-Dessaints, N., Lenggenhager, B., & Lopez, C. (2017). Bases multisensorielles de la localisation du soi. Approches par des conflits visuo-tactiles dans un environnement virtuel. Neurophysiologie Clinique, 47(5-6), 344.

(24) Botherel, V., Chêne, D., & Joucla, H. (2019, October). Une conception universelle mise en œuvre via des modes d’usages. In Journée annuelle du Sensolier 2019.

(25) Hervé Segond, Stéphane Maris, Yves Desnos, Perrine Belusso. IHM de Suppléance Sensorielle Visuo-Tactile pour Aveugles et d’Intégration Sensorielle pour Autistes. Journal d’Interaction Personne-Système, Association Francophone d’Interaction Homme-Machine (AFIHM), 2011, 2 (1), pp.1-15.

Ambient objects

According to studies by Gartner and IDATE, in 2020 the number of connected objects in circulation around the world was between 50 and 80 billion. This dizzying number includes smartphones and connected watches, among others. Among these objects, we can gradually see the emergence of a new generation of connected objects: ambient objects.

What’s an ambient object?

An ambient object belongs to the family of ambient information systems (ambient displays). It is an object that disseminates information to the person in an indirect, unsolicited manner. By indirect, we mean that the information stays at the periphery of our attention. We are therefore sensitive to it implicitly, without processing the information directly in an explicit manner. This is the central tenet of calm technology described by Weiser and Brown in 1996 (Designing Calm Technology), which suggests that the display of information should move easily from the periphery of our attention to the center, and vice versa. As an example, we can cite the illuminated “on air” sign that lights up above the door of recording studios when a recording is in progress and goes out when it is finished. Information is transmitted without distracting us or interfering with what we were doing.

Additionally, ambient information systems are connected objects that transmit information through subtle changes in a person’s environment (for example, decorative objects, ambient sound or light). These displays aim to blend seamlessly into a physical environment, where various everyday objects are turned into an interface between people and digital information. For example, in some smart homes, the switching on of the lights and their intensity vary depending on the natural light. The fact that the lights switch on tells us that it is getting late without distracting us.
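As a toy illustration of this principle, here is a minimal sketch of an ambient display loop: a slowly changing measurement is mapped onto a subtle change in the environment rather than onto an explicit notification. The sensor and lamp functions are hypothetical placeholders, not a real home-automation API.

```python
# Minimal sketch of an ambient display: a slow, subtle mapping from a
# measurement to the environment. read_natural_light() and
# set_lamp_brightness() are hypothetical placeholders, not a real API.
import time

def read_natural_light():
    """Placeholder: would return the ambient daylight level in [0, 1]."""
    return 0.4

def set_lamp_brightness(level):
    """Placeholder: would drive a connected lamp (0 = off, 1 = full)."""
    print(f"lamp brightness -> {level:.2f}")

def ambient_loop(steps=3, period_s=60):
    for _ in range(steps):
        daylight = read_natural_light()
        # The darker it gets outside, the brighter the lamp: the change itself
        # conveys "it is getting late" without demanding attention.
        set_lamp_brightness(1.0 - daylight)
        time.sleep(period_s)   # slow updates keep the display in the periphery

if __name__ == "__main__":
    ambient_loop(steps=1, period_s=0)
```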

Why use an ambient object?

This method of disseminating information distracts the person less, by not disrupting their attention. It thus requires minimal effort on the part of the user while still providing knowledge. So instead of imposing information on the person with disturbing notifications, as a smartphone or a connected watch could do, it is the user who grabs it when they need it.

When the information being disseminated is a breathing guide (peripheral rhythmic breathing device), it has been shown that people gradually synchronize with the guide and eventually breathe at the same rate as it. This occurs even when people are focused on another task that requires their full attention (Moraveji et al., 2011).
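To make the idea more tangible, here is a hedged sketch of the kind of pacing signal such a peripheral breathing guide could follow: a slow oscillation around 6 breaths per minute. The target rate and the sine-wave shape are illustrative assumptions, not the exact settings used by Moraveji et al.

```python
# Sketch of a peripheral breathing guide: a slow periodic signal (~6 breaths
# per minute) that could drive a subtle animation or light at the edge of the
# screen. The rate and waveform are illustrative assumptions.
import math

def breathing_guide(t, breaths_per_minute=6.0):
    """Return the guide amplitude in [0, 1] at time t (in seconds).

    0 = fully exhaled, 1 = fully inhaled. Users tend to slowly synchronize
    their own breathing with this rhythm, even while focused on another task.
    """
    period = 60.0 / breaths_per_minute
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))

if __name__ == "__main__":
    for t in range(0, 11, 2):   # sample the first 10 seconds
        print(f"t={t:2d}s  guide={breathing_guide(t):.2f}")
```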

Where does it come from?

On the one hand, ambient objects come from the concept of ubiquitous computing (UbiComp) imagined by Weiser in 1991. Ubiquitous computing aims to make all kinds of services accessible anywhere, while “hiding” the computing units. In this human-computer interaction paradigm, computers run in the background. Thus, the user, no longer constrained by the usual way of using a computer (being seated in front of a keyboard and a screen, etc.), regains their freedom of action and their freedom of movement.

On the other hand, this concept of displaying peripheral information, blended into the environment, also relates to the notion of an ecological device. Indeed, ambient objects are less likely to alter the environment in which people operate, thus enabling a smoother integration of the technology into their surroundings.

Bibliography

Moraveji, N., Olson, B. et al. Peripheral paced respiration: influencing user physiology during information work.
UIST ’11: Proceedings of the 24th annual ACM symposium on User interface software and technology October 2011 Pages 423–428
https://doi.org/10.1145/2047196.2047250
https://dl.acm.org/doi/abs/10.1145/2047196.2047250 

Weiser, M., Seely Brown, J. Designing Calm Technology. PowerGrid Journal, 1996.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.9788&rep=rep1&type=pdf

Yu, B., Hu, J., Funk, M. et al. DeLight: biofeedback through ambient light for stress intervention and relaxation assistance. Pers Ubiquit Comput 22, 787–805 (2018).
https://doi.org/10.1007/s00779-018-1141-6 
https://link.springer.com/article/10.1007/s00779-018-1141-6

Vogel, D., Balakrishnan, R. Interactive public ambient displays: transitioning from implicit to explicit, public to personal, interaction with multiple users.
UIST ’04: Proceedings of the 17th annual ACM symposium on User interface software and technology October 2004 Pages 137–146
https://doi.org/10.1145/1029632.1029656
http://www.dgp.toronto.edu/~ravin/papers/uist2004_ambient.pdf

What if play was not just a simple recreational activity but a necessity for the proper development of children?

Invited post, written by Sacha Benrabia, Cassandra Dumas, Laura Lalieve

Through play, children can develop self-confidence, collaboration, emotional expression and initiative. It also contributes to cognitive, physical, social and emotional well-being. Research has shown that through play, children develop in many respects, from their motor skills to their language, as well as their intellectual and social development.

Intellectual development

By stimulating children’s curiosity, games play a key role in memory development. In particular, construction games help develop the logical part of the brain. Finally, through play, children develop their brains and their cerebral plasticity.

Social Development

Play is an educational and pedagogical activity that contributes to the emotional, moral and social development of the child. It is also a way of understanding reality with its norms, rules and values. Play contributes to the structuring of the child, a key stage that marks his or her social relationship with his or her present and future environment.

Motor and sensory development

The experience gained through play is associated with improved spatial skills, including better geometric thinking. Through play, children begin to crawl and then walk normally, and develop skills in climbing, balancing, throwing, aiming and coordinating with obstacles along the way.

Language development

When children play with peers, they are communicating in some way with others. In this sense, play is a primitive system of communication through which emotions, dreams and fears can be expressed. More generally, play is a way of calling out to others through a symbol. By having fun with toys of various and sometimes atypical shapes and colors, children enrich their vocabulary and their understanding of the shapes and colors that surround them.

What about Coral?

Coral is a tangible biofeedback interface in the form of a construction set. Equipped with physiological sensors, it allows children to discover their physiological data and share them with others. Its colors and varied shapes arouse the curiosity of the youngest and keep them engaged.

Through biofeedback, children can discover the effects of their mental states and emotions on their physiology and learn to regulate it in a healthier way.
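As a purely illustrative sketch of what this kind of biofeedback mapping can look like (this is not Coral’s actual sensors, scale or color scheme), a physiological measure such as heart rate could be turned into a color the child can see and play with:

```python
# Hedged sketch of a biofeedback mapping: heart rate turned into a color.
# Illustration only; not Coral's actual sensors, scale or color scheme.

def heart_rate_to_color(bpm, calm_bpm=70.0, excited_bpm=120.0):
    """Map heart rate (beats per minute) to RGB: blue = calm, red = excited."""
    level = (bpm - calm_bpm) / (excited_bpm - calm_bpm)
    level = max(0.0, min(1.0, level))            # clamp out-of-range values
    return (int(255 * level), 0, int(255 * (1.0 - level)))

if __name__ == "__main__":
    print(heart_rate_to_color(72))    # near calm -> mostly blue
    print(heart_rate_to_color(110))   # aroused -> mostly red
```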

Coral allows for motor, social, language and intellectual development of the child while offering a fun activity that appeals to the child’s imagination.

Sacha, Cassandra and Laura contributed to the third iteration of Coral, investigating how it could be used more specifically with children.

Bibliography

COUZON Nathalie, « Le pouvoir du jeu dans le développement des jeunes enfants », 2018 

http://rire.ctreq.qc.ca/2018/12/pouvoir-jeux/ 

EKIN Cansu C., CAGILTAY Kursat & KARASU Necdet, « Effectiveness of smart toy applications in teaching children with intellectual disability », Journal of Systems Architecture, 2018, n°89, p.41-48

https://www.researchgate.net/publication/326509360_Effectiveness_of_Smart_Toy_Applications_in_Teaching_Children_with_Intellectual_Disability

GAUSSOT Ludovic. « Le jeu de l’enfant et la construction sociale de la réalité », Le Carnet PSY, vol. 62, n°2, p.22-29, 2001

https://www.cairn.info/revue-spirale-2002-4-page-39.htm

VERDINE Brian, ZIMMERMANN Laura & al. « Effects of Geometric Toy Design on Parent-Child Interactions and Spatial Language », Early Childhood Research Quarterly, 2018

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6289199/

METRA Maryse, « Le jeu dans le développement affectif, cognitif, corporel et social de l’enfant », UFAIS Lyon, 2006

https://aefe-zoneafriquecentrale.net/IMG/pdf/jeu_et_developpement_de_l-_enfant.pdf

If you want to know more about this concept, here are some well-known names: Mildred Parten, Nathalie Nader-Grosbois, Sandrine Vincent, Jean Piaget.