Cosmos

Cosmos is a shared experience where multiple users can observe a biofeedback of their heart-rate activity, displayed on a large screen.

Cosmos - new ways of interaction and breath sync

Each user is associated with a(n archetypal) heart wandering in space. The heart rate defines the position of the heart symbol along a diagonal, from low to high, while the heart rate variability makes the heart wander up and down. This playful visualization is simple yet effective at conveying the different features associated with heart rate. More importantly, beyond giving information about each individual, Cosmos constantly computes in the background various metrics related to the synchronization between users. Events are triggered in the environment depending on the similarities extracted from all heart rates. For example, a correlation between two heart rates will display rainbows launching from one heart to the other, while a cluster of hearts (similar heart rates and heart rate variability) will provoke an avalanche of shooting stars and the cutest sounds you will ever hear. Cosmos prompts introspection (take control of one’s heart) as well as interaction among users (trigger events by joining efforts). It is also a good support for explaining the physiological phenomena associated with heart rate activity, the link between physiology and cognition, or the natural synchrony that can occur between people.
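The exact mapping and synchrony metrics are not spelled out here; the sketch below is only a hypothetical Python illustration of the principle, with made-up ranges, scaling and correlation threshold rather than Cosmos’ actual values.

```python
import numpy as np

def heart_position(heart_rate, hrv, t, hr_range=(50.0, 120.0)):
    """Heart rate places the heart along a low-to-high diagonal; heart rate
    variability sets how much it wanders up and down over time t (seconds)."""
    lo, hi = hr_range
    d = float(np.clip((heart_rate - lo) / (hi - lo), 0.0, 1.0))
    wander = 0.1 * min(hrv / 100.0, 1.0) * np.sin(t)   # HRV in ms, arbitrary scaling
    return d, d + wander                               # normalized screen coordinates

def rainbow_between(hr_a, hr_b, threshold=0.7):
    """Trigger an event (here, a rainbow) when two heart-rate series correlate."""
    r = float(np.corrcoef(hr_a, hr_b)[0, 1])   # Pearson correlation over a sliding window
    return r > threshold
```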

We had the opportunity to showcase Cosmos during many public exhibitions. More than once, we observed how relationships could shift when a user was facing their heart rate and how it related to others’. People would question what was happening and start various activities to try to control their hearty avatar: relaxation, physical exercises or social interactions… in various forms, often without realizing that there is still a world outside the experience. The kawaii factor® does help to lift anxieties linked to “exposing” oneself through biosignals. Playfulness prevails then, which in turn opens the door to unique interactions, even between strangers.

On the technical side, because (for once) Cosmos does not rely on any specific object, it can be quickly set up. We can also interface it with most devices measuring heart rate (the industry shares a standard Bluetooth Low Energy profile), hence we can envision scenarios involving large groups of users — we have tested up to twelve so far. To study the impact of such biofeedback at the level of the group, Cosmos will get its own research program in due time.
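Heart-rate monitors that follow this standard expose the Heart Rate Measurement characteristic (UUID 0x2A37). As a minimal sketch of how such a device can be read, here using the Python bleak library (the device address is a placeholder and this is not Cosmos’ actual code):

```python
import asyncio
from bleak import BleakClient

HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"  # standard BLE characteristic

def on_heart_rate(_, data: bytearray):
    # Flags byte: bit 0 tells whether the heart rate is encoded as uint8 or uint16.
    bpm = int.from_bytes(data[1:3], "little") if data[0] & 1 else data[1]
    print("heart rate:", bpm, "bpm")

async def main(address="AA:BB:CC:DD:EE:FF"):  # placeholder device address
    async with BleakClient(address) as client:
        await client.start_notify(HR_MEASUREMENT, on_heart_rate)
        await asyncio.sleep(60)               # stream notifications for one minute

asyncio.run(main())
```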

Echo

Echo is meant to be your tangible avatar, representing in real time your physiological signals (breathing, heart rate, …) as well as higher-level inner and mental states (cognitive load, attention level, emotions, …). This is somehow a tangible “out-of-body experience”, hence “Tobe”, the first name of the project back when it was a research project in the Potioc research team at Inria Bordeaux. Echo was not the first avatar we built, though. Before it came Teegi, which was specifically aimed at showing brain activity — a more focused project, aimed at learning as much as at introspection, which went on its own path.

Through Echo, users can observe and interact with what occurs within. In addition, part of the research consists in finding ways to let users shape their avatar, customizing it to increase identification. With the first version, which relied on spatial augmented reality (SAR: a projector to display any graphics, an external tracking system to project right onto the puppet), users could choose how they would represent their inner states. For example, they could pick and draw on a dedicated device the color, size or form of their heart, and even animate their heart rate however they saw fit. Echo is conceived and 3D printed from scratch; a tedious process for mass production, but one that offers more flexibility when it comes to adjusting shape and size to users’ liking. If it started as a cumbersome SAR setup, Echo ended up as a self-contained object with an embedded display and computational unit, nowadays ready to shine with the flick of a switch.

With Echo we were able to investigate for the first time, back in 2014, a shared biofeedback between pairs of users, with shared relaxation exercises. Among the other use cases we imagined for Echo: a display that could re-enact how you felt alongside the picture of a cherished memory; an avatar representing a loved one remotely (a scenario that we have since pushed further with Breeze); or a proxy that could help people with sensory challenges (e.g. autism spectrum disorder) communicate with those around them. This latter scenario is one of the applications we anticipate the most.

We are currently running a study, a two-user scenario, where we want to formally assess to what extent an avatar such as Echo can alter the perception people have of one another. We hypothesize that communicating through physiological signals with such an interface could create an additional layer of presence when two people meet and share a moment.

Even though Echo is still mostly a research project at the moment, several units already live outside of Ullo, as far away as Japan and the Creact headquarters, where they are meant to be used in an educational context.

Additional resources: the repository hosting the first version of Echo, the spatial augmented reality version based on visual programming languages for both the rendering (with vvvv) and the processing (with OpenViBE): https://github.com/introspectibles. See also the personal page of Renaud Gervais, the other father of this first version.

Associated publications

Renaud Gervais, Jérémy Frey, Alexis Gay, Fabien Lotte, Martin Hachet. TOBE: Tangible Out-of-Body Experience. TEI’16 ACM Conference on Tangible, Embedded and Embodied Interaction, 2016. ⟨10.1145/2839462.2839486⟩ ⟨hal-01215499⟩. PDF

Flower

Flower is a device specifically aimed at providing breathing exercises. Patterns with different paces and colors are shown on the petals, which users can sync with. The main use case is to reduce anxiety when users choose to take a break. It is also envisioned as an ambient device operating in the peripheral vision, its slow pulsating light gently guiding users without intruding into their routine. While we envisioned various designs at the beginning of the project, before the name settled, in the end a “flower” as a form factor is reminiscent of a plant next to which one would breathe in in order to smell it.

When the Flower is connected to a smartwatch, the breathing guide adapts to users, speeding up slightly when heart rate is higher and slowing down when heart rate is lower. This is in line with the existing literature around the physiological phenomenon of cardiac coherence (in a nutshell: heart rate variability synced with breathing). There is indeed not one breathing to rule them all, and users benefit from a guide adapted to their taste and physiology, which makes it more effective.
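The adaptation rule itself is not detailed here; as a minimal, hypothetical sketch of the principle (baseline, gain and bounds are placeholders, not the Flower’s actual parameters):

```python
def breathing_period(heart_rate, baseline_hr=70.0, base_period=10.0, gain=0.5):
    """Duration (seconds) of one inhale/exhale cycle shown on the petals: the
    guide speeds up (shorter cycle) when heart rate is above baseline and slows
    down when it is below, staying within comfortable bounds."""
    period = base_period * (1.0 - gain * (heart_rate - baseline_hr) / baseline_hr)
    return max(6.0, min(14.0, period))        # roughly 4 to 10 breaths per minute
```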

To this day, two studies have taken place. One occurred in a replica apartment, in order to assess the device’s usability and how people would appropriate it. The second study assessed the effect of the device when stressors were presented to participants, collecting along the way subjective measures, performance metrics and markers extracted from heart rate variability. In the associated paper we describe how the design of the Flower was praised by users and how it can reduce symptoms of stress when users focus their attention on it, as well as increase performance on a cognitive task (N-back). We did attempt to investigate whether an ambient biofeedback could alleviate stress, however this other experimental condition did not yield any significant difference compared to a sham feedback — most probably because an ambient feedback takes longer than mere minutes before it can be effective.

At this stage a couple dozen devices are being used by various people, including therapists who have integrated the device in their practice — more information about this version on the company’s website. Besides providing breathing exercises, a second usage that emerged from the field consists in using the Flower as a timer, to orchestrate the day for people suffering from disorientation. We are actively working toward a second iteration that would offer more interaction when it is being manipulated and that could be mass-produced. At the same time we are building a platform that could help stimulate interactions between users and that could be used to gather data for field studies. We are also considering use cases where one Flower could serve as a biofeedback for a whole group, the color changing depending on the overall heart rate variability.

Associated publications

Morgane Hamon, Rémy Ramadour, Jérémy Frey. Exploring Biofeedback with a Tangible Interface Designed for Relaxation. PhyCS – International Conference on Physiological Computing Systems, Sep 2018, Seville, Spain. ⟨10.5220/0006961200540063⟩⟨hal-01861829⟩. PDF

Morgane Hamon, Rémy Ramadour, Jérémy Frey. Inner Flower: Design and Evaluation of a Tangible Biofeedback for Relaxation. Physiological Computing Systems, Springer, Cham, 2019. ⟨10.1007/978-3-030-27950-9_8⟩. PDF

Prism

Did you ever get lost while reading a book, living through the characters and the events, being transported over the course of a story into a foreign world? What if such a written universe could evolve depending on you; the text reacting discreetly when your heart races through a paragraph of action, or when your breath is taken away by the unexpected revelation of a protagonist?

This is a project about an interactive fiction fueled by physiological signals, which we hereby add to our stash. While there were hints of a first prototype published four years prior, the current version is the result of a collaboration with the Magic Lab from Ben Gurion University, Israel. We published at CHI ’20 our paper entitled “Physiologically Driven Storytelling: Concept and Software Tool”. We received there a “Best Paper Honorable Mention”, awarded to the top 5% of the submissions — references and a link to the original article at the bottom.

Beyond the publication and the research, we wish to provide writers and readers alike with a new form of storytelling. Thanks to the “PIF” engine (Physiological Interactive Fiction), it is now possible to write stories whose narrative can branch in real time depending on signals such as breathing, perspiration or pupil dilation, among others. To do so, the system combines a simplified markup language (Ink), a video-game rendering engine (Unity) and robust software to process physiological signals in real time (OpenViBE). Depending on the situation, physiological signals can be acquired with laboratory equipment as well as with off-the-shelf devices like smartwatches or… with Ullo’s own sensors.
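In the engine itself the branching is authored in Ink and the markers are computed by OpenViBE; the hypothetical Python snippet below only illustrates the idea of an implicit, physiology-driven choice (knot names, markers and thresholds are made up):

```python
def next_knot(current_knot, breathing_rate, arousal):
    """Choose the next story passage ("knot" in Ink parlance) from real-time
    physiological markers; the rules shown here are purely illustrative."""
    if current_knot == "ambush":
        # A racing breath steers the story toward the action-heavy branch.
        return "chase" if breathing_rate > 20 else "hide"
    if current_knot == "revelation":
        # High arousal (e.g. from skin conductance) unlocks the emotional aside.
        return "confession" if arousal > 0.7 else "silence"
    return "epilogue"
```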

Interactive fiction’s origin story takes place in the late 70s, a time during which “Choose Your Own Adventure” books (and the like) emerged alongside video games that were purely textual. In the former, one has to turn to a specific page depending on the choice made for the character; in the latter, precursors to adventure games, players have to type simple instructions on the keyboard, like OPEN DOOR or GO NORTH, to advance in the story — one of the most famous games being Zork, by Infocom. (Zork, which I [Jeremy] must confess never being able to finish, unlike jewels such as A Mind Forever Voyaging or Planetfall, developed by the same company.) A more detailed history can be found in the paper. Here we substitute implicit interaction, relying on physiology, for explicit interaction from the reader, with a pinch of machine learning to understand how signals evolve depending on the context. Transparent for the reader, and no complex programming language to learn for the writer, but a light syntax quick and easy to grasp.

If the vision — which is not without resemblance to elements found in science-not-so-fiction-anymore works such as Ender’s Game or The Diamond Age — is ambitious, the project is still in its infancy. Still, two studies were on the menu for the first published full paper. In one we investigate the link between, on the one hand, the proximity of a story to the reader and, on the other hand, empathy toward the character. In the other study we look at what information physiological signals can bring about the reader, with first classification scores on constructs related to emotions, attention, or the complexity of the story. From there a whole world is left to explore, with long-term measures and more focused stories. One of the scientific objectives we want to carry on is to understand how this technology could favor empathy: for example opening up readers’ perspectives by helping them better encompass the point of view of a character that seems definitely too foreign at first. One lead among many, and along the way, awareness about all the different (mis)uses.

Besides this more fundamental research, during the project’s next phase we expect to organize workshops around the tool. If you are an author boiling with curiosity, whether established or hobbyist — and not necessarily keen on new tech — don’t hesitate to reach out to us to try it out. We are also looking to build a community around the open-source software we developed; contributors are welcome!

On the technical side, next we won’t deny ourselves the pleasure of integrating devices such as the Muse 2 for a drop of muscular and brain activity, exploring virtual reality rendering (first visuals with the proof of concept “VIF”: http://phd.jfrey.info/vif/), or creating narrative worlds shared among several readers.

For more information, to keep an eye on the news related to the project, or to get acquainted with the (still rough) source code, you can visit the dedicated website we are putting up: https://pif-engine.github.io/.

Associated publications

Jérémy Frey, Gilad Ostrin, May Grabli, Jessica Cauchard. Physiologically Driven Storytelling: Concept and Software Tool. CHI ’20 – SIGCHI Conference on Human Factors in Computing System, Apr 2020, Honolulu, United States. 🏆 Best Paper Honorable Mention. ⟨10.1145/3313831.3376643⟩⟨hal-02501375⟩. PDF

Gilad Ostrin, Jérémy Frey, Jessica Cauchard. Interactive Narrative in Virtual Reality. MUM 2018 Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, Nov 2018, Cairo, Egypt. pp.463-467, ⟨10.1145/3282894.3289740⟩⟨hal-01971380⟩. PDF

Jérémy Frey. VIF: Virtual Interactive Fiction (with a twist). CHI’16 Workshop Pervasive Play, 2016. ⟨hal-01305799⟩. PDF

Pulse

Pulse is an experience that we first showcased during the 2019 edition of CES. With Pulse we wanted to provide a moment where users can reflect not only on their heart rate, but also on the heart rates of those around them, while retaining complete agency. We indeed brought more control back into the hands of the users; quite literally: thanks to ECG electrodes embedded in the spheres, the physiological measures only occur when users decide to grasp the device. Pulse involves three components. 1. The “planet”, which illuminates when users put their hands on it, starting to, you get it, pulse through light and vibration at the same pace as the heart rate. 2. The central “hub”, which gathers the heart rates of all users (up to four in the current version) and changes color according to the synchronization it detects among them. 3. Finally the cable, also called… actually we don’t have a good name for it yet, but it nonetheless does more than conveying electrical signals and information: you can observe pulses of light that accompany each heartbeat from the planets to the hub. Beyond pleasing the eye, having the cable explicitly convey signals is also a way to remind users about what is going on behind the curtain.
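How the hub turns synchronization into color is not documented here; one plausible sketch (not the actual firmware) averages the pairwise correlations of the connected planets and maps the score onto a hue:

```python
import colorsys
from itertools import combinations
import numpy as np

def hub_color(hr_series):
    """Average the pairwise correlations of the connected planets (2 to 4 users)
    and map the synchrony score onto a cold-to-warm hue for the hub's LEDs."""
    pairs = list(combinations(hr_series, 2))
    sync = float(np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])) if pairs else 0.0
    hue = 0.6 * (1.0 - max(0.0, sync))    # blue when out of sync, red when fully in sync
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)
```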

Pulse was conceived in collaboration with the Potioc team at Inria; in particular Thibault Lainé and Joan Sol Roo, who worked on the very first proof of concept (sensors + the… cable). The current design would not be complete without the craft of Alexis Gay from GayA concept, who carefully transformed white blobs into shiny planets and helped refine the user experience.

Due to its modular nature and “hands-on” approach, Pulse shares similarities with our Coral. More than meets the eye at first: thanks to an analog output, here as well we can connect the hub to other gear. As such, we built for CES a device that converts analog signals to the MIDI protocol; a device (yet to have its own page) that in turn we connected to a sequencer and synthesizer, the teenage engineering OP-Z. As a result: a soundtrack that speeds up or down depending on the group’s average heart rate, and notes and chimes triggered by each heartbeat. Space does sound, at times.
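We have not published the converter itself; as a rough illustration of the software side of such a mapping, using the Python mido library (the port name and note are placeholders):

```python
import time
import mido

def heartbeats_to_midi(heartbeat_stream, port_name="OP-Z"):  # placeholder port name
    """Send a MIDI note for every detected heartbeat; the sequencer's tempo can
    then be slaved to the group's average heart rate via MIDI clock (not shown)."""
    with mido.open_output(port_name) as out:
        for _beat in heartbeat_stream:        # one item per detected heartbeat
            out.send(mido.Message("note_on", note=60, velocity=100))
            time.sleep(0.05)                  # short gate before releasing the note
            out.send(mido.Message("note_off", note=60))
```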

Because light, vibration and music were not enough, Pulse’s hub can also act as a (Bluetooth) sensor to connect the group to our Cosmos, creating an even more engaging experience. By merging these different modalities and projects, we are building a whole universe that revolves around participants.

Vibes

Imagine holding in your hand an enclosure shaped like a heart, pulsating through lights and vibrations at the same pace as your own heart. Classic biofeedback. Now picture giving it to someone in front of you: we can assure you that the words “take this, this is my heart” have quite an effect on visitors during an exhibition. This is Vibes, the tangible biofeedback you hold onto, that you can share, exchange, feel and make feel. Among the use cases we envision with Vibes: send your heart rate (i.e. your vibes) to someone far away, to remind them that you are thinking about them. For the moment we can easily swap the signals between pairs of devices, to have users compare their rhythms with one another.

Two persons exchanging their heart rate through Vibes

In the next iteration we will improve the haptic feedback. While vibration motors give satisfactory results to mimic a beating heart, we are in the process of integrating novel actuators whose versatility enables us to explore any dynamic (think tiny boomboxes in the palm of the hand).
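As an illustration of what such actuators make possible (this is not Vibes’ firmware, and the envelope parameters below are invented), one could synthesize a “lub-dub” waveform for each heartbeat and feed it to a voice-coil driver:

```python
import numpy as np

def lub_dub(sample_rate=8000, strong=1.0, weak=0.6):
    """Amplitude envelope for one heartbeat: two short decaying bursts ("lub"
    then "dub"), suitable for a voice-coil or linear resonant actuator."""
    t = np.arange(0, 0.12, 1.0 / sample_rate)
    burst = lambda amp, freq: amp * np.sin(2 * np.pi * freq * t) * np.exp(-t / 0.03)
    gap = np.zeros(int(0.10 * sample_rate))   # silence between the two sounds
    return np.concatenate([burst(strong, 60), gap, burst(weak, 45)])
```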

Vibes is still at an early stage, and yet we witnessed first hand how giving away a pulsating heart — even a nice-looking one — has an immediate effect on users. There is intimacy involved. Interestingly, people would often compare the pace of Vibes to the one they can measure themselves, for example by placing a finger on the jugular. We observed these situations occurring more frequently than with our other biofeedback devices. People tend to underestimate the pace of their heart rate; maybe because of the proximity between the representation and the actual phenomenon, any perceived discrepancy might prompt investigation (still on the right side of the uncanny valley?). This relationship between representation and effectiveness is still a hypothesis, one that we hope to investigate in the future.

Garden

Connecting inner states to a mixed reality sandbox

Garden started within the Potioc research team at Inria Bordeaux, at the end of 2015. This project was a new iteration of the “introspectibles” that were first investigated with Teegi and Tobe (now Echo). This human-size interactive sandbox lets users shape a landscape whose colors and animations evolve not only depending on the topology — piling sand creates mountains, digging holes creates lakes — but also depending on physiological signals and inner states, to help users stay focused on the body — breathing commands the waves crashing on the sandy shores, being relaxed makes the forest grow. At the time an additional headset enabled users to immerse themselves in their creation, experiencing it from within, looking up to the trees, breathing now animating the wind.
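The actual rendering pipeline is more involved, but the principle can be sketched: classify the normalized sand height map coming from the depth camera into terrain bands, then let breathing and relaxation move the shoreline and the tree line (thresholds below are invented, not the Garden’s values):

```python
import numpy as np

def terrain_bands(height_map, breath_phase, relaxation):
    """Classify a normalized height map (0-1) into terrain bands; breathing
    moves the shoreline like a wave, relaxation lowers the tree line so the
    forest spreads."""
    water_level = 0.30 + 0.02 * np.sin(breath_phase)  # waves follow breathing
    tree_line = 0.60 - 0.15 * relaxation              # more forest when relaxed
    terrain = np.full(height_map.shape, "sand", dtype=object)
    terrain[height_map < water_level] = "water"
    terrain[(height_map >= tree_line) & (height_map < 0.85)] = "forest"
    terrain[height_map >= 0.85] = "mountain"
    return terrain
```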

Our ambition back then was to investigate to what extent such a tool could be used as a support and facilitator for mindfulness — the act of paying deliberate and non-judgmental attention to the present moment, which has been shown to have a positive impact on a person’s health and subjective well-being. To do so we invited participants versed in meditation (long-time practitioners and even a Buddhist lama) as well as medical caregivers (psychologist, psycho-motor therapist).

The feedback and results we gathered indicated that the system was indeed well suited for mindfulness, inducing a calm and mindful state in the user. We also collected a wealth of interesting qualitative data, opening up the Garden to new usages. In particular, we realized that such a playful multi-modal device could become a tool to alleviate stress or act as a mediator to facilitate communication between caregivers and their patients. We then started to envision how the Garden could be used in medical settings and benefit even people with cognitive disorders.

Despite the technical challenges that were hovering above, this is why we started to work, through Ullo, on a portable version of the device that could be smoothly deployed outside of the lab and used in the field by medical practitioners. It took several iterations to reach this goal, with prototypes each tested in situ over the years in partner institutions (nursing homes, hospitals, medical and education institutes). Now Garden has finally become an actual product, used across France to help people with conditions ranging from Alzheimer’s and dementia to autism or ADHD.

Garden was recognized by the scientific community — an Honorable Mention Award (top 5% of submissions) for its publication at ACM CHI, the leading conference in human-computer interaction, and a Best Demo Award at IHM 2018, its French counterpart — as well as by the tech industry — CES 2019 Innovation Award Honoree, Tech For A Better World — and by official bodies — AFNOR certification “Testé et Approuvé par les Seniors”.

This is only the beginning of the journey though, as we are constantly adding features and imagining new usages for the device — plus, of course, a packed research agenda. The Garden is currently being investigated as a tool that could be used in education settings, through a pending collaboration with the Nancy-Metz academy and the PErsEUS research team. We are also very interested in pushing further how users could interact with the Garden: connecting new sensors, creating new ways for users to express themselves (from sound synthesis to an “audioscape” matching the land), eventually combining the Garden with XR devices again. Finally, we also started to study how features such as the shape of the landscape could bring new information to the table, accounting for users’ states and helping medical practitioners better understand their patients (ongoing PhD by Camilla Barbini at the CoBTeK lab).

Among the original creators of Garden, check out Joan Sol Roo’s and Renaud Gervais’ other projects. As for Garden as a product used in the field, more information is available on the Ullo company website.

Associated publications

Joan Sol Roo, Renaud Gervais, Jérémy Frey, Martin Hachet. Inner Garden: Connecting Inner States to a Mixed Reality Sandbox for Mindfulness. CHI ’17 – SIGCHI Conference on Human Factors in Computing System, 2017. 🏆 Best Paper Honorable Mention. ⟨10.1145/3025453.3025743⟩ ⟨hal-01455174⟩. PDF

Joan Sol Roo, Renaud Gervais, Jérémy Frey, Martin Hachet. Introspectibles: Tangible Interaction to Foster Introspection. CHI ’16 Workshop – Computing and Mental Health, 2016. ⟨hal-01455174⟩. PDF