Coral

This project started as a(nother) collaboration with the Potioc research team, and the then-post-doc Joan Sol Roo. Through this project, we wanted to address some of the pitfalls related to tangible representations of physiological states.

At this point we had been working for approximately 8 years on the topic, creating, experimenting with and teaching about devices using physiological signals, while at the same time exchanging with various researchers, designers, artists, private industries, enthusiasts and lay users. We had started to have a pretty good idea of the various friction points. Among the main issues we encountered: building devices is hard, even more so when they should be used outside of the lab or given to novices. We started to work more on electronics because of that, for example relying more on embedded displays instead of spatial augmented reality, but we wanted to go one step further and explore a modular design that people could freely manipulate and customize. We first started to wonder what basic “atoms” were necessary to recreate our past projects. Not so many, it appeared. Most projects boil down to a power source, a sensor, some processing and an output, no more. Various outputs can be aggregated to give multi-modal feedback. Communication can be added, to send data to or receive it from another location, as with Breeze. Data can be recorded or replayed. Some special form of processing can occur to fuse multiple sensors (e.g. extract an index of cardiac coherence) or to measure the synchrony between several persons, as with Cosmos. And that is it: we have our set of atoms, or bricks, that people can assemble in various ways, to recreate existing devices or invent new ones. Going tangible always comes with a trade-off in terms of flexibility or freedom as compared to digital or virtual (e.g. it is harder and more costly to duplicate an item), but it also brings invaluable features, with people more likely to manipulate, explore and tinker with physical objects (there is more to that debate; for another place).
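
To make the idea of composable atoms more concrete, here is a minimal software sketch (not the bricks' actual firmware; all names are invented for illustration) showing how a sensor, a processing step and an output can be chained, each one doing a single task:

```python
# Illustrative sketch only: modelling Coral-style "atoms" in software, where
# each brick performs one task and passes a value to the next one.
from typing import Callable, List

Atom = Callable[[float], float]  # an atom maps one sample to one sample

def sensor(sample: float) -> float:
    """Pretend breathing sensor: already returns a normalized value in [0, 1]."""
    return sample

def smoothing(alpha: float = 0.2) -> Atom:
    """Processing atom: simple exponential smoothing, one task only."""
    state = {"y": 0.0}
    def step(x: float) -> float:
        state["y"] = alpha * x + (1 - alpha) * state["y"]
        return state["y"]
    return step

def led_output(x: float) -> float:
    """Output atom: here we just print the brightness it would set."""
    print(f"LED brightness: {x:.2f}")
    return x

def stack(atoms: List[Atom]) -> Atom:
    """Assemble bricks: the output of one atom feeds the next."""
    def run(x: float) -> float:
        for a in atoms:
            x = a(x)
        return x
    return run

pipeline = stack([sensor, smoothing(), led_output])
for raw in [0.1, 0.4, 0.8, 0.5]:
    pipeline(raw)
```

Swapping the LED output for a vibration motor, or inserting a recording atom in the middle of the list, is then only a matter of rearranging the chain, which is exactly what the physical bricks afford.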

We are of course not the first to create a modular toolkit; many projects and products provide approaches to exploring electronics or computer science, and the archetypal example, which is also a direct inspiration, comes from the Lego bricks themselves. However, we push for such a form factor in the realm of physiological computing. More importantly, we use the properties of such a modular design to address other issues pertaining to biofeedback applications: how to ensure that the resulting applications empower users and do not enslave them?

Throughout the project, we aimed at making it possible to interact with the artifacts under the same expectations of honest communication that occur between people, based on understanding, trust, and agency.

  • Understanding: Mutual comprehension implies a model of your interlocutor’s behavior and goals, and a communication protocol understandable by both parties. Objects should be explicable, a property that is facilitated when each one performs atomic and specific tasks.
  • Trust: To ensure trust and prevent artifacts from appearing as a threat, their behavior must be consistent and verifiable, and they should perform only explicitly accepted tasks for given objectives. Users should be able to doubt the inner workings of a device, overriding the code or the hardware if they wish to do so.
  • Agency: As the objective is to act towards desirable experiences, control and consent are important (and cannot happen without understanding and trust). Users should be able to disable undesired functionalities, and to customize how and to whom the information is presented. Objects should be simple and inexpensive enough that users can easily extend their behavior.

Coral (or “blobs”, “totems”, “physio-bricks”, “physio-stacks”… names were many) was created to implement those requirements. In particular, bricks were made:

  • Atomic: each brick should perform a single task.
  • Explicit: each task should be explicitly listed.
  • Explicable: the inner behavior of an element should be understandable.
  • Specific: each element’s capabilities should be restricted to the desired behavior, making it unable to perform unexpected actions.
  • Doubtable: behaviors can be checked or overridden.
  • Extensible: new behaviors should be easily supported, providing forward compatibility.
  • Simple: as a means to achieve the previous items, simplicity should be prioritized.
  • Inexpensive: to support the creation of diverse, specific elements, each of them should be rather low cost.

For example, in order to keep the communication between atoms Simple and Explicable, we opted for analog communication. Because no additional meta-data is shared outside a given atom unless explicitly stated, the design is Extensible, making it possible to create new atoms and functionality, similar to the approach used for modular audio synthesis. A side effect of analog communication is its sensitivity to noise: we accepted this trade-off as we consider the gain in transparency worth it. It can be argued that this approach leads to a naturally degrading signal (i.e. a “biodegradable biofeedback”), ensuring that the data has a limited life span and thus limiting the risk that it could leak outside its initial scope and application. Going beyond, in order to explicitly inform users, we established labels to notify them of what type of “dangerous” action the system is capable of performing. A standard iconography was chosen to represent basic functions (e.g. a floppy disk for storage, gears for processing, waves for wireless communication, …). We consider that, similar to the food labeling that is being enforced in some countries, users should be aware of the risks involved for their data when they engage with a device, and thus be able to make an informed decision. On par with our objectives, everything is meant to be open-source, from the code to the schematics to the instructions; we just have to populate the placeholder at https://ullolabs.github.io/physio-stacks.docs/.
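
As a rough illustration of the “biodegradable biofeedback” idea, the toy simulation below (arbitrary noise figures, not measurements from the actual bricks) shows how a value relayed from brick to brick over an analog link slowly loses fidelity:

```python
# Each analog hop between bricks adds a little noise, so a signal relayed
# through many bricks gradually degrades; numbers are purely illustrative.
import random

def analog_hop(value: float, noise_sd: float = 0.02) -> float:
    """One brick-to-brick analog transfer with additive noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, value + random.gauss(0.0, noise_sd)))

signal = 0.70  # e.g. a normalized heart-rate value leaving the sensor brick
for hop in range(1, 11):
    signal = analog_hop(signal)
    print(f"after {hop:2d} hops: {signal:.3f}")
```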

Over the last two years [as of 2021] we have already performed several tests with Coral, on a small scale, demoing and presenting the bricks to researchers, designers and children (more details in the MobileHCI ’20 paper below). The project is also moving fast in terms of engineering, with the third iteration of the bricks now easier to use and to craft (3D printing, soldering iron, basic electronic components). While the first designs were based on the idea of “stacking” bricks, the latter ones explore the 2D space, more akin to building tangrams.

This tangible, modular approach enables the construction of physiological interfaces that can be used as a prototyping toolkit by designers and researchers, or as didactic tools by educators and pupils. We are actively working with our collaborators toward producing a version of the bricks that could be used in class, combining the teaching of STEM-related disciplines with benevolent applications that could favor interaction and bring well-being.

We are also very keen to interface the bricks with existing devices. Since the communication between bricks is analog, it is directly possible to interact with standard controllers such as the Microsoft Xbox Adaptive Controller (XAC), to play existing games with our physiological activity, or to interact with analog audio synthesis.

Our work was presented at the MobileHCI ’20 conference; see the video below for a quick summary of the research:

Contributors

In the spirit of honest and transparent communication, here is the list of past and current contributors to the project (in a very rough order of appearance):

Joan Sol Roo: Discussion, Concept, Fabrication, Applications, Writing (v1)
Jérémy Frey: Discussion, Concept, Applications, Writing, Funding (v1, v2, v3)
Renaud Gervais: Early concept (v1)
Thibault Lainé: Discussion, Electronics and Fabrication considerations (v1)
Pierre-Antoine Cinquin: Discussion, Human and Social considerations (v1)
Martin Hachet: Project coordination, Funding (v1)
Alexis Gay: Scenarios (v1)
Rémy Ramadour: Electronics, Fabrication, Applications, Funding (v2, v3)
Thibault Roung: Electronics, Fabrication (v2, v3)
Brigitte Laurent: Applications, Scenarios (v2, v3)
Didier Roy: Scenarios (v2)
Emmanuel Page: Scenarios (v2)
Cassandra Dumas: Electronics, Fabrication (v3)
Laura Lalieve: Electronics, Fabrication (v3)
Sacha Benrabia: Electronics, Fabrication (v3)

Associated Publications

Joan Sol Roo, Renaud Gervais, Thibault Lainé, Pierre-Antoine Cinquin, Martin Hachet, Jérémy Frey. Physio-Stacks: Supporting Communication with Ourselves and Others via Tangible, Modular Physiological Devices. MobileHCI ’20: 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, Oct 2020, Oldenburg / Virtual, Germany. pp. 1-12. ⟨10.1145/3379503.3403562⟩ ⟨hal-02958470⟩. PDF

Breeze

Breeze is a research project conducted in collaboration with the Magic Lab laboratory from Ben Gurion University (Israel). Breeze is a necklace pendant that captures breathing from one user while conveying their breathing patterns to a paired pendant worn by another user.

The seed of this project was planted during the development of Echo, when we started to envision scenarios of biofeedback applications involving multiple users. One form factor that we considered as a follow-up of the Echo avatar was a wearable, which we could more easily bring with us and use in everyday life situations. In Breeze’s first prototypes the feedback was only conveyed through lights, but over the course of the project we added two other modalities, vibrations and sounds. The rationale is to let users choose the feedback depending on the context of use (notably the social context). For example, a breathing biofeedback through sounds can be shared with people around, while vibrations can be perceived only by the person wearing the pendant. Early on, we also integrated sensing in the pendant, measuring breathing thanks to an inertial measurement unit. The final pendant is meant to increase connectedness by creating a new communication channel between relatives. It can also serve as a non-intrusive sensor that accounts for new features of breathing, correlated with emotions.
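
For readers curious about the sensing part, the sketch below shows one common way (not necessarily the exact pipeline embedded in Breeze) to estimate a breathing rate from a single accelerometer axis of an IMU worn on the chest, using a band-pass filter followed by peak counting:

```python
# Rough sketch: breathing rate from a raw accelerometer axis sampled at fs Hz.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def breathing_rate_bpm(accel_z: np.ndarray, fs: float = 50.0) -> float:
    # keep 0.1-0.7 Hz, i.e. roughly 6 to 42 breaths per minute
    b, a = butter(2, [0.1, 0.7], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, accel_z)
    # one peak per breath; impose a minimum spacing of ~1.5 s between peaks
    peaks, _ = find_peaks(filtered, distance=int(1.5 * fs))
    duration_min = len(accel_z) / fs / 60.0
    return len(peaks) / duration_min

# synthetic 60 s example: 0.25 Hz breathing (15 breaths/min) plus noise
fs = 50.0
t = np.arange(0, 60, 1 / fs)
accel = 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.005 * np.random.randn(t.size)
print(round(breathing_rate_bpm(accel, fs)))  # ~15
```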

The presentation of the paper at CHI ’18 was recorded in the video below. (If you have the patience to watch until the questions, to answer one that caught me off guard and has haunted me ever since: the fact that in the lexicon an increase in amplitude is not associated with a perceived increase in arousal can be explained by the fact that here we tested each breathing feature separately, whereas in a typical fight or flight response breathing rate and amplitude most often increase at the same time.)

Something we did not describe in the published paper: Breeze contains an additional mode, the “compass”. What happens if someone removes the pendant from their neck? Because of the change in absolute orientation, we can detect such an event and signal to the wearer of the paired pendant that there is a good reason why the breathing biofeedback stopped. This partner can then hold their own pendant horizontally, and it now acts as a compass: the feedback changes not depending on the breathing rate of the partner, but depending on their location. The light changes when pointing Breeze toward the paired pendant, and both pendants vibrate when users are face to face… even thousands of kilometers apart. Your loved one becomes your North, a nice touch for those living apart. More than just a gimmick, this is actually another way to mediate communication through the pendant, another layer of interactivity so that users can choose what information they want to share. A mode that probably should become its own project.
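
For the curious, the compass logic can be sketched as follows; the great-circle bearing formula is standard, while the function names, thresholds and coordinates are invented for the example (the actual firmware may work differently):

```python
# Hypothetical sketch of the "compass" mode: light up more strongly as the
# pendant points toward its paired partner.
import math

def initial_bearing(lat1, lon1, lat2, lon2) -> float:
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def alignment(heading_deg: float, bearing_deg: float) -> float:
    """1.0 when the pendant points straight at its partner, 0.0 when opposite."""
    diff = abs((heading_deg - bearing_deg + 180.0) % 360.0 - 180.0)
    return 1.0 - diff / 180.0

# e.g. a pendant in Bordeaux pointing toward one in Beer-Sheva, heading due east
b = initial_bearing(44.84, -0.58, 31.26, 34.80)
print(round(b), round(alignment(90.0, b), 2))
```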

In a lab study, we investigated to what extent users could understand and interpret various breathing patterns when they were conveyed through the pendant. We showed how people associate valence, arousal and dominance with specific characteristics of breathing. We found, for instance, that shallow breaths are associated with low dominance and slow breathing with low arousal. We showed, for the first time, how features such as inhalations are associated with a high arousal, and unveiled a “lexicon” of breathing features. This latter result is still overlooked in the HCI context in which the publication took place, but we believe that breathing patterns hold a huge potential to account for inner states. While most research in physiological computing only extracts a breathing rate, there is much more in terms of features (amplitude, but also pauses between breaths, the difference between inspiration time and exhalation time, etc.). Breathing is actually hard to measure properly. The features we unveiled could not be inferred from heart rate variability, for example; they require a dedicated sensor. Surprisingly, we also found out during the study that participants intentionally modified their own breathing to match the biofeedback, as a technique for understanding the underlying emotion. This is an encouraging result, as it paves the way for utilizing Breeze for communication.
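
To give an idea of what such a lexicon builds on, here is a small sketch of richer breathing features computed from an already-cleaned breathing waveform; the exact features and parameters used in the Breeze study may differ:

```python
# Sketch: rate, amplitude and inhale/exhale ratio from a chest-expansion signal.
import numpy as np
from scipy.signal import find_peaks

def breathing_features(signal: np.ndarray, fs: float) -> dict:
    peaks, _ = find_peaks(signal, distance=int(1.5 * fs))     # end of inhalation
    troughs, _ = find_peaks(-signal, distance=int(1.5 * fs))  # end of exhalation
    rate_bpm = len(peaks) / (len(signal) / fs / 60.0)
    amplitude = float(np.mean(signal[peaks]) - np.mean(signal[troughs]))
    # inhale time: trough -> next peak; exhale time: peak -> next trough
    inhale = np.mean([(p - troughs[troughs < p][-1]) / fs
                      for p in peaks if np.any(troughs < p)])
    exhale = np.mean([(t - peaks[peaks < t][-1]) / fs
                      for t in troughs if np.any(peaks < t)])
    return {"rate_bpm": rate_bpm, "amplitude": amplitude,
            "inhale_exhale_ratio": float(inhale / exhale)}

# synthetic example: a regular 12 breaths-per-minute waveform
fs = 25.0
t = np.arange(0, 60, 1 / fs)
print(breathing_features(np.sin(2 * np.pi * 0.2 * t), fs))
```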

The next phase of the project will be two-fold. On the one hand, we hypothesize that wearable technology can be used to monitor a person’s emotional state over time in support of the diagnosis of mental health disorders. With the new features that can be extracted from breathing with a non-intrusive wearable, we plan to conduct longitudinal studies to compare physiology with a mental health index. On the other hand, we are also very much interested in studying how Breeze, as an “inter-biofeedback”, could alter the relationship between two persons. We would like to investigate how Breeze could increase empathy, by giving pendants to pairs of users for several months. These studies come with their own technical challenges and their supervision requires an important involvement. We are on the lookout for partners or calls that could help push in these directions.

Associated publications

Jérémy Frey, Jessica Cauchard. Remote Biofeedback Sharing, Opportunities and Challenges. WellComp – UbiComp/ISWC’18 Adjunct, Oct 2018, Singapore, Singapore. ⟨10.1145/3267305.3267701⟩⟨hal-01861830⟩. PDF

Jérémy Frey, May Grabli, Ronit Slyper, Jessica Cauchard. Breeze: Sharing Biofeedback Through Wearable Technologies. CHI ’18 – SIGCHI Conference on Human Factors in Computing System, Apr 2018, Montreal, Canada. ⟨10.1145/3173574.3174219⟩⟨hal-01708620⟩. PDF

Cosmos

Cosmos is a shared experience where multiple users can observe a biofeedback of their heart-rate activity, displayed on a large screen.

Cosmos - new ways of interaction and breath sync

To each user is associated a(n archetypal) heart that wanders in space. The heart rate defines the position of the heart symbol on a diagonal from low to high, while the heart rate variability makes the heart wander up and down. This playful visualization is simple and yet effective to account for the different features associated with heart rate. More importantly, beyond giving information about each individual, Cosmos is constantly computing in the background various metrics related to the synchronization between users. Events are triggered in the environment depending on the similarities that are extracted from all heart rates. For example, a correlation between two heart rates will display rainbows launching from one heart to the other, while a cluster of hearts (similar heart rates and heart rate variability) will provoke an avalanche of shooting stars and the cutest sounds you will ever hear. Cosmos prompts for introspection (take control of one’s heart) as well as for interaction among users (trigger events by joining efforts). It is also a good support to explain the physiological phenomena associated with heart rate activity, the link between physiology and cognition, or the natural synchrony that can occur between people.
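
For the technically inclined, the two mappings and the similarity metric can be sketched in a few lines; the coefficients and thresholds below are illustrative, not the ones used in the actual Cosmos rendering:

```python
# Toy sketch: place each heart from heart rate (diagonal) and HRV (up/down),
# and trigger an event when two heart-rate series correlate strongly.
import numpy as np

def heart_position(hr_bpm: float, rmssd_ms: float) -> tuple:
    x = float(np.clip((hr_bpm - 50) / 70, 0, 1))    # 50-120 bpm mapped to [0, 1]
    wander = float(np.clip(rmssd_ms / 100, 0, 1)) - 0.5
    return x, x + 0.3 * wander                       # diagonal + HRV offset

def hearts_synchrony(hr_a: np.ndarray, hr_b: np.ndarray) -> float:
    """Pearson correlation between two heart-rate time series."""
    return float(np.corrcoef(hr_a, hr_b)[0, 1])

a = np.array([72, 75, 78, 80, 79, 77], dtype=float)
b = np.array([70, 73, 77, 81, 80, 76], dtype=float)
if hearts_synchrony(a, b) > 0.8:
    print("rainbow between the two hearts!")
```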

We had the opportunity to showcase Cosmos during many public exhibitions. More than once, we observed how relationships could shift when a user was facing their heart rate and how it related to others’. People question what is happening and start various activities to try to control their hearty avatar: relaxation, physical exercises or social interactions… in various forms, often without realizing that there is still a world outside the experience. The kawaii factor® does help to lift anxieties linked to “exposing” oneself through biosignals. Playfulness prevails then, which in turn opens the door to unique interactions, even between strangers.

On the technical side, because (for once) Cosmos does not rely on any specific object, it can be quickly set up. We can also interface it with most devices measuring heart rate (Bluetooth Low Energy connectivity is a standard in the industry), hence we can envision scenarios involving large groups of users (we have tested up to twelve at the moment). To study the impact of such biofeedback at the level of the group, Cosmos will have its own research in due time.

Echo

Echo is meant to be your tangible avatar, representing in real time your physiological signals (breathing, heart rate, …) as well as higher-level inner and mental states (cognitive load, attention level, emotions, …). This is somehow a tangible “out-of-body experience”, hence “Tobe”, the first name of the project back when it was a research project in the Potioc research team at Inria Bordeaux. Echo was not the first avatar we built, though. Before it came Teegi, which was specifically aimed at showing brain activity: a more focused project, aimed at learning as much as introspection, that went on on its own.

Through Echo, users can observe and interact with what occurs within. In addition, part of the research consists in finding ways to let users shape their avatar, customizing it to increase identification. With the first version, which relied on spatial augmented reality (SAR: a projector to display any graphics and an external tracking system to project right onto the puppet), users could choose in which way they would represent their inner states. For example, they could pick and draw on a dedicated device the color, size or form of their heart, and even animate their heart rate however they saw fit. Echo is conceived and 3D printed from scratch; a tedious process for mass production, but one with more flexibility when it comes to adjusting shape and size to users’ liking. If it started as a cumbersome SAR setup, Echo ended up as a self-contained object with an embedded display and computational unit, nowadays ready to shine with the flick of a switch.

With Echo we were able to investigate for the first time a shared biofeedback between pairs of users, back in 2014, with shared relaxation exercises. Among the other use cases we imagined for Echo: a display that could re-enact how you felt alongside the picture of a cherished memory; an avatar representing a loved one remotely (a scenario that we have since pushed further with Breeze); or a proxy that could help people with sensory challenges (e.g. autism spectrum disorder) to communicate with those around them. This latter scenario is one of the applications we anticipate the most.

We are currently running a study, a two-user scenario, where we want to formally assess to what extent an avatar such as Echo could alter the perception people have of one another. We hypothesize that communicating through physiological signals with such an interface could create an additional layer of presence when two people meet and share a moment.

Even though Echo is still mostly a research project at the moment, several units already live outside of Ullo, as far as Japan and Creact’s headquarters, where they are meant to be used in an education context.

Additional resources: the repository hosting the first version of Echo, the spatial augmented reality version based on visual programming languages for both the rendering (with vvvv) and the processing (with OpenViBE): https://github.com/introspectibles. See also the personal page of Renaud Gervais, the other father of this first version.

Associated publications

Renaud Gervais, Jérémy Frey, Alexis Gay, Fabien Lotte, Martin Hachet. TOBE: Tangible Out-of-Body Experience. TEI ’16 ACM Conference on Tangible, Embedded and Embodied Interaction, 2016. ⟨10.1145/2839462.2839486⟩ ⟨hal-01215499⟩. PDF

Flower

Flower is a device specifically aimed at providing breathing exercises. Patterns with different paces and colors are shown on the petals, which users can sync with. The main use case is as a way to reduce anxiety, when users choose to take a break. It is also envisioned as an ambient device operating in the peripheral vision, with the slow pulsating light gently guiding users without intruding into their routine. While we envisioned various designs at the beginning of the project, before the name settled, in the end a “flower” as a form factor is reminiscent of a plant next to which one would breathe in, in order to smell it.

When the Flower is connected to a smartwatch, the breathing guide adapts to users, speeding up slightly when heart rate is higher, slowing down when heart rate is lower. This is on par with existing literature around the physiological phenomenon of cardiac coherence (in a nutshell: heart rate variability synced with breathing). There is indeed not one breathing to rule them all, and users benefit from a breathing guide adapted to their taste and physiology, which makes the guidance more effective.
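
A minimal sketch of such an adaptive guide is shown below; the baseline, gain and bounds are illustrative values, not the ones shipped in the Flower:

```python
# Breathing guide pace (breaths/min) that speeds up slightly with heart rate.
def guide_pace_bpm(heart_rate_bpm: float,
                   baseline_hr: float = 65.0,
                   base_pace: float = 6.0,
                   gain: float = 0.05) -> float:
    pace = base_pace + gain * (heart_rate_bpm - baseline_hr)
    return max(4.5, min(8.0, pace))  # keep within a comfortable range

for hr in (55, 65, 80, 95):
    print(hr, "bpm ->", round(guide_pace_bpm(hr), 2), "breaths/min")
```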

To this day, two studies have taken place. One occurred in a replica apartment, in order to assess the device’s usability and how people would appropriate it. The second study assessed the effect of the device when stressors were presented to participants, collecting along the way subjective measures, performance metrics and markers extracted from heart rate variability. In the associated paper we describe how the design of the Flower was praised by users and how it can reduce symptoms of stress when users focus their attention on it, as well as increase performance on a cognitive task (N-back). We did attempt to investigate whether an ambient biofeedback could alleviate stress; however, this other experimental condition did not yield any significant difference compared to a sham feedback, most probably because an ambient feedback takes longer than mere minutes before it can be effective.

At this stage a couple dozen devices are being used by various people, including therapists who have integrated the device into their practice (more information about this version is available on the company’s website). Besides providing breathing exercises, a second usage that emerged from the field consists in using the Flower as a timer, to orchestrate the day for people suffering from disorientation. We are actively working toward a second iteration that would offer more interaction when it is being manipulated and that could be mass-produced. At the same time we are building a platform that could help stimulate interactions between users and that could be used to gather data for field studies. We are also considering use cases where one Flower could serve as a biofeedback for a whole group, the color changing depending on the overall heart rate variability.

Associated publications

Morgane Hamon, Rémy Ramadour, Jérémy Frey. Exploring Biofeedback with a Tangible Interface Designed for Relaxation. PhyCS – International Conference on Physiological Computing Systems, Sep 2018, Seville, Spain. ⟨10.5220/0006961200540063⟩⟨hal-01861829⟩. PDF

Morgane Hamon, Rémy Ramadour, Jérémy Frey. Inner Flower: Design and Evaluation of a Tangible Biofeedback for Relaxation. Physiological Computing Systems, Springer, Cham, 2019. ⟨10.1007/978-3-030-27950-9_8⟩. PDF

Prism

Did you ever get lost while reading a book, living through the characters and the events, being transported over the course of a story into a foreign world? What if such a written universe could evolve depending on you; the text reacting discreetly when your heart is racing during a paragraph of action, or when your breath is taken away by the unexpected revelation of a protagonist?

This is a project about an interactive fiction fueled by physiological signals, which we hereby add to our stash. While there were hints of a first prototype published four years prior, the current version is the result of a collaboration with the Magic Lab laboratory from Ben Gurion University, Israel. We published at CHI ’20 our paper entitled “Physiologically Driven Storytelling: Concept and Software Tool”. There we received a “Best Paper Honorable Mention”, awarded to the top 5% of the submissions; references and a link to the original article are at the bottom.

Beyond the publication and the research, we wish to provide to writers and readers alike a new form of storytelling. Thanks to the “PIF” engine (Physiological Interactive Fiction), it is now possible to write stories whose narrative can branch in real time depending on signals such as breathing, perspiration or pupil dilation, among others. To do so, the system combines a simplified markup language (Ink), a video-game rendering engine (Unity) and robust software to process physiological signals in real time (OpenViBE). Depending on the situation, physiological signals can be acquired with laboratory equipment as well as with off-the-shelf devices like smartwatches or… with Ullo’s own sensors.
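
To illustrate the idea of physiologically driven branching (the real PIF engine drives Ink stories rendered in Unity from signals processed by OpenViBE; this standalone Python sketch, with made-up passage names and thresholds, only conveys the principle):

```python
# Hypothetical sketch: choose the next story passage from physiological markers.
def next_passage(breathing_rate_bpm: float, skin_conductance_z: float) -> str:
    aroused = breathing_rate_bpm > 18 or skin_conductance_z > 1.0
    if aroused:
        return "the_chase_shortens"   # keep the action tight for a racing reader
    return "the_chase_lingers"        # linger on details for a calm reader

print(next_passage(breathing_rate_bpm=21, skin_conductance_z=0.4))
```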

Interactive fiction’s origin story takes place in the late 70s, a time during which “Choose Your Own Adventure” books (and the like) emerged alongside video games that were purely textual. In the former, one has to turn to a specific page depending on the choice made for the character; in the latter, precursors to adventure games, players have to type simple instructions on the keyboard, like OPEN DOOR or GO NORTH, to advance in the story, one of the most famous games being Zork, by Infocom. (Zork, which I [Jeremy] must confess never being able to finish, unlike jewels such as A Mind Forever Voyaging or Planetfall, developed by the same company.) A more detailed history can be found in the paper. Here we substitute implicit interaction, relying on physiology, for the reader’s explicit interaction, with a pinch of machine learning to understand how signals evolve depending on the context. Transparent for the reader, and no need for the writer to wield a complex programming language, just a light syntax that is quick and easy to grasp.

If the vision (not without resemblance to elements found in science-not-so-fiction-anymore works such as Ender’s Game or The Diamond Age) is ambitious, the project is still in its infancy. Still, two studies were on the menu for the first published full paper. In one, we investigate the link between, on the one hand, the proximity of a story to the reader and, on the other hand, empathy toward the character. In the other study, we look at which information physiological signals can bring about the reader, with first classification scores on constructs related to emotions, attention, or the complexity of the story. From there a whole world remains to be explored, with long-term measures and more focused stories. One of the scientific objectives we want to pursue is to understand how this technology could favor empathy: for example, opening up readers’ perspectives by helping them better encompass the point of view of a character that seems definitely too foreign at first. One lead among many, raising along the way awareness about all the different (mis)uses.

Besides this more fundamental research, during the project’s next phase we expect to organize workshops around the tool. If you are an author boiling with curiosity, whether established or hobbyist (and not necessarily keen on new tech), don’t hesitate to reach out to us to try it out. We are also looking to build a community around the open-source software we developed; contributors are welcome!

On the technical side, next we won’t deny ourselves the pleasure of integrating devices such as the Muse 2 for a drop of muscular and brain activity, exploring virtual reality rendering (first visuals with the proof of concept “VIF”: http://phd.jfrey.info/vif/), or creating narrative worlds shared among several readers.

For more information, to keep an eye on the news related to the project, or to get acquainted with the (still rough) source code, you can visit the dedicated website we are putting up: https://pif-engine.github.io/.

Associated publications

Jérémy Frey, Gilad Ostrin, May Grabli, Jessica Cauchard. Physiologically Driven Storytelling: Concept and Software Tool. CHI ’20 – SIGCHI Conference on Human Factors in Computing System, Apr 2020, Honolulu, United States. 🏆 Best Paper Honorable Mention. ⟨10.1145/3313831.3376643⟩⟨hal-02501375⟩. PDF

Gilad Ostrin, Jérémy Frey, Jessica Cauchard. Interactive Narrative in Virtual Reality. MUM 2018 Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, Nov 2018, Cairo, Egypt. pp.463-467, ⟨10.1145/3282894.3289740⟩⟨hal-01971380⟩. PDF

Jérémy Frey. VIF: Virtual Interactive Fiction (with a twist). CHI’16 Workshop Pervasive Play, 2016. ⟨hal-01305799⟩. PDF

Pulse

Pulse is an experience that we showcased first during the 2019 edition of the CES. With Pulse we wanted to provide a moment where users can reflect not only on their heart rate, but also on the heart rates of those around, while retaining complete agency. We indeed brought back more control into the hands of the users, quite literally: thanks to ECG electrodes embedded in the spheres, the physiological measures only occur when users decide to grasp the device. Pulse involves three components. 1. The “planet”, which illuminates when users put their hands on it, starting to, you get it, pulse through light and vibration at the same pace as the heart rate. 2. The central “hub”, which gathers the heart rates of all users (up to four in the current version) and changes color according to the synchronization it detects among them. 3. Finally the cable, also called… actually we don’t yet have a good name for it, but it nonetheless does more than convey electrical signals and information: you can observe pulses of light that accompany each heartbeat from the planets to the hub. Beyond pleasing the eye, the fact that the cable explicitly conveys signals is also a way to remind users of what is going on behind the curtain.

Pulse was conceived in collaboration with the Potioc team at Inria, in particular Thibault Lainé and Joan Sol Roo, who worked on the very first proof of concept (sensors + the… cable). The current design would not be complete without the craft of Alexis Gay from GayA concept, who carefully transformed white blobs into shiny planets and helped to refine the user experience.

Due to its modular nature and “hands-on” approach, Pulse shares similarities with our Coral. More than what meets the eye at first: thanks to the analog output, here as well we can connect the hub to other gear. As such, we built for the CES a device that converts analog signals to the MIDI protocol; a device (yet to have its own page) that in turn we connected to a sequencer and synthesizer, the teenage engineering OP-Z. As a result: a soundtrack that speeds up or slows down depending on the group’s average heart rate, and notes and chimes triggered by each heartbeat. Space does sound at times.
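
As an illustration of the heartbeats-to-MIDI idea (our CES converter was a dedicated hardware device; the sketch below instead uses the mido Python library and assumes a MIDI backend such as python-rtmidi and an available output port):

```python
# Send a short MIDI note at each detected heartbeat timestamp.
import time
import mido

def play_heartbeats(beat_times_s, note=48, velocity=90):
    with mido.open_output() as port:  # default MIDI output port
        start = time.time()
        for t in beat_times_s:
            time.sleep(max(0.0, start + t - time.time()))
            port.send(mido.Message("note_on", note=note, velocity=velocity))
            time.sleep(0.05)
            port.send(mido.Message("note_off", note=note))

# e.g. a steady 75 bpm pulse for a few beats
play_heartbeats([i * 60.0 / 75.0 for i in range(6)])
```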

Because light, vibration and music were not enough, Pulse’s hub can also act as a (Bluetooth) sensor to connect the group to our Cosmos, to create an even more engaging experience. By merging these different modalities and projects, we are building a whole universe that revolves around participants.

Vibes

Imagine holding in your hand an enclosure shaped as a heart, pulsating through lights and vibrations at the same pace as your own heart. Classic biofeedback. Now picture giving it to someone in front of you: we assure you that the words “take this, this is my heart” have quite an effect on visitors during an exhibition. This is Vibes, the tangible biofeedback you hold onto, that you can share, exchange, feel and make feel. Among the use cases we envision with Vibes: send your heart rate (i.e. your vibes) to someone far away, to remind them that you are thinking about them. For the moment we can easily swap the signals between pairs of devices, to have users compare their rhythms with one another.

Two people exchanging their heart rate through Vibes

In the next iteration we will improve the haptic feedback. While vibration motors give satisfactory results to mimic a beating heart, we are in the process of integrating novel actuators, whose versatility enables exploring any dynamic (think tiny boomboxes in the palm of the hand).

Vibes is still at an early stage, and yet we witnessed first hand how giving away a pulsating heart (even a nice-looking one) has an immediate effect on users. There is intimacy involved. Interestingly, people would often compare the pace of Vibes to the one they can measure themselves, placing for example a finger on the jugular. We observed this situation occurring more frequently than with our other biofeedback devices. People tend to underestimate the pace of their heart rate; maybe because of the proximity between the representation and the actual phenomenon, any perceived discrepancy might prompt for investigation (still on the right side of the uncanny valley?). This relationship between representation and effectiveness is still a hypothesis, one that we hope to investigate in the future.

Garden

Connecting inner states to a mixed reality sandbox

Garden started within the Potioc research team at Inria Bordeaux, at the end of 2015. This project was a new iteration of the “introspectibles” that were first investigated with Teegi and Tobe (now Echo). This human-size interactive sandbox lets users shape a landscape whose colors and animations evolve not only depending on the topography (piling sand creates mountains, digging holes creates lakes), but also depending on physiological signals and inner states, to help users stay focused on the body: breathing commands the waves crashing on the sandy shores, being relaxed makes the forest grow. At the time, an additional headset enabled users to immerse themselves in their creation, experiencing it from within, looking up to the trees, breathing now animating the wind.
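
The two kinds of mappings can be sketched as follows; the real Garden pipeline (depth camera, projection mapping, rendering) is of course far more involved, and the thresholds here are arbitrary:

```python
# Toy sketch: classify sand height into terrain, and drive the shore waves
# with the breathing cycle.
import math
import numpy as np

def terrain(height_map: np.ndarray, sea_level: float = 0.2, peak: float = 0.7) -> np.ndarray:
    """0 = lake, 1 = plain/forest, 2 = mountain."""
    cells = np.ones_like(height_map, dtype=int)
    cells[height_map < sea_level] = 0
    cells[height_map > peak] = 2
    return cells

def wave_amplitude(t_s: float, breathing_rate_bpm: float = 12.0) -> float:
    """Waves on the shore follow the breathing cycle."""
    return 0.5 * (1 + math.sin(2 * math.pi * (breathing_rate_bpm / 60.0) * t_s))

sand = np.random.rand(4, 4)
print(terrain(sand))
print(round(wave_amplitude(t_s=2.5), 2))
```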

Our ambition back then was to investigate to what extent such a tool could be used as a support and facilitator for mindfulness, the act of paying a deliberate and non-judgmental attention to the present moment, which has been shown to have a positive impact on a person’s health and subjective well-being. To do so we invited participants versed in meditation (long-time practitioners and even a Buddhist lama) as well as medical caregivers (a psychologist, a psycho-motor therapist).

The feedback and results we gathered indicated that the system was indeed well suited for mindfulness, inducing a calm and mindful state in the user. We also collected much interesting qualitative data, opening up the Garden to new usages. In particular, we realized that such a playful multi-modal device could become a tool to alleviate stress or act as a mediator to facilitate communication between caregivers and their patients. We then started to envision how the Garden could be used in medical settings and benefit even people with cognitive disorders.

This is why, despite the technical challenges hovering above, we started to work, through Ullo, on a portable version of the device that could be smoothly deployed outside of the lab and used in the field by medical practitioners. It took several iterations to reach this goal, prototypes that were each tested in situ over the years in partner institutions (nursing homes, hospitals, medical and education institutes). Now Garden has finally become an actual product, used across France to help people with conditions ranging from Alzheimer’s and dementia to autism or ADHD.

Garden was recognized by the scientific community (Honorable Mention Award, top 5% of submissions, for its publication at ACM CHI, the leading conference in human-computer interaction; Best Demo Award at IHM 2018, its French counterpart), by the tech industry (CES 2019 Innovation Award Honoree, Tech For A Better World), and by official bodies (AFNOR certification Testé et Approuvé par les Seniors).

This is only the beginning of the journey though, as we are constantly adding features and imagining new usages for the device (plus, of course, a packed research agenda). The Garden is currently being investigated as a tool that could be used in education settings, through a pending collaboration with the Nancy-Metz academy and the PErsEUS research team. We are also very interested in pushing further how users could interact with the Garden: connecting new sensors, creating new ways for users to express themselves (from sound synthesis to an “audioscape” matching the land), eventually combining the Garden with XR devices again. Finally, we also started to study how features such as the shape of the landscape could bring new information to the table, accounting for users’ states and helping medical practitioners to better understand their patients (ongoing PhD by Camilla Barbini at the CoBTeK lab).

Among the original creators of Garden, check out Joan Sol Roo’s and Renaud Gervais’ other projects. As for Garden as a product used in the field, more information is available on the Ullo company website.

Associated publications

Joan Sol Roo, Renaud Gervais, Jérémy Frey, Martin Hachet. Inner Garden: Connecting Inner States to a Mixed Reality Sandbox for Mindfulness. CHI ’17 – SIGCHI Conference on Human Factors in Computing System, 2017. 🏆 Best Paper Honorable Mention. ⟨10.1145/3025453.3025743⟩ ⟨hal-01455174⟩. PDF

Joan Sol Roo, Renaud Gervais, Jérémy Frey, Martin Hachet. Introspectibles: Tangible Interaction to Foster Introspection. CHI ’16 Workshop – Computing and Mental Health, 2016. ⟨hal-01455174⟩. PDF

Ambient objects

According to a study by Gartner and Idate, in 2020 the number of connected objects in circulation around the world was between 50 and 80 billion. This dizzying number includes smartphones and connected watches, among others. Among these objects, we can gradually see the appearance of a new generation of connected objects: ambient objects.

What’s an ambient object?

An ambient object is part of the family of ambient information systems (ambient displays). It is an object that disseminates information to the person in an indirect, unsolicited manner. By indirect, we mean that the information stays on the periphery of our attention. We are therefore sensitive to it implicitly, without processing the information directly in an explicit manner. This is the central tenet of calm technology described by Weiser and Brown in 1996 (Designing Calm Technology), which suggests that the display of information should move easily from the periphery of our attention to the center, and vice versa. As an example, we can cite the illuminated “on air” sign that lights up above the door of recording studios when a recording is in progress and goes out when it is finished. Information is transmitted without distracting or interfering with what we were doing.

Additionally, ambient information systems are connected objects that transmit information through subtle changes in a person’s environment (for example, decorative objects, ambient sound or light). These displays aim to blend seamlessly into a physical environment, where various everyday objects are transformed into an interface between people and digital information. For example, in some smart homes, the switching on of the lights and their intensity vary depending on the natural light. The fact that the lights switch on tells us that it is getting late without distracting us.

Why use an ambient object?

This method of disseminating information makes it possible to distract the person less, by not disrupting their attention. It then requires minimal effort on the part of the user while still providing knowledge. So instead of imposing information on the person with disturbing notifications, as a smartphone or a connected watch could do, it is the user who will grab it when they need it.

When the information being disseminated is a breathing guide (a peripheral rhythmic breathing device), it has been shown that people will gradually synchronize with the guide and eventually breathe at the same rate as it. This occurs even when people are focused on another task that requires their full attention (Moraveji et al., 2011).

Where does it come from?

On the one hand, ambient objects come from the concept of ubiquitous computing (UbiComp) imagined by Weiser in 1991. Ubiquitous computing aims to make all kinds of services accessible anywhere, while “hiding” the computing units. In this human-computer interface paradigm, computers run in the background. Thus, the user, no longer having the constraints of using a computer (being seated in front of a keyboard, a screen, etc.), regains their freedom of action and freedom of movement.

On the other hand, this concept of displaying peripheral information, blended into the environment, also relates to the notion of an ecological device. Indeed, ambient objects are less likely to alter the environment in which people operate, thus enabling a smoother integration of the technology into their surroundings.

Bibliography

Moraveji, N., Olson, B. et al. Peripheral paced respiration: influencing user physiology during information work. UIST ’11: Proceedings of the 24th annual ACM symposium on User Interface Software and Technology, October 2011, pages 423–428.
https://doi.org/10.1145/2047196.2047250
https://dl.acm.org/doi/abs/10.1145/2047196.2047250

Weiser, M., Seely Brown, J. Designing Calm Technology. PowerGrid Journal, 1996.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.9788&rep=rep1&type=pdf

Yu, B., Hu, J., Funk, M. et al. DeLight: biofeedback through ambient light for stress intervention and relaxation assistance. Pers Ubiquit Comput 22, 787–805 (2018).
https://doi.org/10.1007/s00779-018-1141-6 
https://link.springer.com/article/10.1007/s00779-018-1141-6

Vogel, D., Balakrishnan, R. Interactive public ambient displays: transitioning from implicit to explicit, public to personal, interaction with multiple users. UIST ’04: Proceedings of the 17th annual ACM symposium on User Interface Software and Technology, October 2004, pages 137–146.
https://doi.org/10.1145/1029632.1029656
http://www.dgp.toronto.edu/~ravin/papers/uist2004_ambient.pdf