A Very Real Looper: Behind the Scenes

How could a game engine and virtual reality technology be used to enable expressive interactions with music that is physically located in the real world?

This personal research question emerged in early 2018 and marked the birth of A Very Real Looper. This writeup begins with a detailed account of that moment, primarily because it transformed my relationship with technology and human-computer interaction. If you would rather just read the technical breakdown of this project, click here. Otherwise, hang on for the ride!


It is March 24th, 2018. I have just exhibited a virtual reality project at a festival at Stanford University. It is late in the evening and I am in an Uber. I’m exhausted yet excited; my destination is a non-profit research lab named Dynamicland located in Oakland, California. I have no idea what to expect from the informal tour that Daniel Windham, a volunteer at the group, has generously arranged for me.

To give you some context, Dynamicland is a group of programmers and designers focused on creating a new computational medium. Their goal is to enable people to work together with real objects in the real world, and not alone with virtual objects on a screen. When describing Dynamicland to others, I often say that ‘it’s a community workspace for people interested in programming, playing with code, and learning through making — but without screens.’

I arrive in Oakland outside of a nondescript building and walk up a flight of stairs. I enter a massive, open room with high ceilings, wooden floors, and tables dotted throughout. The walls are plastered with diagrams and posters. I spot one titled ‘Design Values’ that lists things such as ‘equity of the physical world’, ‘human scale’, and ‘don’t simulate the real world’. I am led to one of the tables and quickly learn that the table is, in fact, part of the computational medium.

Things I touched and that responded to me while inside of the medium.

I sit beside Daniel. We touch things on the table, together, and the table responds to our every move through projection. I move popsicle sticks around to control the y-value of a graph. I spin a lazy susan to change the opacity of a circle. I want to understand what is going on and Daniel immediately points at a projection of the source code, located on the table. I use my finger to point at a part of the code, which Daniel then changes, causing a color to change in real time. For the first time in my life I am actively participating in a computational process while being present in space and time with someone else. I feel able to both intuitively understand and quickly reshape what is unfolding in front of me. A sense of agency is vividly present in my body.

I was only at Dynamicland for about an hour. But what I saw and felt there has stayed with me ever since, slowly reshaping my relationship with technology and human-computer interaction. At that point in my artistic practice I was creating virtual reality (VR) experiences that were beautiful visual simulations, yet time-consuming to make and hard to share. Dynamicland’s radical emphasis on building an interactive medium that both starts and ends in the real world motivated me to rethink how I was using this fledgling medium. I had an open summer ahead of me and decided to concentrate on a single project so as to process and better articulate the new questions that were forming in my head. This marked the birth of A Very Real Looper.


I began with a list of criteria that I wanted the project to meet:

This list was by no means definitive; I did not plan to stick to it unyieldingly. Doing so would have constrained my research and led to a torturous, awkward journey. Rather, I kept this list in mind as a set of ideas that could guide me in the right direction if I felt uncertain about where to go next.


The first step was to choose the type of feedback that A Very Real Looper would provide. I quickly decided that it should be musical, mainly because sound and music are something many of us can relate to in a basic way. I was now building a giant instrument.

The VR system that I had on hand was the HTC Vive. Based on previous experiments in VR I knew that the Vive base stations could be used in conjunction with a game engine to map 3D models into the real world, regardless of whether the headset was active. I knew that these 3D models could detect collisions between themselves and the Vive controllers. I also knew that these collision messages could be programmed to trigger various kinds of events, such as a sound or sequence of sounds, within the game engine. Thus, I had the basic building blocks for an instrument built with VR technology that was interactive, could exist in the real world around a person’s body, and, crucially, did not require the use of a VR headset.
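In Unity terms, that setup boils down to invisible trigger volumes sitting at fixed coordinates inside the tracked play area. The sketch below is a simplified illustration (not my exact project code); it assumes the controller object carries a collider and a kinematic Rigidbody so that Unity reports trigger events between it and the spheres.

```csharp
using UnityEngine;

// A musical sphere that exists in the room as an invisible trigger volume.
// It keeps its collider so collision events still fire, but never renders,
// which means it is "there" even when no headset is worn.
[RequireComponent(typeof(SphereCollider))]
public class InvisibleMusicalSphere : MonoBehaviour
{
    void Awake()
    {
        // Let controllers pass through the sphere while still
        // generating OnTrigger* messages.
        GetComponent<SphereCollider>().isTrigger = true;

        // Hide any visual representation; the sphere is located purely
        // through tracking, sound, and (later) haptics.
        MeshRenderer meshRenderer = GetComponent<MeshRenderer>();
        if (meshRenderer != null)
        {
            meshRenderer.enabled = false;
        }
    }
}
```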


The next step was to figure out what kinds of sounds or musical sequences my instrument would trigger. I knew that digital music often utilizes the MIDI standard. The game engine I had chosen for this project was Unity, and after some research I discovered a Unity plugin called Audio Helm that contains a full synthesizer and MIDI sequencer. Audio Helm became my tool of choice because it enabled me to trigger individual MIDI notes and loop MIDI sequences using C# scripts.
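To give a sense of what this looks like, here is a rough sketch of driving Audio Helm from a script. It is a simplified illustration rather than my actual project code, and the exact Audio Helm method signatures may differ slightly from what is shown, so consult the plugin’s documentation.

```csharp
using UnityEngine;
using AudioHelm;

// A simplified illustration of the two things I needed from Audio Helm:
// triggering individual MIDI notes and queueing a looping MIDI sequence.
public class HelmNoteExample : MonoBehaviour
{
    public HelmController synth;      // plays individual MIDI notes
    public HelmSequencer sequencer;   // holds and loops a MIDI pattern

    void Start()
    {
        // Queue a short phrase: middle C on the first sixteenth,
        // an E on the fifth. The sequencer loops the pattern while playing.
        sequencer.AddNote(60, 0, 1);
        sequencer.AddNote(64, 4, 5);
    }

    public void PlayNote(int midiNote)
    {
        synth.NoteOn(midiNote, 1.0f);   // note on at full velocity
    }

    public void StopNote(int midiNote)
    {
        synth.NoteOff(midiNote);
    }
}
```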

The Audio Helm Sequencer.

Having decided that collisions between 3D models and Vive controllers would trigger musical sounds, I wrote some scripts that would fire different MIDI events after detecting a collision. I attached these scripts to spheres, which would now respond musically when a Vive controller collided with them. These scripts used either the OnTriggerEnter or the OnTriggerStay callback. Spheres that used OnTriggerEnter would only play a sound at the moment of collision, while spheres that used OnTriggerStay would continue to play a sound as long as the controller remained inside of them.
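The two behaviours looked roughly like the sketches below. These are simplified illustrations with hypothetical names (in Unity, each class would live in its own .cs file), not my exact project scripts.

```csharp
using System.Collections;
using UnityEngine;
using AudioHelm;

// One-shot sphere: plays a note only at the moment of collision.
public class OneShotSphere : MonoBehaviour
{
    public HelmController synth;
    public int midiNote = 60;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Controller"))
        {
            StartCoroutine(PlayShortNote());
        }
    }

    IEnumerator PlayShortNote()
    {
        synth.NoteOn(midiNote, 1.0f);           // note on at full velocity
        yield return new WaitForSeconds(0.5f);  // let the note ring briefly
        synth.NoteOff(midiNote);
    }
}

// Sustain sphere: keeps sounding for as long as the controller stays inside.
public class SustainSphere : MonoBehaviour
{
    public HelmController synth;
    public int midiNote = 64;
    private bool playing = false;

    void OnTriggerStay(Collider other)
    {
        // OnTriggerStay fires every physics step, so guard against
        // retriggering the note on every frame.
        if (!playing && other.CompareTag("Controller"))
        {
            synth.NoteOn(midiNote, 1.0f);
            playing = true;
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (playing && other.CompareTag("Controller"))
        {
            synth.NoteOff(midiNote);
            playing = false;
        }
    }
}
```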

I also made each sphere movable with the controllers, so that one could relocate them in space and thereby enable different kinds of bodily interactions.
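Grabbing worked by temporarily parenting a sphere to the controller while a button was held. A sketch of that mechanic, with hypothetical names and assuming the 2018-era SteamVR Unity plugin (1.x) input API, might look like this:

```csharp
using UnityEngine;

// Attached to a Vive controller object that has a trigger collider and a
// kinematic Rigidbody. While the grip button is held, the sphere currently
// being touched follows the controller; on release it stays where dropped.
public class SphereGrabber : MonoBehaviour
{
    private SteamVR_TrackedObject trackedObj;
    private GameObject touchedSphere;

    private SteamVR_Controller.Device Controller
    {
        get { return SteamVR_Controller.Input((int)trackedObj.index); }
    }

    void Awake()
    {
        trackedObj = GetComponent<SteamVR_TrackedObject>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("MusicalSphere")) touchedSphere = other.gameObject;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject == touchedSphere) touchedSphere = null;
    }

    void Update()
    {
        if (touchedSphere == null) return;

        if (Controller.GetPressDown(SteamVR_Controller.ButtonMask.Grip))
            touchedSphere.transform.SetParent(transform);   // pick up

        if (Controller.GetPressUp(SteamVR_Controller.ButtonMask.Grip))
            touchedSphere.transform.SetParent(null);         // drop in place
    }
}
```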

The musical spheres.

With these building blocks in hand, I continued with the project but quickly ran into an obvious problem: if the VR headset isn’t being worn, how can anyone locate these musical spheres, which are invisible yet present in the room? I realized that I needed each sphere to provide some kind of non-visual feedback that would indicate its location to the user. I wondered about this for a few days and then remembered that the Vive controllers could provide haptic feedback to someone holding them. I also knew that this feedback could be fine-tuned through code. I now had a way to non-visually indicate the location of a musical sphere!

I proceeded to write a script that caused the Vive controller to trigger haptic feedback whenever it was held in the vicinity of a musical sphere. I figured that someone could now walk around the room, easily triangulate the locations of the spheres using both controllers, and start hitting them to make music.
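Under the hood, this amounted to measuring the distance from the controller to the nearest sphere and pulsing the haptics more strongly as that distance shrank. Here is a simplified sketch, again with hypothetical names and assuming the SteamVR 1.x plugin API rather than my exact project code:

```csharp
using UnityEngine;

// Attached to a Vive controller: vibrates more strongly the closer the
// controller gets to the nearest musical sphere.
public class HapticProximity : MonoBehaviour
{
    public Transform[] sphereTransforms;   // the invisible musical spheres
    public float maxDistance = 1.0f;       // distance (metres) at which haptics begin

    private SteamVR_TrackedObject trackedObj;

    void Awake()
    {
        trackedObj = GetComponent<SteamVR_TrackedObject>();
    }

    void Update()
    {
        // Find the distance to the nearest sphere.
        float nearest = float.MaxValue;
        foreach (Transform sphere in sphereTransforms)
        {
            nearest = Mathf.Min(nearest, Vector3.Distance(transform.position, sphere.position));
        }

        if (nearest < maxDistance)
        {
            // Scale the pulse length (in microseconds) with proximity:
            // near zero at maxDistance, up to ~3000 when touching a sphere.
            ushort strength = (ushort)(3000f * (1f - nearest / maxDistance));
            SteamVR_Controller.Input((int)trackedObj.index).TriggerHapticPulse(strength);
        }
    }
}
```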

Triangulating the location of a musical sphere using haptic feedback.

Unfortunately things did not turn out so smoothly. I conducted some user tests in which eight volunteers were asked to first locate the spheres and then use them to play a simple melody. At first, a number of volunteers remarked that the process of locating a sphere through sound and touch was enjoyable because it required them to concentrate on the sonic and haptic feedback, and, by doing so, become more aware of the surrounding space and their bodies.

However, after participants familiarized themselves with the instrument and began moving the spheres around the room, they quickly lost track of them. This meant that rather than being able to easily locate the spheres and use them to create a simple musical composition, users instead spent large chunks of time relocating each sphere.

One participant described how overwhelming this felt and stated that his “spatial memory felt extremely overloaded” because he “needed to remember where each sphere was without any points of reference.” Remember that quote, “The Best Interface is No Interface”? Well, at this point in the project, I had pretty much arrived at the point of “No Interface” and it did not feel right. Instead of creating a simple interactive experience I had created a difficult and cognitively demanding one.


I had assumed that a combination of sonic and haptic cues would be enough for users to first locate and then interact with the musical spheres. But, as it turned out, without any visual representation of the spheres, users experienced cognitive overload as they attempted to keep track of the spheres’ locations while hitting or moving them with the Vive controllers. This felt like an important moment during the project — a kind of ground zero of interface design, if you will, that raised a number of questions revolving around the limits of working memory and the effect of visual representations (or the lack thereof) on spatial reasoning.

Solving this problem turned out to be easier than expected. Thinking back to my visit to Dynamicland, and how the computational processes there felt palpably grounded in physical objects, I decided to reintroduce the real world into my project. I began by asking my landlady if I could borrow an odd selection of pool noodles, glass bowls, and rocks from her house. I arranged these in space and experimented with moving the musical spheres on top of these objects. I played around for a bit and realized that A Very Real Looper was working as intended! Now, whenever I wanted to know where a specific musical sphere was, I simply needed to look over at the physical object that marked that sphere’s location.

An arrangement of physical objects marking the locations of the virtual musical spheres.

Thus, with the real world reintroduced, playing A Very Real Looper quickly transformed from an unstable and demanding task into an enjoyable and intuitive musical experience. Around this time I became interested in performing inside of the instrument and was booked to play a show in Toronto. You can watch documentation from that performance below. This, however, marks the end of the first chapter of this project.