A Very Real Looper

A large, non-visual virtual reality instrument that is controlled using gesture and bodily movement.

A blog post describing the process behind the creation of this instrument can be read here. In March I will be presenting this work at the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (preprint available here).


Contrary to how virtual reality (VR) is normally utilized, a performer playing A Very Real Looper (AVRL) is not disconnected from their surrounding environment through visual immersion, nor is their body restrained by a head-mounted display.

Rather, AVRL uses VR sensors in conjunction with a game engine to map musical sounds and sequences onto physical objects and spaces. These sounds are then triggered by a performer simply wielding two controllers.

AVRL thus combines the affordances of the physical world with the modularity of a game engine, consequently activating the expressive potential of the body inside of a large, highly reconfigurable, and musically augmented environment.

A virtual view of AVRL captured in the Unity game engine, depicting the musical sounds (in red) and the Vive controllers (in black).

The current iteration of AVRL has been built inside the Unity game engine for the HTC Vive system. The size of the instrument is determined by the distance between the Vive base stations. The instrument is extra-large in the sense that it usually occupies a medium-sized room, but it can be set up with a maximum tracking volume of roughly 3,250 cubic feet (19 ft x 19 ft x 9 ft).

First, Unity and the Vive base stations are used together to overlay virtual, three-dimensional (3D) models onto physical objects in the real world. These physical objects visually mark the locations of the 3D models for the performer. The 3D models are programmed to detect collisions between themselves and the Vive controllers, which are animated in real time using motion-tracking data.

A diagram depicting a performer colliding with musical sounds and sequences that have been overlaid as 3D models onto physical objects and locations inside the performance space. The objects act as visual markers that indicate the location of the musical sounds and sequences to the performer.
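In the current build this collision detection is handled by Unity's own physics components, but the underlying logic is straightforward: each sound occupies a region of the room anchored to a physical object, and a controller entering that region counts as a hit. The Python sketch below only illustrates that logic; the zone names, positions, and radii are made up, and it is not the Unity implementation.

```python
import math

# Hypothetical sound zones: each is a sphere whose centre coincides with a
# physical object in the room (coordinates in metres, values made up).
SOUND_ZONES = {
    "kick_drum":  {"centre": (1.0, 0.9, 2.0),  "radius": 0.30},
    "bass_chord": {"centre": (-1.5, 1.2, 0.5), "radius": 0.25},
}

def zones_hit(controller_pos, zones=SOUND_ZONES):
    """Return the names of every zone the tracked controller is inside."""
    return [name for name, zone in zones.items()
            if math.dist(controller_pos, zone["centre"]) <= zone["radius"]]

# A controller swung through the kick drum's location registers a hit.
print(zones_hit((1.1, 0.95, 2.05)))   # -> ['kick_drum']
```

Framed this way, repositioning a sound (described further below) amounts to nothing more than giving a zone a new centre before the performance.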

After detecting a collision, the 3D models trigger a musical sample, a MIDI sequence, or a specific MIDI note or chord at very low latency. Inside AVRL, then, the performer is physically colliding with music. These sounds and sequences are triggered only once by default, but can be looped by pressing a button on the controllers. To further help the performer pinpoint the location of the 3D models without visual cues, the controllers provide strong haptic feedback whenever a collision is detected.
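As a rough, non-authoritative illustration of that triggering logic, the sketch below sends a MIDI note when a collision is reported, toggles looping while the controller's button is pressed, and fires a placeholder haptic pulse. It uses the mido library for MIDI output; the output port, note number, and haptic hook are assumptions rather than details of the actual Unity project.

```python
import mido

out_port = mido.open_output()      # assumes a default MIDI output port exists
looping = set()                    # names of zones currently set to loop

def trigger_haptic_pulse():
    """Placeholder for the Vive controller vibration fired on every hit."""
    pass

def on_collision(zone_name, midi_note, loop_button_pressed):
    """Play the zone's note once; toggle looping if the button is held."""
    out_port.send(mido.Message('note_on', note=midi_note, velocity=100))
    trigger_haptic_pulse()
    if loop_button_pressed:
        if zone_name in looping:
            looping.discard(zone_name)   # a second press stops the loop
        else:
            looping.add(zone_name)

# Example: hitting the kick drum zone while holding the loop button.
on_collision("kick_drum", midi_note=36, loop_button_pressed=True)
```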

Each 3D model in AVRL can be repositioned at any time to a new location within the performance space. This allows the performer to create novel sets of bodily interactions by mapping the 3D models onto different locations before performing.

The Vive controllers also provide a wide range of easily accessible motion-tracking data, including movement speed, rotation, and position. Inside AVRL, these data are used to control audio parameters in real time. For example, once a MIDI sequence is looped, the intensity of an audio effect applied to that sequence can be altered by moving the controller higher or lower in space.
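A minimal sketch of this kind of mapping, again outside of Unity, is shown below: the controller's height is clamped to an assumed range and scaled onto a MIDI control-change value that a host synth could route to an effect parameter. The height range and CC number are illustrative choices, not values taken from AVRL.

```python
import mido

out_port = mido.open_output()      # assumes a default MIDI output port exists

# Assumed mapping range: controller heights between 0.5 m and 2.0 m above the
# floor are scaled onto CC 74, which many synths map to filter cutoff.
FLOOR_Y, CEIL_Y, EFFECT_CC = 0.5, 2.0, 74

def height_to_cc(controller_y):
    """Clamp the controller height to the assumed range and scale to 0-127."""
    t = (controller_y - FLOOR_Y) / (CEIL_Y - FLOOR_Y)
    return round(127 * min(1.0, max(0.0, t)))

def update_effect(controller_y):
    """Send the scaled height as a control-change message."""
    out_port.send(mido.Message('control_change',
                               control=EFFECT_CC,
                               value=height_to_cc(controller_y)))

update_effect(1.6)   # raising the controller increases the effect intensity
```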


In November 2018 I was invited to present and perform with this project at the Music In New Technologies conference in Halifax, Nova Scotia. This work was also featured in a group exhibition at the Society for Literature, Science, and the Arts conference in Toronto, Ontario.