Performance as an Iterative Process
One of our goals with MINIMUM MASS is to create an emotional connection between the participant and our two protagonists. We began by crafting the relationship between Sky and Rabia. The strength of their relationship invites the participant into their world and the multiple realities they inhabit. From a writing and directing perspective, the iterative process of virtual production let us add layers of meaning, emotional intelligence, and depth to Sky and Rabia.
In the summer of 2018, we began by working with Kristina Huang and Sean Pickersgill, two Research Assistants at VUW. Kristina and Sean recorded a radio play of the script and roughly timed out each scene. This allowed us to bring the audio tracks into UE4 and begin constructing levels with rough timing: essentially a realtime animatic.
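For a concrete picture of how a realtime animatic like this might be wired up, the sketch below shows one possible UE4 approach: an actor that plays each scene's radio-play track in order, advancing on the rough scene timings. The class, properties, and timings are illustrative assumptions, not code from the project.

```cpp
// Minimal sketch of a "realtime animatic" actor (hypothetical, not project code):
// plays the radio-play audio for each scene in order, using rough scene timings.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
#include "SceneAnimatic.generated.h"

UCLASS()
class ASceneAnimatic : public AActor
{
    GENERATED_BODY()

public:
    // Radio-play audio for each scene, assigned in the editor.
    UPROPERTY(EditAnywhere, Category = "Animatic")
    TArray<USoundBase*> SceneTracks;

    // Rough duration of each scene in seconds, timed from the recording.
    // Durations must be positive for the timer to fire.
    UPROPERTY(EditAnywhere, Category = "Animatic")
    TArray<float> SceneDurations;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        PlayScene(0);
    }

private:
    void PlayScene(int32 SceneIndex)
    {
        if (!SceneTracks.IsValidIndex(SceneIndex))
        {
            return; // Past the last scene; the animatic is done.
        }
        UGameplayStatics::PlaySound2D(this, SceneTracks[SceneIndex]);

        // Advance to the next scene after this scene's rough duration.
        const float Duration = SceneDurations.IsValidIndex(SceneIndex)
            ? SceneDurations[SceneIndex] : 1.f;
        FTimerHandle Handle;
        GetWorldTimerManager().SetTimer(
            Handle,
            FTimerDelegate::CreateUObject(this, &ASceneAnimatic::PlayScene, SceneIndex + 1),
            Duration,
            false);
    }
};
```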
Next, Sean and Kristina built rough props and acted out the scenes in motion capture. We mapped this uncleaned data onto our digital puppets (which were still in development) and brought them into the empty levels we had set up. With basic black-box-theater-style lighting, we again constructed a full version of the experience. Seeing characters, motion, and dialogue in sequence allowed us to get a sense of timing and the coherence of the plot.
The next stage was to build sets and environments. In directing Sean and Kristina, we worked out basic blocking and layout. To progress quickly, we collaged marketplace assets and basic geometry. We dropped the previously captured performances into these rough environments and created another pass of the experience. This version provided much more context and helped us better understand the relationship between the size of our motion capture volume (5 meters by 4 meters) and our digital sets. For several sets we had to reconfigure the main area of action to correspond with the size of our capture volume.
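As a rough illustration of that check (UE4 works in centimeters, so a 5 meter by 4 meter stage is 500 by 400 units), a helper like the hypothetical one below could flag sets whose main area of action exceeds the capture volume.

```cpp
// Hedged sketch: does a set's main action area fit the physical capture stage?
// The function and constants are illustrative, not from the MINIMUM MASS project.
#include "CoreMinimal.h"

static constexpr float CaptureVolumeX = 500.f; // 5 meters, in UE4 centimeters
static constexpr float CaptureVolumeY = 400.f; // 4 meters

// Returns true if the set's main action area (an axis-aligned box) fits the stage.
bool FitsCaptureVolume(const FBox& ActionArea)
{
    const FVector Size = ActionArea.GetSize();
    return Size.X <= CaptureVolumeX && Size.Y <= CaptureVolumeY;
}
```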
For our next iteration, we, the directors, got into the suits ourselves and, with Carrie Thiel and Sean as occasional stand-ins, performed all the scenes. This allowed us to better understand, from the position of directors, how our intention on the page translated into the embodiment of performance. We further revised each scene to address the player start position in virtual reality (where the participant is located at the beginning of each scene), so that the performer interacts through gaze and proximity with the participant at the edge of the volume. John Aberdein, our Motion Capture Manager, and Mac Pipson, our Motion Capture Assistant, built physical versions of several key props. Finally, from a technical standpoint, this iteration of performance capture allowed us to test our facial capture system.
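That gaze-and-proximity relationship can be expressed as a simple distance and view-cone test. The sketch below shows one way such a check might look; the names and thresholds are illustrative assumptions rather than the project's actual tooling.

```cpp
// Hedged sketch of a gaze-and-proximity test: is the performer close enough
// to the participant's start position, and facing it? Thresholds are assumed.
#include "CoreMinimal.h"

bool IsEngagingParticipant(
    const FVector& PerformerLocation,
    const FVector& PerformerForward,   // normalized facing direction
    const FVector& ParticipantStart,   // player start at the edge of the volume
    float MaxDistanceCm = 250.f,       // ~2.5 m proximity threshold (assumed)
    float MaxGazeAngleDeg = 30.f)      // half-angle of the gaze cone (assumed)
{
    const FVector ToParticipant = ParticipantStart - PerformerLocation;
    if (ToParticipant.Size() > MaxDistanceCm)
    {
        return false; // Too far away to read as direct engagement.
    }
    // Inside the gaze cone when the angle between the performer's facing
    // direction and the direction to the participant is small enough.
    const float CosAngle = FVector::DotProduct(
        PerformerForward, ToParticipant.GetSafeNormal());
    return CosAngle >= FMath::Cos(FMath::DegreesToRadians(MaxGazeAngleDeg));
}
```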
Based on what we learned from the previous shoot, we revised the scale of several digital props to match the physical on-set props. We also revised the layout of sets to match the new performance blocking. With three complete iterations of performance behind us, we put together a complexity pass for the entire script and scheduled our final shoot. We knew that once we got Frankie Adams and Allan Henry on the stage, we wanted to move as quickly and efficiently as possible. The final shoot was scheduled over five consecutive days. We stayed on schedule and completed ADR sessions for each scene.
Below is uncleaned motion capture of the final version of the Departure scene. While the overall tone and mood of the scene remain, the environment, layout, performance blocking, and cinematography changed significantly from the first pass. The final performance most accurately represents the vision we as directors aspired to in the earliest concepting of the story.