The Contemporary Performance Think Tank is housed in the John Wells Directing Program MFA at Carnegie Mellon University’s School of Drama under the direction of Caden Manson. Each year the Think Tank focuses on a set of topics concerning the fields of Theater and Contemporary Performance and conducts research and interviews to produce a paper as a resource for practitioners. This year’s topic is contemporary performing artists and companies redefining relationships with audiences and pushing the formal relationships of architecture, artist, and audience. For this paper the Think Tank chose five areas at the forefront of this research to explore: Contemporary Choreography, Mixed Reality Performance, Performance Cabaret, Immersive Theatre, and Socially Engaged Art. Each section of this paper includes an introduction to the specific practice, a conversation with an artist, and a list of artists working in and around the specific practice. “What is Mixed Reality Performance?” is part of a series of posts. Check back daily to see the next posts.
Mixed Reality Performance1
by Rachel Karp
“mixed reality performances deliberately adopt hybrid forms that combine the real and virtual in multiple ways and through this, encourage multiple and shifting viewpoints”
– Performing Mixed Reality, Steve Benford and Gabriella Giannachi2
As technology continues to invade and evolve our lives, so too does it continue to invade and evolve our theater. While seeing technology on stage is nothing new, more and more theater-makers are investigating entirely new forms of performance that revolve completely around high-tech modes and devices. The traditional Western separation of audience and actor is being upended by performance pieces that put art literally into audiences’ hands through the likes of apps, iPhones and iPads, webpages, and VR headsets.
Blast Theory has been using interactive media to create groundbreaking theatrical experiences that explore “the social and political aspects of technology” for over 20 years.3 One of its current creations, Karen (2015), is entirely confined to an app—part of an interactive performative field known as app-drama.4 The app consists of a life coach, Karen, who asks a series of questions taken from psychological profiling but embedded into a less sinister-seeming context.5 At least at first. The experience is episodic, consisting of short interactions (usually 3-5 minutes in length)6 in which Karen asks a few questions, the “player” (as Blast Theory calls the solo audience member) provides answers, and the app then directs the player to return a number of hours later.7 The experience is immediately intimate, from limitations of the system—the sound only functions when headphones are plugged in—to the content of the questions—“Do you ever lie awake thinking about someone you shouldn’t?”8—to the way, if notifications are turned on, Karen frequently interrupts a player’s life, as often to discuss her own well-being as the player’s. After a few of these interactions, it becomes clear that Karen doesn’t have many boundaries, and things go far beyond what would be expected between a life coach and her client.9
The project was developed with National Theatre Wales starting in 2013, with Blast Theory wanting to make an intimate smartphone experience in which each audience member interacted directly with the work’s protagonist. This led to an investigation into big data and how governments and large companies collect data on users, usually without consent, and how they rely on various psychologist-developed techniques to measure personalities through the data they collect. Blast Theory makes these ideas explicit at the end of the experience, when each player is offered a personalized data report that details how the player behaved and how decisions affected Karen. Players can also compare their reports to other players’.10
As Blast Theory explains, “We feel it’s our job as artists to pose questions about this new world where technology is ever more personalised and intrusive. We love having our services tailored to us and we’re scared of the price we’re paying for that personalisation.”11 A New York Times profile of the app echoed this sentiment, describing it as “deliberately unsettling […] an intriguing tool for exploring the knotty relationship between digital personalization and human solipsism [… that asks,] Where do we draw the line between our devices and ourselves?”12
Another company that has experimented with the line between device and self is Rimini Protokoll, led by Helgard Kim Haug, Stefan Kaegi, and Daniel Wetzel. The trio has been making theater together since the year 2000. They describe the focus of their work as “the continuous development of the tools of the theater to allow for unusual perspectives on our reality.”13
For Rimini Protokoll, the tools of the theater often involve use of the latest technologies. To this end, they, too, have experimented with app-drama.14 They have also relied on other smartphone-style technology within larger theatrical works. A prime example is Situation Rooms (2013), which has toured extensively over the past four years. A “multiplayer video piece,” Situation Rooms gives audience members an intimate view of twenty lives that have been shaped by the arms trade. These lives include a hacker, child soldier, arms manufacturer, lawyer, sniper, peace activist, gunship pilot, refugee, member of parliament, surgeon, and war photographer in places including Syria, Switzerland, Sierra Leone, South Sudan, Dubai, Mexico, Gaza, Pakistan, and Abu Dhabi. Throughout the experience, audience members carry iPads and wear headphones that guide them, via commands, on individual paths through a film set that recreates the world of weapons.15 Audience members assume ten different identities, for about 7 minutes each.16 In these identities, they often interact with other audience members who assume other arms-related identities, as the iPads show video of the real people whose stories are being told and who were filmed telling their stories on the same set.17
As Rimini Protokoll describes on its website, “The audience does not sit opposite the piece to watch and judge it from the outside; instead, the spectators ensnare themselves in a network of incidents, slipping into the perspectives of the protagonists, whose traces are followed by other spectators.”18 When the show toured to Australia’s Perth Festival in 2014, a review in The Guardian called it a “remarkable experience” and summarized it as follows: “it’s theatre with the audience as actors; journalism with the consumer interacting directly with the story; a video game where the screen bleeds into real and constructed worlds.”19
Mixing video game and performance is a major trend within the mixed reality world. Hwa Young Jung, a multidisciplinary artist who often combines artwork with science through what she calls “interactive non-fiction,” also works at the intersection of game and performance.20 Her recent work Beba me – Drink Me (2016) is a web-based game created with Sabrina Lopez (writer), Saulo Jacques (biologist), and Aline Furtado (architect). They made Beba me in Brazil during the 2016 Interactivos?, “a collaborative laboratory for developing projects.”21 The focus of Interactivos?’16 was “water and autonomy,” with an explicit goal of encouraging “cross-disciplinary action between popular, scientific, technical and artistic knowledge to create solutions to water issues through a citizen perspective.”22
Beba me, playable in Portuguese and English, explores how humans can affect the water system in Serrinha, a city in Eastern Brazil.23 It has both a journalistic and fantastical feel, beginning with a small fish telling the player not to drink the water from the Rio Alambari. If they decide to heed the fish’s warning and talk to it rather than risk a drink (which is the better move, because drinking leads—in good dark humor—to immediate death), players learn the water is polluted and are given two options: go west or go east to try to find clean water. While navigating geographically, meeting other talking animals and some humans, too, players learn about different ways people and animals thrive and struggle in Serrinha and specific technologies they rely on to try to get clean water. In addition to choosing where to go and whom to talk to, players can click certain highlighted words to learn even more about practices specific to the region. Death does creep throughout the game, but a back button allows players to correct their wrong moves in the essential search to quench thirst.24
Victor Morales experiments with gaming, technology, and intimate performative experiences in a very different mode. Victor is a director, performer, and designer who works in video animation, game design, text, sound, puppetry, and movement. For just under fifteen years, he has been “obsessed with the art of video game modifications and has implemented different game engines into most of the works he has participated in or created.”25
Morales’ latest work, Skiff/Faustroll (2017), recently played at 3LD, a preeminent venue for experimental, “technology-driven” theater, where Morales is an Associate Artist and the Digital Technical Director.26 Skiff/Faustroll, inspired by the Alfred Jarry text “Exploits and Opinions of Dr. Faustroll,” relies on a number of technologies to create an augmented reality journey that explores fears of global warming, inequality, and technology. Built in a video game engine, the piece also makes use of projection mapping, real-time mocap, and physical computing, and the set relies on Arduinos, servos, and stepper motors to create an environment that acts like a pop-up book.27
AR is increasingly being incorporated into performance, as exemplified by Skiff/Faustroll and the previously described Situation Rooms, which Rimini Protokoll calls “augmented reality as three-dimensional as only theatre can be.”28 Another company experimenting with AR is The Builders Association.
Since 1994, The Builders Association has been making work that mixes performance and media, taking inspiration from contemporary life.29 Their Elements of Oz (2015), which also played recently at 3LD, riffs on The Wizard of Oz and its creation. Builders describes the work as follows: “Through the use of YouTube tributes, a re-contextualization of the film, and the incorporation of new technologies, ELEMENTS OF OZ celebrates and deconstructs this incredibly rich cultural artifact.”30 What are these new technologies? Before arriving at the venue, audience members receive an email instructing them to download an app, also called “Elements of Oz.” Upon arrival, various members of the staff and production team make sure that audience members have followed instructions, and they also offer device charging for those who might need some extra power to use the app throughout the performance. The app is to remain open for the performance’s 90 minutes, and at times it prompts viewers to hold the device up to the stage. When they do, digital imagery overlays what the naked eye sees. Tornadoes, poppy fields, and flying monkeys all jump up on the screen, and if viewers move the screen, the augmentation tracks along.31 While viewers remain seated à la the more traditional separation of audience and actor, the AR app puts the performance experience much more directly into the audience’s hands. As the Time Out New York review summarized, Elements of Oz “taps into a modern feeling of technological wonder while demystifying an earlier one.”32
In addition to AR, VR is also invading the theater world. CREW is an experimental company that describes its work as “Scientific Fiction,” aiming to give audiences “a glimpse of the future by questioning new digital possibilities and putting them to use in an alternative way.”33 To this end, they have been mixing realities for years. Their 2008 piece W (Double U) is an immersive work for two participants that lets one see through the other’s eyes. Each participant wears a helmet that uses a “head-swap” technology developed by CREW with EDM, the Expertise Centre for Digital Media at Hasselt University.34 With this technology, goggles fully cover a participant’s own vision, a camera above the goggles feeds video of what is seen to the inside of the other participant’s goggles, and sensors track head movements to enable the image to adapt. The resulting effect creates “an illusion that one is not looking at projected images but actually is present in that [other] world.”35 Over thirty minutes,36 the pair works together so that each of them may make their way through a public space.37 The experience opens up the possibility for empathy,38 which has become something of a buzzword for VR.39
CREW’s C.A.P.E. Drop_Dog (2016) is a more recent project in “virtuality performance.”40 C.A.P.E. stands for Cave Automatic Personal Environment, CREW’s technology through which, they say, “we can shift your presence from one place to another in no time.”41 First released in 2010, but clearly building off the technology used in W (Double U), C.A.P.E. works as follows:
A visitor is equipped with video-goggles, headphones and a portable computer. This ‘immersive device’ allows him to enter a different reality. Pre-recorded or real-time 360° film images in the video-goggles and omni-directional sound puts him literally in the middle of the image. He can look around into this filmed environment at free will. Moreover, he can move around and walk inside the virtual space. He is, so to speak, teleported from one place to another or from one time to another.
The combination of looking, hearing, moving (and sometimes even touching) creates this bewildering illusion. An important mechanism here is the so-called sensorial deprivation: the lack of observation of one’s own body, in favor of a newly represented body, intensifies the experience.42
In C.A.P.E. Drop_Dog, currently on tour, the virtual space is inspired by two short stories by acclaimed Dutch writer Tonnus Oosterhoff.43 Upending the typical experience of sitting down to read a story and seeing it only in one’s imagination, C.A.P.E. Drop_Dog enables participants to experience the stories on their feet.44 Participants are guided through the stories with aural and tactile input. Each story is eight minutes long, during which time the text serves as floating thoughts that merge with images to construct an immersive narrative through juxtaposition.45
Yehuda Duenyas is another theater artist working in VR. Duenyas, formerly a member of the company National Theater of the United States of America (NTUSA), is now an experience designer who makes immersive encounters that draw on a background in theater, gaming, interactive technology, physical computing, ride design, and reality TV.46
His most recent VR creation, CVRTAIN, premiered at PS122’s COIL 2017. In it, a person puts on headphones and a VR headset and holds sensors equipped for haptic feedback. The image that appears is that of a red curtain, which parts to reveal an ornate theater filled with an adoring audience waiting to see what the person will do. For five to ten minutes, the person can explore a range of motions to see how the virtual audience will respond, subverting the typical audience-performer relationship.47 To add another layer of complexity, those waiting for their turn can watch those experiencing it, adding a real audience to the virtual one.
Other VR experiences by Duenyas include Airflow, in which a rider travels over a mountain range as if by jetpack,48 and The Ascent, “the first mind-controlled ride/game,” in which an individual can levitate more than 30 feet by the power of their focus.49 (For more on Yehuda, see interview below.)
All of these hybrid theater works allow for a new kind of personalized experience, with a level of intimacy that is much harder (and some may say impossible) to achieve by having audience members sit in seats and watch a work unfold before them. As our technology has become more and more individualized, and as we treat it with more comfort and give it more access to all aspects of our lives, these theater-makers—and many more like them—have tried to create art that is just as personal and present. Our technology won’t be going away anytime soon; the continued challenge to make theater more reflective of our technology-obsessed world is unlikely to, either.
- Steve Benford and Gabriella Giannachi, Performing Mixed Reality (Cambridge, MA: The MIT Press, 2011).
- Ibid., 4.
- “Our History & Approach,” Blast Theory, accessed May 29, 2017, http://www.blasttheory.co.uk/our-history-approach.
- Erin B. Mee, “The Audience is the Message: Blast Theory’s App-Drama Karen,” TDR: The Drama Review 60, no. 3 (Fall 2016): 165-171.
- “Karen,” Blast Theory, accessed May 29, 2017, http://www.blasttheory.co.uk/projects/karen.
- Mee, “The Audience is the Message,” 168.
- “Karen,” Blast Theory.
- Blast Theory, Karen, Phone Application, 2015, https://itunes.apple.com/us/app/karen-by-blast-theory/id945629374?mt=8.
- “Karen,” Blast Theory.
- Frank Rose, “Karen, an App That Knows You All Too Well,” New York Times, April 2, 2015, http://www.nyti.ms/2lBiabW.
- “Rimini Protokoll,” Rimini Protokoll, accessed May 28, 2017, http://www.rimini-protokoll.de/website/en/about.
- See Rimini Protokoll’s collaborations with Sonido Ciudad (Chile), AppRecuerdos, at http://www.rimini-protokoll.de/website/en/project/apprecuerdos.
- “Situation Rooms,” Rimini Protokoll, accessed May 28, 2017, http://www.rimini-protokoll.de/website/en/project/situation-rooms.
- “Situation Rooms at Perth Festival,” ABC Arts, Video, February 21, 2014, accessed May 28, 2017, http://www.abc.net.au/arts/blog/Video/Perth-Festival-situation-room-theatre-as-journalism-140220/default.htm.
- Vicky Frost, “Situation Rooms by Rimini Protokoll – Review,” Guardian, February 17, 2014, http://www.theguardian.com/culture/australia-culture-blog/2014/feb/18/situation-rooms-by-rimini-protokoll-review.
- “Situation Rooms,” Rimini Protokoll.
- Frost, “Situation Rooms by Rimini Protokoll – Review.”
- “About,” Hwa Young Jung, accessed May 26, 2017, http://www.slyrabbit.net/about.
- “Background,” Interactivos?’16, accessed May 26, 2017, http://www.interactivos16.info/english.html.
- “Beba-me – Drink Me,” Hwa Young Jung, accessed May 26, 2017, http://www.slyrabbit.net/beba-me%F0%9F%92%A7drink-me.
- Aline Franceschini et al., Beba-me – Drink Me, Web game, 2016, http://www.drinkme.textadventuretime.co.uk/play/index.html.
- “moralvictor,” Vimeo, accessed January 30, 2017, http://www.vimeo.com/moralvictor/about.
- “About: Fostering Cutting Edge Artwork since 2006.” 3LD, accessed January 30, 2017, http://www.3ldnyc.org/about.html.
- “Pataphysical February,” 3LD, accessed January 2017, http://www.3ldnyc.org/pataphysical-february.html.
- “Situation Rooms,” Rimini Protokoll.
- “About,” Builders Association, accessed May 15, 2017, http://www.thebuildersassociation.org/about_mission.html.
- “Elements of Oz: Project Description,” Builders Association, accessed May 15, 2017, http://www.thebuildersassociation.org/prod_oz.html.
- Dien Tran, “6 Shows that Perfectly Combine Tech and Text,” American Theatre Magazine, July 6, 2016, http://www.americantheatre.org/2016/07/06/6-shows-that-perfectly-combine-tech-and-text.
- Adam Feldman, “Theater review: Elements of Oz takes audiences on a high-tech rainbow tour,” Time Out New York, December 7, 2016, http://www.timeout.com/newyork/blog/theater-review-elements-of-oz-takes-audiences-on-a-high-tech-rainbow-tour-120716.
- “CREW home,” CREW, accessed May 17, 2017, http://www.crewonline.org/art/home.
- “_ART: W (Double U),” CREW, accessed May 17, 2017, http://www.crewonline.org/art/project/51.
- Sigrid Merx, “Doing Phenomenology: The Empathetic Implications of Crew’s Head-Swap Technology in ‘W’ (Double U),” in Performance and Phenomenology: Traditions and Transformations, ed. Maaike Bleeker, Jon Foley Sherman, Erini Nedelkopoulou (New York: Routledge, 2015), 204.
- “Fair E-Tales #4: Double U brings you another reality,” Live Magazine, August 24, 2010.
- “_ART: W (Double U),” CREW.
- Merx, “Doing Phenomenology,” 205.
- See, for example, Wired’s article “Is Virtual Reality the Ultimate Empathy Machine?” at www.wired.com/brandlab/2015/11/is-virtual-reality-the-ultimate-empathy-machine. The article discusses a similar project, The Machine to Be Another.
- “_ART: C.a.p.e. Drop_Dog,” CREW, accessed May 17, 2017, http://www.crewonline.org/art/project/704.
- “_ART: C.a.p.e. Release: 2010,” CREW, accessed May 17, 2017, http://www.crewonline.org/art/project/179.
- “C.a.p.e. Drop-Dog,” DREAMSPACE, accessed May 17, 2017, http://www.dreamspaceproject.eu/Productions/Year-3-Final-Experimental-Productions/C.a.p.e.-Drop-Dog.
- “_ART: C.a.p.e. Drop_Dog,” CREW.
- “C.a.p.e. Drop-Dog,” DREAMSPACE.
- “Yehuda Duenyas’ CVRTAIN: About the Artists,” PS122, accessed May 10, 2017, http://www.ps122.org/cvrtain/#artists.
- “Yehuda Duenyas’ CVRTAIN,” PS122, accessed May 10, 2017, http://www.ps122.org/cvrtain.
- “Airflow VR,” Mindride, accessed May 10, 2017, http://www.mindride.co/make-1.
- “The Ascent by xxxy,” The Ascent, accessed May 10, 2017, http://www.theascent.co.