Media Clown

Digital Media and Live Performance

History

The first stage of the research began as a yearlong series of creative conversations in 2017 between me and clown/Associate Professor Paul Kalina about how to integrate technology into live clown shows. The project then moved to concrete research and practical development in 2018 with the award of an Interdisciplinary Research Grant ($18,000) from the University of Iowa’s Obermann Center for Advanced Studies. After the summer 2018 developmental research intensive, we assembled a research team of three Theatre Arts graduate students and partnered with Leader/Lecturer Shannon Harvey and his students in Live Visual Design and Production at Backstage Academy (South Kirkby, UK). The research team worked virtually with the students in the UK for a year to develop the integration of the technical system, using motion capture to drive the performance. We spent four weeks in an invited residency at Backstage Academy in May 2019 putting together the first version of the show. The entire cross-university research team premiered the results of the performance research/development as an official entry in the PQ Festival, part of the 2019 Prague Quadrennial (PQ), the largest international exhibition and festival event dedicated to scenography, performance design and theatre architecture.

 

Fine and Kalina raised $50,050.00 in cash and $500,000.00 in in-kind short-term equipment rentals for this performance. Sharing the creative research at the festival garnered positive feedback from an international audience of industry leaders. As a result of the performance at PQ, Media Clown received open invitations to present the final work from Ohio State, CalArts and Portsmouth College UK, confirming the team’s proof of concept and moving the team closer to its goal of a fully formed touring performance.

 

The story/narrative of this phase of the research was driven by two major elements. The first was the special skills of our clown, Paul Kalina. Since so much of the research process was about integrating various technologies and systems into live performance, we wanted to stay within the skillset Paul was already bringing to the project as a clown. We started small and methodically, incorporating and exploring the possibilities of different technologies one at a time and creating short vignettes based around each technology. The second driving force of the early research phase was to keep the storyline simple and straightforward, so that the majority of our efforts were focused on the co-performance between Paul and the technology rather than on story and world building.

 

Paul had multiple types of technology embedded into his costume:

  1. BlackTrax beacons: This camera-based motion capture system allowed us to explore two things:
    1. Having moving lights act as intelligent follow spots, lighting Paul wherever he was onstage.
    2. Providing a 0,0,0 origin in XYZ 3D space, something the inertial skeletal motion capture system could not provide in real time. Knowing where Paul’s center (center, center, center) is in physical space, and how that translates to 3D space, is the only way to accurately place Paul inside the digital world (see the sketch after this list).
  2. An inertial Perception Neuron skeletal tracking system: This wireless, markerless, WiFi-based tracking system allowed Paul to control a digital avatar – the digital clown.
  3. A custom-made, wireless DMX light-up jacket that, at the climax of the story, connected the analog clown to the digital world.
  4. A wireless receiver for his ukulele to plug into the sound system.
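
To make the relationship between the first two tracking systems concrete: below is a minimal, hypothetical sketch (not the production code) of how an absolute stage position from a beacon system like BlackTrax might be combined with a root-relative skeletal pose from an inertial suit, so the avatar lands in the right place in the shared 3D space. All names and values are illustrative assumptions.

```python
# Minimal sketch: offset a root-relative skeletal pose (inertial suit) by the
# absolute stage position reported by a beacon system, so the digital clown is
# placed correctly in the shared 3D space. Names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def place_skeleton_in_world(root_world: Vec3, joints_local: dict) -> dict:
    """Translate every joint (expressed relative to the performer's root)
    by the absolute root position from the beacon system."""
    return {
        name: Vec3(root_world.x + j.x, root_world.y + j.y, root_world.z + j.z)
        for name, j in joints_local.items()
    }

# Example: the beacon reports Paul 2 m stage left and 0.5 m downstage of center.
root = Vec3(2.0, 0.0, 0.5)
pose = {"hips": Vec3(0.0, 1.0, 0.0), "head": Vec3(0.0, 1.7, 0.0)}
print(place_skeleton_in_world(root, pose))
```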

 

Paul developed a rather traditional style of clown: one who doesn’t talk and constantly finds himself getting into trouble, with a multitude of opportunities for physical comedy. We developed a rather straightforward story with a score of scenes (for further details, please see the story outline/score here). An analog ukulele musician shows up to the venue to perform a set, but everything goes wrong: he gets into a snag/dance with his uncooperative music stand, and finally his sheet music explodes, leaving him with no way to finish his concert. A stagehand brings him an iPad with sheet music. He doesn’t understand the digital and accidentally gets sucked into a giant version of the iPad. Two distinct playing spaces were created: one downstage, where the analog clown performed, and one upstage, where the oversized, theatrical set of the iPad was located. The iPad set piece was faced to look like the frame of an iPad. At the rear was a rear-projection screen and, in the front, a hologauze screen (a holographic-effects screen designed to reflect projections so that they appear to be magically floating in air). This combination of rear and front projection, with the performer in the middle, makes it appear that the performer is immersed inside the digital world of an iPad. Surrounded by an immersive digital world, the analog clown interfaced with a digital version of himself inside this giant iPad.

 

For decades, artists have been updating a classic Marx Brothers routine from the movie Duck Soup, which was itself based on an old vaudeville routine. The PQ performance built upon this lineage with a novel, modern approach to the classic mirror routine. At one point in the show, an avatar that resembles the clown’s image appears on the hologauze. Through the use of motion capture, the movements of the avatar mirror the movements of the clown, which is quite disconcerting for the clown and leads to a great deal of physical comedy and a twist at the end that can only be achieved through motion capture technology. Audience members unfamiliar with the Marx Brothers’ version found it surprising and hilarious, and those who were familiar with the routine enjoyed the new technological version with the added knowledge of its origins.

 

One of the key technological aspects of Media Clown is the use of a motion capture system, like the technology used by films such as Avatar to capture the movements of actors and control digital avatars. For the UK and PQ performance research, Media Clown used an inertial motion capture suit whose sensors reference the Earth’s magnetic field to map the performer’s body and create the digital effects on stage in real time. The motion capture data was the heart of a workflow that allowed multiple systems - video, lighting, sound - to use the location data of the performer (and audience) to create real-time digital effects. Unfortunately, the team discovered that the inertial motion capture system was very unstable near metal or concrete, which interfered with the sensors’ magnetic readings and distorted the avatar’s image. The team found a temporary work-around, but a long-term, stable solution for motion capture would need to be found.
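
As a rough illustration of this hub-and-spoke idea - one tracking source feeding several show systems - the sketch below fans a performer’s XYZ position out to video, lighting, and sound machines over OSC using the python-osc library. The transport, hosts, ports, and OSC addresses are assumptions for the example, not the production setup.

```python
# Hypothetical sketch of a mocap "hub" that fans the performer's position out
# to video, lighting, and sound systems over OSC (python-osc library). Hosts,
# ports, and OSC addresses are illustrative assumptions, not production values.
from pythonosc.udp_client import SimpleUDPClient

SUBSCRIBERS = {
    "video":    SimpleUDPClient("10.0.0.11", 7000),  # media server
    "lighting": SimpleUDPClient("10.0.0.12", 8000),  # lighting console
    "sound":    SimpleUDPClient("10.0.0.13", 9000),  # audio workstation
}

def broadcast_position(x: float, y: float, z: float) -> None:
    """Send the performer's XYZ stage position to every subscribed system."""
    for name, client in SUBSCRIBERS.items():
        client.send_message("/performer/1/position", [x, y, z])

broadcast_position(2.0, 0.0, 0.5)  # e.g. one frame of tracking data
```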

 

Upon returning from Prague, Paul and I began to research other types of motion capture systems. This led to Paul contacting the head of Sony Studio’s motion capture lab, Charles Ghislandi. After discussing the project, Ghislandi recommended a camera-based skeletal tracking system for its stability and consulted with the team on the best motion capture solution.

 

These conversations made it clear that the team was missing a collaborator who worked in advanced real-time system workflows and high-end motion capture. Fine and Kalina approached long-time collaborator Matthew Ragan, former Director of Software at The Madison Square Garden Company. Matthew is a designer and educator whose work explores the challenges and opportunities of large immersive systems. At the forefront of his research is the development of pipelines for real-time content creation and the intersection of interactive digital media and live performance. Matthew agreed to join the project as Motion Capture, Real-Time Content and Software Director.

 

The second round of research is funded by a $25,000.00 external grant from Epic Games and is a collaboration among industry professional Matthew Ragan, sound designer Noel Nichols, the original three graduate students (who have since graduated), three additional recent UI alumni (one graduate and two undergraduates), and one current undergraduate.

 

In terms of story and character, we felt that it was time to create a more advanced, in-depth story world for the next phase of research. We invited UI alum Leigh Marshall to join the research team as lead writer. Throughout history, the clown has often mocked and reflected those in power, holding up a mirror to how the powerful in our society wield their control. We were curious about who the powerful in our society are and how the clown might shed new light on how that power affects us. In keeping with our research agenda of integrating technology into clown and live performance, our answer to this question was the technology gurus who build billion-dollar tech companies. The new clown protagonist is modelled after the likes of Elon Musk, Mark Zuckerberg, Adam Neumann, Jensen Huang, Jeff Bezos, and Elizabeth Holmes, to name a few.

 

We worked with Leigh and the team for a year to develop new characters and a new storyline. Our plan was to embed technology into the meaning-making of the show. The new clown character, Perceval, an eccentric tech guru, has invented the biggest thing since sliced bread - the yu (pronounced you), a humanoid holographic avatar that can physically inhabit the three-dimensional world. We developed an outline for a five-act structure for a live, in-person performance. See here for a timeline of the story world and here for an outline of the five-act structure.

 

PERCEVAL is an intensely introverted entrepreneur with ADHD, a raw idealist who doesn’t protect himself or his ideas with a fortress-like social facade (the drawbridge is always down, so to speak). His physical and virtual space is curated to suit his own particular brain chemistry - he accumulates things that comfort and interest him and surrounds himself with clocks/timers and archaic cell phones. His mind goes a mile a minute with big ideas, but when he’s in public (or has an audience) he freezes like a deer in headlights. His shyness, however, comes with a massive ego, so he tries to cultivate an air of mystery around his image in order to comfortably satisfy his desire to be a public figure - this also allows him to obscure information he never wants anyone to know, like his humble background (he comes from someplace like Peoria or Detroit).

 

de.Z was created by Perceval to be the Ideal Perceval: suave, articulate, a popular extravert. He’s the embodiment of a sex jam with a goldfish brain. The Hugh Hefner of Big Tech. Highly developed but still a bit glitchy.

 

A key component of clown, and of how Paul works, is improvisation and audience interaction. In keeping with the liveness and improvisation of clowning and live performance, we explored various methods of creating dialogue/script through Machine Learning. We collected several datasets, such as tech start-up advertising campaigns, tech guru speeches/talks/product launches, and technology product disclaimers, and fed those datasets into a machine learning AI (Artificial Intelligence) system to create different models. We then gave the models different prompts, which produced different results. The results became the text spoken by de.Z. By creating a high-end chatbot through Machine Learning that is able to respond to our live performer and audience in real time, we maintained the improvisational element so important to clown and live performance work.
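
As a hedged sketch of this kind of fine-tune-then-prompt workflow (the project’s actual models and tooling may differ), the example below uses Hugging Face’s transformers library with GPT-2 as a stand-in: it fine-tunes on a collected text corpus and then generates a response from a prompt. File names, hyperparameters, and the prompt are illustrative assumptions.

```python
# Illustrative fine-tune-and-generate workflow with transformers and GPT-2.
# "guru_speeches.txt" and all hyperparameters are placeholder assumptions.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# 1) Fine-tune on a collected corpus (e.g. tech-guru speeches and launches).
dataset = TextDataset(tokenizer=tokenizer, file_path="guru_speeches.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dez-model", num_train_epochs=3),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# 2) Prompt the fine-tuned model to produce a line for de.Z.
prompt = "Investors, the yu will change everything because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```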

 

Ultimately, due to the Covid-19 pandemic, we had to shift to a real-time, virtual performance, putting the Machine Learning research and the development of the live, in-person theatrical event on hold. For a year during the pandemic, the team worked on a virtual performance, which had a workshop/proof-of-concept series of performances in June 2021. The new clown character, Perceval, the eccentric tech guru who invented the yu, hosted a Game [The System] Night fundraiser - A Night of You - to raise capital and gain investors for his brilliant invention. During the live Zoom event, he shows off his own personal yu and demonstrates exactly why you should invest early in this lucrative idea. The problem is, the yu isn't actually perfect yet - and when it is, it'll change the game forever. Learn more about the yu and Perceval here.

 

We researched various motion capture technology systems and real-time avatar pipelines, such as Unreal’s MetaHuman. Ultimately, for the virtual performance we decided to use Snap Cam, an Augmented Reality system, to create a custom avatar representing the protagonist’s yu. The team worked remotely, via multiple online collaborative and real-time tools and video conferencing systems, to:

  • Build a real-time 3D virtual production pipeline for live production. Using current extended reality (XR) techniques, we created a workflow in which the remote clown is filmed in front of a greenscreen, composited into a virtual 3D environment, given an AR filter in real time, and then streamed to live audiences via Zoom (a minimal sketch of the keying/compositing step appears after this list).
  • Create a prototype online/virtual performance system for research/rehearsals and performance, including a workflow with Unreal Engine, TouchDesigner, Snap Cam (AR digital avatar), QLab, Dante Virtual Audio, and Zoom.
  • Create a Covid-safe, remote/virtual production studio in ABW250 for interactive, real-time performance in Zoom.
  • Create a prototype audience participation system for the online/virtual show.
  • Experiment with alternate storytelling modalities, including texting and emailing TikTok videos to audience members pre- and post-show.
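
As a minimal illustration of the greenscreen keying and compositing step referenced in the first bullet above (the production pipeline itself ran through Unreal Engine and TouchDesigner rather than this code), the sketch below keys the green background out of a camera frame and layers the performer over a rendered environment frame using OpenCV. File names and threshold values are assumptions.

```python
# Illustrative chroma-key composite (OpenCV): key the performer out of a
# greenscreen frame and layer them over a rendered virtual-environment frame.
# File names and HSV thresholds are placeholder assumptions, not values from
# the actual Media Clown pipeline.
import cv2
import numpy as np

performer = cv2.imread("clown_greenscreen_frame.png")      # camera frame
environment = cv2.imread("virtual_environment_frame.png")  # rendered 3D background
environment = cv2.resize(environment, (performer.shape[1], performer.shape[0]))

# Build a mask of the green background in HSV space.
hsv = cv2.cvtColor(performer, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
performer_mask = cv2.bitwise_not(green_mask)

# Composite: performer pixels where the mask is set, environment elsewhere.
fg = cv2.bitwise_and(performer, performer, mask=performer_mask)
bg = cv2.bitwise_and(environment, environment, mask=green_mask)
composite = cv2.add(fg, bg)

cv2.imwrite("composite_frame.png", composite)
```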

 

What we discovered in this virtual performance space was an exciting new, multi-modal approach to storytelling that could include live virtual events and, in the future, live in-person events. The research continues to evolve, with the new storyline and explorations in Motion Capture, Machine Learning, Artificial Intelligence, Augmented Reality, and virtual and in-person performances. We plan to integrate the newly acquired motion capture system in the University of Iowa Theatre Arts department into a workflow that tracks the live performer, props, and the physical camera so they align with the virtual world.

copyright 2024