
Hanging Out with S.A.M.



So adventures with an actor, Shakespeare and VR ensued on Thursday 1st February. We (Andy Purves and I) took over CentrE17 for the day to interrogate how best to work with an actor in a space/volume in relation to the 360° camera… or as I prefer to call it, S.A.M. (Single Audience Member). I firmly believe that when we create stories in virtual performance spaces we have to remember that there will be a person at the centre of that experience who participates both intellectually and physiologically. That’s where S.A.M. comes in… a gentle, gender-neutral reminder that we are developing our creations around a human being and not a camera. Keeping the human being in mind means that we will be more aware of inadvertently and unnecessarily moving S.A.M. and we will also start to use their physiological responses as another tool in our storytelling arsenal.

So upon arrival at CentrE17 we radially mapped out the floor in 1m, 2.5m and 4.5m rings (like a dart board) so that once we transferred the footage to a headset we would have measurable markers to determine actual distance as it relates to the success of the storytelling (i.e. the proximity of the actor, in this case Richard Pryal, to S.A.M.). Whilst we’ve yet to do an HMD test of the material, from what we could tell in the room, the closer Rich was to S.A.M., the more immediate and engaging the text was. We are capturing performances at this stage using a GoPro Fusion 360° camera. The camera links to an app for the iPhone (Android soon to come, I believe) and from the app you can remotely control the camera and view the capture in real time, which is profoundly helpful for noting and tweaking the performance.

So the day resulted in the exploration of four monologues: the Poor Tom speech from King Lear, ‘To be thus is nothing…’ from Macbeth, York’s speech to the Queen of France from Henry VI Part 3, and the opening speech from Richard III. We captured several versions of each and are now in the process of creating a rough prototype of our Shakespeare VR experience from the best takes of each speech. It should be mentioned that each capture was a single take, as internal editing is near impossible in 360° VR. This is the reason for using theatre actors in VR… they are great at working with long stretches of text and fixing things on the fly.
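For anyone who likes a few numbers with their theatre, here is a rough back-of-the-envelope sketch (in Python, assuming an actor height of roughly 1.8m, which is my assumption rather than a measurement we took on the day) of how those taped rings translate into how much of S.A.M.’s vertical field of view Rich fills at each distance. It is only an illustration of why the 1m ring felt so much more immediate than the 4.5m ring, not part of our actual workflow.

import math

# Rough illustration only: relate the taped floor rings to how large the
# actor appears to S.A.M. in the headset. The 1.8m actor height is an
# assumption made for the sake of the example.
ACTOR_HEIGHT_M = 1.8
RING_RADII_M = [1.0, 2.5, 4.5]  # the dart-board rings taped on the floor

for distance in RING_RADII_M:
    # Vertical visual angle subtended by the actor at this distance.
    angle_deg = math.degrees(2 * math.atan((ACTOR_HEIGHT_M / 2) / distance))
    print(f"{distance:.1f}m ring: actor fills roughly {angle_deg:.0f} degrees of the vertical view")

With that assumed height, the actor fills roughly 84 degrees of S.A.M.’s vertical view at the 1m ring and only about 23 degrees at the 4.5m ring, which tallies with what we felt in the room.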

The learning never seems to stop on this project. Some of it is huge: getting to grips with the ins and outs of editing in Premiere Pro, understanding and integrating ambisonic sound and, from a directing perspective, learning what virtual spaces and editing software add to my storytelling tool kit. This week I’m undertaking an exploration of the VR transition tools that are now part of Premiere Pro so that I can plan staging/blocking with them in mind; early days have already made me a fan of the VR Iris Wipe and the Mobius Zoom. I’ve also begun research into proxemics and its effect on the physiological responses of S.A.M. (more on this to come).
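On the ambisonic front, it helped me to see what the encoding actually does to a mono recording, so here is a minimal sketch of first-order ambisonic encoding in the traditional FuMa B-format convention, i.e. pinning a sound to a direction around S.A.M.’s head. The load_mono_recording helper is hypothetical, and delivery platforms such as YouTube 360 expect the slightly different AmbiX ordering, so treat this as an illustration of the idea rather than a drop-in for any particular tool.

import numpy as np

def encode_foa_fuma(mono, azimuth_deg, elevation_deg):
    # Encode a mono signal into first-order ambisonic B-format (FuMa W/X/Y/Z)
    # at a fixed source direction. Azimuth 0 is straight ahead of S.A.M.,
    # positive azimuth to the left, elevation 0 at ear height.
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2)                # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front/back
    y = mono * np.sin(az) * np.cos(el)   # left/right
    z = mono * np.sin(el)                # up/down
    return np.stack([w, x, y, z])

# e.g. place Rich's voice 30 degrees to S.A.M.'s left at ear height:
# voice = load_mono_recording("rich_take_3.wav")   # hypothetical helper
# bformat = encode_foa_fuma(voice, azimuth_deg=30, elevation_deg=0)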

Getting back to S.A.M., it was a unique experience for both Rich and me working with S.A.M. As Rich noted, he has never had to work in this particular actor/audience configuration, and a good deal of our first hour of actual work (after our floor tape-out) was spent discussing who S.A.M. would be to Rich’s character.

One of the reasons I chose Shakespeare as the base text is that I believe these plays lend themselves well to VR. Shakespeare wrote them for a round space that puts a good deal of the audience at the centre of the performance. When you stand on stage at The Globe it is apparent that the audience shares that space with you. Shakespeare wrote for the specifics of this architecture in that he crafted narratives that consistently and seamlessly acknowledged the audience-participants (a great number of S.A.M.s, if you will) who were sharing the space with the performers. Soliloquies and asides are the obvious points of relevance here. As an actor I was told by my coaches to use the audience as the character’s conscience where soliloquies and asides were concerned. This seemed to put us right for three of the four pieces and gave Rich the freedom to live in the solitary moment of these speeches whilst still being able to deliberately connect with S.A.M.

In our exploration of York, we actually cast S.A.M. as the Queen of France and I stood in to play Northumberland so that, as the queen, S.A.M. had a confederate in the space. Even from watching the playbacks in Premiere Pro, this casting of S.A.M. as a character has huge potential and gives S.A.M. a physical sense of the play that cannot be achieved from being an outside observer or the universal conscience for the play’s characters. I still have questions about how to effectively cast S.A.M. and how we establish the rules of the world that one participates in when we continually change S.A.M.’s/your role within the story... some things to ponder as we explore.

On the news front, we will be speaking at the Digital Catapult VR Storytelling meet up at the FuseBox space in Brighton on April 5th and we are aiming to have a solid prototype of a soliloquy experience done for that event so that people can try it out (and we can learn from their experience of it). I also have to say that I met up with Henry Stuart at Visualise last Wednesday and his/their active involvement in this project is going to really advance the work that we do. I’m very grateful for their support, knowledge and enthusiasm.

Until next week then!

