Multi-channel real-time media performance system (work in progress)
The twentieth-century televisual apparatus proposed a particular spatial and temporal relationship between an audience and the program material it watched. Compared with, for example, filmic cinema, a distinguishing feature of the television network was its ability to instantiate real-time, circuit-switched, audio-visual connectivity between multiple sites. The “liveness” of the early television camera, the primary (and, before videotape recording, the only) means of program origination, set up a tension between program intent and the possibility that a distributed audience might witness errors, accidents or irruptions of unprogrammed content.
Television in this live mode is a more or less distant memory: its real-time tension dissipated by recorded media playback; its synchronous relationship with an audience's attention diffused by internet time and packet-switched networking. Yet within this very obscurity lie potentials that go beyond a retrograde fascination with historical video art's efforts to appropriate television technique. Even as it recedes as a broadcast paradigm, real-time audio-video transmission continues to suggest possibilities for rewriting the spatio-temporal “contract” between a performer, a network and an audience.
Grounded in an investigation of site relationships that reworks an audience's relationship to a transmission, the approach I am continuing to develop proposes a configuration of the live audio-visual image that articulates its structure as distributed, switched and multiple. Instead of being treated as extra-diegetic infrastructure, the camera-microphone system and the system's relationship to the objects it depicts are the subjects of direct perceptual investigation.
The primary tools of this work are the camera and the microphone – devices that register the activities taking place in their respective pick-up fields. Varying degrees of coincidence between the auditory and visual channels of the system leverage the dynamics of off-screen sound to reconstruct spatial perception within an image for an audience – even as the audience's relationship to the image itself is inflected by environmental cues about the “liveness” of the representation.
In some ways this microphone application is similar to location audio recording for traditional cinema production (e.g., a person walking off-screen towards the camera passes into and then out of the frame; we want to capture the sound of their footsteps...). What is different here is that both the video and the audio constitute an entirely live process/system: there is no notion of “cutting” at the end of a take to collect “content” for future editing.
When the work is presented to audiences, a display and diffusion scenario (video projection, various modes of audio diffusion, etc.) is developed in relation to the properties of the site to bring the live screen/sound image to that audience in a particular way. The stakes of the liveness of this image obviously vary in relation to the way duration, site and image diffusion are treated in each specific presentation.
A series of 8 images illustrates potential camera and microphone relationships:
Images 1-4: Examples of multi-camera configurations.
Images 5-6: Microphone alignment with camera axis; resulting sound image.
Images 7-8: Microphone mix independent of camera axis; resulting sound image.
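The contrast between images 5-6 and 7-8 can be thought of as two gain regimes in a live mixing matrix: microphone levels either follow the switched camera's axis or are set independently of it. The following Python sketch is purely illustrative – the two-camera/three-microphone layout and all function names are hypothetical assumptions, not a description of the actual system:

```python
# Hypothetical sketch of the two mixing regimes illustrated above.
# "Coupled" gains follow the active camera (images 5-6); a "decoupled"
# mix sets microphone gains independently of the camera axis (images 7-8).

def mix(mic_signals, gains):
    """Weighted sum of per-microphone sample blocks (equal length)."""
    n = len(mic_signals[0])
    return [sum(g * sig[i] for g, sig in zip(gains, mic_signals))
            for i in range(n)]

def coupled_gains(active_camera, mic_to_camera):
    """Open only the microphones assigned to the currently switched camera."""
    return [1.0 if cam == active_camera else 0.0 for cam in mic_to_camera]

# Assumed layout: two cameras, three microphones;
# mics 0 and 1 sit on camera 0's axis, mic 2 on camera 1's.
mic_to_camera = [0, 0, 1]
mics = [[0.1, 0.1], [0.2, 0.2], [1.0, 1.0]]  # two samples per mic

on_axis = mix(mics, coupled_gains(0, mic_to_camera))  # sound tied to the image
off_axis = mix(mics, [0.0, 0.0, 1.0])                 # independent: off-screen mic only
```

In the coupled case, cutting to a different camera re-weights the sound image along with the picture; in the decoupled case, off-screen sound can persist across cuts, which is the perceptual tension the work exploits.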