Friday, December 08, 2017

Resisting immersion in Visual Music, Greenwich Sound/Image Colloquium


Transcript of a presentation given at the 2017 Sound/Image Colloquium at the University of Greenwich.



Resisting immersion in Visual Music: the case for heightened listening and looking and against pseudo-synaesthesia

The quest for a synaesthetic melding of the senses, for the revelation of an underlying correlation between sound and image, has underpinned the development of visual music, from Pythagoras's Music of the Spheres, through Castel's Ocular Harpsichord, to twentieth-century advocates such as Whitney (1980, pp. 40-44), who sought "to discover their laws of harmonic relationships". In contemporary visual music practice the term immersive is increasingly used to denote an all-enveloping synaesthetic experience, whether in populist examples such as Bjork's foray into VR or in installations at Ars Electronica. The impetus for immersion comes from a number of directions, including developments in digital technology and a renewed desire for a symbiotic relationship between science and the arts, as charted in Miller's (2014) Colliding Worlds.

Whilst the case for an absolute correspondence between colour and harmony has been repeatedly debunked, not least by practitioners themselves (see Le Grice's (2001, pp?) mathematical reasoning as to why such a correspondence is fanciful), the terms synaesthetic and especially immersive continue to be used, with little interrogation of whether a blurring of sensory boundaries or an enveloping of the audience is a positive step forward. This paper argues that instead of synaesthetic immersion, what should be encouraged is a heightened state of looking/listening brought about by a reflexive engagement between the work and the audience. Three methods for potentially achieving such a heightened state are proposed, each employing a form of arbitrary function. In each case a one-minute extract from my own practice will be used as a brief illustration.

To argue that there is no absolute sound/image or tone/colour correspondence is not to suggest that there is no propensity to make such correlations; rather, it is to locate the adhesion of sound and image in the minds of the audience as they engage with a piece. Adhesion was first identified by Eisenstein, Pudovkin and Alexandrov in their 1928 Statement on Sound, when they noted that marrying sound with moving image could all too easily produce the "'illusion' of talking people, of audible objects, etc.". The Russian filmmakers' response to illusionist adhesion was asynchronism, a technique employed and nuanced by both Eisenstein and Pudovkin in subsequent writings and films. In neither case should asynchronism be taken to mean simply being out of sync. For Eisenstein the term increasingly came to mean a form of quasi-musical counterpointing, whilst Pudovkin (1929) advocated a looser connection in which occasional moments of adhesion form part of an asynchronous push-pull rhythm, with the audience drawn in and out of the frame, visually and sonically. Pudovkin thus utilises the propensity for adhesion as part of a strategy that creates a productive tension and interplay between the senses.

Asynchronism was adopted by a number of filmmakers, such as Cavalcanti, in the first half of the twentieth century. In visual music, by contrast, adhesion was often actively sought. For example, the prologue to Fischinger's Optical Poem (1938) states:
To most of us, music suggests definite mental images of form and colour. The picture you are about to see is a novel scientific experiment. Its object is to convey these mental images in visual form. (Fischinger, 1938)

Here it is not just adhesion that is desired but something more: an equation between musical and visual forms, the synaesthetic equation of seeing sound and hearing colour. One might ask whether there is a visual music equivalent of asynchronism that can be applied to offset this form of illusionism. An examination of various visual music pieces suggests a number of strategies, which break down broadly into three methods.

The first method is close to Pudovkin's asynchronism in that it uses momentary adhesion. Examples of this approach can be seen in the films of Lye and Le Grice, in which the moving images are not synchronised note for note with the soundtrack music but married to syncopated musical rhythms. In Lye's Trade Tattoo (1937) it is Cuban dance music, whilst in Le Grice's Berlin Horse (1970) the looping imagery is counterpointed by Eno's phasing piano loops. In both cases sound and image work together but retain their own identities: there is no beat-by-beat or 4/4 dynamic cementing the audio-visual relationship, but rather flashes of momentary adhesion occurring simultaneously, at different tempi and at different locations within the frame. This open-ended and shifting correspondence has a dynamic and yet arbitrary quality; arbitrary not as in random, but in the sense that adhesions are being actively made and broken by each member of the audience, independently and somewhat differently, at the moment of audition. In my own piece Landfill (2008), an animated morphing topography is married with a soundtrack of treated yodelling, a form of early sonar. There are no designated points of correspondence, but rather a series of arbitrary adhesions.

Landfill (2008) from Philip Sanderson on Vimeo.

To look at further possible implementations of the arbitrary, let us examine optical sound films made at the London Filmmakers' Co-operative in the 1970s by two filmmakers, Sherwin and Rhodes. Optical sound films rely on what Sherwin calls "an accident of technological synaesthesia": when the images on an optical film soundtrack are the same as those in the main projected frame, one in effect has a means both of transforming images into sound and of their simultaneous synchronised reproduction (Sherwin & Hegarty, 2007, p. 5).

Sherwin made a number of optical sound films in which the images were also printed on the optical track, including Musical Stairs (1977) and Railings (1977). The sounds produced by this process are in sync with the images but are not those which would be made had the railings or stairs been recorded with a microphone. In Musical Stairs the camera pans up and down a flight of metal stairs, and as those images pass over the optical head a musical scale is produced, whilst in Railings, filming the ironwork from different angles generates a sequence of electronic pulses (Hamlyn, 2003). Sherwin's pieces counter the illusion of sound and image correspondence in part by employing the adhesive tendency against itself: sound and image stick, but in a way that forces the audience to question causality rather than accept it. It is the movement of the filmic representation that generates the audio, not the represented object.
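The mechanism can be approximated digitally. The sketch below is a minimal simulation, not Sherwin's actual workflow: it assumes a photocell that simply integrates the light passing through each thin horizontal slice of the track, so that each row of a grayscale image becomes one audio sample, and a filmed flight of stairs is stood in for by alternating bands whose spacing changes as the 'camera' pans.

```python
# Minimal simulation of reading an image as an optical soundtrack (an
# illustration of the principle, not Sherwin's process): the photocell is
# modelled as integrating light across each horizontal slice of the track,
# so every image row yields one audio sample.

import numpy as np
from scipy.io import wavfile  # assumption: SciPy is available for writing a WAV

SAMPLE_RATE = 44100  # film runs at a fixed linear speed; a standard digital rate stands in

def image_to_optical_sound(track: np.ndarray, seconds: float = 2.0) -> np.ndarray:
    """Treat a grayscale image (2-D array, values 0.0-1.0) as an optical track."""
    raw = track.mean(axis=1)              # integrate light across the track width
    raw = raw - raw.mean()                # strip the DC offset, as the amplifier stage would
    samples = np.resize(raw, int(SAMPLE_RATE * seconds))   # loop the strip like a film loop
    return (samples / (np.abs(samples).max() + 1e-9)).astype(np.float32)

if __name__ == "__main__":
    # Stand-in for the filmed stairs: light/dark bands whose spacing tightens
    # as the pan proceeds, so the resulting pitch rises.
    phase = np.linspace(0, 60 * np.pi, 20000) * np.linspace(1.0, 3.0, 20000)
    bands = 0.5 + 0.5 * np.sign(np.sin(phase))
    track = np.tile(bands[:, None], (1, 200))               # a 200-pixel-wide track
    wavfile.write("optical_sketch.wav", SAMPLE_RATE, image_to_optical_sound(track))
```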

This second arbitrary function can be applied to either abstract or representational imagery, upsetting the expected dynamics of causal relationships. Whilst optical sound offers plentiful scope for experimentation, digital technology allows one to expand and develop the arbitrary function. As an example, let's look at Moth Flight (2016), made to mark the 75th anniversary of the death of Amy Johnson, the first woman to fly solo from Britain to Australia, in her Gipsy Moth plane. Here the audience is encouraged to ask: is the 'action' producing the sound, or is the movement of the image in some way generating the sound, or…

Moth Flight (2016) from Philip Sanderson on Vimeo.


The third arbitrary function is best illuminated by Rhodes's optical sound film Light Music (1975-77), in which she printed a series of horizontal black lines on both the optical track and the film frame. As the thickness of the lines varies, the pitch of the sound rises and falls in sync with the projected light patterns (Hamlyn, 2011, p. 215). In an interview at the time of the piece's exhibition at Tate Modern in 2012, Rhodes stated that "what you see is what you hear", a sentence which invokes the basic synaesthetic equation.

Curiously, rather than demonstrating a literal equation, Light Music suggests a further arbitrary function. Two types of optical track were routinely employed: the bilateral variable-area method (consisting of wavy, curvaceous lines) and the variable-density method (the straight lines used in Light Music). Both methods produce exactly the same sound, because the photocell registers only the total amount of light transmitted across each slice of the track, yet had Light Music employed the bilateral method the projected image would have had a very different appearance. Nonetheless, we would still have perceived a correspondence and made the equation. One might go so far as to conjecture that if the optical track and the projected image had not been identical, but the image had been some other visual form that changed reciprocally with the sound, a similar connection would still have been made. The third arbitrary function requires a foregrounding of the arbitrary nature of the correspondence. Digital mapping offers the possibility of creating just such self-declared reciprocal sound and image changes.
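Before turning to that digital mapping, the sonic interchangeability of the two track types can be shown with a small sketch (my own illustration, assuming the same simulated photocell as before, one that registers only the total light passing through each slice): a variable-area row and a variable-density row carrying the same signal value read back identically, however different they look.

```python
# Illustration (not drawn from Light Music itself) that variable-area and
# variable-density tracks encoding the same waveform read back identically,
# because the photocell registers only the total light per slice of track.

import numpy as np

WIDTH = 200                                        # track width in pixels
t = np.linspace(0.0, 1.0, 4800)
signal = 0.5 + 0.4 * np.sin(2 * np.pi * 220 * t)   # target waveform, kept within 0.1-0.9

# Variable-area: each row is a clear band whose width is proportional to the signal.
area_track = (np.arange(WIDTH)[None, :] < signal[:, None] * WIDTH).astype(float)

# Variable-density: each row is a uniform grey whose transparency equals the signal.
density_track = np.repeat(signal[:, None], WIDTH, axis=1)

# Simulated photocell: integrate the light across each slice of both tracks.
area_read = area_track.mean(axis=1)
density_read = density_track.mean(axis=1)

# The readings differ only by the one-pixel quantisation of the clear band,
# even though the two track images look nothing alike.
print(np.max(np.abs(area_read - density_read)))    # < 1/WIDTH
```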

An early example of this reciprocal mapping is Le Grice's computer piece Arbitrary Logic (1988), in which the same data is used to produce both the on-screen colour fields and, via MIDI (Musical Instrument Digital Interface), the sound. When Le Grice was making Arbitrary Logic, digital technology was in its infancy, and in his writings he speculates about the future possibilities of mapping (2001, p. 284). Such opportunities would become available some ten years later in software such as Max/MSP/Jitter (Cycling '74), in which digital MIDI data can be used to control audio parameters such as pitch, velocity, volume and envelope, whilst simultaneously being mapped to visual manipulations such as rotation, zoom, hue, video feedback and so on.
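The principle can be sketched outside Max/MSP/Jitter in a few lines of Python (a hypothetical illustration of shared-data mapping, not Le Grice's code nor a Cycling '74 API): a single stream of control values is scaled once into MIDI note and velocity and, in parallel, into hue, rotation and zoom.

```python
# Hypothetical sketch of shared-data mapping in the spirit of Arbitrary Logic:
# one control stream feeds both the sound and the image, each via its own scaling.

import colorsys
import random

def control_stream(n: int):
    """A single stream of control values in the range 0.0-1.0."""
    for _ in range(n):
        yield random.random()

def to_midi(value: float) -> tuple[int, int]:
    """Scale a control value to a MIDI note number and velocity."""
    return 36 + int(value * 48), 40 + int(value * 80)   # C2 upwards, four octaves

def to_visual(value: float) -> dict:
    """Scale the same control value to visual parameters."""
    r, g, b = colorsys.hsv_to_rgb(value, 1.0, 1.0)
    return {
        "colour_rgb": (round(r, 2), round(g, 2), round(b, 2)),
        "rotation_deg": round(value * 360.0, 1),
        "zoom": round(1.0 + value, 2),
    }

if __name__ == "__main__":
    for v in control_stream(4):
        # In a working patch these would go to a synthesizer (via MIDI out) and
        # to a renderer; here the point is only that both derive from the same datum.
        print(f"value={v:.2f}  midi={to_midi(v)}  visuals={to_visual(v)}")
```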

Thus the tendency towards literalness can be offset by varying the parametric relationship: if, for example, in one section of a work the hue changes as the frequency rises, this can be offset elsewhere by mapping pitch to changes in form or to another visual element. By such strategies the arbitrary nature of the audio-visual correlation is foregrounded, as the audience is encouraged to make first one equation and then another. Here is Quadrangle, an early example made back in 2005, in which a patch was built in Max/MSP to generate quasi-random trills and staccato bursts of data. This information was then mapped to control both the animation of a white square and, via MIDI, a synthesizer. As the music starts and stops, so the square performs a spatial choreography: changing colour, moving across the frame, advancing and retreating, and so on. The arbitrary element is introduced by keeping the sound parameters constant throughout whilst the visual mapping parameters are changed.

Quadrangle (2005) from Philip Sanderson on Vimeo.
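As a rough indication of how that last move works (a hypothetical outline, not the original Max/MSP patch), the sketch below keeps the sonic mapping fixed throughout while swapping the visual parameter that the same quasi-random data drives from section to section.

```python
# Hypothetical outline of the Quadrangle strategy (not the original Max/MSP
# patch): the sound mapping stays constant, while the visual mapping that the
# same data drives is swapped from section to section.

import random

def sound_mapping(value: float) -> dict:
    """Fixed throughout: data always maps to pitch and note length."""
    return {"midi_note": 48 + int(value * 24), "duration_ms": 80 + int(value * 400)}

# Each section maps the same data to a different property of the white square.
VISUAL_MAPPINGS = {
    "section 1": lambda v: {"x_position": round(v, 2)},    # data -> movement across the frame
    "section 2": lambda v: {"grey_level": round(v, 2)},    # data -> change of colour
    "section 3": lambda v: {"scale": round(0.5 + v, 2)},   # data -> advance and retreat
}

if __name__ == "__main__":
    random.seed(5)
    for section, visual_mapping in VISUAL_MAPPINGS.items():
        print(f"-- {section} --")
        for _ in range(3):
            v = random.random()                             # quasi-random burst of data
            print(sound_mapping(v), visual_mapping(v))
```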

Synaesthesia has both underpinned and arguably thwarted the development of visual music practice. This paper started from the position that recent tendencies towards immersion have exacerbated many of the negative aspects of the genre, and that this can be countered only by a continual reflexive interrogation of the audio-visual relationship. Three arbitrary functions designed to introduce just such a reflexive tension at the moment of audition were outlined. Key to all three is the recognition of the audience's propensity for making causal audio-visual equations; rather than using this to encourage immersive synaesthesia, the desire to adhere can be utilised as part of a range of strategies (denying equation, questioning causality, reflexive mapping) that all contribute towards creating a heightened state of looking and listening.


References

Eisenstein, S. M., Pudovkin, V. I. and Aleksandrov, G. V., 1928. A Statement. In: E. Weis and J. Belton (eds), 1985. Film Sound: Theory and Practice. New York: Columbia University Press.
Hamlyn, N., 2003. Film Art Phenomena. London: BFI.
Hamlyn, N., 2011. Mutable screens: the expanded films of Guy Sherwin, Lis Rhodes, Steve Farrer and Nicky Hamlyn. In: A. L. Rees, D. Curtis, S. Ball and D. White (eds), 2011. Expanded Cinema: Art, Performance, Film. London: Tate Publishing, pp. 212-220.
Le Grice, M., 2001. Experimental cinema in the digital age. London: British Film Institute.
Miller, A.I., 2014. Colliding worlds: how cutting-edge science is redefining contemporary art. London: WW Norton & Company.
Pudovkin, V. I., 1929. Asynchronism as a Principle of Sound Film. In: E. Weis and J. Belton (eds), 1985. Film Sound: Theory and Practice. New York: Columbia University Press.
Sherwin, G. and Hegarty, S., 2007. Optical Sound Films 1971-2007. DVD. London: LUX.
Whitney, J., 1980. Digital Harmony: On the Complementarity of Music and Visual Art. Peterborough, New Hampshire: Byte Books/McGraw-Hill.