Friday, June 02, 2017

Upsetting the Synaesthetic - The Arbitrary Function in Visual Music

Proposition: to counter the synaesthetic, and thus create dialectic in visual music, an arbitrary function must be introduced.

Of all the senses, it is sight and sound that work most in tandem, so it is perhaps not surprising that moving images and audio are inherently attracted to one another. At the slightest hint of simultaneity the two adhere, a tendency noted with some anxiety by Eisenstein, Pudovkin and Alexandrov in their 1928 ‘Statement’ on sound, written just as a reliable method for reproducing sync was being introduced in cinema. The Russian filmmakers were concerned about the sight of synchronised lips, and the sound of dialogue producing a theatrical self-contained cinema, something they sought to counter by the use of asynchronous combinations of sound and image. Asynchronism is often misunderstood as audio and vision being in some way out of sync, but what was being proposed is a form of juxtaposition, rather than of sound merely reinforcing image.  

The problem of adhesion is if anything more acute in visual music, for synchronising abstract imagery and music doesn’t create a hermetically sealed story world, but rather potentially the suggestion of a literal equation of the two media. Reflecting this, numerous papers on visual music have titles such as 'seeing sound, hearing colour'.

The tendency to equate sound and image, and specifically pitch and colour, is as old as visual music itself: from Louis Bertrand Castel’s “ocular harpsichord” built in the 1700s (a keyboard above which were 60 small windows, each with different coloured glass and a small curtain, which opened as the player depressed the relevant key), through to Rimington, Scriabin and Whitney. Such synaesthetic combinations claim not just equation but an underpinning ‘natural’ harmonic relationship between visual and musical forms. Arguably the history of visual music has been dogged by a search for such quasi-spiritual correlation.

So why is this problematic? Firstly, an absolute or natural correlation would require consensus on the precise nature of the relationships, and even Castel changed his mind several times during his many years of working on his ocular harpsichord as to which hue should match which pitch. From a mathematical perspective, the principles of western musical notation and harmony do not lend themselves to a systematic translation into a colour wheel, and in any case western musical harmony is only one method of organising sound.

Technicalities aside, the core issue is that the quest for absolute synaesthetic correlation is at odds with achieving dialectic tension in the audio-visual relationship. In synaesthesia, it is not so much sensory interplay that is encouraged, as a form of submersion and sublimation, an immersive running together of the senses, thereby reducing the audience’s potential participation in making meaning. Such synaesthetic inclinations have recently been given impetus within contemporary visual music practice, where digital technologies have opened up myriad new ways of combining sound and image. 

If Eisenstein (et al) sought to upset audio-visual adhesion in narrative film by the use of asynchronism, then what is proposed is that in visual music, to offset literalness and a tendency towards immersive synaesthesia, another technique must be deployed: the arbitrary.

The arbitrary does not denote simple randomness, but rather a changing and reciprocal choreography between the audio and the visual, which recognises, and even embraces the potential for adhesion and equation, but that either then foregrounds these relationships as arbitrary rather than essential, or uses arbitrary elements to problematise the correlations and hence offset the synaesthetic.

The Three Methods
There are three principal ways of introducing the arbitrary. Firstly, one can allow adhesion to take place but then willfully multiply the number of audio-visual equations. This process is facilitated by digital mapping: at its simplest, if an equation of pitch and colour is created in one part of a work, such that as the frequency rises the hue changes, this can be offset elsewhere by mapping pitch to changes in form, or some other visual parameter. By shifting and changing the mapping, the arbitrary nature of the audio-visual correlation is foregrounded and revealed, as the audience find themselves making first one equation and then another.
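A minimal sketch in Python (not from any actual work; the ranges and parameters are illustrative assumptions) of what such shifting mappings might look like: the same pitch value drives hue in one passage and geometric form in another.

```python
import colorsys

def pitch_to_colour(freq_hz, lo=110.0, hi=1760.0):
    """Mapping one: normalise a frequency range onto the hue circle,
    returning an RGB colour whose hue rises with pitch."""
    t = min(max((freq_hz - lo) / (hi - lo), 0.0), 1.0)
    return colorsys.hsv_to_rgb(t, 1.0, 1.0)

def pitch_to_sides(freq_hz, lo=110.0, hi=1760.0):
    """Mapping two: the same frequency range remapped to a polygon's
    side count (3 to 12), i.e. pitch now drives form, not colour."""
    t = min(max((freq_hz - lo) / (hi - lo), 0.0), 1.0)
    return 3 + round(t * 9)

# The same rising pitch can drive hue in one section of a piece
# and form in another, foregrounding the mapping as arbitrary.
colour = pitch_to_colour(440.0)
sides = pitch_to_sides(440.0)
```

Swapping which function is "live" mid-piece is the point: neither equation is essential, and the audience is made to notice the substitution.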

The second arbitrary function involves questioning causality and can be applied to either abstract or representational imagery, but arguably works best with the latter. In narrative cinema on-screen action or activity is typically perceived as having a causal relationship with the sound one hears. A car pulls up, the engine stops, the door opens and closes, footsteps on the gravel, etc. Indeed sound without an accompanying visual source is often used as a way of creating dramatic tension, building to the moment when the two become united. We hear the sound of footsteps approaching a door before it opens to reveal who is coming through.  In visual music one can question or upset this causal relationship, such that the audience asks, is the image or action producing the sound, or is the soundtrack in some way generating or producing the image? Again digital technology allows us to do this in various ways, for example by either using the same data stream or algorithms to manipulate both image and audio, or by scanning the moving image to produce the soundtrack (in a manner not dissimilar to optical sound in film). So it is the action of the frame, rather than the action in the frame that is the causal link.
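As a hypothetical sketch of the second approach (a stand-in for the actual patches, with an invented greyscale-frame input): each video frame is scanned column by column and its brightness read directly as an audio waveform, so the frame itself, rather than the action within it, is the sound source.

```python
def frame_to_samples(frame, samples_per_frame=800):
    """Read a frame's brightness column by column as one frame's worth
    of audio, in the spirit of optical sound: the image is the signal.
    `frame` is a 2D list of greyscale values 0..255 (rows of columns)."""
    height = len(frame)
    width = len(frame[0])
    samples = []
    for n in range(samples_per_frame):
        x = int(n / samples_per_frame * width)
        # average the column's brightness, then centre it around zero
        col = sum(row[x] for row in frame) / height
        samples.append(col / 127.5 - 1.0)
    return samples

# A synthetic 'frame' of vertical stripes yields a square-ish wave:
frame = [[255 if (x // 8) % 2 else 0 for x in range(64)] for _ in range(48)]
audio = frame_to_samples(frame)
```

Because the scan reads the whole frame, any visual change, a cut, a flare, a camera move, is immediately audible, which is what unsettles the audience's assumptions about causality.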

The third arbitrary approach is looser and closer to asynchronism, as in this model moving images are not synchronised note for note with music or sounds, but married with syncopated musical rhythms. Each retains its own channel and identity, and the arbitrary element comes in the open-ended nature of this correspondence: there is no beat-by-beat or 4/4 dynamic locking the audio-visual relationship, but rather flashes of momentary adhesion, possibly taking place at different tempi and at different points within the frame. As such there may be many possible moments of adhesion or of equation.

This third approach requires no special digital technology and can be seen in the films of Len Lye and his use of Cuban dance band rhythms, and Malcolm Le Grice’s Berlin Horse (1970) with its looping imagery counterpointed by Brian Eno's piano loops. The choice of image and music is quite particular however: though some adhesion and equation will happen if one combines just about any music and moving imagery, without careful selection and counterpointing the effect can be to diminish rather than enhance both audio-visual elements. Whilst digital technology is not required in this third method, it nonetheless can facilitate the making of syncopated moving images and audio.

The three methods are not exclusive and elements from the three approaches may be combined.

To follow - practice based examples

Thursday, May 25, 2017

The VCS3, a West Coast Synthesizer?

Wendy Carlos’s Switched-On Bach (1968) popularised the idea of the synthesizer, and along with other early Moog players such as Keith Emerson helped shape the perception of it as a keyboard instrument; taking a device potentially capable of producing all manner of previously unheard sounds, and turning it into a form of expanded piano/organ. Compounding this was the Moog philosophy, which favours a form of subtractive synthesis, in which the signal chain takes ‘raw’ oscillator waves from a VCO (Voltage Controlled Oscillator), filters them via a voltage controlled filter (the VCF), and then amplifies them (VCA) to produce the classic ‘warm’ analogue Moog sound.
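The subtractive chain can be caricatured in a few lines of Python, a toy rendering of the signal flow only, with nothing of any real Moog circuit about it: a harmonically rich sawtooth, a low-pass filter that removes the upper partials, and an envelope that shapes the amplitude.

```python
import math

SR = 44100  # sample rate

def vco_saw(freq, n):
    """VCO: a raw sawtooth wave, rich in harmonics."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def vcf_lowpass(samples, cutoff):
    """VCF: a one-pole low-pass filter dulls the upper harmonics."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / SR)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def vca(samples, attack=0.01, release=0.2):
    """VCA: a simple attack/release envelope shapes the loudness."""
    n = len(samples)
    a, r = int(attack * SR), int(release * SR)
    out = []
    for i, x in enumerate(samples):
        if i < a:
            g = i / a
        elif i > n - r:
            g = max(0.0, (n - i) / r)
        else:
            g = 1.0
        out.append(x * g)
    return out

# The hard-wired Minimoog-style chain: VCO -> VCF -> VCA
note = vca(vcf_lowpass(vco_saw(110.0, SR // 2), cutoff=800.0))
```

The fixed left-to-right ordering is exactly what the VCS3's pin matrix refuses: any output can, in principle, be patched to any input.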

Such a linear signal chain, VCO-VCF-VCA, which in the form of the Minimoog became predetermined or hard-wired, has all but become synonymous with analogue synthesis, and there are numerous variations on the theme, all with their fans and their detractors, often arguing over the merits of their respective filters. Ever since, synthesizers produced by Roland, Yamaha and Korg have followed this model with little real variation, making it as easy as possible for the keyboard player to access a small palette of sounds such as ‘screaming leads’, ‘deep basses’ and so on, but offering little scope for more adventurous sonic experiments.

In contrast the British-built VCS3, and its briefcase version the Synthi A, used by many popular artists in the early to mid 1970s including Brian Eno, Pink Floyd, Kraftwerk, Tangerine Dream, Hawkwind and Jean-Michel Jarre, does not come with a built-in keyboard (though a separate unit is available), and has a unique pin matrix system that allows great flexibility in terms of patching the various components together. Though the classic Moog chain is possible, it is not predetermined, and the matrix system together with the wide-ranging oscillators, ring modulator, and quirky trapezoid envelope generator encourages experimentation. This is indeed how it was initially used – often to provide explicitly electronic sounds rather than imitations of conventional instruments or the classic filter-swept Moog sound.

In this way the VCS3 can be aligned with the philosophy of West Coast synthesizer builders such as Buchla and Serge. In the West Coast philosophy one starts with what is called a complex oscillator, whose output is waveshaped rather than filtered to produce different timbres. Early Buchlas didn't have a filter as such. FM synthesis, and much more sophisticated envelope or slope generators that can be re-triggered and act as a form of LFO, all play a part in the West Coast sound, much favoured by composers such as Morton Subotnick and Suzanne Ciani. Buchlas were expensive but developed a niche and loyal following, and there was little imperative to try and compete with the success of Moog, let alone the Japanese manufacturers who came along in the late 1970s.

The VCS3 offers many of the features of the West Coast synthesizer, but in a reduced form. The oscillators' waveshapes can be swept to produce different timbres, but only by hand; to access CV control one needs to modify the standard model. The trapezoid generator can re-trigger but offers less scope than Buchla or Serge envelopes or slope generators. FM synthesis and hard sync are possible, though again the latter requires modification. In short the VCS3 has all the makings and potential of West Coast synthesis, with the added flexibility of the patch matrix, but by comparison is limited in various ways.

Having been taken up enthusiastically by many popular music artists (as per the list above) in the early 1970s, and found in many UK university studios (such as Goldsmiths and Morley College) and radio stations such as the BBC Radiophonic Workshop and WDR (Studio für elektronische Musik des Westdeutschen Rundfunks), both of which ordered the large Synthi 100, EMS, who produced the VCS3, were initially very successful, but the lack of either an East or a West Coast philosophy hindered development.

In the mainstream, many bands had by the mid 1970s moved away from the VCS3 to the Minimoog, or kept the former as a special FX unit whilst the latter was used to play lead lines. The more experimental university and radio station studios were not that dissimilar: by the late 1970s the Goldsmiths studio was acquiring a Roland System 100, and the Radiophonic Workshop added Yamaha equipment.

EMS seemed unsure how to respond. A prototype Synthi P was produced with more stable oscillators and a few refinements, but it never went into production, being neither an answer to the Minimoog nor sufficiently different from the Synthi A for people to replace their existing kit. Had EMS embraced the West Coast philosophy and developed its oscillators and trapezoid generators, allying these with the pin matrix and Zinovieff’s investigation of computer controlled circuits, then it could have had a future as Buchla did; instead EMS went bankrupt in 1979.

This was not the end of EMS: after the company changed hands a number of times, Robin Wood, a former employee, now produces very limited quantities of VCS3s from his Cornwall base. As a compact synthesizer it still offers much greater scope for experimentation than most commercial synths, and the modifications listed above can be added when ordering. Nonetheless the basic oscillator and trapezoid designs are unchanged from the model produced in the 1970s; whereas Buchla continued to develop and expand, inventing new components up to his death, the VCS3 has been frozen in time. More recently, new companies such as Make Noise and Pittsburgh Modular have begun to produce synthesizers that combine elements of East and West Coast philosophies. The VCS3's matrix routing remains unique, and with enhanced oscillators and digital control EMS could, if so inclined, produce a British contemporary synthesizer that was a worthy heir to the VCS3.

Tuesday, May 16, 2017


Back in 1990/91 I shot a few reels of Standard 8 film, making much use of in-camera superimposition. Footage shot at the old London Filmmakers Co-op can be seen in another post, but a reel was also shot in the woods near my then flat in New Eltham. There are numerous sections where the film became light fogged with flashes of yellow and red. 'Flares' takes a very short section from the beginning of the reel, and loops it three times. Loops two and three are a little shorter, causing them to slip in and out of sync with each other as they repeat. Placed side by side, rather as with the Chronocuts, elements seem to move from one frame to the other, in particular the light-fogged 'flares'. Unlike the Chronocuts, which maintain a fixed time interval, the different loop lengths cause the 'flares' to dance about somewhat unpredictably. The soundtrack was produced by reworking some Max/MSP/Jitter moving image to sound patches I made for 'Fleshtones'. Here the changing luminosity produces a series of notes which are then fed to a software vocoder and tweaked in real time, creating a chord each time the light changes. As the piece progresses more overlays of both sound and image were added. The whole process is (aside from the footage) entirely digital, and I was keen to avoid the piece fetishising analogue aberrations in the way pop videos include self-consciously scratchy Super 8 as a stylistic device. 'Flares' seems to escape retro nostalgia through the linkage of the variations in the footage to the mechanism of sound production. What we hear is clearly not optical sound but a digital process which as such declares its material (in as much as digital ever can) and thus acknowledges the digitised footage as source or sample rather than as badge of analogue authenticity.

Flares from Philip Sanderson on Vimeo.

Friday, April 07, 2017

Cut-up incantations over different grades of electronic mudslide

Review in the May issue of the Wire magazine of No No No No, a download release that came out late last year. 

Philip Sanderson was first musically active as part of the duo Storm Bugs on the early 1980s cassette scene, a DIY scene predicated on cheap reproduction, and the Bandcamp era offers such micro-cultures an ideal second life (for now.) So as well as making the original output of Storm Bugs’ Snatch Tapes label available, it’s also given Sanderson an outlet for a run of new tapes. No No No No, on which Sanderson mutters cut-up incantations over different grades of electronic mudslide, doesn’t sound out of place in 2017 either; like Mordant Music, Ship Canal or Hacker Farm, it’s a very English sound. Made with the means to hand, and completely unafraid of grime or decay; these are sounds left out of the fridge, pulled out from under sofas, disinterred from lofts with a dusting of fibreglass. Sam Davies 

Saturday, February 18, 2017

History of the London Filmmakers Co-op (in French)

In the early 1990s I was rooting round in the LFMC cupboard and came across a somewhat unloved wind-up Kodak Standard 8 camera. Unlike Super 8, Standard 8 can be passed through the camera several times to create superimpositions. If you underexpose the first layer you get a degree of latensification, enhancing the shadow areas in subsequent layers. In practice with the Standard 8 camera I was using it is all a little hit and miss, but it does work, as can be seen in the water ripples half way through. The footage was shot mostly in and around the Gloucester Avenue LFMC building and on the Regent's Canal next door. There were also some shots from my then home in Forest Hill and even a touch of Whitewall Creek down in Strood. The footage was transferred to Umatic and then to digital in the late 90s. I still have the reel of film so should get a decent transfer done sometime. The tongue-in-cheek soundtrack is contemporary.

Tuesday, February 07, 2017

I Haven't Stopped Dancing Yet

In narrative film, synchronised sound is linked causally to onscreen action or dialogue. Even when the source of the sound is not immediately visible, such as approaching footsteps behind a door, the dynamic is such that the short-term absence of the visual is designed to create anticipation that is then dramatically and visually resolved on screen by the sound source becoming visible. The addition of wild track sound, such as, say, birdsong over a country scene, helps to further cement the hermetically sealed narrative construct, papering over edits and providing continuity between disparate shots. Add in music and one has a self-contained story world.

Outside of this narrative go round filmmakers such as Guy Sherwin in his optical film series including Railings (1977) and Musical Stairs (1977) use the footage itself transferred onto the optical track as a sound source. It becomes literally the manipulated image of the railings and the stairs that produces the sound rather than the representation of onscreen action. This process helps to break open the hermetically sealed sound and image film world.

I have been experimenting for some time, across a number of pieces, with digital techniques that mirror and extend these optical sound experiments. In the case of I Haven’t Stopped Dancing Yet (2017), an image taken back in the early 80s when visiting my parents in Kent is manipulated in various ways. The image flips from side to side and twists in a crude and humorous approximation of dancing. Unlike optical sound, here it is the data from the digital manipulations of the image that is then numerically turned into audio. So when the image flips from side to side a steady rhythm is created, but then as the image is stretched one gets a sound similar to that from a scratched record. Humour aside, all of this helps the viewer question audio-visual causality. Is the man dancing to the music or being danced by it? In practice it is somewhere between the two.
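The principle of sonifying the transform data, rather than the pixels, can be sketched in Python. This is an invented toy, not the actual patch: a 'flip' event yields a percussive click, while a 'stretch' factor bends an oscillator's pitch, roughly like a dragged record.

```python
import math

SR = 44100  # sample rate

def transforms_to_audio(events, seconds_per_event=0.25):
    """Turn a list of per-step image transforms into audio samples.
    `events` is a list of (kind, value) pairs where kind is 'flip'
    (value: direction as a bool) or 'stretch' (value: scale factor).
    These event names are hypothetical stand-ins for real transform data."""
    out = []
    n = int(seconds_per_event * SR)
    phase = 0.0
    for kind, value in events:
        for i in range(n):
            if kind == 'flip':
                # each flip fires a short decaying click, giving a rhythm
                out.append(math.exp(-i / (0.01 * SR)) * (1.0 if value else -1.0))
            else:  # 'stretch'
                # the stretch factor scales the oscillator's frequency
                freq = 220.0 * value
                phase += 2.0 * math.pi * freq / SR
                out.append(0.5 * math.sin(phase))
    return out

audio = transforms_to_audio([('flip', True), ('flip', False),
                             ('stretch', 1.5), ('stretch', 0.75)])
```

Because the sound derives from the manipulation data rather than the image content, the causal question stays open: the same numbers are simultaneously moving the picture and making the noise.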

Monday, January 02, 2017

Towards an Asynchronous Cinema

One of the things that happened in 2016 was I finally got to complete my PhD. I say finally as I had been umming and ahhing over doing one for some years; initially wanting to do a practice based PhD, but never quite finding the right supervisor or setting. A chance conversation with Michael Szpakowski at a screening of Kerry Baldry's One minute programme at Furtherfield led to Westminster University where Michael was completing a PhD by publication. The publications being not texts in the traditional sense, but videos made in the last ten years. This seemed an ideal pathway and I signed up as well.

For a PhD by publication, alongside the work, one writes a paper of anywhere between 12,000 and 60,000 words. Reflecting on one's own practice turned out to be tricky: talking about one's work in the third person as if one were reviewing it is best avoided, and locating the videos in a critical context informed by other artists' moving image pieces can both exaggerate their influence and/or appear as if one is trying to insert oneself into the canon. For example, a strong parallel with my own use of sound is John Smith's The Girl Chewing Gum (1976), and this is cited at numerous points in the thesis. However it would be very hard to make any work if one was constantly referring back to some seminal piece; rather, a number of influences and ideas tend to circulate, often in some unspoken way, in one's brain before morphing into a new idea/piece. So the thesis in part makes explicit what was at the time implicit or internalised.

Theoretically I found myself being drawn back to the Eisenstein (et al) Statement on Sound (1928) and the dangers of adhesion between sound and image as potential illusionist mutual reinforcement. It was not Eisenstein, however, but another of the signatories to the Statement, Pudovkin, whose nuanced use of asynchronism in a document a year later proved more useful. For Pudovkin asynchronism implies a careful juxtaposition of sound and image, not wilful non-synchronisation. Key to this is that the asynchronous use of sound is punctuated with moments of adhesion, creating a push-pull dynamic that questions causality between the visual and the auditory and leads to dialectic.

In the coming year I shall try and unpack and extend some of the ideas in the thesis, for now here is a link to the paper.

Sunday, December 25, 2016

Circle Lines

Appealing as the idea of synaesthesia is, the relationship between an on-screen visual form and a sound is ultimately arbitrary. A shape could for example expand or change from blue to green synchronised to a sound rising in pitch, but just as easily it could contract and change from red to yellow, and the sound still rise. Such relationships are arbitrary rather than essential, but once a reciprocal framework is established we as viewers will tend to actively adhere sound and image together and see/hear one as in some way causing the other. To create a dialectic one can allow, even encourage, this adhesion to take hold, but then must offset or upset it in some way to reveal the arbitrary. Circle Lines seeks to do this by having two sound and image production sources that operate in different ways: circles and lines. The circles create one form of oscillation relative to their size and position; the lines function in a different way, producing a stepped sequence according to their position on screen. To add confusion the two overlap visually and acoustically. This 'Test One' is just that: a slightly indulgent racket that will form part of a longer piece in 2017.

Saturday, December 10, 2016


Using a form of optical sound married with generated representational imagery

Sunday, October 02, 2016

Tracking - Railway Abstractions

Inspired by the sounds, visual effects, shifting light patterns, temporal movements, multi-dimensional perspectives, etc. that one sees and hears on railways and in railway films – oh, and here and there a slight pun on analogue video tape tracking.

Saturday, September 24, 2016

Live Chronocut Machine
The first Chronocuts were made using a web browser, a very unpredictable approach, which at times could in itself be interesting. This technique was updated to using a matte to make slices, and then cutting and pasting clips at intervals. A couple of weeks was spent in 2010 trying to automate the process in After Effects and make something more immediate, but with limited success: long render times and CPU overload meant the technique wasn't quicker or more flexible. Over the summer holidays just gone I was revisiting some old patches and had another go in Max, and very quickly it came together. A live camera input was tried and here is a minute or so. Though it initially just encourages 'larking about', there are clearly numerous possibilities.

Monday, August 29, 2016

Moth Flight

Short, 75 sec piece selected for the Amy Johnson Festival 2016 programme curated by Kerry Baldry in Hull a couple of months ago. 

Saturday, November 28, 2015


Keen followers of this blog may well remember the Chronocuts series that started back in 2008. It all began as a happy accident: whilst testing a new web page in 2007 which contained several movie clips, I noticed that the movies loaded one after the other, in sequential order, with a one-frame gap between each. By using the same footage for each clip an echoing, repetitious moving image was created. With 24 clips placed side-by-side one could in effect watch one second’s worth of footage simultaneously, upsetting the singularity of the cinematic moment.

Using web browsers proved to be too dependent on the speed of the web connection, and indeed on the browser – in Internet Explorer the clips loaded sequentially, but not in many others – so a technique was developed to create new single movies built from several slices. As source footage the Chronocuts used scenes from familiar films, ranging from Lindsay Anderson’s If to Hitchcock’s Psycho. Our cinematic memory of the scenes and how they unfold makes the temporal and spatial transformation all the more acute. Rather than glimpse a movement once, we see it repeated, visually and aurally, revealing new relationships and dynamics in the footage.

So why the post now? Well next week – the 2nd of December to be precise – I will be screening 5 or so Chronocuts upstairs at the Memorial Gallery in Hastings. Especially for the exhibition I have made brand new higher definition versions, an extended version of Iffy, plus a new piece, Where the air is clear.

Tuesday, September 15, 2015

Storm Bugs in the Wire

Get yourself a copy of the October issue of the Wire magazine to read a two-page spread on Storm Bugs and Film of the Same Name.

Thursday, April 16, 2015

Ice Yacht – Pole of Cold

Tucked away on Snatch 3 (the last of the Snatch Tapes compilations released in 1981) is a track by Ice Yacht called ‘0 Degrees North’. An austere piece of drum loop and drone music, there was talk of a full tape having been recorded by the band (?) but no documentation of any further releases can be found.

Rumour had it that members of Ice Yacht had embarked on an ill-fated trip to the North Pole attempting to retrace a journey made by the Norwegian explorer Fridtjof Nansen in 1888. This story was given credence when last year a polar research group uncovered a cassette tape in a vacuum-sealed case in the permafrost. The white tape was marked simply Ice Yacht – Pole of Cold.

After letting the tape thaw out it was baked in a temperature-controlled oven for 5 hours, and though some oxide had peeled away the tape was playable, allowing the transfer of the recordings. Pole of Cold contains six tracks of analogue electronics, seemingly mapping an arctic exploration to the coldest place on the planet, though exactly when and where they were recorded is unknown.

Fragment Factory have now made 100 numbered replica copies of the tape and it is available now on white cassette.