The PhotoIreland festival is in full swing right now and I am running around trying to catch as much of it as I can. I went to a talk last week by a curator from London called Rodrigo Orrantia entitled “Photography And The Search For Lost Time”. In spite of the fact that there were only about 5 people there, it was really good, and Orrantia was an enthusiastic and interesting presenter. He was talking about photographic artists that address issues to do with time in their work and stretch the temporal boundaries of what might usually be considered normal photography.
He talked about some I was already familiar with (Tokihiro Sato, Hiroshi Sugimoto) but also introduced me to some that I hadn’t encountered before, for example Idris Khan. It’s questionable whether Khan should be considered a photographer at all; essentially he is a digital artist whose raw materials are typically photographs, often appropriated rather than taken himself. What he does is this – he takes sequences of images and layers them together in Photoshop to create a single composite image. So, for example, he has produced several works based on Bernd and Hilla Becher’s photographic typologies. The image below was done by compositing together every one of their spherical gasholder pictures.
He has also applied something similar to books. By scanning every page of Roland Barthes’ Camera Lucida and combining them, he produces an image which is some sort of visual synthesis of the book. He has done the same thing with Sontag’s On Photography and with the Koran.
I really like his work. I like the way it takes something that would normally entail extended contemplation over time and compresses it into a single image that can be taken in at one moment. I like the way it draws you in visually and gets you trying to figure out how the visual traces in the result relate to the source material. I like the way that much of his source material consists of seminal texts in photography theory (the Bechers’ photographs, Barthes, Sontag), so he is clearly inviting us to consider the ramifications of what he is doing within that particular context. And I like the way that he is creating visually interesting but semi-abstract imagery by means of a very clearly defined process.
Another artist who cropped up was Jim Campbell, and specifically an image he created by taking all of the frames of the Hitchcock film Psycho and averaging them into a single image. You can see the result below (weirdly, I had come away from the talk thinking that the Psycho image was by Idris Khan as well, and it was only when I went looking for it that I realised it wasn’t).
This is a fascinating idea. The surprising thing about the picture above is that distinct details do emerge; for example, there is a lampshade clearly visible in the top left. I haven’t seen the film in a long time so I can’t verify this, but there must be a scene with a bright lampshade in it that goes on for a while and is a single fixed shot (or at least cuts to that shot a lot). Orrantia drew attention to the parallels between this and Sugimoto’s Theaters series but pointed out a distinct difference. Sugimoto is using an analogue process of addition, whereby each “pixel” of the image is created by adding together the amount of light that strikes it over the course of the long exposure. Campbell, on the other hand, is using a digital process of averaging, whereby each pixel is created by averaging the amount of light that strikes it over the course of the movie. So where Sugimoto gets a white screen, Campbell doesn’t.
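The difference between the two processes can be sketched numerically. This is just a toy illustration of my own (nothing from the talk), following a single pixel whose brightness is recorded over many frames:

```python
# Toy comparison of additive "long exposure" vs per-pixel averaging.
# Brightness values are normalised to the range 0.0 (black) to 1.0 (white).

frames = [0.9] * 100 + [0.2] * 100   # one pixel: bright half the time, dim half

# Sugimoto-style addition: light keeps accumulating, then the result
# clips at the maximum the film/paper can record -- it blows out to white.
additive = min(sum(frames), 1.0)

# Campbell-style averaging: the mean can never exceed the brightest
# single frame, so the pixel settles at a mid-grey instead of white.
average = sum(frames) / len(frames)

print(additive)  # 1.0 -- blown out
print(average)   # ~0.55 -- detail preserved
```

That clipping step is why Sugimoto’s cinema screens come out pure white while Campbell’s composite keeps recoverable detail in the bright areas.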
This obviously got me thinking about how Campbell’s approach could be adapted to the long-exposure photography at gigs that I am working on, and whether by doing so I could solve the problem of areas of the image being overexposed due to strong lights (or whited-out like Sugimoto’s cinema screens). It would entail videoing the song (or even the whole gig) from the same position as I photograph, with the video camera fixed on a tripod. I would then have to use some sort of software to extract all of the frames from the video sequence into separate images (I think most video editing software can do this) and then composite them together with some sort of pixel-by-pixel averaging process. I’m sure this could be done, but there are a few snags, one of which is resolution. When I scan the images that I get from the 4×5 camera I get a huge resolution, and I need this resolution to capture the kind of detail within the image that I want. It’s normally around 10,000 by 8,000 pixels and, as my friend Stephen Sheridan commented to me last week, not even one of the fabled Red One digital video cameras is going to do that for you. Still, it’s an interesting thought.
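The averaging step itself would be straightforward. Here is a rough sketch of how it might look, assuming the frames have already been decoded into pixel arrays (in practice they would come from a video reader such as OpenCV’s VideoCapture, or from frames exported by editing software; the tiny synthetic frames below are just stand-ins so the idea can be demonstrated):

```python
import numpy as np

def average_frames(frames):
    """Per-pixel mean of a sequence of equally-sized frames.

    Accumulates in float64 so that thousands of 8-bit frames can be
    summed without overflow before dividing by the frame count.
    """
    total = None
    count = 0
    for frame in frames:
        f = frame.astype(np.float64)
        total = f if total is None else total + f
        count += 1
    return (total / count).astype(np.uint8)

# Synthetic stand-ins for video frames: 3x3 greyscale images with varying
# overall brightness, plus one pixel that is bright in every frame
# (like the lampshade in the Psycho composite).
demo = [np.full((3, 3), v, dtype=np.uint8) for v in (10, 20, 30)]
for f in demo:
    f[0, 0] = 240   # the persistently bright pixel

result = average_frames(demo)
print(result[0, 0])  # 240 -- a steady highlight survives the averaging
print(result[1, 1])  # 20  -- everything else settles to its mean
```

The nice property for the gig problem is visible even in this toy: a constantly lit spot stays bright but never clips, while everything that flickers or moves is pulled down towards a mid-tone.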
The photograph at the top of this post is the excellent Squarehead playing in The Button Factory a while ago (opening up for Black Lips). It’s an exposure of about 3 minutes, which is pretty much the length of all their songs. Here’s one of those songs.