In this post I consider the Introduction and first chapter of Lev Manovich’s influential 2001 book, The Language Of New Media 1. Manovich’s book is a comprehensive and wide-ranging attempt to provide what he calls a “theory of the present”: an analysis of new media as it emerges in the late 20th Century. Future posts will look at subsequent chapters of the book.
The Language Of New Media covers a lot of ground over the course of its six chapters but perhaps the most concise insight into where Manovich is coming from can be gleaned from an autobiographical anecdote he relates at the very start. Manovich studied computer science in Moscow in the mid-70s and he recalls how neither he nor any of his classmates had access to computers on which to test the programs they were learning to write. Everything was done on paper rather than input into a machine, and the experienced professors would evaluate the work of students by mentally executing the hand-written programs that were submitted to them.
What this story illustrates is that there is an abstract idea of computation that exists independently of actual computers, and that the new media forms these computers have facilitated have an ontology rooted not so much in the workings and practices of existing hardware and software as in the core principles of computer science that have made such hardware and software possible. Manovich calls his approach digital materialism – looking at the principles of computing in order to see how they conspire to produce new cultural objects – and proposes to use this as a means of constructing a theory of new media from the ground up.
This materialist approach informs the definition that he produces for new media. For Manovich, new media is the result of the “translation of all existing media into numerical data accessible through computers” (p.20). In other words, what is new about new media is that it is a form of media that has been digitised, turned into numbers, and is therefore subject to computation, i.e. can be processed by computer programs. There are some curious consequences of this. For example, a film that is shot on a digital camera but otherwise has the same stylistic and aesthetic properties as one from the early 20th Century still counts as “new media”. What matters here is the possibility of types of manipulation that only become feasible by means of the convergence of media and computer technology.
Manovich emphasises the importance of this convergence repeatedly and situates his analysis historically by describing how new media evolved by means of two different trajectories of innovation. The first one is the development of modern media forms (photography, film etc.). This starts with Daguerre in 1839 and is fundamentally about the development of techniques for media storage. This trajectory of course also encompasses the reproduction and transmission of imagery and the consequent establishment of the ‘mass media’. The second trajectory is the development of computer technology which starts with Babbage around the same time and really takes off in the middle of the 20th Century with the innovations of Alan Turing among others. So, media and computing develop in parallel, and Manovich suggests, in a somewhat Foucauldian way, that both were necessary for the functioning of a modern society (p.22):
(the) ability to disseminate the same texts, images and sounds to millions of citizens – thus assuring the same ideological beliefs – was as essential as the ability to keep track of their birth records, employment records, medical records, and police records.
This parallel development does not however mean that there are not areas of overlap between the two worlds. For example, Manovich points out that Babbage’s initial ideas for the computing machine drew inspiration from J.M. Jacquard’s loom, a device which was controlled by punched paper cards and therefore can be seen as an early example of a machine for generating imagery. Similarly the Universal Turing Machine is essentially a “kind of film camera and projector at once” (p.24). These moments of contact eventually culminate in full convergence when techniques for representing analogue media in digital form are arrived at and, as Manovich has it (p.25):
All existing media are translated into numerical data accessible for the computer. The result: graphics, moving images, sounds, shapes, spaces, and texts become computable, that is, simply sets of computer data. In short, media become new media.
This concentration on the nuts and bolts (or ones and zeros) leads him to establish a set of five principles of new media which are intended to differentiate such forms from their earlier counterparts. The first of these, obviously enough, is numerical representation. New media is media represented in terms of numbers: if this isn’t the case then it’s not new media. For Manovich, numerical representation implies that the content (a) is described mathematically and (b) can be operated on algorithmically. New media can be created from old media by digitisation, which in turn consists of sampling and quantisation. The sampling process turns something continuous into something discrete. However, as Manovich points out, we must be careful not to fall into the trap of assuming there is a direct correspondence between analogue/digital and continuous/discrete, as many earlier pre-digital media forms combined both the discrete and the continuous. For example, while each individual frame of film is continuous in nature, the set of frames that make up the film can be seen as a sequence of discrete elements. Essentially a traditional film camera is sampling the world as presented to it 24 times a second 3. The difference between this and new media, however, is that these samples are not then quantised (converted to numbers); they remain in an analogue state as opposed to a digital one, and are therefore not amenable to computational manipulation.
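To make the two steps of digitisation concrete, here is a minimal Python sketch (the signal, sample rate and number of levels are invented for illustration): sampling takes readings of a continuous function at discrete instants, and quantisation then maps each reading to one of a fixed set of integers. Only after the second step do we have numbers that a program can operate on.

```python
import math

def digitise(signal, duration, sample_rate, levels):
    """Digitise a continuous signal in two steps:
    sample at discrete instants, then quantise each
    sample to one of a fixed set of integer levels."""
    n_samples = int(duration * sample_rate)
    digital = []
    for i in range(n_samples):
        t = i / sample_rate          # sampling: discrete instants in time
        value = signal(t)            # continuous amplitude in [-1, 1]
        # quantisation: map the continuous value onto the integers 0..levels-1
        level = round((value + 1) / 2 * (levels - 1))
        digital.append(level)
    return digital

# A 1 Hz sine wave digitised over one second: 8 samples, 16 levels
samples = digitise(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 16)
```

A film camera, in these terms, performs only the first step: it samples 24 times a second but stores each sample as a continuous photographic image rather than a set of numbers.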
Another important thing to note at this point is that something has to be composed of discrete units in order to function as a language. In the case of natural languages we have sentences, words, letters, morphemes and other linguistic elements. In the case of visual language we might turn to semiotics and talk about the presence of various signs within the image and so on. New media is discrete through and through, as the computing technology upon which it is based has no means of handling the continuous at all. Its fundamental discrete quantity is the bit (which can be in a state of on/off) and everything is built upon this, including discrete representations of numbers, words, images and so on. This means that new media, at all levels, is ripe for interpretation as language and Manovich makes it clear, not least through the actual title of his book, that this is a central concern. This is something of a formalist approach, and it is confirmed early on when he states that he is interested in “emergent conventions, recurrent design patterns, and key forms of new media” (p.12) with less of a focus on “sociological, economic and political dimensions” (p.12).
The second principle of new media that Manovich identifies is modularity. What he means by this is that ” … a new media object consists of independent parts, each of which consists of smaller independent parts, and so on, down to the level of the smallest ‘atoms’ – pixels, 3D points, or text characters” (p.31) 4. Modularity is a well-established principle in computer science whereby problems are broken down into sub-problems, these sub-problems broken down into smaller sub-problems, and units of code (or modules) created to solve them. When constructed correctly, these modules can be re-used and re-combined in different ways to solve new problems. For new media, this principle of modularity is the thing that provides the bridge between high-level applications and low-level representations. The example that Manovich uses is the web page. This consists of a collection of discrete new media elements (e.g. images, text, graphics) that are combined together. Each of these elements is itself composed of independent parts (pixels in the case of images, words/letters in the case of text, shapes/vectors in the case of graphics), each of which may or may not consist of smaller independent parts but each of which ultimately boils down to a numerical representation of some kind or another.
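The recursive character of this can be sketched in a few lines of Python (the toy page structure and the flatten helper are invented for illustration, not taken from Manovich): a page is a collection of elements, each element a collection of parts, and descending through any of them bottoms out in plain numbers.

```python
# A toy web page as nested modules: each element decomposes into
# independent parts, which ultimately bottom out in numerical values.
page = {
    "image": [[255, 0, 0], [0, 255, 0]],    # pixels as RGB triples
    "text": [ord(c) for c in "hello"],      # characters as code points
    "graphic": [(0.0, 0.0), (1.0, 0.5)],    # vector points as coordinates
}

def flatten(obj):
    """Recursively reduce any module to its numeric 'atoms'."""
    if isinstance(obj, (int, float)):
        return [obj]
    if isinstance(obj, dict):
        obj = obj.values()
    return [atom for part in obj for atom in flatten(part)]

atoms = flatten(page)   # every level reduces to a flat list of numbers
```

Note how the same function traverses images, text and graphics indifferently: at the bottom they are all just numbers, which is exactly the bridge between modularity and numerical representation.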
When we combine numerical representation together with modularity then we come to Manovich’s third principle, which is that of automation. This refers to the possibility of performing computational operations on new media objects and automatically manipulating them in some fashion. Both of the preceding conditions are necessary for this to be possible. Computer programs ultimately can only manipulate numbers so numerical representation is a must and without a degree of modularity facilitating discrete units at various levels of representation there would be nothing to meaningfully manipulate in the first place. The sorts of operations being referred to here include creation and distribution as well as manipulation. The most interesting aspect of this part of Manovich’s discussion is that automation implies that “human intentionality can be removed from the creative process, at least in part” (p.32). This leads Manovich into a discussion of technologies such as AI and automated search but perhaps what is also worth exploring further here are the philosophical implications of removing, or at least de-centering, human subjectivity from the discourse around new media 5.
Another consequence of the combination of numerical representation and modularity is variability, which is Manovich’s fourth principle. This means that a new media object can exist not just in infinitely many versions but in versions that can all differ from one another. As Manovich puts it: ” … a new media object typically gives rise to many different versions. And rather than being created completely by a human author, these versions are often in part automatically assembled by a computer” (p.36). This property is one of the things that distinguishes new media from Benjamin’s vision of mass reproduction 6. It’s not just the case that we can infinitely reproduce something but also that these reproductions can infinitely vary. Furthermore, these variations can be effected automatically, either explicitly or implicitly, in response to the user. An obvious example is how a website like Amazon customizes the view it presents to the user based on their previous browsing or purchasing history. Manovich points out that this change in media technology parallels a social change that occurred over the course of the 20th Century:
If the logic of old media corresponded to the logic of industrial mass society, the logic of new media fits the logic of postindustrial society, which values individuality over conformity. (p.41).
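As a toy illustration of variability (the template, the modules and the ‘personalisation’ logic here are all invented for the purpose, not drawn from Manovich), the same media object can be assembled automatically into many differing versions, each shaped in part by data about the user:

```python
import itertools

# Modules from which versions of one "media object" are assembled
headlines = ["New arrivals", "Recommended for you"]
layouts = ["grid", "list"]

def render(user_history, headline, layout):
    """Assemble one version of the page for one user,
    with a naive 'personalisation' step based on their history."""
    picks = sorted(user_history)[:3]
    return {"headline": headline, "layout": layout, "items": picks}

# The same object gives rise to many versions, assembled by the machine
versions = [render({"books", "music"}, h, l)
            for h, l in itertools.product(headlines, layouts)]
```

The point is structural rather than practical: because the object is modular and numerical, the combinatorics of its assembly can be handed over to the computer.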
Finally, principle number five is that of transcoding. Strictly speaking this refers to the process of translating information from one coded representation into another. However, Manovich has something slightly different in mind here. Between the base-level numerical representation of new media objects and the form in which these new media objects are presented to us there is a layer of representation, which he calls the computer layer, which formulates things according to principles and procedures that are designed to make the content accessible to computers. This is the layer of data structures (stacks, linked lists, trees), networking (packets, IP addresses, protocols) and programming languages (variables, functions, objects). What Manovich means by transcoding is the ability to translate between this layer and what he calls the cultural layer, which is how the content is then presented to us as viewers and/or users. His point is that the cultural layer is influenced by the computer layer and vice versa. The principles of computer science that are employed in order to build the infrastructure that makes new media possible are not neutral but in fact both constrain and expand what is possible at the cultural layer (just as the demands of the cultural layer help shape how these principles are put into practice at the computer layer).
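A minimal sketch may help make the two layers concrete (the bitmap and the rendering function are hypothetical examples, not Manovich’s): at the computer layer the content is a data structure of numbers, while the cultural layer is what a viewer actually sees, produced by a transcoding step whose conventions are themselves a design choice rather than a neutral given.

```python
# Computer layer: a tiny bitmap stored as a nested list of numbers
computer_layer = [[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]]

def to_cultural_layer(bitmap):
    """Transcode the numeric representation into something visible.
    The choice of symbols here is a cultural convention layered on
    top of the data structure."""
    return "\n".join("".join("#" if bit else "." for bit in row)
                     for row in bitmap)

picture = to_cultural_layer(computer_layer)
# .##.
# #..#
# .##.
```

Changing the data structure constrains what can be shown; changing what we want to show pushes back on how the data is structured, which is the two-way influence Manovich has in mind.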
This notion of the computer layer links back to the previous discussions of both new media as sets of interlocking languages operating at different levels and also the idea of the nonhuman (or at least non-human-subject-oriented) workings of the new media apparatus. There is something of the Marxist base/superstructure distinction at work here also: the suggestion being that in order to understand how the cultural layer operates we need to understand the computer layer that lies beneath it. This is also in keeping with the original methodological notion of digital materialism as introduced at the beginning of the book – if you want to understand something, understand what it’s made of. Or, as Manovich puts it at the end of this section: “To understand the logic of new media, we need to turn to computer science.” (p.48).
Footnotes
- Manovich, L. The Language Of New Media. MIT Press, 2001
- Bolter, J.D. and Grusin, R. Remediation: Understanding New Media MIT Press, 1999
- We see the results of sampling errors, or more specifically undersampling, when the sample rate of 24 per second does not prove sufficient to represent certain forms of movement – for example the well-known effect of wagon wheels appearing to rotate in the wrong direction.
- Manovich’s formulation of this might seem to imply that different forms of new media have different fundamental ‘atoms’ from which everything else is derived but of course, as we noted previously, all new media forms are ultimately comprised of bits at the lowest level.
- See for example Joanna Zylinska’s recent work on “nonhuman photography” (Zylinska, Joanna. “The creative power of nonhuman photography.” (2015): 132-154 and Zylinska, J. (2017). Nonhuman photography. MIT Press.)
- Benjamin, Walter. The work of art in the age of mechanical reproduction. Penguin UK, 2008 (originally published in 1935).