Printed from www.flong.com/texts/essays/essay_4x4/
Contents © 2017 Golan Levin and Collaborators
Essay for 4x4: Beyond Photoshop with Code
Published by Friends of Ed in "4x4: Beyond Photoshop with Code".
Golan Levin, October 2001.
Most art schools today teach classes in "digital art," which to them means "how to use Adobe Photoshop." Although such courses often claim to explore the possibilities of a new medium, they generally explore little more than the possibilities that somebody else (namely, Adobe) has found convenient to package in a piece of commercial software. The fact is that computers are capable of an unimaginably greater number of things than any specific piece of software might lead one to believe. In the short essays I present here, it is my intention to encourage visual artists to understand and work beyond the limitations imposed on them by their software tools. Whereas the other books in this series may have focused on the tricks that allow one to get the most out of Photoshop, I'd like to offer a glimpse of what can be achieved when an artist steps away from Photoshop altogether, and makes their own software tools—with code.
Popular tools like Photoshop and Director have been both a great boon and a great hindrance to the development of interactive media art as a new form. On the one hand, they have radically democratized the production of digital media: today, anyone with a computer can publish and distribute an image or text on the World Wide Web. On the other hand, these tools radically homogenize the process and products of computer-assisted and interactive artmaking. With identical options to choose from, everyone's art begins to look and taste the same.
I believe that individual artists, and not large companies like Adobe and Macromedia, should dictate the possibilities of their chosen media. The tradition of artists creating their own tools is as old as art itself; for centuries, artists ground their own pigments, plucked pig hairs to make their own brushes, and primed their own canvases with glue made from boiled rabbits. Far from distracting artists from their "true purpose", these crafts actually tightened artists' connections to their materials and process. The revolution in software tools over the past decade, by contrast, has disastrously diminished the intimacy of this practice/practitioner relationship. Our tools, created by anonymous engineers for nobody in particular, are mass-produced, mass-distributed, one-size-fits-all. And all too often, we have no idea how we might make such a tool for ourselves, even if we wanted to. In this essay, I hope to convey the idea that artists can still make their own tools. The territory has largely shifted from paint to code: so that is where we must go.
Of course, it's one thing to make exhortations about the necessity of programming skills in the field of digital art, and quite another to actually acquire and develop such skills. Thus I thought it might be helpful or encouraging to share my own story about how I learned to program the kind of software artworks I now make today. I certainly wasn't born knowing how to program; in fact, my background before 1994 was almost entirely in pure fine arts and music composition. I wasn't even especially eager to learn programming, but (as we shall see) I eventually had to learn out of necessity. In this section, I discuss some of the forces that led me down the path to programming; in the next section, I offer some of my speculations on what kinds of programmed artifacts are worth making, once one is able to make them.
For a long time I tried to avoid learning how to program computers. Computer science was so poorly taught when I was an undergraduate that I quickly lost interest. The professors would spend an entire week treating the matter of "floating-point roundoff accumulation errors," and I couldn't have cared less. It didn't help that most of the computer science students were already very experienced programmers. Trained as I was in visual arts and music, I was instead seeking what one might call a 'studio art program in computer science'. I daydreamed about a hypothetical course of study in which I would be permitted to assign my own problems to myself, learning as I went and solving problems because they were meaningful to me. I wasn't lucky enough to find such a program at the time, and so I graduated college with pretty much the same skills that I entered with.
When I finished undergraduate school I got my first job, working as a graphic designer in a Silicon Valley research company. I was responsible for making thousands of Macintosh icons for an experimental software system called Media Streams. These icons comprised the hieroglyphic vocabulary of a comprehensive visual language for video annotation; in theory I had to create an icon for anything that might ever occur in a video or film. I ended up making something close to eight thousand icons.
After a few years I started to have dreams that took place in a 32-by-32-pixel universe. People often talk about whether we dream in color or black and white; I can say for certain that I had at least a few dreams in the 8-bit Macintosh system palette. As my enthusiasm for pixel-pushing dwindled, I found new sources of stimulation in the culture of software development all around me. It was 1994, some friends and mentors had freshly introduced me to the concept of "interactive art", and I wanted more than anything else to understand this new form of expression at a basic level.
Around that time I was reading the Dover reprints of Wassily Kandinsky's Point and Line to Plane (1926), Gyorgy Kepes' Language of Vision (1944), and Paul Klee's Pedagogical Sketchbook (1923). All three of these books are masterpieces of design pedagogy by some of the foremost thinkers of the Bauhaus movement. In their teachings, Kandinsky, Kepes and Klee sought to encourage a rigorous study of what they considered to be the basic formal elements of visual communication: point, line, plane, texture, color, rhythm, balance, and so forth. Only by precisely investigating these basic building materials of graphic communication—in isolation and then in restricted combinations—could an art student eventually hope to successfully create more complex compositions. It seemed logical to me that an analogous approach could be applied to the study and creation of interactive systems. I wanted to understand: what are the formal elements of interaction? The designers I knew all used and recommended Macromedia Director, and so I decided to start there.
The first thing I noticed about using Director was that it was incredibly easy to trigger QuickTime videos, and stereo sound clips, and dithery scene transitions—but rather difficult to do anything as simple as drawing a line. Macromedia finally fixed this a year or two ago, but in 1994, animating an arbitrary line in Director required about a page of rather peculiar code to switch between two different cast members. It was needlessly awful. As if that weren't enough, Director made it impossible to rotate a graphical element! Instead, it forced all of one's visual elements into static orientations, like soldiers locked into upright positions. Finally, Director limited the number of simultaneously manipulable elements to 160. These seemed like basic injustices to me.
What is the proper response to such constraints? I knew one co-worker, an especially committed and talented young designer, who, when he wished to rotate a graphical element in Director, would prepare hundreds of rotated bitmaps beforehand in another tool like After Effects. I recall one occasion when he wanted to control two parameters of a shape at the same time—rotation and color—and the poor fellow ended up pre-rendering a matrix of ten thousand images. He wasn't making fancy graphics, either; just squares and triangles. There was no reason that a computer couldn't render such a simple shape. The results of his labor were stunning—nobody had ever seen Director look like that before—but I couldn't avoid the feeling that my colleague was being bullied by his own tool.
Director's design positively offended me, not only because of the seemingly fundamental things it prevented me from doing, but also because of what it seemed to suggest that I ought to be doing instead: making dithery scene transitions between QuickTime videos. This wasn't what I conceived interactive multimedia to be, and by 1996 I had encountered more such limitations than I could tolerate. I wanted to be able to control a thousand rotating squares, and I didn't feel like waiting out the four years (and three versions of software) that it would take before Macromedia eventually turned its attention to my needs. With a great deal of anxiety I came to the slow realization that all of the things I wanted to make would require the creation of new software.
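The grievance is easy to make concrete: the rotation Director withheld is only a few lines of trigonometry once one writes code directly. The sketch below (in Python for concision, though the same arithmetic carries over to Lingo, Java, or C++; the function names are illustrative inventions, not from any actual tool) computes a thousand rotating squares in a single pass:

```python
import math

def rotate_point(x, y, angle):
    """Rotate a point about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

def rotated_square(cx, cy, half, angle):
    """Compute the four corners of a square centered at (cx, cy) and
    rotated by `angle` radians -- no pre-rendered bitmaps required."""
    corners = [(-half, -half), (half, -half), (half, half), (-half, half)]
    return [(cx + rx, cy + ry)
            for rx, ry in (rotate_point(x, y, angle) for x, y in corners)]

# A thousand squares, each at its own angle, computed in one pass:
squares = [rotated_square((i % 40) * 10, (i // 40) * 10, 4, i * 0.01)
           for i in range(1000)]
```

Drawing these corners to the screen is all that remains; the point is that the geometry itself is trivial once it is expressible at all.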
Not knowing how to write software myself, I first tried a well-worn solution familiar to so many artists new to electronic media: I sought engineering collaborators who could help me "implement my visions." Engineers are, and probably always will be, a highly paid bunch, and this was certainly no less true in Silicon Valley in the mid-1990's. Since I had no money to offer in exchange for their precious engineering time, the answer I usually received was a simple and unequivocal "No."
Occasionally I found an engineer who was between jobs and, astoundingly, would agree to help. But I hardly had time to enjoy the collaboration before they would suddenly evaporate: "Sorry, man. It's a neat art project, but I just got a gig earning 150 an hour." So it happened more than once that I became stranded with half-coded, buggy projects that I could neither fix nor finish myself.
One time I actually found an engineer who was both enthusiastic about my ideas and willing to help me build them. I could hardly believe my luck. I'd give him sketches and then watch over his shoulder while he typed mysterious codes into the computer. "Run it! Run it!" I would say excitedly. He hit a button and we watched the results... and they, umm... didn't look so good. No matter how I tried to explain the particular problem, I found it nearly impossible to articulate in words that he could understand. The problems were visual problems, which I felt and understood in my eye and gut, but could not yet fix or express in the language of the machine.
The final straw was an engineer who let it slip that he was helping me as a kind of charity case: Oh, you poor, little artist—I'll help you out. Already frustrated from trying to cajole and bribe engineers into helping me, the feeling of being pitied sent me over the edge. What am I, stupid? I refused to believe that programming a computer was some kind of rocket science. I bought the programming books and dug in. That was in 1997.
After a year or so of slow progress and false starts, I had learned enough to know that my interests would be artificially constrained if I continued to develop them in a commercial context. In my search for an educational environment that could suit my studies, I seriously considered some art and design schools, but I often found that their approach to the use of technology wasn't rigorous enough. On the other hand, I found many schools in computer science to be boring, impersonal, and focused on tiny ideas. That was when I heard about a peculiar little program that had just been formed at MIT: Professor John Maeda's Aesthetics and Computation Group at the MIT Media Laboratory. Maeda was unusually committed to encouraging his students to explore their artistic endeavors in a technically rigorous way. He had truly founded a remarkable "studio art program in computer science"—a small design school in the heart of a technical university, wherein his students' goals are art and design, but their medium is software. I attended Maeda's program for two years, and continue to recommend it without reservation.
Nowadays my artwork is developed in the Java and C++ programming languages. I usually do all of my sketches as Java applets for the web, and then develop final pieces in C++ for specially-configured computers. This strategy has worked out well for me; I've been especially satisfied with the way in which the Java sketches, which I post on my web pages, allow people to get a small taste of what the larger works are like. Of course, there are plenty of other computer languages, each with different toolkits and advantages, and each differently suited to one's personal tastes and goals; the other computational artists I know use an amazing variety of languages, such as Visual Basic, ActionScript, Perl, C, Max and Lingo. But the more computer languages I learn, the more I realize that they're really all the same—much more similar to one another, in fact, than human spoken languages are. Regardless of one's preferred language, I do think it's essential that an artist-of-the-computer-medium be able to program in some way. For better or for worse, it is the only way in which new computer experiences can be made.
Let me end this section of the essay with a brief anecdote. In my last semester of graduate school I was enrolled in two classes. One was entitled "The Nature of Mathematical Modeling," taught by MIT physicist Neil Gershenfeld. The curriculum began with second-order partial differential equations, and ended with Hidden Markov Models and cluster-weighted modeling. I hardly understood a word, and it nearly fried my brain. It might even have been the most difficult class I had ever taken, were it not for my other course that semester: a course in abstract painting. Now that was truly hard. The simple fact is that in visual design, there are no right answers—we have nothing to rely on but our own raw talents and our own practiced eyes. In the realm of software engineering, by contrast, nearly everything one might care to know about or use is available and documented in a book. Programming, computation, engineering: everything you wish to know is already out there, waiting for you to seize it. May you already possess the talents that can't be taught, so that you may learn and use well the ones that can.
Recently I served on the jury of an interactive media competition, in the course of which we turned our attention to the question: by what metrics shall we evaluate the quality of an interactive artwork? Our interest was more practical than academic; we simply had several hundred submissions to evaluate and thought it would be helpful to have a short checklist of criteria to keep us focused. Without attempting to be comprehensive, our particular jury reckoned that we cared most about the following issues:
- To what extent are the form and content of the work mutually essential in effecting its communication? Are the two wholly inseparable, or is some aspect of either one arbitrary or irrelevant?
- How and to what extent are the acts performed by the user, through interaction with the system, socially significant?
- What is the depth and character of the feedback loop established between the system and its user?
Entire treatises could be written on any of these questions. The first two of these are chiefly about context, content and communication; put differently, these two questions ask: when is something worth communicating? and, when is it well-communicated? There is no conclusion on these topics that I could possibly present to you here: I entrust the fulfillment of these matters to your own passions, aesthetics and intuitions. The third question, however, is simply one of form. It is the closest to being directly observable, and perhaps the least difficult of the three to discuss and address. And so it is this last criterion—the nature of the human-machine feedback loop—to which I would like to turn our attention.
Interactive systems are deceptive because they wholly and implicitly engulf both static and dynamic media. In this way they masquerade as older forms: if an interactive system moves, it is easy to think it is an animation; if it holds still for a moment, we mistake it for an image. We must not be so easily deceived! For interactive experiences are really quite another thing. They are more than spatial, more than temporal, and more yet, even, than spatiotemporal configurations. The defining property of interactive systems is their use of feedback—in which a system's output affects its subsequent input—and their incorporation of people as essential components in this feedback cycle.
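This feedback cycle can be caricatured in a few lines of code. In the toy sketch below (Python, with invented names; an illustration of the principle, not a recipe), the system's previous output is folded back into its next input alongside the participant's gesture, so a single gesture keeps reverberating after the participant falls silent:

```python
def feedback_step(previous_output, user_input, gain=0.5):
    """One cycle of a feedback loop: the system's own prior output
    re-enters as part of its next input, mixed with the user's gesture."""
    return gain * previous_output + user_input

# One gesture, then silence: the system keeps responding,
# because its output continues to feed its input.
state = 0.0
for gesture in [1.0, 0.0, 0.0, 0.0]:
    state = feedback_step(state, gesture)
# state decays: 1.0, 0.5, 0.25, 0.125
```

A static image has no such loop at all; an animation loops without the person in it. The `gain` term here stands in for everything interesting a real system might do with its own history.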
My thinking on the design of successful interactive feedback systems has been considerably shaped by Marshall McLuhan's famous distinction between what he termed "hot" and "cool" media. To McLuhan, "hot" media are high-definition, high-resolution experiences that are "well-filled with data," while "cool" media are low-definition experiences that leave a great deal of information to be filled in by the mind of the viewer or listener. Within McLuhan's scheme, therefore, photography and film would be examples of hot media, while cartoons and telephony would be cool. McLuhan's definitions establish an opposition between the "temperature" of a medium and the degree to which it invites or requires audience participation: hot media demand little completion by their audience, while cool media, "with their promise of depth involvement and integral expression," are highly participatory.
A quick survey of contemporary visual culture clearly shows a large trend toward the development of high-resolution, high-bandwidth, mega-polygon experiences. The products of this focus—typically photorealistic three-dimensional virtual realities and streaming digital movies—have been dazzling and hypnotizing. But our relationship to these spaces is rarely more than that of spectators, and almost never that of creators. The industry's rush to develop these hot experiences, and the expensive machinery they require, has left in its wake numerous fertile and untrammeled technologies for cooler, more participatory media.
My own interest is in the development of sophisticated cool media for interactive communication and personal expression. In pursuing this, I interpret McLuhan's specification for cool media—that they demand "completion by a participant"—quite literally. The notable property of cool media, I believe, is that they blur the distinctions we make between subject and object, enabling the completion of each by the other. An example of such a subject/object distinction is that between author and authored, the blurring of which, according to psychologist Mihaly Csikszentmihalyi, is critical to the Zen-like experience of creative flow. Another such distinction is that between sender and recipient, to whose dissolution, wrote the philosopher Georges Bataille, we owe the delight of communication itself. My goal is to understand, build, and encourage the proliferation of systems that successfully blur these boundaries, enabling the vibrant flow and authentic communication that are possible when people engage, through a medium, in a transparent, continuous and transformative dialogue with themselves and others. My personal criteria for interactive media begin, therefore, not with the question "for how long can I suspend my disbelief in it?" but with the questions: for how long can I feel it to be a seamless extension of myself? and to what depth can I feel connected to another person through it?
To answer these questions well, an interactive medium must become a kind of prosthesis, a participatory information-prosthesis so intimately adapted to us that our awareness of it drops away when we engage with it. To do this, we must rethink what it means for a medium to be personal, for a defining feature of our modern era is that nearly everything we touch or experience is impersonal and canned. "Canned" is my word for it, anyway; one might equivalently describe much of the world around us as pre-prepared or mass-produced. Few aspects of life have been spared from this depressing homogenization. Buildings, food, clothing, furniture, entertainment, even the ground beneath our feet: each unit is rolled off the assembly line or poured out of the vat exactly like every other. It is therefore ironic that computers—as mass-produced, putty-colored boxes, the most quintessentially generic items of all—have a potential unmatched by any other technology to break this trend of homogenization and respond to us in ways that are uniquely individual. Computers can see us, listen to us, sense our movements, collect data about us, connect remote people together, and read what we write! And yet it seems that designers of online art have for the most part foregone the opportunities presented by computer input technologies, and instead seized on the computer as yet another mechanism for the delivery and display of pre-prepared content chunks.
Most of our new "interactive" multimedia artworks are in fact regressions to the old metaphors of the film, the record-player, or the slideshow. The cost of this regression is not merely that we have missed an opportunity to create more "personalized" media. Because computers can sense us in ways that no film or slideshow ever could, our expectations for all of our experiences with the computer are different. From our interactions with the many software systems that do collect and respond to our input—such as word processors, chat spaces and web browsers—we have come to value the feeling computers give us of unlimited freedom and possibility. This is why our computer can ask us: "Where do you want to go today?"—but why our television cannot. If the landscape of Flash movies that comprise the bulk of online visual culture is anything to judge by, then the field of multimedia art has not yet caught up with this intuition. Instead, such works' prevalent use of canned audiovisual materials is one of the clearest indications that their space of possibilities is fundamentally limited. And when a system's possibilities are easily or quickly exhausted, we get bored.
To create engaging interactive systems which can sustain our interest over repeated encounters, I believe we must eschew canned materials in favor of creating interesting generative relationships between input and output. When looked at in this way, the number of possible relationships and their space of possible outcomes is nearly unlimited. No matter what form the user's input takes—whether textual, gestural, auditory or what-have-you—we can imagine a way in which it can be amplified, shrunk, sharpened, dulled, embellished, simplified, stored, reversed, echoed, repeated, reflected, slowed, accelerated, rotated, shifted, fragmented, merged, negated, transmitted, transformed, transmogrified. Taken as a group, these augmentation techniques can be used to present the user with feedback systems whose rules produce surprising and engaging results. If they strike the right balance of intelligibility, novelty and utility, the experience of incorporating oneself into these systems can be deeply enjoyable and perhaps even socially significant.
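A few of the transformations listed above are easy to sketch. Treating a gesture as a list of (x, y) points, amplification, echo, and reversal might look like this (a Python illustration with invented names; one of countless possible formulations, not a canonical one):

```python
def amplify(gesture, factor):
    """Scale every point of a gesture (a list of (x, y) tuples)."""
    return [(x * factor, y * factor) for x, y in gesture]

def echo(gesture, delay, decay):
    """Append a shifted, attenuated copy of the gesture to itself."""
    return gesture + [(x + delay, y * decay) for x, y in gesture]

def reverse(gesture):
    """Play the gesture back in the opposite order."""
    return gesture[::-1]

# Transformations compose: an amplified stroke, then its echo.
stroke = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
response = echo(amplify(stroke, 2.0), delay=5.0, decay=0.5)
```

The individual operations are trivial; the expressive interest lies in how they are chained, parameterized, and bound to live input.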
The creation of generative relationships in interactive artworks is most hampered by the limited malleability of our media. Consider digital audio recordings as an example: there are only a few basic dimensions of a recording which we can control very easily, such as its volume and playback speed. With a bit more difficulty, we can modify the recording's overall tone, or even decouple its tempo from its pitch. But unlike a live band, we can't ask the guitarist in a given audio recording to change his melody, or even to stop playing for a moment. In this way, the recording is inflexible and static. To circumvent such limitations in a medium's malleability, a common design strategy is to switch to a related medium which permits control at a finer level of granularity. In the example of digital sound, we can switch to the use of multitracked MIDI instruments—with which we can control the pitch, volume, instrument, and duration of every individual note—or even direct waveform synthesis, with which we can control the most minute details of timbre and sonic texture. The finer our level of granularity, the more control we have, and also the more work we must do to create interesting results. It's a tradeoff that I truly believe is worth making. From the programmer's point of view, we are faced with the choice of triggering something as simple as a "play" button, or doing the hard work of developing a generative synthesis algorithm. But from the user's point of view, we are faced with hearing the same melody...again, and again...or hearing a sound which responds to our unique presence in the world of information.
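At the finest level of granularity, direct waveform synthesis is itself only a few lines: every sample is computed rather than played back, so pitch, loudness, and (with more elaborate formulas) timbre remain under the program's control at every instant. A minimal sketch in Python, using a pure sine tone as the simplest possible timbre:

```python
import math

def synthesize(freq_hz, amplitude, duration_s, sample_rate=44100):
    """Direct synthesis of a sine tone: compute each sample explicitly,
    instead of replaying a fixed recording."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2.0 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# Unlike a recording, this "guitarist" will happily change its melody:
note_a = synthesize(440.0, 0.8, 0.01)   # A4
note_e = synthesize(659.25, 0.8, 0.01)  # E5
```

Swapping the sine for any other function of time, or driving `freq_hz` and `amplitude` from a user's gesture, is exactly the kind of generative input-output relationship a "play" button forecloses.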
Much of my own work has focused on the development of systems for the real-time creation and performance of animated abstract imagery and synthetic sound. Every environment I develop represents an experimental attempt to design an interface which is at once supple and easy to learn, but which can also yield interesting, infinitely variable and personally expressive performances. In pursuing this goal, I've often found it necessary to use the most malleable media possible, and in this way to build up my generative schemes from first principles. Thus many of my pieces make extended use of low-level synthesis techniques in order to directly control every individual pixel and every sound wave displayed and produced by the computer. To complete the feedback of the interaction loop, my systems chiefly derive their inputs from human gesture. The psychological and physiological intimacy of the relationship we have with our own gestures is surprising, and when our marks are used to generate uniquely ephemeral dynamic media, it's possible to create simple and transparent interactions which can nevertheless open new vistas of possibility and experience. The pictures which illustrate this section are stills taken from my interactive works, and give an idea of my systems' expressive range.
The tutorial which accompanies this essay is designed to illustrate many of the ideas I've discussed here. This tutorial shows the development process behind a new interactive work called Dendron, a Java applet which allows its user to gesturally manipulate an organic simulation of generative growth. At the technical level, Dendron's implementation presents fractal morphogenesis and pixel-based rendering as a pair of important alternatives to the use of pre-stored imagery. At the aesthetic level, the interactive feedback loop established by Dendron straddles order and chaos, life and oblivion.
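Dendron's own code is not reproduced here, but the flavor of fractal morphogenesis can be suggested in miniature: each branch spawns two shorter children that diverge from their parent's direction. The following is a toy Python sketch with invented parameters, not the tutorial's actual implementation:

```python
import math

def grow(x, y, angle, length, depth, segments):
    """Recursively grow a branching, dendrite-like structure: each branch
    spawns two shorter children diverging from the parent's direction."""
    if depth == 0 or length < 1.0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    grow(x2, y2, angle - 0.4, length * 0.7, depth - 1, segments)
    grow(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments)

segments = []
grow(0.0, 0.0, -math.pi / 2, 100.0, 8, segments)  # trunk points upward
```

Rendering each segment pixel by pixel, and letting the user's gesture steer the branching angles or growth rate, would close the interactive loop that the paragraph above describes.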
- Ammeraal, Leen. Computer Graphics for Java Programmers. England: John Wiley & Sons, 2001.
- Csikszentmihalyi, Mihaly. Creativity: Flow and the Psychology of Discovery and Invention. Harper Collins, 1997.
- Heaton, Kelly. Physical Pixels. Master's Thesis, MIT Media Laboratory, 2000.
- Kandinsky, Wassily. Point and Line to Plane. New York: Dover, 1979.
- Kepes, Gyorgy. Language of Vision. New York: Dover, 1995.
- Klee, Paul. Pedagogical Sketchbook. Faber & Faber, 1968.
- Lyon, Douglas A. Image Processing in Java. New Jersey: Prentice Hall, 1999.
- Maeda, John. Design By Numbers. Cambridge: MIT Press, 1999. Interactive Java programming environment available from
- Maeda, John. Maeda@Media. New York: Rizzoli Press, 2000.
- McLuhan, Marshall. Understanding Media. Cambridge: MIT Press, 1997.
- Myler, Harley R. and Arthur R. Weeks. The Pocket Handbook of Image Processing Algorithms in C. New Jersey: Prentice Hall, 1993.
- Prusinkiewicz, Przemyslaw, Mark Hammel, and Radomir Mech. Visual Models of Morphogenesis: A Guided Tour.