Printed from www.flong.com/texts/interviews/interview_xfuns/
Contents © 2020 Golan Levin and Collaborators

Golan Levin and Collaborators

Interviews and Dialogues

Interview by Tori Tan for XFUNS Magazine (Taiwan)

Golan Levin, 22 June 2004.

Please introduce yourself.

I'm an artist who creates new forms of interactive digital media. I'm especially interested in non-verbal expression, and so a lot of my projects involve giving people expressive control over sound and image as a substitute for words. My work takes a lot of different forms, such as installations that allow people to interact directly with my software, or performances, in which I demonstrate the expressive capabilities of one of my creations. In order to make my projects, I've had to become a computer programmer, which is an increasingly common situation for artists who are interested in working with computers in new ways. I also collaborate with a lot of other artists and engineers, because making interactive computer art is a lot like making movies, requiring many people with very different skills. I'm 32 years old, and originally from New York City. I'm currently living in Pittsburgh, where I am an Assistant Professor of Electronic Art at Carnegie Mellon University.

In your other interviews, you have not discussed your musical background, but the influence of music is omnipresent in all dimensions of your art. Perhaps you can share some stories about how you grew up: how did you get your start as a performer?

When I was a teenager, I was intensely interested in electronic music: it seemed to represent the promise of a perfect synthesis of art and technology, which had been a personal goal ever since I was very small. During high school, my friend Mark Rhodes and I had an electronic music band, really just the two of us, with the very nerdy name "Quark Pair". We were influenced by the instrumental electronic music available at the time, such as Jean-Michel Jarre, Isao Tomita, Vangelis, and Tangerine Dream — this was before the birth of techno music in the early 1990s — and so we released a couple of instrumental new-age cassettes, "Relativity" and "Observatory," in 1987 and 1989.

When we finished our second album, we decided to tour it by performing in a few venues around New York City. For us, performing "live" meant hauling an enormous tower of keyboards onto the stage, and then pressing the "start" button on our sequencer. Our machines would then automatically chug away, spitting out a dozen tracks of elaborate pre-recorded sequences, while we would play a simple solo line or two by hand. Neither Mark nor I were very proficient pianists, so sometimes we just faked it completely on keyboards that weren't even connected to anything! Most embarrassingly, it was quite self-evident that the orchestral complexity of our sound was totally incommensurate with the pathetically simple movements of our hands. Performing live in this way felt dishonest, but unfortunately, we couldn't see another option for producing the sound we wanted: we either needed 20 musicians, or we would have had to drastically simplify our music. In the end, we didn't find a feasible solution that we felt had integrity, so we only performed a few times. By the time I was in college, I had stopped performing completely. I retreated to my electronic music studio, where I experimented for several years with subtly-shifting electronic timbres, and with the possibilities of sampling and musique concrète.

What made you return to performance?

Honestly, I didn't expect to, and in fact I did not return to live performance for nearly a decade. By this time, I was already in graduate school at the MIT Media Laboratory, where I was studying interactive art in John Maeda's Aesthetics and Computation Group. At that time, I fell under the influence of the unofficial Media Lab mandate to "demo or die". This basically meant that, in order to secure sponsorship funds from visiting corporations, we students at the laboratory were constantly expected to give demonstrations of our research projects. I was researching the ways in which sound and image could be connected in real-time, and so I had developed the tools which later became my Audiovisual Environment Suite. These tools represented, so far as I was aware, the first time that sound and image could be gesturally performed, simultaneously, in real-time, in a way that was deeply and commensurately malleable. I didn't yet regard them as performance instruments, however, but rather as something closer to drawing tools with which a user could have a satisfying but private experience. It was Maeda who made the observation that I seemed to really enjoy demonstrating my software, and that, for him, it was more enjoyable to watch me demonstrate the software than to actually use it. He recommended that I consider showing it in a formal performance context.

I might have shrugged off Maeda's suggestion, had it not been for an unexpected and unrelated invitation to show the Audiovisual Environment Suite in a performance at the 2000 Ars Electronica Festival. I had submitted the software to the Ars Electronica competition, with the rather boring proposal that the projects be exhibited in a row of five computers. I was astounded when I received a call from the Director of the Festival, Gerfried Stocker, asking to present the work in a half-hour performance. Mostly, I was terrified that the software might crash in the middle of the concert! I knew that a performance like this would be the real test of its expressivity, and so I spent several more months working on it. I also instructed my friends Greg Shakar and Scott Gibbons in how to use the software, and I incorporated their rich wealth of feedback. Our resulting performance, "Scribble," was very successful and really re-started my interest in performing. And I'm indebted to both John Maeda and Gerfried Stocker for their intuition that there was a natural performer lurking in me.

What made Scribble successful, in your opinion?

Many of the computer performances I've seen have suffered from what I would call "opacity": there is very little way for an audience to understand how the music they're hearing relates to what they see the performers doing. The problem is not simply solved by projecting the performers' laptop screens, since often these screens consist of a confusing mess of sliders and Max patches. Often, it is not even possible for an expert to understand such an interface. And when an audience is unable to understand what is going on, and why, they become less invested in the details of what the performers are doing. So I felt that one of the major strengths of using the Audiovisual Environment Suite in a performance was that there was no auxiliary interface — no sliders, buttons or dials that could interpose a layer of symbolic indirection between the performers' gestures and the resulting sounds and images. Instead, the medium was its own interface. The audience could simply observe our cursors moving about the screen, transparently and directly manipulating the image and sound, and in this way they could have a close relationship to what was going on.

I believe a second reason for the success of Scribble was the fact that none of the sounds or images were pre-recorded in any way. There was nothing stored on disc, no pre-rendered models, samples or sprites. Instead, all of the images and sounds were produced live, in real-time, as a function of the performers interacting with the software's audiovisual algorithms. This, for me, was a rejoinder to the problems I had experienced a decade earlier with Quark Pair. You know, an audience can tell when everything is live. There is an altogether different quality of contingency to the performance. Things could go wrong — or, they could simply go differently, perhaps because of some small gestural detail. In Scribble, there was something at stake, and no safety net to catch us if we screwed up. Of course, this is a condition that performers of traditional instruments routinely take for granted! But this is exactly what had disappeared when sequencers and other forms of computerized mechanizations were introduced to electronic music. At a time when computers seemed designed to ensure repeatability and perfection, Scribble succeeded, I think, because it demanded that human imperfection show through.

How has performance influenced your art and your working regime?

Nowadays I am convinced that a performance is absolutely the most challenging context for presenting interactive digital art. With online work for a web browser, one has the chance to continually make improvements, and you don't ever really have to confront your audience face-to-face. With installations, there's usually an opportunity to make some repairs after the opening night, and if things are going really badly, people are often quite forgiving: you can say, hold on, I need to restart the computer, please come back in 5 minutes, and most of the time, they will. But with a performance, you get one chance, and it has to work perfectly that first time! The people have stood in line, paid an admission fee, and expect to be entertained. The stakes are tremendous, and I get thrilled by the sense of risk. Will the software crash? I've tried to make sure it won't happen, but it's entirely possible. Will the performers seem magically connected to their expressive software instruments? I hope so, but I cannot predict. It is this unpredictability that really makes such performances so rewarding for me.

An example of this unpredictability happened with our Dialtones concert, which was entirely performed through the ringing of the audience's mobile phones. This was an enormously difficult project to develop, since we needed to request a great deal of technical resources from a very skeptical and reluctant provider of wireless services. Essentially, in order to dial the audience's mobile phones, we had to ask for a direct, insecure connection into their central switching computer! After months of negotiations, they finally provided us with the digital link, about a week before our concert. So we only had a week to test all of our software, at the same time that we had to compose all of the ringtones. And we would have no idea what kinds of phones the audience would bring to the performance, until the very night of the show itself. Meanwhile, the event had already been advertised for months as the "world's first mobile phone concert!" Well, fortunately everything came together. Of the half-hour performance, I think maybe four minutes is really sublime. That's better than ten percent, which I think is lucky given the millions of things that could have gone wrong.

Stepping back for a moment, I think the field of musical performance can provide a wealth of powerful suggestions and paradigms for interaction designers, regardless of whether they are actually developing musical tools. The simple fact is that musical instruments are one of the oldest and most universal technologies used by humankind. More importantly, musical instruments may be the oldest technology from which we have consistently derived meaningful and significantly satisfying interactions, for thousands of years. I think music is an intrinsic part of who we have become, as a species, and so I think it's natural for that to motivate new technological artforms as our culture changes.

You seem a little bit obsessed with the granularity of media, making the most malleable media, and generating infinite varieties of feedback. Please tell us why, and explain what malleable media is. Maybe also talk about how technology responds to these concerns.

Yes, the malleability or plasticity of interactive media has been one of the most significant and continuous threads in my artistic concerns. This concern of mine arose from what I suspect is a fairly common observation: that most of our interactions with computational media seem to be extremely impersonal. The computer always responds the same way, no matter who is interacting with it, on whatever occasion. Depending on your state of mind, as a user, this either makes you bored or insulted, or both. So I think computers ought to know a lot more about us, and take this knowledge into account by modulating their behavior in ways that are perhaps extremely subtle, but tightly coupled to who we are, what we're doing, and how we're doing it.

In order for computers to respond to us in more subtle, modulated ways, two things are required. The first, as I've mentioned, is greater knowledge about who we are and how we behave. To begin with, they should be aware that we exist! I think Don Norman was the first to point out the irony in the fact that our desktop computers have no idea whether or not we are in front of them, while this has been a commonplace feature in airport urinals for years. But more seriously, there are all kinds of information that we are continuously broadcasting from our bodies, even unconsciously, that computers are not aware of. Computers should be able to sense who we are, and distinguish different people from one another, and recognize when we are doing something for the first, second or hundredth time, and maybe even know a little something about how we're feeling. To some extent, this will require input devices that sense a wider range of human behavior, but it will also require much more sophisticated signal processing algorithms.

The second thing which is necessary in order for computers to respond in more subtle ways to our actions, is for them to have a finer granularity of control over the texts, images and sounds that they are producing in response to us. We have to decide if we want computers to be more than just record-players! Suppose a computer is only able to play a single, pre-recorded sound file in response to some user action. By the tenth time I've performed that action, the meaning of that action has changed for me, but the computer's response to it has not. What if I perform the action in a slightly different way? The computer should be able to recognize the way in which my action was different, and respond with a correspondingly different but related sound. But the computer cannot do this if it merely plays back a pre-recorded sound file. That's why it's so useful to study musical instruments, because they incorporate this property into the essence of their design. For a computer to respond in such a flexible way, we have to start digging into sound synthesis techniques, which unfortunately require a lot more design planning to control.
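The contrast drawn here, between record-player playback and synthesis under gestural control, can be sketched in a few lines of Python. The gesture-to-sound mapping below is purely illustrative (the names, constants, and mapping are invented for this sketch, not taken from any of Levin's actual instruments):

```python
import math

FIXED_SAMPLE = [0.0] * 2000  # stand-in for a pre-recorded sound file

def play_sample(_gesture):
    # The "record-player" model: the response is identical no matter
    # how the triggering action is performed.
    return FIXED_SAMPLE

def synthesize(gesture_speed, gesture_size, sr=8000, dur=0.25):
    # The synthesis model: nuances of the gesture shape the sound.
    # Hypothetical mapping: a faster gesture bends the pitch upward,
    # a bigger gesture plays louder.
    freq = 220.0 * (1.0 + gesture_speed)
    amp = min(1.0, max(0.0, gesture_size))
    n = int(sr * dur)
    return [amp * math.sin(2 * math.pi * freq * t / sr) for t in range(n)]
```

Two slightly different gestures give `play_sample` identical outputs, while `synthesize` returns related but distinct waveforms: a minimal instance of the finer granularity of control described above.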

Most of my performance projects can be thought of as demonstrations of new forms of malleable media. In Scribble, my goal was for the computer to respond to the performers' mouse-gestures with malleable animation and sound. In Messa di Voce and The Manual Input Sessions, our attention has turned to the voice, as a form of input, and actual hand-gestures.

On the one hand, your performances (such as Messa di Voce) deploy multitudes of interesting narratives, compared with other audiovisual performances. On the other hand, it has been pointed out that your performances have all been organized according to a similar schema, a sequence of vignettes. Why is this the case, and how does this reflect your ideas about structural issues in long-form compositions for electronic media?

This is a fantastically challenging question, as it points out what I think is one of the most significant shortcomings of my performance work to date. Your observation that each of my performances has been structured as a "sequence of vignettes" is certainly true for Scribble, Messa di Voce, and The Manual Input Sessions, though somewhat less so for the Dialtones Telesymphony. And I have to confess that it's become enough of a pattern, in my own work, that I'm strongly motivated to break out of it in the future. This structure has turned out to be expedient, but it doesn't at all represent my ideals for the structure of long-form performances! I've learned, in particular, that long-form works really need much more overall choreography and dramaturgy than this kind of structure can provide. Either that, or, taking the minimalist approach, much, much less!

Basically, this compositional device of organizing a concert into a sequence of vignettes has emerged from the ways in which these projects were developed. Each concert developed from a particular inquiry into how some form of interactive input — cursor, voice, hand — could be related to some form of audiovisual output. And so in creating each of these performances, my collaborators and I sat down and began to brainstorm about the ways in which these domains could be related. At the end of each process, we had a long list of ideas about possible relationships. And each of these relationships described an open-ended interactive system which could, ideally, be an interesting thing to explore and watch for a few minutes. Unfortunately, it's often quite difficult to connect these systems together in a seamless way, because ultimately we're switching from one concept of interaction to another. I haven't got a good answer to this yet, so it's an area of personal research for me.

I am especially interested by one of your descriptions of a specific vignette in Messa di Voce: "In some of the visualizations, projected graphical elements not only represent vocal sounds visually, but also serve as a playable interactive interface by which the sounds they depict can be re-triggered and manipulated by the performers." Could you explain more?

This refers to the software modules we made for the solo performance of one of our collaborating vocalists, Jaap Blonk. The idea is that Jaap emits a stream of black circular bubbles by making a funny cheek-flapping sound. As his sounds grow more vigorous, his bubbles fill up the screen. But the resulting cloud of jostling bubbles is unstable, and they begin to fall down. If a bubble bounces into Jaap's shadow while it is falling, then it "releases" the sound contained inside of it. In other words, the voice-sound that it visually represents is replayed. So in this way, it is actually performable, since Jaap can reach out and touch a bubble with his shadow, to trigger its sound whenever he pleases.

This idea was probably the most important innovation in Messa di Voce, since it introduced a bi-directionality to the concert's interactive audiovisual relationships. This, critically, pushed the concert beyond the domain of mere sound visualization. In other words, instead of simply visualizing sound, our visualizations also became interfaces for producing sound. Here, we were reacting to the concept underlying the sound-visualizing "skins" which are such popular features of desktop MP3 software these days. These skins do a fine job of making an attractive picture in response to recorded sound. But if all Messa di Voce did was make a pretty picture in relationship to a live voice, I don't feel that we would have made a conceptual contribution beyond that of a commonplace piece of software like WinAmp. It was essential to us that we really reinforce the interactive quality of these visualizations. And this required that the visualizations be able to support other forms of manipulative input, subsequent to their generation from sound.

That said, this feature of Messa di Voce was also one of the least developed. It turned out to be incredibly difficult to control, because in addition to providing a conceptually interesting interactive feedback, it also provided a totally undesirable sound feedback that was almost impossible to manage. Indeed, Jaap would touch a bubble, and it would make a sound, but because this sound was amplified so loudly for the theater, it would get picked up by his microphone and start to make new bubbles! Dealing with the sound feedback issues of this idea proved to be very difficult, so we didn't get to explore it as much as we would have liked, particularly in the area of gestural manipulation of replayed sounds.

I am curious about the synthetic sound-image mappings in your latest work The Manual Input Sessions. You haven't said much on the Tmema site, but since you have done extensive research in this realm, please tell us more about your discoveries, old and new. And of course, some introduction to The Manual Input Sessions please.

The Manual Input Sessions is a new performance project, another collaboration with Zach Lieberman. This concert is performed on a combination of our custom interactive software, regular overhead projectors (of the old-fashioned classroom variety), and digital computer video projectors. The analog and digital projectors are aligned such that their projections overlap.

Essentially, we use the analog overhead projectors to cast enormous shadows of our hands. These hand-shadows are then analysed by our computer-vision code, which tries to understand things like: where are our fingertips? and, how are our hands moving? In response, our software generates synthetic graphics and sounds that are tightly coupled to our hand movements. The graphics are then digitally projected in the same location as our hands' shadows, resulting in an unusual quality of hybridized, dynamic light.

The sound-image mappings in this project are actually extremely simple. For example, in one section, I close my thumb and forefinger together (like the "OK" hand gesture) in order to make a closed, negative shape with my hand's shadow. This shape is detected by our software and then projected digitally in the same place. When I open up my fingers, this shape falls out and bounces around! And when it bounces, it makes a sound whose pitch is based on its size, so a big shape makes a low-pitched sound. It's a very fun instrument. But it's important to understand that the innovation in work like this, as with Scribble and Messa di Voce, is not the way that sounds have been mapped to images; rather, it's the way that sound and image together have been mapped to human gesture.
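The size-to-pitch rule described here (a big shape makes a low-pitched sound) is simple enough to sketch in Python. This is a hypothetical reconstruction with invented constants and function names, not the actual Tmema code:

```python
def shape_to_pitch(area, min_area=100.0, max_area=20000.0,
                   high_hz=880.0, low_hz=110.0):
    # Clamp the detected shadow-shape's area to a working range.
    area = max(min_area, min(max_area, area))
    # Normalize: 0.0 for the smallest shape, 1.0 for the biggest.
    t = (area - min_area) / (max_area - min_area)
    # Interpolate exponentially so that equal steps in size feel like
    # equal musical intervals; bigger shapes glide down in pitch.
    return high_hz * (low_hz / high_hz) ** t
```

With these illustrative constants, a shape at the minimum area maps to 880 Hz and one at the maximum to 110 Hz, a three-octave range, with pitch falling monotonically as the shape grows.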

You seem willing to recognize and engage the rather mundane aspects of human nature and cultural context. We sense that your works carry a streak of provocative charm — for example, you show us how a bunch of cell phones can produce a symphony, how numbers reflect our secret desires and habits, how fingers can be spies, how to see noises, and how to paint with meaningless, hilarious sounds. So what's with that?

This has to be one of the kindest observations anyone has made about my work. If it's true, it means that I'm somehow succeeding in making the computer into a medium for personal expression.

One of the biggest problems with digital art, as I've already touched on, is that it often seems very impersonal. I think the problem is that the voice of the artist is frequently drowned out by the voice of the engineers or companies who created the artist's tools. There's an extent to which everything that's been made with Flash looks alike, for example, and it takes a really dedicated artist to find ways of getting around this. With respect to Flash, I think James Patterson from Presstube.com is a really rare example of someone who has found a way of making the medium incredibly personal. Partially, it's because he has put in the labor to find some very individual ways of responding to what Flash can do, and partially, it's because he has forced Flash to bend to his will, in order to express the peculiar ideas he has kicking around his head. It's rare to see such a personal streak in digital art. I can only assume it's because everyone else must be reading the same tutorials.

I can't speculate why so many people are dry and humorless, but it's certainly a second major problem affecting digital art. Perhaps if people were able to make more personal work with the computer, it would have more humorous "provocative charm". I think there's something else that contributes to this problem, however, which is a widespread and uncritical fascination with technology. Technology, by design, is intended to be seductive — if it weren't, we wouldn't be buying so much of it. When artists get caught up in this seduction, they make work whose content is all about bewildering speed, massive quantities of information, and cybernetic surface details. Whether utopian or dystopian, this work is almost never humorous. I think that in order for technology-oriented artwork to have any sort of provocative charm, it's essential that we step back and allow ourselves to see the humorous ways in which technology conditions us. My heroes in this regard are the digital artists Alexei Shulgin and Jonah Brucker-Cohen. Their work is irreverent, insightful, funny, and challenging.

If my own work is humorous, it's not a result of a deliberate attempt on my part to be a comedian. I'm interested in certain ideas, and if some people think my research into these areas is funny, that's fine. I mean what I do seriously, but I also think there is implicit humor latent in nearly everything.

You've worked on several projects like the SAP Lobby in Berlin, or the Amore Pacific flagship store, which explore the emergence of spatiality rich in ubiquitous, multi-dimensional, trans-media and all-sensory interactions. Please tell us what leads you there, and your observations about the influence of diverse architecture disciplines on interaction design today.

It's certainly the case that more and more architectural projects have come to include a plan for the incorporation of electronic media. I think there has been a recognition, in the last ten years or so, that architectures should include an "electronic skin" that strengthens or improves people's relationships to the space. There are a lot of advantages to doing this, such as delivering continually-changing information to visitors in a museum, or reinforcing product branding in a commercial space, or delivering a stimulating conceptual intervention in a public area, or simply making a relaxing zone in a home or hotel environment. There are a number of practitioners who are much more experienced at this than I am; I think David Small is doing fantastic design work with indoor displays in museums, for example, while Rafael Lozano-Hemmer has made some extremely interesting and dramatic art installations in large outdoor spaces.

Interaction design for architectural environments is decidedly different than design for online or performance venues! The most significant thing to remember is that people who enter a given space are probably not expecting to interact with an artwork, and in fact most of them have many other things on their mind. They probably won't enjoy being distracted by some form of artistic intervention, unless they're specifically seeking it out. And so the goal — when working on architectural commissions — is to distract visitors as little as possible, while rewarding those who do decide to pay attention to the electronic environment around them.

Our installation for the Amore Pacific store may be a good example. This is a set of four video projections which sit above a long row of cosmetic counters; Zach Lieberman and I made it under a contract from Frog Design Inc. Most of the time, the artwork displays a gentle simulation of water ripples. These ripples are continually responding to the presence and movements of the people in the store, though most people are not aware of this. But if someone comes up to the counter and picks up a jar of skin cream, then the display reveals some text about that specific product. Already, this is pushing the limit of what is acceptable for a space like this: if the display were any more distracting, then people wouldn't look at the cosmetics! So in such a project it's important to use interaction technologies, like computer vision, that people can interact with simply and passively.

Obviously, an advantage of such architectural projects is that they frequently have a very generous budget, at least compared to making artworks for festivals. These projects have helped keep me alive during the times when I wasn't receiving much money from my artwork. But one observation I have made is that such projects often become badly neglected after they are installed. The simple fact is that computers and projectors aren't as durable as bricks and windows, and they often need maintenance by someone familiar with their inner workings. Very few architects seem to realize this, and so their budgets often forget to figure in the maintenance costs of these interactive installations. It's a very real problem.

We have all had the experience of being confronted by obtrusively complex interfaces. And so there has been a natural reaction among artists, and I think this is true of your work, to design interfaces which are as transparent as possible. But my question is: do you think transparent, open-ended interactions in "cool" media preclude the possibility of embedding important messages in an art project? Do you agree that many interactive pieces are just about offering the audience a game or toy, without stepping up to the challenge of making real statements, as with "hotter" media like film?

My answers to your questions, respectively, are "no", and "yes"! On the first question, I need to express a tremendous skepticism about the idea of "embedding a message" in any kind of work of art. Specifically, I think nearly every artist who attempts to "embed" an "important message" in a work of art is making a fatal mistake, at least insofar as the aesthetic quality of the work is concerned. This is the kind of thinking that underlies the worst kind of ham-handed political art: stuff that is, aesthetically speaking, no better than propaganda. Of course, okay, there are important counter-examples to this, like Picasso's Guernica, made by mature masters. And so this is not to say that art cannot be provocative or effective! But my intuition here is that artwork with challenging subtexts is most effective when it is open-ended, and when the viewer is left to draw their own conclusions. And so I think open-ended systems have the potential to deliver messages very powerfully, because they really engage and implicate the viewer/user in a process.

Unfortunately, your complaint that too many interactive pieces seem to be "content-free" really hits the mark. It is an opinion that, I regret to admit, I share. I see this particularly in the field of so-called "generative art", where (typically) one clicks the mouse, sees some synthetic random picture...and that's it. My contention is not, however, that such artworks ought to deliver the same kinds of "messages" that we find in films. It would be sufficient and appropriate if the concepts underlying such works really delivered on the promise of interactivity itself, namely: some form of bi-directional engagement, cybernetic coupling, that through an interactive process transforms the individual and the way they see the world. In such works, the viewer/user really is allowed to affect the world in some way, and to learn something new about themselves or other people through a novel process of mediated communication.

I think one sees this in the best examples of interactive art, like Myron Krueger's VideoPlace, or Kazuhiko Hachiya's Interdiscommunication Machine; these works, like the best possible games, can hold our fascination for hours and change the way we think about interpersonal communication. Many other excellent works of digital art have no game-like or toy-like aspects at all; I'm thinking of Ben Rubin's "Listening Post" and Rafael Lozano-Hemmer's "Vectorial Elevation" as two excellent starting-points for observing this. Finally, in the case of interactive art, I believe it is a mistake to assume that an open-ended, transparent system contains no message in and of itself. Applying a McLuhan-like "medium is the message" analysis to interactive art, I think one of the most important messages we can find is that of unprecedented empowerment to the viewer/user.

Your project The Secret Lives of Numbers, which we are going to see next month in the Navigator show at the Taiwan Museum of Art in Taichung, deals with a type of interface design which is a little bit unusual in your repertoire. Could you tell us more about this? And of course, some introduction to The Secret Lives of Numbers please.

This project is a visualization of some unusual information that I collected. My collaborators and I conducted an exhaustive empirical study, with the aid of our custom software and public search engines, in order to determine the relative popularity of every integer between 0 and one million.

What's interesting about this information is that it exhibits an extraordinary variety of patterns which reflect and refract our culture, our minds, and our bodies. For example, certain numbers, such as 911, 80486, or 90210, occur more frequently than their neighbors because they designate the phone numbers, tax forms, computer chips, famous dates, or television programs that figure prominently in our culture. Regular periodicities in the data, located at multiples and powers of ten, mirror our cognitive preference for round numbers in our biologically-driven base-10 numbering system. And certain numbers, such as 12345 or 8888, appear to be more popular simply because they are easier to remember. So in a way, this data is a numeric snapshot of the collective consciousness, and in the final artwork, our objective was to return our analyses to the public in the form of an interactive visualization, in order to allow people to see these provocative patterns for themselves.
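[Editor's note: the patterns described above — numbers that stand out sharply from their neighbors, and spikes at multiples of ten — can be illustrated with a simple local-outlier test. The sketch below uses a toy popularity model standing in for the project's actual search-engine counts; the `toy_count` model and `spike_score` function are illustrative assumptions, not the project's own software.]

```python
# Illustrative sketch only: synthetic counts standing in for the
# search-engine hit counts gathered for The Secret Lives of Numbers.

def toy_count(n):
    """Hypothetical popularity model: a decaying baseline (smaller
    numbers occur more often) with boosts for round numbers."""
    count = 1_000_000 // (n + 1)
    if n % 100 == 0:
        count *= 10    # strong boost for multiples of 100
    elif n % 10 == 0:
        count *= 3     # milder boost for multiples of 10
    return count

def spike_score(counts, n, window=3):
    """Ratio of a number's count to the median count of its neighbors.
    Scores well above 1.0 mark numbers that are locally anomalous,
    the way 911 or 90210 stand out from adjacent integers."""
    neighbors = sorted(counts[m]
                       for m in range(n - window, n + window + 1)
                       if m != n and 0 <= m < len(counts))
    median = neighbors[len(neighbors) // 2]
    return counts[n] / median

counts = [toy_count(n) for n in range(1000)]
print(spike_score(counts, 500))  # a round number: scores high
print(spike_score(counts, 503))  # an ordinary neighbor: near 1.0
```

In the real dataset the spikes come from culture rather than a formula, but the same neighbor-comparison idea is what makes numbers like 911 visible as outliers in the visualization.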

It's true that the design language used in this project is different from most of my audiovisual work, but to me this project is most importantly an extension of my general interest and research into abstract communication. The topic of information visualization as a mode of art practice is a much longer discussion, but as far as The Secret Lives of Numbers is concerned, I regard the piece as an interactive representation of a highly abstracted communications process. What we're able to witness, here, is the communicative behavior patterns of millions of people, in both gross and fine detail, as they are manifest across literally trillions of data-points — yet synopsized in a way which is, I would like to believe, quite fluid, open-ended, and easy to understand. I find this to be a fascinating area for further research, and I expect you'll be seeing more information visualizations from me in the future!

Please share anything you'd like about being a teacher in interactive arts.

Currently I teach in an art school which attempts to cover the entire range of fine arts practice, from painting and sculpture to video art, animation and interactive forms. To be honest, I'm not sure why the painting students aren't interested in making electronic art, and I'm not certain why the electronic arts students aren't studying painting! I think these disciplines have a lot to say to each other, and I think it is the responsibility of students to expose themselves to a wide range of art methodologies.

I think one of the most significant problems facing electronic art education is a lack of resources by which students could become familiar with the most important older works. There's no "best-of" DVD for interactive art, but we badly need one, or else students will be doomed to repeat decades-old experiments. On the bright side, there are some wonderful new resources of other kinds: three terrific books which are compilations of the most influential writings in digital art (Randall Packer & Ken Jordan's Multimedia: From Wagner to Virtual Reality; Noah Wardrip-Fruin & Nick Montfort's New Media Reader; and Neil Spiller's Cyber Reader: Critical Writings for the Digital Era). And finally, one of the most important resources for design education in over a decade, Casey Reas and Ben Fry's Processing initiative, is totally transforming electronic arts education in a fantastic way (http://www.processing.org). I'm quite hopeful for the future as a result of the terrific work I see blooming there.

Please recommend some things you are currently interested in.

Blogs, robotics, and nanotechnology!