In my last blog, I mused on the shaping and controlling aspects of technology. Among the many examples I used was a piece of DJ software called Ableton Live, which would allow even a Neanderthal to throw a bunch of sound samples into it and crap out a minimal techno track. One comment on this statement inspired some further thinking, in particular about the nature of feedback, which I felt deserved a post of its own…

twentytwentyone wrote:

interesting, but i still disagree about Ableton. it’s such a simple tool and as all simple tools it is quite universal.

Well that’s exactly the opposite of what I’m trying to say here, using Live just as an example. No technology is neutral, every kind of technology has a strong social shaping and controlling factor on those who use it for any purpose, and in the creative arts this is probably even stronger! The more alienated we are from these technologies (to borrow the Marxist terminology for a second), the more we are unaware of how much that technology is in fact shaping and controlling us.

Think for a moment that the first musicians probably cut their reed flutes from the tall grass near them, bored some holes in them, played them for some time while watching the sky and then threw said reed flute into the river when they were finished with it. As such, they were aware of every possibility their instrument held, since they themselves had created it (up to and including the intervals it would play), and thus probably even invented the music which they played on it quite spontaneously, without reference to pre-existing patterns or compositions.

A 40,000-year-old Neanderthal bone flute. A truly simple tool, no presets included. My attempts to recreate and play such an instrument were met with much humor during a residency in New Zealand a few years ago…

On the other hand, I find that the more ubiquitous, over-designed and “user friendly” modern technology becomes, the more we should treat it with suspicion. Especially when we use it as the primary tool of our creative expression. In fact, you no longer know what kind of shaping and controlling messages come bundled with your arts technologies.

In the case of Live, to get back to a very small point in a rather big discussion, I find that not only is it directed entirely towards rasterized, grid-based and quantized production of sound, but it also privileges linear “compositions” (whether using prerecorded samples or sequences of notes) over any kind of free improvisation which could change direction at any moment. As such, I don’t find it a very useful tool for the kind of work I want to be doing.

The past couple of years of working more in the analog domain has taught me a lot about the weaknesses of certain technologies, and of the computer in particular, and how one must become extremely linear and logical to make nonlinear, illogical things happen with it. One thing that I discovered while working almost exclusively with feedback loops is that, in analog electronics, everything happens in real time. All the electrons in a circuit more or less move simultaneously, so changes in the system occur instantaneously.

Instantaneous feedback is an essential part of analog circuits, unlike their digital counterparts, which execute commands sequentially…

Compare that with the digital architecture as follows:

To produce a sound, let’s say an oscillator’s tone, the computer must calculate a certain number of samples in a block, and send that block to the soundcard. Now, if the output of the oscillator must be sent back to its frequency input in order to modulate its frequency, that can’t logically be done within the same block of samples. To do so would put the computer in a kind of endless loop, where it couldn’t calculate its output because it was waiting for that output to be its input. So the output must be delayed by at least one block of samples in order to be used as the input for the calculations of the next block.
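To make this concrete, here is a rough sketch in Python rather than Pd, of how such a block-based feedback loop behaves. The block size, base frequency and modulation depth are made-up values for illustration; the point is simply that the frequency of each sample is modulated by the output computed one whole block earlier, never by the current block.

```python
import math

SR = 44100         # sample rate
BLOCK = 64         # samples per DSP block (Pure Data's default block size)
BASE_FREQ = 220.0  # carrier frequency in Hz (arbitrary)
MOD_DEPTH = 50.0   # how many Hz the fed-back output shifts the frequency (arbitrary)

phase = 0.0
prev_block = [0.0] * BLOCK  # the previous block's output, used as feedback

def next_block():
    """Compute one block of samples; frequency is modulated by the LAST block's output."""
    global phase, prev_block
    out = []
    for i in range(BLOCK):
        # The feedback arrives one block late: we read a sample that was
        # computed in the previous pass, not the one we are about to write.
        freq = BASE_FREQ + MOD_DEPTH * prev_block[i]
        phase += 2 * math.pi * freq / SR
        out.append(math.sin(phase))
    prev_block = out
    return out
```

The very first block is a plain 220 Hz tone, because the feedback buffer still contains silence; only from the second block onward does the oscillator begin to modulate itself, always one block behind.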

The classic “DSP loop”, here shown in the Pure Data programming environment. The send/receive pair adds the necessary delay, so that the oscillator is affected by its own output only on the following block of computations.

Add to this the fact that computers work in logical, linear time, meaning that they rapidly multitask between the actions required of them (send sound to the soundcard, refresh the screen display, maintain the network connection, catalog the hard drive, scan the keyboard and mouse for input, bounce a widget on the dashboard when your girlfriend wants to Skype with you, etc.), and for this reason the sound is further buffered to avoid the infamous “glitches” so well known in laptronica. The end result is a system which is far from realtime, and in which any kind of actually simultaneous action is impossible.

The technicians among us consider all this a “performance issue”, and strain to reduce the amount of latency in the system to a barely noticeable threshold. But for me it’s much more of a conceptual and philosophical issue. The system of the computer itself controls and shapes the sound I’m making, and in essence prohibits me from working the way I prefer to work in the analog realm, where any number of simultaneous events can immediately affect the complex system as a whole in a beautiful, nonlinear sort of way. Much like the world we actually live in…
