It seems that you've always had a really particular approach to timbre, and that it's especially focused and evolved on this record. There's a certain purity of tone to which you tend to gravitate, as I hear it. Can you talk a bit about how you approach timbral color?
The ambiguity of sonic events always fascinates me. That border between 'real' and 'synthetic' is a quite interesting one, not only in sound design but also in visual arts. Working with synthetic sound generation sharpens my senses for the real sounds around me, and often I am surprised by how much they can blend. We are not talking anymore about sound generation with a single square wave oscillator and a lowpass filter, but about methods that are capable of creating highly complex and rich timbres. Those methods' sonic definition matches the complexity of real sounds, and this is where the fun starts. I like to place a recording of a metal thing next to a physical model of a metal thing next to a processed sample next to an FM timbre and see how they become a nice ensemble of similar sounds.
I try to avoid 'content'. I am not interested in 'throwing beat loops together'. I do not use presets from other people when it comes to synthesis; this is just not my way of thinking. Why should I leave that great part of composition, which is coming up with interesting timbres, to someone else? I am also not using time stretching / warping as a tool to match beats. I don't like time-stretch artefacts, unless I drive it to the very extreme as a special effect. I don't need factory groove templates; in fact I never use groove at all. If I want to achieve it, I move notes by hand. Apart from that, I'd say I use everything Live has to offer. There is no typical workflow, it highly depends on what I want to do. The most significant difference from the old pre-Live times, to me, is that I can make lots of sketches without any special idea in mind, just let go, and save the result once I am bored with it. Much later I can open all those sketches and see if anything in there is of interest. Then I grab that element and continue working on the basis of it. I have a lot of complex tree structures of fragments on my hard disk, and this is a great source of material and inspiration.
The tools have a strong influence on the result. Take the Monolake PX-18 sequencer. Its way of expanding a one-bar loop into something that repeats in longer cycles is based on such a rigid concept that it enforces a quite specific rhythmical approach. Some patterns are simply not possible, some are very easy to achieve. This is exciting and this is very musical. A piano is an instrument which makes it very easy to treat all twelve notes of a well-tempered scale the same, and it is an instrument which makes it impossible to play any notes that do not fit in such a scale. This is exactly the same interesting tension between enabling and inhibiting expression as with the rhythmical limitation of the PX-18.
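A minimal sketch of that cycle-expansion idea, in Python for illustration (an assumption about the concept, not the actual PX-18 logic): give each step its own cycle length and phase, and a one-bar pattern only repeats once all the per-step cycles line up again.

```python
from functools import reduce
from math import lcm

STEPS = 16
gate  = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # base one-bar pattern
cycle = [1, 1, 2, 1, 4, 1, 2, 1, 1, 3, 1, 1, 2, 1, 1, 4]  # bars per repeat, per step
phase = [0, 0, 1, 0, 2, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 3]  # which bar of the cycle fires

total = reduce(lcm, cycle)  # full period: lcm(1, 2, 3, 4) = 12 bars
for bar in range(total):
    row = ''.join('x' if gate[s] and bar % cycle[s] == phase[s] else '.'
                  for s in range(STEPS))
    print(f'bar {bar + 1:2d}: {row}')
```

With cycle lengths of one, two, three and four bars the combined pattern repeats every twelve bars, while a step that should fire on two out of three bars is simply not expressible in this scheme, which is exactly the enabling/inhibiting tension described above.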
There is an interesting interaction going on between developing tools and achieving musical results. The whole process is far from being linear and entirely result-oriented. The idea at the beginning is shaped by first results and experiences gained from playing with a simple prototype of a part of the functionality; this drives the further development of the tool, but also influences the musical idea. If I try to build a granular time freezer, and after initial tests I figure out that I need a lot of overlapping grains to get the sound I want, I can also start thinking in swarms of particles, and this might lead to musical ideas that shape how I try to improve the grain thing. Working this way often provides far more interesting results than sticking to an initial plan. As an interesting side note, this way of thinking also finds its way more and more into general software/hardware development and interface/functionality design. The tools of the future need to feel right. One cannot design a multi-touch screen application on a piece of paper, implement it and think it will work. It would, technically, but it might not be inspiring to use and therefore most likely not a success in a competitive market.
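To make the grain-overlap point concrete, here is a minimal granular freeze in Python/numpy (a hypothetical sketch, not Henke's patch): short Hann-windowed grains are read from roughly one position in the source and overlap-added at a hop much smaller than the grain length, so grain/hop grains sound at once.

```python
import numpy as np

def freeze(x, pos, out_len, grain=2048, hop=256, jitter=512):
    """Granular 'time freeze': overlap-add short Hann-windowed grains
    read from (roughly) one frozen position in the mono source x.
    grain / hop = number of simultaneously overlapping grains (8 here)."""
    win = np.hanning(grain)
    out = np.zeros(out_len + grain)
    rng = np.random.default_rng()
    for start in range(0, out_len, hop):
        p = int(pos) + int(rng.integers(-jitter, jitter))  # jitter avoids a static buzz
        p = max(0, min(p, len(x) - grain))
        out[start:start + grain] += x[p:p + grain] * win
    return out[:out_len] * (2 * hop / grain)  # approximate Hann overlap-add gain

# e.g. freeze one second of sound at sample 80000:
# y = freeze(x, pos=80000, out_len=44100)
```

Raising grain/hop is where the 'swarm of particles' thinking starts: dozens of overlapping grains stop behaving like a loop and start behaving like a cloud.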
I find myself doing a lot of things in MAX these days, since the integration in Live made it so easy and rewarding. When I made that MAX statement in NYC, I felt that coding is a trap when it comes to actually creating music. One simply spends too much time with non-musical problems. In many ways, MAX5 and MaxForLive reduced the time needed to get results. And this makes the whole package very attractive again. I started teaching sound design at the Berlin University of Arts a year ago. I can show my students how to create a simple two-operator FM synthesizer with an interesting random modulation within fifteen minutes, and the result is a Live set including the MaxForLive part, which I can save and send to the students as an email so they can open it again and continue working on it. If stuff can be done that fast, it leaves enough headroom to actually use it in a musical context. In retrospect, a lot of 90s IDM music was way too much driven by exploring technology. At some point one has to step back and say: okay, now let's actually have a look at the composition and not only at the technical complexity of the algorithm.
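For readers without Max at hand, the same fifteen-minute exercise fits in a few lines of Python/numpy (an illustrative sketch of the idea, not the classroom patch itself): one sine operator phase-modulates another, and a slow random signal wobbles the modulation index.

```python
import numpy as np

sr, dur = 44100, 2.0
t = np.arange(int(sr * dur)) / sr

f_carrier = 220.0
ratio = 2.01                                  # modulator slightly detuned from 2:1
index = np.interp(t, [0.0, dur], [8.0, 0.5])  # FM index sweeping down

# the 'interesting random modulation': a slow, interpolated random
# signal wobbling the index (an assumed choice, many others work)
steps = np.random.uniform(0.5, 1.5, size=32)
wobble = np.interp(t, np.linspace(0.0, dur, 32), steps)

mod = np.sin(2 * np.pi * f_carrier * ratio * t)               # operator 2
y = np.sin(2 * np.pi * f_carrier * t + index * wobble * mod)  # operator 1
y *= np.exp(-2.0 * t)                                         # simple decay envelope
```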
So, what's the role of the press in this? One experience I gain from reading the Ableton user forum and from talking with students is that there is a great amount of insecurity about which technology to use. It's the abundance paradox. Which software sounds best? Which compressor do I need to use? Which plugins do I need for mastering housy dub music with a hint of pop and some acoustic guitar? Having the choice between 5000 compressor plugins whilst not understanding what makes a compressor really sound the way it does is pretty much my idea of hell. So often I have that impulse to tell the world: hey, you can use the sidechain input of the compressor you already have in Live, and you can feed that sidechain with a slightly delayed version of the original signal. You could also apply saturation, filtering, or even reverb or another instance of the compressor to that sidechain signal to shape its timing and response to its input. This will change the resulting compression curve, and this means you can build anything from a very normal compressor up to the most exotic effect you can imagine. And you can store those structures for later re-use. You can automate every single aspect of it. You can use ten or twenty instances of it in a song. Are you guys aware that you have more power right in front of you than the best music producers and hardware designers just ten years ago would have dreamed of? I simply do not want to read any more articles about a new compressor, be it hardware or software, unless it provides insight into the amazing possibilities we already have. I don't want to read any more sound quality discussions that deal with the last bit of a 24-bit file in a world where people listen to mp3 over mobile phones and enjoy those artefacts. The most exciting new music comes from young kids running some audio software in a bedroom, listening to the result over a shitty hi-fi and using Melodyne 'all the way wrong'. Those folks do not read gear magazines, they could not care less about yet another mastering EQ, but they create the most stunning beauty. If people talk too much about gear I usually do not expect too much good music. I am often trapped in this twilight zone between engineer and composer too, so I know what I am talking about here...
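A minimal sketch of why that delayed-sidechain trick works, in Python/numpy (an illustration of the principle, not Live's compressor): the gain computer listens to a separate key signal, and if that key is the input delayed by a few milliseconds, every transient passes before the compressor reacts.

```python
import numpy as np

def compress(x, key, thresh_db=-24.0, ratio=4.0,
             attack=0.002, release=0.08, sr=44100):
    """Feed-forward compressor whose gain reduction is computed from a
    separate sidechain 'key' signal instead of the input itself."""
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env, gain = 0.0, np.ones_like(x)
    for n, k in enumerate(np.abs(key)):
        coeff = a_att if k > env else a_rel      # envelope follower on the key
        env = coeff * env + (1.0 - coeff) * k
        over_db = 20.0 * np.log10(max(env, 1e-9)) - thresh_db
        if over_db > 0.0:
            gain[n] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return x * gain

# the trick from the text: the key is the input itself, delayed ~10 ms,
# so the compressor 'hears' each transient late and lets it punch through
# delay = int(0.010 * 44100)   # example value, tune by ear
# key = np.concatenate([np.zeros(delay), x])[:len(x)]
# y = compress(x, key)
```

Anything inserted into the key path (saturation, filtering, reverb, another compressor) reshapes the envelope and therefore the gain curve, which is the whole point of the passage above.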
Deadlines help. If I know that a project needs to be finished, I simply stop investing time in technology at some point, and instead use what's there. It's a question of discipline and experience too. I try to teach my students that if they are working on a technically challenging project they need to define a deadline for the technical side. If not, they might work till the very last moment on technical stuff and lose focus on the artistic part. In the end, the result counts, not the beautiful MAX patch which could possibly create a nice result.
I contrast totally straight 16th grooves with material that itself constitutes a rhythmical quality off that grid. On 'Silence', obviously, I often used gravity-driven processes with their inherent accelerations. Or I played notes with an arpeggiator that is not synced to song time, but where I control its rate with a slider, something Gerhard already did on the very first Monolake track, 'Cyan', in 1995. 'Silence' offers quite a few hidden connections to Monolake history. My general approach to groove is simple: I change things in time till it feels right.
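The gravity idea is easy to sketch (a hypothetical illustration of such a process, not the actual patch used on the record): a bouncing ball loses a fixed fraction of its velocity on each bounce, so the intervals between hits shrink geometrically, producing a natural accelerando that no 16th grid can express.

```python
def bounce_onsets(start=0.0, first_interval=0.5, restitution=0.8,
                  min_interval=0.01):
    """Note onset times (seconds) of a bouncing-ball rhythm: each
    interval is the previous one scaled by the restitution coefficient."""
    onsets, t, dt = [], start, first_interval
    while dt > min_interval:
        onsets.append(round(t, 4))
        t += dt
        dt *= restitution   # energy loss -> the rhythm accelerates
    return onsets

print(bounce_onsets())  # discrete hits blurring into a roll
```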
There is no general rule. I often just open Live to explore an idea, and end up doing something else because I find an interesting detail along the way. Or I have to work on a highly specific project, and have to discard a lot of the results because they do not work in the given context. Instead of throwing them away, I keep them, and they might form the basis for another composition.
Silence is such a great concept. There is no silence, except in a vacuum; it's that great mystic state which cannot exist in our world. Also, in music, the time between the musical events is as important as the events themselves. But I really leave it up to the associations of the listener to make sense of the title. And of the liner notes and the photographs and the music. I think there is a lot of room for all sorts of connections and connotations.
