Forty-five years ago, I dreamed of a machine. Not a computer. Not a device. A passage. Something that could do what I couldn’t — take what I heard inside and let it sound outside.
It never quite worked. There was always something in between: technology, time, money, people. Friction.
I worked with an imaginary band, not by choice but by necessity. Because there were only notes. Music had to be passed on in symbols — written, read, played. Black on white. A code. What you heard inside had to fit what could be written, and what could be written was never quite what you heard. Music could only live in real time. A performance.
And then there was the studio. Tape, reels, splicing, noise. Real musicians, real rooms, real time. Everything had to happen at once, or not at all. It was alive — but expensive, slow, out of reach.
And then: Digital. MIDI. DAWs. Samples. Autotune. Beat Detective. Human feel. Band-in-a-Box! Each step promised something: more control, more precision, more possibility. And each step added something else.
Distance.
Between the idea and the sound. Between intention and result. I learned to program feel, to quantize looseness, to simulate breath. Sometimes it worked. Often it didn’t. It was always close. But never quite there.
I don’t want my music to fit the tools I have. I want to hear it as I mean it.
Until my Poormansband started to play.
Now I hear my songs as I mean them. Not perfect, but alive. As if they’ve been there for years. As if I’m not making them, but finding them. A voice that leans into the line. Chords that carry. A melody that unfolds. All at once. Interpretation.
There was a girl who listened to a song of mine. She listened to the end — not a word, not a blink. Just listened.
Rare.
At the end she said: “That’s really good.”
Then she asked: “How did you do that?” Not: “Is it real?”
I said: my song, produced with algorithms.
Of course, then the discussion started. But the first reaction — that was gold.
It’s about the tasting, not the cooking.
They say: why don’t you just play it yourself? You’re a pianist. You can sing.
Yes. I can play the piano, but I’m not a pianist. I can sing, but I’m not a singer.
And that’s the point. I don’t want to hear me. I want to hear the song — carried by someone else, given a voice I don’t have. Something beyond me. What I hear is bigger than my hands, bigger than my voice. This doesn’t replace me. It lets the song become itself. I want the song to live beyond me. I need distance. Only then can I hear what it is. Only then can I know if it holds — or not.
They say: it’s not real. Just words. Notes. Statistics. A program putting things together.
Maybe.
But what I hear breathes. What I hear moves me. For the first time, I hear what I mean.
I spoke to a sound designer. He doesn’t use AI.
“It’s not real,” he said.
I thought of his work — perfect, tight, controlled, and often lifeless. No breath. No friction. None of that small thing that makes it live.
To him, AI wasn’t real. To me, it was the first time it wasn’t dead.
AI doesn’t understand me, but it follows me — fast, without ego, without resistance. What used to take weeks now takes hours. What used to cost money now costs time. What used to be out of reach now sounds like it’s always been there.
A Poormansband, with a rich sound.
Sometimes I want resistance. Surprise. Something that pushes back. There’s a knob for that.
Weirdness.
I’d call it inspiration.
Where resistance used to be an obstacle, it’s now a choice.
They say: AI makes mistakes.
AI?
It’s just like a human.
I don’t use it to outsource my work. I use it to get closer to myself. Less friction. More flow.
A Dreammachine.
Maybe it’s not a machine that simulates dreams. Maybe it’s an instrument that finally lets them be heard.
And I hope that one day someone says:
“That’s really good.
I want to sing it.”
Contact: