
Lately I’ve been spending time exploring what happens when you stop treating an AI music system like a tool—and start treating it more like a landscape.
In this case, the system is Suno. It’s very good at writing songs. Maybe too good. Left to itself, it will happily produce endless variations of guitar-driven, well-structured, emotionally coherent music. That’s not the problem.
The question is what happens when you want something it doesn’t already understand.
Instead of trying to get one perfect result, I’ve been working in batches: twenty prompts, forty outputs. It’s less like composing, and more like prospecting. You cast a wide net, then go looking for the one piece that doesn’t behave like the others.
Very quickly, it becomes clear that randomness alone isn’t enough. So the prompts get grouped—same core idea, but pushed in different directions. Tone shifts, structural changes, stylistic anchors. The goal isn’t control. It’s informed variation.
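That batching idea can be sketched as a tiny script. Suno takes free-text prompts through its interface, so this only illustrates the variation strategy, not any API; all prompt text here is made up for the example.

```python
import itertools

# One core idea, pushed in different directions: tone shifts,
# structural changes, stylistic anchors. Every combination of the
# three axes becomes one prompt in the batch.
core = "sparse modal piece for a single melodic instrument"
tones = ["melancholic, unhurried", "bright, ritualistic"]
structures = ["no verse/chorus, one slow continuous build", "short repeating canon"]
anchors = ["in the spirit of early-music drones", "folk-adjacent, fully acoustic"]

prompts = [
    f"{core}; mood: {tone}; form: {structure}; style anchor: {anchor}"
    for tone, structure, anchor in itertools.product(tones, structures, anchors)
]

# 2 x 2 x 2 = 8 prompts; at two outputs each, a batch of 16.
print(len(prompts))  # 8
```

The point of the grid is informed variation: every prompt shares the core idea, so when one output misbehaves interestingly, you can see which axis pushed it there.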
The first real obstacle shows up when you ask for something unusual. Try calling for a kora or a kamancheh, and the system will often nod politely and give you a guitar.
That’s not really a failure. It’s a clue. The system defaults to what it knows best unless you push back. So the prompts evolve. You name the instrument multiple ways. You describe how it behaves—harp-like, bowed, continuous tone. You explicitly exclude the usual suspects: no guitar, no piano. You assign hierarchy: this must be the primary instrument.
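Those tactics compose into a reusable prompt template. A minimal sketch (the helper and its phrasing are my own, not anything Suno defines):

```python
def instrument_prompt(names, behavior, exclude, base):
    """Build a prompt that names the instrument several ways,
    describes how it behaves, excludes the usual defaults,
    and makes the hierarchy explicit."""
    named = " / ".join(names)                         # name it multiple ways
    exclusions = ", ".join(f"no {x}" for x in exclude)  # exclude the defaults
    return (
        f"{base}. Primary instrument: {named} ({behavior}). "
        f"This must be the lead voice. Exclude: {exclusions}."
    )

p = instrument_prompt(
    names=["kora", "West African bridge-harp"],
    behavior="harp-like plucked strings, rippling continuous arpeggios",
    exclude=["guitar", "piano"],
    base="Meditative instrumental, slow tempo",
)
print(p)
```

Describing the instrument by function and texture, rather than by name alone, is what does most of the work here.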
Even then, you don’t always get the instrument. But you start getting closer to the idea of it.
At a certain point, it becomes more effective to stop asking for things directly and instead ask for the conditions under which those things might exist.
That’s where composers come in.
With John Cage, the prompts lean into silence, randomness, and non-musical sound. The results are unpredictable—often unusable—but occasionally something slips through that feels genuinely new.
With Harry Partch, the focus shifts to microtonality, invented instruments, and physical resonance. The system doesn’t recreate Partch, but it begins to loosen its grip on standard tuning and harmony.
With Moondog, something interesting happens. Structure returns. Rhythm, canon, repetition. The outputs become more stable, more coherent, without losing their strangeness.
And with Dead Can Dance, the balance shifts even further. The system recognizes the territory—modal, atmospheric, ancient-modern fusion—and the hit rate improves. More usable pieces. More consistency.
A few patterns start to emerge. The system responds best to strong anchors, some recognizable structure or style it can hold onto. It can be pushed off-center using constraints—what must be present, what must be excluded, and how the sound should behave over time. Rare instruments work better when described by function and texture instead of just name. Total abstraction can produce breakthroughs, but also a lot of noise.
There’s a kind of spectrum. Pure experimentation tends toward chaos. Hybrid approaches produce more interesting and usable results. Structured atmospheric styles give the highest reliability.
The biggest shift isn’t technical, though. It’s conceptual.
Instead of asking whether the system can make a specific thing, it becomes more useful to ask what set of conditions might cause something like it to emerge.
That shift—from instruction to environment—changes everything.
The system isn’t yet capable of faithfully reproducing every instrument or experimental tradition. But it doesn’t need to. What it can do is move through adjacent spaces, close enough to be recognizable, strange enough to be interesting.
And in a batch of forty outputs, that’s all you really need. One piece that doesn’t quite belong. That’s the one you keep.
See You At The Top!!!
gorby

