Tuesday 28 October 2014

Voice 10 000 notes

Just creating, rendering or playing (whatever you want to call it) 10 000 notes in 3 minutes is one thing; making them sound together in a way that makes musical sense and does not turn to mush is quite another.


In this piece there really are more than 10 000 notes in around 3 minutes. The resulting sound sometimes approaches a single, simultaneous polyphonic note or a vibrato, but they are all separate notes. To get them to work together is not easy. A 'real' note is not just a tone but a complex sequence of tones and volumes. This is what makes individual notes stand out from one another. Consider playing the same note 10 times very quickly on a harpsichord compared to just playing it once and letting it sustain.

Then there is the matter of different voices standing out from one another. A great composer like Vivaldi can help by making sure the different voices do not follow one another too closely. But consider the main body of strings in this piece. We need to make something which sounds like an ensemble of strings playing together rather than just one. And then the lead, which has to spiral about the rest, must stand out in tone, not volume. Just making it loud would be a travesty.

Then we have the bass: long slow notes with throbbing notes above, and then middle pitches built on those. How do we stop that turning into the sound of a ship's engine room? Well, here is the voicing code I used:

midi=midis[1]
doMidi(notesStart,notesEnd,notes,midi,orchestralOboe,
    vCorrect=1.0,
    pitchShift=1.000,
    qFactor=0.5,
    subBass=False,
    flatEnv=False,
    pure=False
)
postProcess()
doMidi(notesStart,notesEnd,notes,midi,upperAccent,
    vCorrect=0.3,
    pitchShift=2.000,
    qFactor=0.5,
    subBass=False,
    flatEnv=False,
    pure=False,
    pitchAdd=2.5
)
postProcess()

print "Channel 2"
midi=midis[1]
doMidi(notesStart,notesEnd,notes,midi,clarion,
    vCorrect=0.5,
    pitchShift=1.000,
    qFactor=1.0,
    subBass=False,
    flatEnv=False,
    pure=False,
    pan=0.6
)
postProcessTremolate(rate=4.5)

midi=delayMidi(midis[1],beat,64)
doMidi(notesStart,notesEnd,notes,midi,clarion,
    vCorrect=0.5,
    pitchShift=2.000,
    qFactor=1.0,
    subBass=False,
    flatEnv=False,
    pure=False,
    pan=0.4,
    pitchAdd=2.5
)
postProcessTremolate(rate=3.5)

for chan in range(3,10):
    print "Channel "+str(chan)
    midi=midis[chan]
    doMidi(notesStart,notesEnd,notes,midi,
        voice=viola,
        vCorrect=0.5,
        pitchShift=1.0,
        qFactor=0.5,
        subBass=False,
        flatEnv=True,
        pan=(float(12-chan)/9.0)
    )
    postProcess() 

print "Channel 10"
midi=midis[10]
midi=legatoMidi(midi,beat,128)
doMidi(notesStart,notesEnd,notes,midi,voice=orchestralOboe,  
    vCorrect=0.75,
    pitchShift=1.0,
    qFactor=0.5,
    subBass=False,
    flatEnv=False,
    pure=False,
    pitchAdd=0.0
)
postProcess()

print "Channel 11"
midi=midis[11]
midi=legatoMidi(midi,beat,128)
doMidi(notesStart,notesEnd,notes,midi,voice=clarion,  
    vCorrect=0.75,
    pitchShift=1.0,
    qFactor=0.5,
    subBass=False,
    flatEnv=True,
    pure=False,
    pitchAdd=3.5
)
postProcess()

print "Channel 12"
midi=midis[12]
doMidi(notesStart,notesEnd,notes,midi,voice=clarion,  
    vCorrect=1.0,
    pitchShift=1.0,
    qFactor=0.5,
    subBass=False,
    flatEnv=True,
    pure=False,
    pitchAdd=0.0
)
postProcess()

print "Channel 13"
midi=midis[13]
midi=legatoMidi(midis[13],beat,96)
doMidi(notesStart,notesEnd,notes,midi,voice=leadDiapason,  
    vCorrect=1.5,
    pitchShift=1.0,
    qFactor=1.0,
    subBass=False,
    flatEnv=True,
    pure=True,
    pitchAdd=0.0,
    pan=1.0
)
postProcess()
midi=delayMidi(midis[13],beat,128)
doMidi(notesStart,notesEnd,notes,midi,voice=trostPosaune,  
    vCorrect=2.0,
    pitchShift=0.5,
    qFactor=1.0,
    subBass=False,
    flatEnv=True,
    pure=True,
    pitchAdd=1.0,
    pan=0.0
)
postProcess()

Let us start with the bass (channel 13). This is where I started; bass notes are obscured by higher notes so it is important to work on these first, not last. Far from being the least important from a timbre standpoint, they are the most challenging because a low note has more overtones in the audible range.

So, my bottom notes are voiced using a 'trombone' sound. The synthesiser I have used is actually a complex organ synth', so the sounds are based on organ pipes. Why Trost Posaune? It is named after a famous 18th century organ builder. One of his organs makes an amazing sound in its Posaune pipes, which are rich, quite slow to start, and have a distinct inharmonic at the beginning. For the bass to have any chance of being detected with all those strings above, this sound was the obvious choice. I placed it one octave below the bass and then filled in the sound with a diapason at the true bass pitch. This is one of the sounds from the principals of an organ; here I call it leadDiapason as it is a bright and intense diapason. Doubling Posaunes would be too much, but this combination is the right mix of strength, power and purity for me.

Channels 11 and 12 pulse above the bass. To make this powerful and to give them enough colour to compete I use my 'clarion' voice. This is a trumpet-like reed sound with a bright, unstable set of high overtones. When played low down, as here, it makes a warm but strong sound which blends with the lower posaune/diapason mix but does not disappear into it.

So far, so easy. But how to make that rising tone which comes next in channel 10? It builds and builds tension, moving up in pitch and through harmonic sequences. This took a lot of experimentation. It has to really stick in the memory. I wanted a sound which is almost 'too much'; something which pushes the listener to the limit of acceptability and, by so doing, further enhances the tension which builds inexorably during the early bars of this masterpiece. The pipe sound which does all these things for me is the orchestral oboe. I used it in this render of Bach's BWV 659, where the orchestral oboe again sits on the dividing line between overwhelmingly powerful and just crass.


OK, now we are into the main ensemble. This should be easy, but we need to be careful. Channels 3 to 9 are all strings. They play largely the same notes, but just replicating one channel would be no good; instead every instrument is rendered individually. I chose the viola stop from my pipe set and placed the sounds across the stereo field. Note also that this organ synthesiser adds some pitch shifting and tonal instability to every note, so we not only get 7 different stereo positions but the sound at each is unique and differs almost imperceptibly from note to note.
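Just to show the spread the pan=(float(12-chan)/9.0) expression above gives, here is a quick check of my own (plain Python, not part of the render script):

# The pan values produced by pan=(12-chan)/9.0 for channels 3 to 9:
# seven evenly spaced positions from 1.00 down to 0.33 across the field.
for chan in range(3, 10):
    print "channel %d pan %.2f" % (chan, float(12 - chan) / 9.0)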

Then there is that violin (in the original) which saws and swoops above everything else. What to do with this? Well - I chose a Clarion again for the shimmering intensity of its sound. But this is an organ synth', not a string synth', and so simple pipes would be too brutal. Some tremulant to take the edge off helped a lot, but then it was too dull. A second Clarion, one octave up and with a different tremulant, helped. Nevertheless, this remains the least successful part of the render in my view. I would like to experiment with some alteration of the temperament in this voice to see if it can make a purer and sweeter sound.

Finally we have the high accompanying channel 1. Here I used string pipe sounds. The strings on an organ are actually flue pipes which are very narrow. The viola sound I used in the mid channels is somewhere between a reed and a flue; I honestly find it more of a reed sound. But the 'string' voice is probably closer to a true organ string. Again, this voice was octave doubled.

Ah, but I have not given up all my tricks. If you inspect the code, you will see that the voices which are octave doubled have non-zero values for 'pitchAdd'. This is a detuning effect which is the basis of the organ's celeste sound. Each doubled voicing has a different celeste offset, and each really helps to liven and soften the sound at the same time.
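To hear what that offset does in isolation, here is a minimal celeste sketch of my own (not part of the render script, and it assumes pitchAdd is a fixed offset in Hz):

# Minimal celeste illustration (assumption: pitchAdd is a fixed offset in
# Hz).  Two otherwise identical tones a few Hz apart beat at their
# difference frequency, which is what livens and softens the doubled
# voices at the same time.
def celesteSketch(length, freq, detune=2.5):
    return sf.FixSize(
        sf.Mix(
            sf.PhasedSineWave(length, freq,          0.0),
            sf.PhasedSineWave(length, freq + detune, 0.5)
        )
    )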

Sunday 26 October 2014

Making A Computer Sing

Making singing sounds via processed sampling or re-synthesis is good and effective, but I have - for years - wanted to do it from first principles.

Now I have managed, at least to some extent. The aim (at the moment, and since I started about 18 months ago) is to make a convincing sung vowel sound. This has proved very, very difficult indeed. I have made a few spooky whispers using white noise and formant filtering. The effect is used here:
The problem always came when trying to move over to the intensity of a true sung note. Formant filtering of a sawtooth, for example, just sounds like a filtered sawtooth, and not a very nice one either. I have spent a long time looking at the spectra of human speech and started to notice that the frequency bands of the overtones, even from a single voice, are much wider than for a normal musical instrument. Even allowing for vibrato, the human voice does not produce a sharp, well defined fundamental and overtones. Yes, the structure is there, but each partial is 'fatter'.
A few more experiments showed that convolving a sawtooth with white noise and then formant filtering did not do the job either. Trying to 'widen' the partials with frequency modulation also failed. Then I started to consider what a real human voice does. The voice system is not a rigid resonator like a clarinet or organ pipe. The vocal folds are 'floppy', and so there needs to be a band of frequencies around each partial. The power of human singing is all in the filtering done by the air column starting in the lungs and ending at the nose and mouth.
This thought process led me to use additive synthesis to produce a rich set of sounds around the base frequency and its partials. Then I upped the filtering a lot: not just formant filtering, but band pass filtering around the formants, i.e. not just resonant peaks but cutting between the formants.
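To make the 'rich set of sounds around each partial' idea concrete, here is a minimal sketch of my own (not the production code) which builds each partial from a small cluster of sines a few Hz either side of its nominal frequency, ready to be fed into the heavy formant filtering; the spread and the 1/n roll-off are illustrative choices.

import random

# Sketch only: one 'fat' partial built from a cluster of sines spread a
# few Hz either side of the nominal frequency, so the partial occupies a
# band rather than a single spectral line.
def fatPartial(length, freq, spread=3.0):
    return sf.FixSize(
        sf.Mix(
            sf.PhasedSineWave(length, freq - spread, random.random()),
            sf.PhasedSineWave(length, freq,          random.random()),
            sf.PhasedSineWave(length, freq + spread, random.random())
        )
    )

# A rough source tone: fat partials at the fundamental and its overtones,
# rolled off as 1/n, to be passed into the formant filter bank.
def fatTone(length, freq, partials=8):
    sig = fatPartial(length, freq)
    for n in range(2, partials + 1):
        sig = sf.Mix(sig, sf.NumericVolume(fatPartial(length, freq * n), 1.0 / n))
    return sf.FixSize(sig)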
Listening to the results of this approach was really interesting. Some notes sounded quite like they were sung; others totally failed and sounded just like string instruments. Careful inspection of the spectra of each case showed that where partials lined up with formant frequencies the result sounded like singing; where the formants lay between partials, it sounded like a string instrument. I realised that there is a huge difference between a bad singer (say, me) and a great singer. Maybe great singers are doing something subtle with the formants to get that pure sound.
This is my current point with the technique. Each vowel has 3 formants. I leave the bottom one untouched; however, the upper two I align to the nearest harmonic of the note. Synthesis done this way produces a reliable, consistent sound somewhere between a human singing and a bowed string instrument. Here is an example:
Next I want to try using notch filters to knock out specific harmonics to see if I can get rid of some of that string sound.

Here is the main filter bank for the singing effect heard above:

def findNearestOvertone(fMatch,freq):
    # Snap fMatch down to a whole-number multiple (harmonic) of freq.
    q=float(fMatch)/float(freq)
    q=int(q)
    return freq*q

def doFormant(sig,f1,f2,f3,freq,intensity=4):
    # Leave the lowest formant where it is; align the upper two to
    # harmonics of the note being sung.
    f1b=f1
    f2b=findNearestOvertone(f2,freq)
    f3b=findNearestOvertone(f3,freq)
    print "Match: ",freq,f1,f2,f3,f1b,f2b,f3b
    for x in range(1,intensity):
        # Band pass around each formant centre: keep the formant bands
        # and cut the spectrum between them.
        s1=sf.RBJBandPass(+sig,f1b,0.25)
        s2=sf.RBJBandPass(+sig,f2b,0.5)
        s3=sf.RBJBandPass(+sig,f3b,0.5)
        sig=sf.FixSize(
            sf.Mix(
                sf.Pcnt10(sig),
                sf.Pcnt50(sf.FixSize(s1)),
                sf.Pcnt20(sf.FixSize(s2)),
                sf.Pcnt30(sf.FixSize(s3))
            )
        )
        # Then conventional formant filtering: resonant peaks at the same
        # three centre frequencies.
        s1=sf.RBJPeaking(+sig,f1b,1.0,5)
        s2=sf.RBJPeaking(+sig,f2b,2.0,5)
        s3=sf.RBJPeaking( sig,f3b,2.0,5)
        sig=sf.FixSize(
            sf.Mix(
                sf.Pcnt50(sf.FixSize(s1)),
                sf.Pcnt20(sf.FixSize(s2)),
                sf.Pcnt30(sf.FixSize(s3))
            )
        )

    x=polish(sig,freq)
    x=sf.FixSize(x)
    sf.Check(x)
    return x
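To see what the snapping actually does, here is a worked example of my own with round-number inputs:

# Worked example (my numbers, not from a real vowel table): a note at
# 220 Hz with a second formant nominally at 2600 Hz.  int(2600/220) is 11,
# so the formant is moved to 220*11 = 2420 Hz.  Because int() truncates,
# the code snaps to the harmonic at or below the nominal centre rather
# than the strictly nearest one.
print findNearestOvertone(2600.0, 220.0)   # prints 2420.0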

The big steps forward have been the frequency matching and the band pass filtering. I hope that two notch filters centred between s1 and s2 and between s2 and s3 will help; however, I expect it will be more complex than that. It always is with singing!
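If I do try the notch idea, my first guess (a back-of-envelope sketch only, not tested) would be to centre the notches between the band pass centres, spacing them on a log-frequency axis; the formant numbers here are purely illustrative:

import math

# Sketch only: candidate centre frequencies for notch filters placed
# between the band pass centres f1b/f2b and f2b/f3b.  The geometric mean
# keeps the spacing even on a log-frequency axis.
def notchCentres(f1b, f2b, f3b):
    return math.sqrt(f1b * f2b), math.sqrt(f2b * f3b)

print notchCentres(700.0, 1200.0, 2600.0)   # roughly (917 Hz, 1766 Hz)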

Sunday 19 October 2014

Not Just A Saw Tooth - Generating Complex, Realistic Sounds

The waveform

The common literature on synthesis talks of sawtooth, triangle and square waves. Real sounds are not like this, so why do we even discuss them?

Above is the wave form of the first note from my synthesis of BWV 659:


It does not look much like a sawtooth, does it? Indeed, that 'presence of God' massive G1 is nothing like any geometrical waveform you will find in any text book or on the front panel of any regular synthesiser.

Spectrum Visualisation

Spectrum Analysis
Interesting sounds have harmonics (partials etc.) which move in phase constantly. The pitch of the notes is unstable and shifting, and the pitch and harmonic content change throughout the note. On top of all of this, they have noise in different colours.

Below are some parts of the code used to create that G1. The final envelope and filter stuff is missing; I am not putting it here to be definitive but to point some things out. Like, for example, the additive oscillator bank. Note that to generate just the basic tone I am using 19 'oscillators'. They have random phases each time they are run, and they form a harmonic progression with even harmonics being 180 degrees out of phase with odd ones.

     p1=random.random()
     p2=1.0-p1
...
       sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p1),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*2.0,p1),2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p2),2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*4.0,p1),1.8),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p2),1.6),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*6.0,p1),1.4),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p2),1.2),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*8.0,p1),1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*9.0,p2),0.8),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*10.0,p1),0.6),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*11.0,p2),0.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*12.0,p1),0.4),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*13.0,p2),0.3),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*14.0,p1),0.2),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*15.0,p2),0.1),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*16.0,p1),0.05),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*16.0,p2),0.05),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*17.0,p1),0.05),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*18.0,p2),0.05),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*19.0,p1),0.01)
        )
    sig=sf.Multiply(
        sf.NumericShape((0,0),(32,1),(length,1)),
        sig
    )
...
    b=posaunePulse(length,freq)
    b=sf.MixAt(
        [b,12],
        [
        sf.NumericShape(
            (0, -2.0),
            (4,  2.0),
            (12,-1.00),
            (20, 1.00),
            (28,-1.00),
            (length,0)
        ),0]
    )
    b=sf.RBJPeaking(b,freq*2,2,2)
    b=polish(b,freq)
    sig=sf.Mix(
        b
        ,
        sf.Pcnt20(sf.Multiply(+b,sf.WhiteNoise(length))),          
        sf.Multiply(
            cleanNoise(length,freq*0.5),
            sf.SimpleShape((0,-60),(64,-14),(128,-28),(length,-24))
        )
    )

    return pitchMove(sig)
...
def pitchMove(sig):
    l=sf.Length(+sig)
    if l>1024:
        move=sf.NumericShape(
            (0,0.995+random.random()*0.01),
            (l,0.995+random.random()*0.01)
        )
    elif l>512:
        move=sf.NumericShape(
            (0,0.9975+random.random()*0.005),
            (l,0.9975+random.random()*0.005)
        )
    else:
        return sig

    return sf.Clean(sf.Resample(move,sig))

Next I mix in a 'clunk' sound to model the valve of the pipe opening. This is done using an envelope and a low pass filter. I throw in a resonant all pass filter and a filter to remove aliasing. The signal is ring modulated (multiplied) by filtered white noise to mimic the sound of air rushing over the reed, and some enveloped white noise is added as well.
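The clunk itself is not in the excerpt above, so here is a minimal sketch of my own of the sort of thing described (using only sf calls that appear elsewhere in these posts, with made-up envelope times and mix level):

# Sketch of a pipe-valve 'clunk' (not the production code): a short burst
# of low-passed white noise with a very fast attack and decay, mixed in
# quietly at the start of the note.
def addClunk(noteSig, length, freq):
    thump = sf.Multiply(
        sf.BesselLowPass(sf.WhiteNoise(length), freq * 2.0, 2),
        sf.NumericShape((0, 0), (8, 1), (48, 0), (length, 0))
    )
    return sf.Mix(noteSig, sf.Pcnt20(sf.FixSize(thump)))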

Finally, the pitch of the note is made to move slightly throughout its sounding. Actually, this is (as I said) not the whole story, as there is another section which applies filtering and a volume envelope; the envelope is also used to very slightly frequency modulate the sound to mimic the way real pitch changes with volume. The envelope applied here is not only dependent on the length of the note but also on the length and proximity of the notes either side of it.

The sound you hear is actually three pipes, each very, very slightly out of tune with the others, sounding across three octaves (G1, G0 and G-1). But that is not all, as the three are then placed at slightly different times relative to one another and with respect to the left and right channels.
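As a sketch of that assembly (mine, with a hypothetical pipeNote() standing in for the real voice and with illustrative detune and delay amounts; the left/right placement is done separately), the three-octave stack looks something like this:

import random

# Sketch of the three-pipe stack: three octaves (freq, freq/2, freq/4),
# each fractionally detuned and entering a few milliseconds after the
# previous one.  pipeNote() is a hypothetical stand-in for the real voice.
def tripleOctave(length, freq):
    return sf.MixAt(
        [pipeNote(length, freq *        (1.0 + random.random() * 0.002)),  0],
        [pipeNote(length, freq * 0.5  * (1.0 + random.random() * 0.002)),  7],
        [pipeNote(length, freq * 0.25 * (1.0 + random.random() * 0.002)), 13]
    )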

Then some harmonic excitement and filtering and two different impulse response reverberations are applied.

All that is what is required to make that one, single note.

A far cry from 'what does a saw tooth sound like'.

Saturday 11 October 2014

Bringing Flute Pipes To Sonic Field

For this piece I really wanted a very sweet sounding flute pipe:


I found the sound I wanted in the strangest place. Sawtooth waves sound harsh; they can be tamed, but they are never sweet in the way I imagined the flutes to be. Nevertheless, by adding an extra parameter to a simple additive sawtooth generator, flutes and strings suddenly appeared!

In a sawtooth each overtone (harmonic) is scaled down as 1/n, where n is the frequency ratio of the overtone. So, the first overtone at 2 times the fundamental has 1/2 the volume; the second, at 3 times the fundamental, has 1/3 the volume. Now raise that ratio to a power, say 0.5 or 2.0 and so forth. Low powers increase the harmonic content and a string sound appears; high powers decrease it and bright or sweet (even lower harmonic content) flutes appear. A quick feel for the numbers is sketched below, followed by the code itself:
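This little table is my own arithmetic, not from the render scripts: it prints the relative level (1/n)**z of the overtone at n times the fundamental for the exponents used by the voices below (1.0 gives the plain saw, 0.5 the viola, 3.5 the bright flute and 8.0 the sweet flute).

# Relative overtone levels (1/n)**z for n = 2..8 and the exponents used
# by niceSaw (1.0), violaBase (0.5), brightFluteBase (3.5) and
# sweetFluteBase (8.0).  At z=8 even the first overtone is ~256 times
# quieter than the fundamental, which is why that voice sounds so pure.
for z in [0.5, 1.0, 3.5, 8.0]:
    levels = " ".join("%.4f" % ((1.0 / n) ** z) for n in range(2, 9))
    print "z=%.1f : %s" % (z, levels)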

def makeSimpleBase(length,frequency,z):
    p=random.random()
    if frequency>4000:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*2.0,p),(1.0/2.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),(1.0/3.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*4.0,p),(1.0/4.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),(1.0/5.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*6.0,p),(1.0/6.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),(1.0/7.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*8.0,p),(1.0/8.0)**z)
            )
    else:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*2.0,p),(1.0/2.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),(1.0/3.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*4.0,p),(1.0/4.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),(1.0/5.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*6.0,p),(1.0/6.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),(1.0/7.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*8.0,p),(1.0/8.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*9.0,p),(1.0/9.0)**z),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*10.0,p),(1.0/10.0)**z)
        )
    return sf.FixSize(sig)

def niceSaw(length,frequency):
    return makeSimpleBase(length,frequency,1.0)

def violaBase(length,frequency):
    return makeSimpleBase(length,frequency,0.5)

def sweetFluteBase(length,frequency):
    return makeSimpleBase(length,frequency,8.0)
    
def brightFluteBase(length,frequency):
    return makeSimpleBase(length,frequency,3.5)

def celestFlute(length,freq):
    sig=sf.Mix(
        sf.Pcnt50(sweetFluteBase(length,freq)),
        sf.Pcnt50(sweetFluteBase(length,freq+1.0)),
        sf.Multiply(
            cleanNoise(length,freq*0.5),
            sf.SimpleShape((0,-60),(64,-28),(128,-40),(length,-40))
        )
    )
    return pitchMove(sig)

def sweetFlute(length,freq):
    sig=sf.Mix(
        sweetFluteBase(length,freq),
        sf.Multiply(
            cleanNoise(length,freq*0.5),
            sf.SimpleShape((0,-60),(64,-30),(128,-40),(length,-40))
        )
    )
    sig=sf.FixSize(polish(sig,freq))
    return pitchMove(sig)

def brightFlute(length,freq):
    sig=sf.Mix(
        brightFluteBase(length,freq),
        sf.Multiply(
            cleanNoise(length,freq*0.5),
            sf.SimpleShape((0,-60),(64,-28),(128,-40),(length,-40))
        )
    )
    sig=sf.FixSize(polish(sig,freq))

    return pitchMove(sig)


Thursday 9 October 2014

Bombard Pipe

A bombard should make you sit up!

A bombard is a BIG sound. Not quite an ophicleide (how the heck does one pronounce that?) but it should be at least a little intimidating. I am trying to make it happen in Sonic Field. My sounds seem to keep coming out too nice; I need some more wallop. But here is where I am so far.

def bombardPulse(length,frequency):
    p=random.random()
    if frequency>4000:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*2.0,p),2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),1.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*4.0,p),1.3),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),1.2),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*6.0,p),1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),0.8),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*8.0,p),0.5)
            )
    else:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*2.0,p),2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),1.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*4.0,p),1.3),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),1.2),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*6.0,p),1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),0.8),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*8.0,p),0.6),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*9.0,p),0.4),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*10.0,p),0.2)
        )

    return sf.FixSize(sig)

def bombard(length,freq):
    b=sf.MixAt(
            [sf.Pcnt33(bombardPulse(length,freq)),0],
            [sf.Pcnt33(bombardPulse(length,freq)),10],
            [sf.Pcnt33(bombardPulse(length,freq)),20]
    )
    sig=sf.Mix(
        b
        ,
        sf.Pcnt10(sf.Multiply(+b,sf.WhiteNoise(length))),          
        sf.Multiply(
            cleanNoise(length,freq*0.5),
            sf.SimpleShape((0,-60),(64,-20),(128,-28),(length,-24))
        )
    )

    return pitchMove(sig)

So the first thing which is very different from anything I have tried before is the initial waveform. Rather than the more usual decreasing harmonics (each overtone being weaker than the previous and weaker than the fundamental), here I use additive synthesis to create overtones which have greater magnitude than the fundamental.

The next trick to make a bigger sound is to use three copies of the sound, each starting 10 milliseconds after the previous, and mix them. This gives an 'entrance' to the note.

Finally, rather than just mixing some filtered noise with the waveform, I also add some filtered noise multiplied by it. This makes a noise sound modulated by the waveform; if one imagines air rushing over the reed of a massive organ pipe, the idea becomes more obvious.

Did it work? Well, it is getting there, but I believe I can do even better. The bombard makes some of the bass in this piece (the rest is an even richer reed sound which I will discuss another time):

BWV 645 By JS Bach


Saturday 4 October 2014

Python Plays Bach

I showed this Youtube video to a respected colleague and massive Python fan a few days ago. I never expected the reaction I got.

Bach Passacaglia and Fugue in C Minor

He thought my efforts in Python produced the spectrograms which I used for the visual part of the video. Whilst I would love to take credit, they are actually produced via ffmpeg.


When I explained to him that Python made the music, he did not believe me. He said that I was 'having him on' and generally taking the piss. When I went on to explain that the sounds are not samples but are generated from pure mathematics, the only way to get him to believe me was to show him some Python...

On github:

  1. https://github.com/nerds-central/SonicFieldRepo/blob/master/SonicField/scripts/python/Bach-Large-Organ.sy
  2. https://github.com/nerds-central/SonicFieldRepo/blob/master/SonicField/scripts/python/reverberate.sy

The programming model is that Python controls the sounds and Java does the mathematical heavy lifting. For example, here is the FFT code from Sonic Field:
  1. https://github.com/nerds-central/SonicFieldRepo/blob/master/SonicField/src/com/nerdscentral/audio/pitch/algorithm/FFTbase.java
  2. https://github.com/nerds-central/SonicFieldRepo/blob/master/SonicField/src/com/nerdscentral/audio/pitch/algorithm/CacheableFFT.java



He believed me when he saw the code. It has been an amazing journey to get to the point where Sonic Field has started to sound good enough not to be 'synthy'. I am very proud to have proven it can be done, that samples are not required to make beautiful sounds. Clearly, some people will not find the sounds my code has made pleasing, and some will be shocked at how synthetic they sound. But at least now we can say that Python, at least for some people, makes music.

Thursday 2 October 2014

Stopped Reed: Warm To Sparkling

I don't actually know if anyone produces stopped reed pipes. In truth, it is not really necessary, as a cylindrical bore reed would make much the same sound. Anyhow - it was a sound I had in my head, so here is how I tried to make it:


def stoppedPulse(length,frequency):
    p=random.random()
    if frequency>3000:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),1.0/1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),1.0/1.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),1.0/2.0)
            )
    else:
        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),1.0/1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),1.0/1.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),1.0/2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*9.0,p),1.0/4.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*11.0,p),1.0/8.0)
        )
    return sf.FixSize(sig)

def stoppedReed(length,freq):
    s1=stoppedPulse(length,freq*1.000)
    s1=sf.ButterworthHighPass(s1,freq*0.66,6)
    s1=sf.Clean(s1)
    
    sig=sf.Mix(
        s1,
        sf.Multiply(
            cleanNoise(length,freq*2.0),
            sf.SimpleShape((0,-60),(64,-16),(128,-20),(length,-20))
        )
    )

    sig=sf.FixSize(sig)
    sig=sf.Mix(
        sf.Pcnt10(sf.Clean(sf.Saturate(+sig))),
        sig
    )
    sig=sf.ButterworthHighPass(sig,freq*0.66,6)
    sig=sf.Clean(sig)

    return sf.FixSize(sf.Clean(sig))

The voice is simple enough: it takes a clean sound from additive synthesis, adds some enveloped, filtered white noise and makes sure no nasty aliasing artefacts sneak in. To brighten it up a little, I have added a touch of sf.Saturate. This tends to add mainly odd harmonics anyhow, so it is a good way to add some colour to what was otherwise a rather sterile sound.
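Incidentally, here is a quick check of my own of why a saturator favours odd harmonics, independent of whatever curve sf.Saturate actually uses: any odd-symmetric waveshaper expands in odd powers of its input, and already the cubic term maps a sine onto itself plus its third harmonic.

import math

# Numeric check of the identity sin(x)**3 == (3*sin(x) - sin(3*x)) / 4.
# A cubic (odd-symmetric) waveshaping term therefore only adds energy at
# the third harmonic of a sine input; no even harmonics appear.
for x in [0.1, 0.7, 1.3, 2.9]:
    lhs = math.sin(x) ** 3
    rhs = (3.0 * math.sin(x) - math.sin(3.0 * x)) / 4.0
    print "x=%.1f  sin^3=%.5f  identity=%.5f" % (x, lhs, rhs)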

Normally, we might try to make the 'odd harmonics only' sound of a stopped pipe or a cylindrical reed (think clarinet) using square waves. I did not do this. The waves I used have much larger contributions from the early harmonics and stop short to ensure no aliasing happens.

        sig=sf.Mix(
            sf.PhasedSineWave(length,frequency,p),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*3.0,p),1.0/1.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*5.0,p),1.0/1.5),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*7.0,p),1.0/2.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*9.0,p),1.0/4.0),
            sf.NumericVolume(sf.PhasedSineWave(length,frequency*11.0,p),1.0/8.0)
        )

Here we can see that I am adding only odd harmonics, but that the third is actually as great in magnitude as the fundamental and that the harmonics die away slowly. This gives a more reedy sound; something brighter and potentially more harsh. However, in the later stages of processing, filtering takes some of these higher harmonics back down (note that for this sound, the subBass granular processing is turned off):

        if pitch<256:
            if subBass:
                if pitch < 128:
                    sig=sf.Mix(
                        granularReverb(+sig,ratio=0.501 ,delay=256,density=32,length=256,stretch=1,vol=0.20),
                        granularReverb(+sig,ratio=0.2495,delay=256,density=32,length=256,stretch=1,vol=0.10),
                        sig
                    )
                elif pitch < 192:
                    sig=sf.Mix(
                        granularReverb(+sig,ratio=0.501,delay=256,density=32,length=256,stretch=1,vol=0.25),
                        sig
                    )
                else:
                    sig=sf.Mix(
                        granularReverb(+sig,ratio=0.501,delay=256,density=32,length=256,stretch=1,vol=0.15),
                        sig
                    )
            sig=sf.BesselLowPass(sig,pitch*8.0,2)
        if pitch<392:
            sig=sf.BesselLowPass(sig,pitch*6.0,2)
        elif pitch<512:
            sig=sf.Mix(
                sf.BesselLowPass(+sig,pitch*6.0, 2),
                sf.BesselLowPass( sig,pitch*3.0, 2)
            )                
        elif pitch<640:
            sig=sf.BesselLowPass(sig,pitch*3.5, 2)
        elif pitch<1280:
            sig=sf.Mix(
                sf.BesselLowPass(+sig,pitch*3.5, 2),
                sf.BesselLowPass( sig,pitch*5.0, 2)
            )                
        else:
            sig=sf.Mix(
                sf.BesselLowPass(+sig,pitch*5, 2),
                sf.BesselLowPass( sig,5000,    1)
            )


That combination gives rich, warm low tones and bright, sparkling high notes. I am really taken with the effect, even if I do say so myself. In this video the upper two voices are both generated using this voice: