I'll be honest: today I tried and failed to synthesise Well Tempered Clavier part 2. But, as a consolation prize, I got a lovely deep 'cello'.
Now, instruments of the violin family cannot be synthesised in any realistic way; they are just too complex (if someone does it, they are probably using samples directly; convolution with a sample also works). However, one can (I believe) capture their essence using first principles synthesis. The key ingredients are:
Body resonance - if you think you have too much - you probably don't.
Pitch variation due to bowing (as the string is stretched by the bow, its pitch varies).
A shocking rattle at the top end which somehow works out because it is unstable.
A shimmering stereo field caused by the sound being sent in all directions by the body of the instrument.
A strong variation of the notes as they move in and out of resonance with the body.
A very carefully tailored envelope.
Vibrato and tremolo which are not just slapped on but move and are subtle.
This is the Sonic Field blog, so which of these came from the Waldorf Pulse 2 and which from Sonic Field?
2 and 3 were definitely aided by Sonic Field. The reverb had a chunk of excitation to it, which built on top of the chorusing. This, along with the alive nature of the synth (being analogue), helped create this piece. It is not a cello, but people tell me it sounds nice :)
Right now I am completely loving working with the Waldorf - the harpsichord sound it can make is beyond belief.
A regular old fashioned ring modulated synth-harpsichord is all very well; but the pulse width modulated effect which is possible with the Waldorf is something else again. The raw sound from the little synth is a bit rough and a bit electronic, but when passed through a touch of harmonic excitation and reverb in Sonic Field the result is stunning. Now, I might be blowing my own trumpet, but I can honestly say everyone who has listened to this live has praised the sound:
To be completely honest, I was a bit lucky. I just tried adding a bit of spring reverberation (using a spring impulse response) and I think that was the final trick to make the sound come to life. The reverb you can hear is a mixture of a few room/hall impulse response reverbs mixed with a bit of spring.
But the true secret is the way the signal path of a true analogue synth works. The sounds are all coupled and constantly changing. An electronic audio circuit 'wants' to make audio because the values of the components are set up that way. The circuits in an analogue synth then interact with one another in musical ways. This is distinctly different from pure digital synthesis, where nice sounding audio is something one has to force from the algorithms. I am enjoying the mix, where the analogue makes amazing feedstock for digital post processing.
By introducing a new operator into Sonic Field, it has been possible to produce a very stable chorus effect. One could consider chorus to be an FM effect. However, the numeric stability of FM is very poor for sampled sounds (or so I have found). What I have come up with instead is a time shift. Rather than altering the sample rate based on a signal, I shift the point from which each sample is read. Thus, minute errors do not accumulate as they do in FM.
The pitch shift then becomes the first differential of the time shift. In chorusing I make the time shift a sine wave, so the pitch shift is also a sine wave (or a cosine, if you want to be pedantic).
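To see why this is stable, consider the read position p(n) = n + d(n) for output sample n: the instantaneous pitch ratio is its first differential, 1 + d'(n), so a sine time shift gives a cosine pitch wobble. Crucially, each read position is computed afresh from n rather than accumulated, which is why rounding errors cannot build up. Here is a minimal sketch of the idea in plain Python (the 96000 sample rate, the milliseconds convention and the linear interpolation are my illustrative assumptions; the real operator below uses cubic interpolation):

import math

SAMPLE_RATE    = 96000                  # samples per second (assumed)
SAMPLE_RATE_MS = SAMPLE_RATE / 1000.0   # samples per millisecond

def timeShift(inSig, shiftMs):
    # Read inSig at fractionally shifted positions (linear interpolation).
    out = []
    for n in range(len(shiftMs)):
        pos  = n + SAMPLE_RATE_MS * shiftMs[n]   # shift given in milliseconds
        i    = int(math.floor(pos))
        frac = pos - i
        a = inSig[max(0, min(i,     len(inSig) - 1))]
        b = inSig[max(0, min(i + 1, len(inSig) - 1))]
        out.append(a + (b - a) * frac)           # position never accumulates
    return out

# A 1 ms deep, 2 Hz sine time shift gives a pitch ratio of 1 + d'(n),
# i.e. a gentle 2 Hz cosine wobble around unity - classic chorus.
shift   = [math.sin(2.0 * math.pi * 2.0 * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]
wobbled = timeShift([math.sin(0.01 * n) for n in range(SAMPLE_RATE)], shift)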
Here is the new operator:
/* For Copyright and License see LICENSE.txt and COPYING.txt in the root directory */
package com.nerdscentral.audio.time;
import java.util.List;
import com.nerdscentral.audio.SFConstants;
import com.nerdscentral.audio.SFSignal;
import com.nerdscentral.sython.Caster;
import com.nerdscentral.sython.SFPL_Context;
import com.nerdscentral.sython.SFPL_Operator;
import com.nerdscentral.sython.SFPL_RuntimeException;
public class SF_TimeShift implements SFPL_Operator
{
    private static final long serialVersionUID = 1L;
@Override
public Object Interpret(final Object input, final SFPL_Context context) throws SFPL_RuntimeException
{
List<Object> lin = Caster.makeBunch(input);
try (SFSignal in = Caster.makeSFSignal(lin.get(0)); SFSignal shift = Caster.makeSFSignal(lin.get(1)))
{
try (SFSignal y = in.replicateEmpty())
{
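                // Read each output sample from a position shifted by the control
                // signal (milliseconds, converted to samples via SAMPLE_RATE_MS),
                // using cubic interpolation for the fractional position.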
int length = y.getLength();
if (shift.getLength() < length) length = shift.getLength();
for (int index = 0; index < length; ++index)
{
double pos = index + SFConstants.SAMPLE_RATE_MS * shift.getSample(index);
y.setSample(index, in.getSampleCubic(pos));
}
                length = y.getLength();
                // Past the end of the shift signal, copy the input unchanged.
                for (int index = shift.getLength(); index < length; ++index)
                {
                    y.setSample(index, in.getSample(index));
                }
return Caster.prep4Ret(y);
}
}
}
@Override
public String Word()
{
return Messages.getString("SF_TimeShift.0"); //$NON-NLS-1$
}
}
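Most of the work above happens in getSampleCubic, which reads the input at a fractional position. I have not reproduced Sonic Field's exact interpolator here, but as a sketch, a standard 4-point (Catmull-Rom) cubic read looks like this (the edge clamping is my own assumption):

import math

def sampleCubic(sig, pos):
    # Catmull-Rom cubic read of sig at fractional position pos (a sketch;
    # Sonic Field's getSampleCubic may handle the edges differently).
    i = int(math.floor(pos))
    t = pos - i
    def at(j):                             # clamp reads at the signal edges
        return sig[max(0, min(j, len(sig) - 1))]
    p0, p1, p2, p3 = at(i - 1), at(i), at(i + 1), at(i + 2)
    return p1 + 0.5 * t * (p2 - p0
                 + t * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3
                 + t * (3.0 * (p1 - p2) + p3 - p0)))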
Most of my work in Sonic Field has used the built in synth abilities of the program. But there is no reason it should not drive an external synth and post process the signal instead.
I recently bought a Pulse 2 and it is quite amazing. However, it is also a mono synth. I am completely spoilt generating sounds with Sonic Field, as it has no upper limit on the number of notes which can be generated at once. Whilst the mono synth sound has its place, it is also rather limited. So, I needed a solution to give multi-tracking.
The existing midi implementation in SF was just pathetic. I completely ripped it out and pretty much started over. The only piece remaining is the code which maps midi on/off messages into notes and disambiguates overlapping messages on the same track/channel/key combination.
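The idea of that remaining piece is simple enough: note-on and note-off messages are paired up per track/channel/key, with a stack for each combination so that overlapping notes on the same key still pair off correctly. Roughly like this sketch (the tuple layout and helper name are illustrative, not the actual SF internals, though the resulting dictionary matches the event layout used in the patch below):

def pairNotes(messages):
    # messages: (tick, track, channel, key, isOn, event) tuples in tick order.
    pending = {}     # (track, channel, key) -> stack of unmatched note-ons
    notes   = []
    for tick, track, channel, key, isOn, event in messages:
        slot = (track, channel, key)
        if isOn:
            pending.setdefault(slot, []).append(event)
        elif pending.get(slot):
            onEvent = pending[slot].pop()          # most recent note-on wins
            notes.append({'command': 'note', 'key': key,
                          'event': onEvent, 'event-off': event})
    return notes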
I should go into great detail about how it all works, but I am exhausted after a long day of work and an evening making music, so here is a dump of the patch I used to drive the synthesiser over midi. Yes - I drove the synth from Sonic Field directly!
from com.nerdscentral.audio.midi import MidiFunctions
class Midi(MidiFunctions):
metaTypes={
0x00:'SequenceNumber',
0x01:'text',
0x02:'copyright',
0x03:'track_name',
0x04:'instrument',
0x05:'lyrics',
0x06:'marker',
0x07:'cue',
0x20:'channel',
0x2F:'end',
0x51:'tempo',
0x54:'smpte_offset',
0x58:'time_signature',
0x59:'key_signature',
0x7f:'sequencer_specific'
}
timeTypes={
0.0: 'PPQ',
24.0: 'SMPTE_24',
25.0: 'SMPTE_25',
29.97:'SMPTE_30DROP',
30.0: 'SMPTE_30'
}
@staticmethod
def timeType(sequence):
return Midi.timeTypes[sequence.getDivisionType()]
@staticmethod
def isNote(event):
return event['command']=='note'
@staticmethod
def isMeta(event):
return event['command']=='meta'
@staticmethod
def isCommand(event):
return event['command']=='command'
@staticmethod
def isTempo(event):
Midi.checkMeta(event)
return event['type']==0x51
@staticmethod
def isTimeSignature(event):
Midi.checkMeta(event)
return event['type']==0x58
@staticmethod
def metaType(event):
t=event['type']
if t in Midi.metaTypes:
return Midi.metaTypes[t]
return 'unknown'
@staticmethod
def checkMeta(event):
if not event['command']=='meta':
raise Exception('Not meta message')
@staticmethod
def tempo(event):
Midi.checkMeta(event)
if event['type']!=0x51:
raise Exception('not tempo message')
data=event['data']
if len(data)==0:
raise Exception('no data')
t=0
for i in range(0,len(data)):
if not i==0:
t <<= 8
t+=data[i]
return t
@staticmethod
def timeSignature(event):
Midi.checkMeta(event)
if event['type']!=0x58:
            raise Exception('not time signature message')
data=event['data']
if not len(data)==4:
raise Exception('wrong data')
return {
'numerator' :data[0],
'denominator':2**data[1],
'metronome' :data[2],
'32nds/beat' :data[3]
}
@staticmethod
def tickLength(denominator,microPerQuater,sequence):
        # if denom = 4 then 1 beat per quarter note
        # if denom = 8 then 2 beats per quarter note
        # therefore beats per quarter note = denom/4
beatsPerQuaterNote = denominator/4.0
ticksPerBeat = float(sequence.getResolution())
microsPerBeat = float(microPerQuater)/beatsPerQuaterNote
return microsPerBeat/float(ticksPerBeat)
sequence=Midi.readMidiFile("temp/passac.mid")
print 'Sequence Time Type:', Midi.timeType(sequence)
print 'Sequence Resolution:', sequence.getResolution()
print 'Initial tick length:',Midi.tickLength(4,500000,sequence)
otl=Midi.tickLength(4,500000,sequence)
midis=Midi.processSequence(sequence)
sout=Midi.blankSequence(sequence)
# Create the timing information track
tout=sout.createTrack()
for event in midis[0]:
if Midi.isMeta(event):
if Midi.isTempo(event) or Midi.isTimeSignature(event):
tout.add(event['event'])
tout1=sout.createTrack()
tout2=sout.createTrack()
midi1=[]
midi2=[]
flip=True
minKey=999
maxKey=0
# Use 499 for 1 Done
# Use 496 for 2
# Use 497 for 3
# Use 497 for 4
# Use 001 for 5 Done
# Use 002 for 6
midiNo=6
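# First pass: find the key range so the notes can be panned by pitch.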
for event in midis[midiNo]:
if Midi.isNote(event):
ev1=event['event']
ev2=event['event-off']
if event['key']>maxKey:
maxKey=event['key']
if event['key']<minKey:
minKey=event['key']
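# Second pass: shift everything 600 ticks, pan each note by its key,
# and alternate the notes between the two tracks.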
for event in midis[midiNo]:
if Midi.isNote(event):
ev1=event['event']
ev2=event['event-off']
ev1.setTick(ev1.getTick()+600)
ev2.setTick(ev2.getTick()+600)
key=event['key']
pan=127.0*float(key-minKey)/float(maxKey-minKey)
pan=31+pan/2
pan=int(pan)
pan=Midi.makePan(1,ev1.getTick()-1,pan)
if flip:
midi1.append(pan)
midi1.append(event['event'])
midi1.append(event['event-off'])
flip=False
else:
midi2.append(pan)
midi2.append(event['event'])
midi2.append(event['event-off'])
flip=True
Midi.addPan(tout1,1,100,64)
Midi.addPan(tout2,2,100,64)
Midi.addNote(tout1,1,100,120,50,100)
Midi.addNote(tout2,2,100,120,50,100)
midi1=sorted(midi1,key=lambda event: event.getTick())
midi2=sorted(midi2,key=lambda event: event.getTick())
for event in midi1:
Midi.setChannel(event,1)
tout1.add(event)
#for event in midi2:
# Midi.setChannel(event,2)
# tout2.add(event)
Midi.writeMidiFile("temp/temp.midi",sout)
for dev in Midi.getMidiDeviceNames():
print dev
player=Midi.getPlayer(3,2)
player.manual(sout)
player.waitFor()
And here is the post-processing patch. I took each separately recorded voice from the synth and mixed them together in Audacity, using the note I injected at a known point at the start of each to line them up. Once the mix sounded OK, I post processed with this patch:
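# reverberate() granulates the signal and convolves each grain with the
# impulse response in the frequency domain; reverbInner() renormalises each
# convolved grain so the reverb does not pump the level.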
def reverbInner(signal,convol,grainLength):
def rii():
mag=sf.Magnitude(+signal)
if mag>0:
signal_=sf.Concatenate(signal,sf.Silence(grainLength))
signal_=sf.FrequencyDomain(signal_)
signal_=sf.CrossMultiply(convol,signal_)
signal_=sf.TimeDomain(signal_)
newMag=sf.Magnitude(+signal_)
if newMag>0:
signal_=sf.NumericVolume(signal_,mag/newMag)
# tail out clicks due to amplitude at end of signal
return sf.Realise(signal_)
else:
return sf.Silence(sf.Length(signal_))
else:
-convol
return signal
return sf_do(rii)
def reverberate(signal,convol):
def revi():
grainLength = sf.Length(+convol)
convol_=sf.FrequencyDomain(sf.Concatenate(convol,sf.Silence(grainLength)))
signal_=sf.Concatenate(signal,sf.Silence(grainLength))
out=[]
for grain in sf.Granulate(signal_,grainLength):
(signal_i,at)=grain
out.append((reverbInner(signal_i,+convol_,grainLength),at))
-convol_
return sf.Clean(sf.FixSize(sf.MixAt(out)))
return sf_do(revi)
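# excite() adds harmonic excitation: take the band above 500Hz, apply a
# power-law shaper, high-pass again at 1000Hz, then mix back in at 'mix'
# whilst preserving the original magnitude.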
def excite(sig_,mix,power):
def exciteInner():
sig=sig_
m=sf.Magnitude(+sig)
sigh=sf.BesselHighPass(+sig,500,2)
mh=sf.Magnitude(+sigh)
sigh=sf.Power(sigh,power)
sigh=sf.Clean(sigh)
sigh=sf.BesselHighPass(sigh,1000,2)
nh=sf.Magnitude(+sigh)
sigh=sf.NumericVolume(sigh,mh/nh)
sig=sf.Mix(sf.NumericVolume(sigh,mix),sf.NumericVolume(sig,1.0-mix))
n=sf.Magnitude(+sig)
return sf.Realise(sf.NumericVolume(sig,m/n))
return sf_do(exciteInner)
####################################
#
# Load the file and clean
#
####################################
(left,right)=sf.ReadFile("temp/pulse-passa-2.wav")
left =sf.Multiply(sf.NumericShape((0,0),(64,1),(sf.Length(+left ),1)),left )
right=sf.Multiply(sf.NumericShape((0,0),(64,1),(sf.Length(+right),1)),right)
left =sf.Concatenate(sf.Silence(1024),left)
right=sf.Concatenate(sf.Silence(1024),right)
####################################
#
# Room Size And Nature Controls
#
####################################
bright = True
vBright = False
church = False
ambient = False
post = True
spring = False
bboost = False
if ambient:
(convoll,convolr)=sf.ReadFile("temp/v-grand-l.wav")
(convorl,convorr)=sf.ReadFile("temp/v-grand-r.wav")
elif church:
(convoll,convolr)=sf.ReadFile("temp/bh-l.wav")
(convorl,convorr)=sf.ReadFile("temp/bh-r.wav")
else:
(convoll,convolr)=sf.ReadFile("temp/Vocal-Chamber-L.wav")
(convorl,convorr)=sf.ReadFile("temp/Vocal-Chamber-R.wav")
if spring:
spring=sf.ReadFile("temp/classic-fs2a.wav")[0]
convoll=sf.Mix(
convoll,
+spring
)
convorr=sf.Mix(
convorr,
sf.Invert(spring)
)
if bboost:
left =sf.RBJLowShelf(left,256,1,6)
right=sf.RBJLowShelf(right,256,1,6)
convoll=excite(convoll,0.75,2.0)
convolr=excite(convolr,0.75,2.0)
convorl=excite(convorl,0.75,2.0)
convorr=excite(convorr,0.75,2.0)
ll = reverberate(+left ,convoll)
lr = reverberate(+left ,convolr)
rl = reverberate(+right,convorl)
rr = reverberate(+right,convorr)
wleft =sf.FixSize(sf.Mix(ll,rl))
wright=sf.FixSize(sf.Mix(rr,lr))
wright = excite(wright,0.15,1.11)
wleft = excite(wleft ,0.15,1.11)
if bright:
right = excite(right,0.15,1.05)
left = excite(left ,0.15,1.05)
if vBright:
right = excite(right,0.25,1.15)
left = excite(left ,0.25,1.15)
sf.WriteFile32((sf.FixSize(+wleft),sf.FixSize(+wright)),"temp/wet.wav")
wleft =sf.FixSize(sf.Mix(sf.Pcnt15(+left),sf.Pcnt85(wleft)))
wright =sf.FixSize(sf.Mix(sf.Pcnt15(+right),sf.Pcnt85(wright)))
sf.WriteFile32((+wleft,+wright),"temp/mix.wav")
if ambient:
(convoll,convolr)=sf.ReadFile("temp/ultra-l.wav")
(convorl,convorr)=sf.ReadFile("temp/ultra-r.wav")
elif church:
(convoll,convolr)=sf.ReadFile("temp/v-grand-l.wav")
(convorl,convorr)=sf.ReadFile("temp/v-grand-r.wav")
else:
(convoll,convolr)=sf.ReadFile("temp/bh-l.wav")
(convorl,convorr)=sf.ReadFile("temp/bh-r.wav")
left = sf.BesselLowPass(left ,392,1)
right = sf.BesselLowPass(right,392,1)
ll = reverberate(+left ,convoll)
lr = reverberate( left ,convolr)
rl = reverberate(+right,convorl)
rr = reverberate( right,convorr)
vwleft =sf.FixSize(sf.Mix(ll,rl))
vwright=sf.FixSize(sf.Mix(rr,lr))
sf.WriteFile32((sf.FixSize(+vwleft),sf.FixSize(+vwright)),"temp/vwet.wav")
wleft =sf.FixSize(sf.Mix(wleft ,sf.Pcnt20(vwleft )))
wright=sf.FixSize(sf.Mix(wright,sf.Pcnt20(vwright)))
sf.WriteSignal(+wleft ,"temp/grand-l.sig")
sf.WriteSignal(+wright,"temp/grand-r.sig")
wleft = sf.Normalise(wleft)
wright = sf.Normalise(wright)
sf.WriteFile32((wleft,wright),"temp/grand.wav")
if post:
print "Warming"
left = sf.ReadSignal("temp/grand-l.sig")
right = sf.ReadSignal("temp/grand-r.sig")
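    # highDamp dynamically tames content above freq: an envelope follower on
    # the high band turns it down by up to 'fact' as that band gets louder.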
def highDamp(sig,freq,fact):
hfq=sf.BesselHighPass(+sig,freq,4)
ctr=sf.FixSize(sf.Follow(sf.FixSize(+hfq),0.25,0.5))
ctr=sf.Clean(ctr)
ctr=sf.RBJLowPass(ctr,8,1)
ctr=sf.DirectMix(
1,
sf.NumericVolume(
sf.FixSize(sf.Invert(ctr)),
fact
)
)
hfq=sf.Multiply(hfq,ctr)
return sf.Mix(hfq,sf.BesselLowPass(sig,freq,4))
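    # filter: a touch of wave shaping for warmth, a peaking EQ at 64Hz,
    # saturation of the bass below 256Hz and dynamic damping above 5kHz.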
def filter(sig_):
def filterInner():
sig=sig_
q=0.5
sig=sf.Mix(
sf.Pcnt10(sf.FixSize(sf.WaveShaper(-0.03*q,0.2*q,0,-1.0*q,0.2*q,2.0*q,+sig))),
sig
)
sig=sf.RBJPeaking(sig,64,2,2)
damp=sf.BesselLowPass(+sig,2000,1)
sig=sf.FixSize(sf.Mix(damp,sig))
low=sf.BesselLowPass(+sig,256,4)
m1=sf.Magnitude(+low)
low=sf.FixSize(low)
low=sf.Saturate(low)
m2=sf.Magnitude(+low)
low=sf.NumericVolume(low,m1/m2)
sig=sf.BesselHighPass(sig,256,4)
sig=sf.Mix(low,sig)
sig=highDamp(sig,5000,0.66)
return sf.FixSize(sf.Clean(sig))
return sf_do(filterInner)
left = filter(left)
right = filter(right)
sf.WriteFile32((left,right),"temp/proc.wav")