Sunday, 8 January 2017

Modelling 'Analogue' Delays

Bucket brigade delay lines sound great; here I discuss what can be learned from their implementation and my attempt to model them in Sonic Field.

'Pattern Buffer' uses this analogue delay modelling algorithm

What is an analogue delay? True analogue delays are things like oil can and tape devices. These record audio to some analogue medium and then replay it a short time later. Whilst such technology produces all sorts of interesting effects, it is also 'electro-mechanical' and hence complex and hard to use live. Because of these limitations, the invention of the bucket-brigade delay was a huge boon. It is something of a digital forerunner though; don't shout all at once - it really is part digital.

Digital audio, in its modern 'pure' form, is represented as samples of amplitude. Each amplitude is a number, and by placing a series of these amplitudes in sequence (samples) we represent a waveform.

Audacity showing individual samples from a digital signal.
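
To make that concrete, here is a minimal sketch in plain Java (nothing Sonic Field specific; the 44.1 kHz sample rate and 1 kHz tone are just illustrative values) of a waveform held as a sequence of amplitude samples:

// A minimal sketch: a waveform as a sequence of amplitude samples.
// The 44100 Hz sample rate and 1 kHz tone are illustrative values only.
public class SampledSine
{
    public static void main(String[] args)
    {
        double sampleRate = 44100.0;
        double frequency = 1000.0;
        double[] samples = new double[64];
        for (int n = 0; n < samples.length; ++n)
        {
            // Each entry is one amplitude; the sequence of entries is the waveform.
            samples[n] = Math.sin(2.0 * Math.PI * frequency * n / sampleRate);
        }
        for (int n = 0; n < 8; ++n)
        {
            System.out.printf("sample %d = %.4f%n", n, samples[n]);
        }
    }
}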

What has this to do with a bucket brigade delay line, one might ask? Well, to quote Wikipedia:

'''
Despite being analog in their representation of individual signal voltage samples, these devices are discrete in the time domain and thus are limited by the Nyquist–Shannon sampling theorem; both the input and output signals are generally low-pass filtered. The input must be low-pass filtered to avoid aliasing effects, while the output is low-pass filtered for reconstruction. (A low-pass is used as an approximation to the Whittaker–Shannon interpolation formula.)
'''

I.e. their sample representation is analogue, continuous and thus non-digital. However, their temporal (time) representation is discrete and thus identical to the way fully digital signals are encoded (give or take).

If I am honest, I am not entirely sure why I am explaining this; I guess I find the division between digital and analogue interesting because, like all human taxonomies, it is entirely artificial and there will always be cases which fall somewhere between two taxonomic categories.

Nevertheless, there are unmistakable differences between a pure digital sample delay effect and a bucket-brigade delay. This is all the more so when feedback is employed (whereby some of the output of the delay is fed back into the input).

Why the difference? Simply put, it is because the signal coming out of the delay is different from the signal going in, not just a delayed version of it. When fed back, the signal continually changes on each pass through the delay and does not interact with the original in pure ways. A fed-back digital sample delay will produce a pure comb filter and that is all; the sound is interesting but 'cold'. The bucket brigade delay tends to 'saturate' the signal because it has a non-linear amplitude response. What is more, the feedback circuits low-pass filter the signal (to avoid Nyquist aliasing) and in so doing (normally, unless they use balanced FIR circuits, on which I am happy to be corrected) cause group delay (phase rotation) of some frequencies compared to others. So, our simple delay turns into a low-pass filtering, distorting, phasing system. A sample delay turns into a rich and sonically unpredictable thing which can be really terrible (think cheesy sci-fi effects) or amazing (think massive guitar leads).
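
To illustrate the difference in code, here is a rough sketch (plain Java rather than a Sonic Field processor; the delay length, feedback amount, filter coefficient and saturation curve are all invented for the example) of a pure digital feedback comb filter next to one whose feedback path is low-pass filtered and soft-saturated, which is the bucket-brigade-style behaviour described above:

// Rough sketch: pure comb-filter feedback versus 'bucket brigade' style feedback.
// delaySamples, feedback, the one-pole coefficient and the saturation curve are
// illustrative values, not taken from any real device.
public class FeedbackSketch
{
    // Pure digital delay with feedback: a comb filter, nothing more.
    static double[] combDelay(double[] in, int delaySamples, double feedback)
    {
        double[] out = new double[in.length];
        for (int n = 0; n < in.length; ++n)
        {
            double delayed = n >= delaySamples ? out[n - delaySamples] : 0.0;
            out[n] = in[n] + feedback * delayed;
        }
        return out;
    }

    // The same loop, but the fed-back signal is low-pass filtered (one pole)
    // and soft-saturated, so each pass is duller and more distorted than the last.
    static double[] analogueStyleDelay(double[] in, int delaySamples, double feedback)
    {
        double[] out = new double[in.length];
        double lpState = 0.0;
        for (int n = 0; n < in.length; ++n)
        {
            double delayed = n >= delaySamples ? out[n - delaySamples] : 0.0;
            lpState += 0.2 * (delayed - lpState);                 // crude one-pole low-pass
            double saturated = lpState >= 0 ? lpState / (lpState + 1)
                                            : lpState / (1 - lpState); // soft clip
            out[n] = in[n] + feedback * saturated;
        }
        return out;
    }
}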

The final piece in all of this is delay modulation. Bucket brigade delays are (yes, still are) used for chorus and flanger effects. This can be done because the device is clocked. It consists of a chain of capacitors. Each clock pulse makes one capacitor pass its charge on to the next. The input signal charges the first capacitor and the output is taken from the last. Thus, by modulating the frequency of the clock pulses one modulates the length of the delay. Modulating the length of the delay causes the frequency of the output signal to change in proportion to the first differential of the change in the delay. I.e. if we slowly decrease the length of the delay we shift the frequency up. If we modulate the delay clock with a sine wave we modulate the output frequency by a cosine wave. Also, very interestingly, if we modulate the clock with a triangle wave we modulate the output frequency by a square wave!
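
Here is a quick numerical sketch of that relationship (plain Java; the 5 ms base delay, 2 Hz modulation rate and 1 ms depth are invented for the example). The pitch ratio of the delayed signal is the rate of change of the read position t - d(t), i.e. 1 - d'(t); with a sine-modulated delay the ratio follows a cosine, and with a triangle-modulated delay it follows a square wave:

// Sketch: the pitch ratio of the delayed signal is the derivative of the read
// position t - d(t). Delay times and modulation settings here are illustrative.
public class DelayModulationSketch
{
    public static void main(String[] args)
    {
        double base = 0.005;   // 5 ms base delay
        double depth = 0.001;  // 1 ms modulation depth
        double rate = 2.0;     // 2 Hz modulation
        double dt = 1.0 / 44100.0;
        for (double t = 0.0; t < 1.0; t += 0.1)
        {
            double sineNow = base + depth * Math.sin(2 * Math.PI * rate * t);
            double sineNext = base + depth * Math.sin(2 * Math.PI * rate * (t + dt));
            // Pitch ratio = d/dt (t - d(t)) = 1 - d'(t); a sine delay gives a cosine ratio.
            double sineRatio = 1.0 - (sineNext - sineNow) / dt;

            double triNow = base + depth * triangle(rate * t);
            double triNext = base + depth * triangle(rate * (t + dt));
            // A triangle delay has a piecewise-constant slope, so the ratio is a square wave.
            double triRatio = 1.0 - (triNext - triNow) / dt;

            System.out.printf("t=%.1f  sine ratio=%.5f  triangle ratio=%.5f%n", t, sineRatio, triRatio);
        }
    }

    // Unit-amplitude triangle wave with period 1.
    static double triangle(double phase)
    {
        double p = phase - Math.floor(phase);
        return p < 0.5 ? 4 * p - 1 : 3 - 4 * p;
    }
}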

After all that chat I am finally getting to the body of this post; I have been working on a Sonic Field processor which mimics the behaviour of a modulated bucket brigade delay. Now you might see why the sampling nature of a bucket brigade line is relevant: the effect lends itself to a fully digital model much more easily than other analogue techniques do. Modelling a tape delay or an oil can delay means managing very many more complex effects; a bucket brigade delay is a saturation effect and a filter effect, but the time and sampling side needs no special modelling. Again, do not shout; I am sure the real world is more complex than this, but then I am sure you can also see my point.

So - let me introduce my first attempt:

/* For Copyright and License see LICENSE.txt and COPYING.txt in the root directory */
package com.nerdscentral.audio.time;

import java.util.ArrayList;
import java.util.List;

import com.nerdscentral.audio.SFConstants;
import com.nerdscentral.audio.SFData;
import com.nerdscentral.audio.SFSignal;
import com.nerdscentral.audio.pitch.algorithm.SFRBJFilter;
import com.nerdscentral.audio.pitch.algorithm.SFRBJFilter.FilterType;
import com.nerdscentral.sython.Caster;
import com.nerdscentral.sython.SFPL_Context;
import com.nerdscentral.sython.SFPL_Operator;
import com.nerdscentral.sython.SFPL_RuntimeException;

public class SF_AnalogueChorus implements SFPL_Operator
{
    private static final long serialVersionUID = 1L;

    @Override
    public String Word()
    {
        return Messages.getString("SF_AnalogueChorus.0"); //$NON-NLS-1$
    }

    // Soft saturation: maps any input onto (-1, 1) with a gentle, symmetric knee.
    private static double sat(double x)
    {
        return x >= 0 ? x / (x + 1) : x / (1 - x);
    }

    @Override
    public Object Interpret(Object input, SFPL_Context context) throws SFPL_RuntimeException
    {
        List<Object> lin = Caster.makeBunch(input);
        try (
            SFSignal inR = Caster.makeSFSignal(lin.get(0));
            SFSignal mod = Caster.makeSFSignal(lin.get(2))
        )
        {
            try (SFData in = SFData.realise(inR))
            {
                if (inR instanceof SFData)
                {
                    in.__pos__();
                }
                // Arguments: signal, delay (milliseconds), modulation signal, feedback, drive.
                int delay = (int) (Caster.makeDouble(lin.get(1)) * SFConstants.SAMPLE_RATE_MS);
                double feedBack = Caster.makeDouble(lin.get(3));
                double drive = Caster.makeDouble(lin.get(4));
                double r = in.getLength();
                double feedForward = 1.0 - feedBack;
                FilterType type = FilterType.LOWPASS;
                SFRBJFilter filter = new SFRBJFilter();
                filter.calc_filter_coeffs(type, 5000.0, 1.0, 0);

                try (
                    SFSignal buff = in.replicateEmpty();
                    SFSignal outL = in.replicateEmpty();
                    SFSignal outR = in.replicateEmpty()
                )
                {
                    // First pass: low-pass filter the dry signal into the delay buffer.
                    for (int n = 0; n < r; ++n)
                    {
                        buff.setSample(n, filter.filter(in.getSample(n)));
                    }
                    // Reboot the filter for use in the feedback path.
                    filter = new SFRBJFilter();
                    filter.calc_filter_coeffs(type, 5000.0, 1.0, 0);
                    for (int n = 0; n < r; ++n)
                    {
                        double mix = 0;
                        // Read position: fixed delay plus the modulation signal (in milliseconds).
                        int delayGet = n - delay - ((int) (mod.getSample(n) * SFConstants.SAMPLE_RATE_MS));
                        if (delayGet < r && delayGet > -1)
                        {
                            mix = buff.getSample(delayGet);
                        }
                        double q = in.getSample(n);
                        // Dry plus wet on the left, dry minus wet on the right, for a stereo spread.
                        outL.setSample(n, (q + mix) / 2.0);
                        outR.setSample(n, (q - mix) / 2.0);
                        // Write back into the buffer: saturated, low-pass filtered feedback.
                        buff.setSample(n, drive * (buff.getSample(n) * feedForward + filter.filter(sat(mix)) * feedBack));
                    }
                    Caster.prep4Ret(outL);
                    Caster.prep4Ret(outR);
                    List<SFSignal> ret = new ArrayList<>(2);
                    ret.add(outL);
                    ret.add(outR);
                    return ret;
                }
            }
        }
    }
}

The key to the whole thing sounding analogue is the saturation (the sat() method) in the feedback path above. This means that the feedback is distorted with respect to the dry signal. Rather than getting a simple modulated comb filter (basically, a flanger) we get a progressive warming of the delays, which produces much more organic sounds.
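
As a quick illustration of why that matters, here is the sat() curve from the processor on its own (the input levels are arbitrary test values): quiet signals pass through almost unchanged while loud ones are squashed towards ±1, so each pass round the feedback loop compresses and colours the repeats rather than simply attenuating them.

// Sketch: the soft-saturation curve used in the feedback path above. Input
// levels are arbitrary test values; the curve maps any input into (-1, 1).
public class SaturationSketch
{
    static double sat(double x)
    {
        return x >= 0 ? x / (x + 1) : x / (1 - x);
    }

    public static void main(String[] args)
    {
        double[] levels = { 0.1, 0.5, 1.0, 2.0, 5.0 };
        for (double x : levels)
        {
            // Small inputs are nearly linear; large ones are squashed towards 1.
            System.out.printf("in=%4.1f  out=%.3f%n", x, sat(x));
        }
    }
}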

Here is an example of the effects. I use two modulated delays out of phase with one another to produce this effect of drifting sounds:

Modelling The EH Polychorus


For an example of this effect using two out-of-phase polychorus models, see this video: