Sunday, 26 January 2014

Further Experiments In Extreme Synthesis - The Album

Further Experiments pushes sound so hard that the only way to really hear what is going on is with FLAC:


Please find below all the tracks for Further Experiments In Extreme Synthesis. I am pleased to release all of these under Creative Commons 3.0 Attribution Non-Commercial. The music the tracks are based on is either original or public domain, so feel free to use them for anything within the terms of the license:

Thoughts On Aliasing In Geometric Wave Form Generation (part 1)

Generating good digital sound is a never ending struggle against sampling issues. Even generating a simple waveform is not as easy as one might expect.
The spectrogram of the naive Audacity sawtooth and a bandwidth-limited sawtooth generated by Sonic Field.
In a comment on my video 'The Sound Of Just And Equal Temperament', briankav made an interesting point about aliasing in the generation of sawtooth waves. When generating a sawtooth at, say, 96000 samples per second (sps), some of the harmonics will be above the Nyquist limit of 48000Hz. These frequencies show up as negative frequencies, which turn up in the signal as lower pitches with inverted phase.

OK - let me put that in a more straightforward way. If we generate a 4900Hz sawtooth then the 10th harmonic will be at 49000Hz, which is 1000Hz above the Nyquist limit for 96000sps. Just like the spokes on a wagon wheel in an old western movie, the frequency turns up going backwards (do you remember the wheels looking like they turned in reverse?). The 49000Hz component folds back around the Nyquist limit to appear at 47000Hz with reversed phase, and harmonics even further up, near and above the sample rate itself, fold all the way back down into the audible range.

Ouch - so we make a sawtooth at 4.9kHz and its upper harmonics come back at frequencies which have nothing to do with the pitch we asked for! That is not so good. Fortunately, most of the time the magnitude of these aliased frequencies is so low that they are not a problem. They can, however, accumulate and produce 'mush' in the sound (for want of a better description).
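
To make the folding concrete, here is a tiny stand-alone Java sketch (not part of Sonic Field; the class name and the harmonic count are just for illustration) which prints where each over-Nyquist harmonic of a 4900Hz sawtooth lands when sampled at 96000sps:

public class AliasDemo
{
    // Fold a frequency into the representable band 0..sampleRate/2.
    static double aliasedFrequency(double frequency, double sampleRate)
    {
        double f = frequency % sampleRate;             // aliasing repeats every sampleRate Hz
        return f > sampleRate / 2.0 ? sampleRate - f   // above Nyquist - reflected back down
                                    : f;
    }

    public static void main(String[] args)
    {
        double sampleRate = 96000.0;
        double pitch = 4900.0;
        for (int harmonic = 1; harmonic <= 25; ++harmonic)
        {
            double f = pitch * harmonic;
            if (f > sampleRate / 2.0)
            {
                System.out.println("Harmonic " + harmonic + " at " + f + "Hz aliases to "
                    + aliasedFrequency(f, sampleRate) + "Hz");
            }
        }
    }
}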

Below we have a naive 96000sps sawtooth and one which is bandwidth limited to stay below the Nyquist limit. Both are at 440Hz.
A bandwidth-limited sawtooth (top) and a naive sawtooth (bottom).
In the above image I show a true sawtooth generated by additive synthesis in Sonic Field. It has a maximum harmonic of just over 20kHz. It does not actually look like a very good sawtooth! It is all wiggly and each cycle is slightly different from the previous one; nevertheless, it is the correct shape. The lower sawtooth looks much better (it was generated by the sawtooth tone function in Audacity). However, it is just wrong. We can see that from the spectra below.
The bandwidth limited sawtooth from Sonic Field 
The naive sawtooth from Audacity
A different form of naive sawtooth created from the MakeSawTooth processor in Sonic Field
A third, and also not so good, approach to making a sawtooth is to convert a sine wave into one using a crossover detector. That does not produce so many aliased frequencies, but instead it smears noise throughout the signal. I _think_ this is due to frequency modulation of the sawtooth by the sample frequency. However, please don't hold me to that explanation!

Anyhow - here is the Sonic Field patch to make a pure sawtooth. The trick is to keep adding harmonics until we get past 20kHz and then stop. That way it is simply impossible to get any aliasing as long as the sample rate is more than double the highest included harmonic.
{
    Bunch !signals
    ?pitch !o-pitch
    1      !cut
    {
        {
            (?length,?pitch)SinWave Invert   !signal
            (>signal,(1,?cut)/)NumericVolume !signal
        }Do !signal
        (>pitch ,?o-pitch)+       !pitch
        (>cut,1)+                 !cut
        (?signal,>signals)AddEnd  !signals
        ?pitch Println
        (?pitch,20000)lt        
    }
    loop
    ?signals Mix Normalise !signal
}!alias-free-saw

[
   Parameters
   ==========
   Length
   Pitch
]

...

440 !pitch
60000 !length
?alias-free-saw Do !sig
((>sig),"temp/temp.wav")WriteFile32
(((?length,?pitch)SinWave MakeSawTooth Normalise),"temp/x.wav")WriteFile32
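
For anyone more comfortable reading plain Java than the patch language, the same idea looks roughly like this. It is a sketch of mine rather than the Sonic Field implementation: normalisation and writing the result out are left out, and the class and method names are made up for illustration.

public class BandLimitedSaw
{
    // Sum inverted sine harmonics at 1/n volume, stopping before 20kHz - the Fourier
    // series of a sawtooth, truncated so no component can exceed the Nyquist limit.
    static double[] generate(double pitch, double sampleRate, int lengthSamples)
    {
        double[] out = new double[lengthSamples];
        for (int n = 1; n * pitch < 20000; ++n)
        {
            for (int i = 0; i < lengthSamples; ++i)
            {
                out[i] -= Math.sin(2.0 * Math.PI * n * pitch * i / sampleRate) / n;
            }
        }
        return out;
    }

    public static void main(String[] args)
    {
        double[] saw = generate(440.0, 96000.0, 96000);   // one second of 440Hz sawtooth
        System.out.println("Generated " + saw.length + " samples");
    }
}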

Tuesday, 7 January 2014

What Is Phase And Why It Is Important

Hopefully a simple explanation without any mathematics :)


Playing Audio With Java

...Train related comment goes here....
For some reason, finding out how to play arbitrary audio using Java took ages. To save you the effort - here is the code.

All Sonic Field source code is AGPL 3.0 licensed.

The trick is to write a set of 16 bit samples into a byte array and then 'write' that into a SourceDataLine. The sequence is something like this:
  1. Get a source line (the default normally).
  2. Open it.
  3. Start it.
  4. Write to it.
  5. Drain it (wait till it has stopped playing).
  6. Stop it.
  7. Close it.
/* For Copyright and License see LICENSE.txt and COPYING.txt in the root directory */
package com.nerdscentral.audio;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

import com.nerdscentral.audio.volume.SF_Normalise;
import com.nerdscentral.sfpl.Caster;
import com.nerdscentral.sfpl.SFPL_Context;
import com.nerdscentral.sfpl.SFPL_Operator;
import com.nerdscentral.sfpl.SFPL_RuntimeException;

public class SF_Monitor implements SFPL_Operator
{

    private static final long serialVersionUID = 1L;

    @Override
    public String Word()
    {
        return Messages.getString("SF_Monitor.0"); //$NON-NLS-1$
    }

    @Override
    public Object Interpret(Object input, SFPL_Context context) throws SFPL_RuntimeException
    {
        SFData dataIn = Caster.makeSFData(input).replicate();
        SF_Normalise.doNormalisation(dataIn);
        try
        {
            AudioFormat af = new AudioFormat((float) SFConstants.SAMPLE_RATE, 16, 1, true, true);
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
            SourceDataLine source = (SourceDataLine) AudioSystem.getLine(info);
            source.open(af);
            source.start();
            // Convert each double sample to a signed 16 bit value and pack it as two bytes,
            // high byte first, to match the big endian AudioFormat above.
            byte[] buf = new byte[dataIn.getLength() * 2];
            for (int i = 0; i < buf.length; ++i)
            {
                short sample = (short) (dataIn.getSample(i / 2) * 32767.0);
                buf[i] = (byte) (sample >> 8);
                buf[++i] = (byte) (sample & 0xFF);
            }
            source.write(buf, 0, buf.length);
            source.drain();
            source.stop();
            source.close();
            return input;
        }
        catch (Exception e)
        {
            throw new SFPL_RuntimeException(Messages.getString("SF_Monitor.1"), e); //$NON-NLS-1$
        }
    }
}
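
If you only want the playback sequence without the Sonic Field plumbing, a stripped down version looks something like this. It is my own illustration rather than project code: it plays one second of a 440Hz sine at 48000sps, and none of those numbers come from the class above.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class PlaySine
{
    public static void main(String[] args) throws Exception
    {
        float sampleRate = 48000f;
        AudioFormat af = new AudioFormat(sampleRate, 16, 1, true, true);    // 16 bit, mono, signed, big endian
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
        SourceDataLine source = (SourceDataLine) AudioSystem.getLine(info); // 1. get a line
        source.open(af);                                                    // 2. open it
        source.start();                                                     // 3. start it

        int length = (int) sampleRate;                                      // one second
        byte[] buf = new byte[length * 2];
        for (int i = 0; i < length; ++i)
        {
            short sample = (short) (Math.sin(2.0 * Math.PI * 440.0 * i / sampleRate) * 32767);
            buf[2 * i] = (byte) (sample >> 8);                              // high byte first (big endian)
            buf[2 * i + 1] = (byte) (sample & 0xFF);
        }

        source.write(buf, 0, buf.length);                                   // 4. write to it
        source.drain();                                                     // 5. wait for playback to finish
        source.stop();                                                      // 6. stop it
        source.close();                                                     // 7. close it
    }
}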


Friday, 3 January 2014

Satie Feast

Erik Satie's music is so far ahead of its time that I find it hard to get my head around.

As part of 'Further Experiments In Extreme Synthesis' I have rendered three of his pieces. They all use the same sound generator and similar post processing, but each brings out different subtle aspects of those patches and they are very different pieces of music. I will publish the generator patches soon. The compressor has already been posted.

My tribute to Satie:

Simple But Very Effective Envelope Compressor Patch

Further Experiments In Extreme Synthesis
Sometimes over engineering is a killer - strip something down and it works better.
I have worked a lot on compression (dynamic range compression) in Sonic Field. I have tried granular compression and wave limiting. These can be quite exciting things but they have not been as musical as I would like. It turns out that a simple 3 band envelope compressor is just the ticket.
This section splits the signal into three bands. The roll-off does not need to be steep, so I use Bessel filters to help avoid too much phase rotation:
{
    {(?signal ,200,2000,4)BesselBandPass}Do !signal-m 
    {(?signal     ,2000,4)BesselHighPass}Do !signal-h 
    {(?signal     , 200,4)BesselLowPass}Do  !signal-l 
    >signal-m !signal
    ?envelope-compress Do !signal-m
    >signal-h !signal
    ?envelope-compress Do !signal-h
    >signal-l !signal
    ?envelope-compress Do !signal-l
    (
        ({>signal-l pcnt+50}Do,10),
        ({>signal-m pcnt+30}Do,0),
        ({>signal-h pcnt+20}Do,0)
    )MixAt Normalise 
}!do-it 
On recombination I delay the bass a little which is a psychoacoustic trick to make the bass sound more present and sit better in the rest of the mix. Now - the usual number is around 2.5 ms but I am using 10 - the project does have the word 'extreme' in it.
Then the compressor itself uses an envelope generated by following the signal in both the forward and reverse directions. This avoids 'pumping' due to note attacks. Pumping can be cool - if that is what you want. But when you need a sharp compression attack (think piano) and don't want everything to go quiet in a distinct dip after transients - the reverse trick is just great:
(
        {(?signal Reverse,1,50)Follow Reverse},
        {(?signal        ,1,50)Follow}
    )DoAll Mix !shape   
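
In plain Java the idea looks roughly like this. It is my own sketch, not the Follow processor itself, and I am assuming the 1 and 50 above are attack and decay times in milliseconds for a simple peak follower.

public class ReverseFollow
{
    // Simple peak follower: fast rise (attack), slow fall (decay), times in milliseconds.
    static double[] follow(double[] signal, double attackMs, double decayMs, double sampleRate)
    {
        double attack = Math.exp(-1.0 / (attackMs * 0.001 * sampleRate));
        double decay = Math.exp(-1.0 / (decayMs * 0.001 * sampleRate));
        double[] env = new double[signal.length];
        double level = 0.0;
        for (int i = 0; i < signal.length; ++i)
        {
            double mag = Math.abs(signal[i]);
            double coeff = mag > level ? attack : decay;
            level = coeff * level + (1.0 - coeff) * mag;
            env[i] = level;
        }
        return env;
    }

    // Follow the signal forwards, follow the reversed signal, read that envelope backwards
    // again and mix the two. The reversed pass reacts to each transient before it arrives,
    // which is what gives the sharp attack without a dip just after the transient.
    static double[] followBothWays(double[] signal, double sampleRate)
    {
        double[] forward = follow(signal, 1.0, 50.0, sampleRate);
        double[] backward = follow(reverse(signal), 1.0, 50.0, sampleRate);
        double[] shape = new double[signal.length];
        for (int i = 0; i < signal.length; ++i)
        {
            shape[i] = 0.5 * (forward[i] + backward[signal.length - 1 - i]);
        }
        return shape;
    }

    static double[] reverse(double[] signal)
    {
        double[] out = new double[signal.length];
        for (int i = 0; i < signal.length; ++i)
        {
            out[i] = signal[signal.length - 1 - i];
        }
        return out;
    }
}
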
I do the compression twice: once with the above sharp attack and slower decay, then a second smoothing run which uses symmetrical attack and decay.
(
        {(?signal Normalise,25,25)Follow},
        {(?signal Normalise Reverse,25,25)Follow Reverse}
    )DoAll Mix !shape
And here is the whole thing.
{

    (1,?compress)- !offset
    (
        {(?signal Reverse,1,50)Follow Reverse},
        {(?signal        ,1,50)Follow}
    )DoAll Mix !shape   

    {(?shape,(1,?shape MaxValue)/)NumericVolume}Do          !shape
    {(?offset,(>shape, ?compress)NumericVolume)DirectMix}Do !shape
    {
        (
            >signal,
            >shape 
        )Divide Normalise
    }Do !signal 
    
    (
        {(?signal Normalise,25,25)Follow},
        {(?signal Normalise Reverse,25,25)Follow Reverse}
    )DoAll Mix !shape
    
    {(?shape,(1,?shape MaxValue)/)NumericVolume}Do          !shape
    {(?offset,(>shape, ?compress)NumericVolume)DirectMix}Do !shape
    
    {
        (
            >signal,
            >shape 
        )Divide Normalise
    }Do 
}!envelope-compress
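
Read as plain Java, my understanding of the core step is the method below. It assumes NumericVolume scales a signal, DirectMix adds a constant and Divide is sample-wise division; it is an illustration of the idea, not Sonic Field code, and the final Normalise is left out.

public class EnvelopeCompress
{
    // Scale the envelope so it peaks at 1, blend it towards a flat 1 by (1 - compress),
    // then divide the signal by the result. compress = 0 leaves the signal untouched;
    // compress = 1 divides by the full envelope and flattens the dynamics completely.
    static double[] apply(double[] signal, double[] shape, double compress)
    {
        double max = 0.0;
        for (double s : shape)
        {
            max = Math.max(max, Math.abs(s));
        }
        double offset = 1.0 - compress;
        double[] out = new double[signal.length];
        for (int i = 0; i < signal.length; ++i)
        {
            // With compress below 1 the divisor never reaches zero.
            double envelope = offset + compress * (shape[i] / max);
            out[i] = signal[i] / envelope;      // quiet passages are lifted, loud ones left alone
        }
        return out;
    }
}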

{
    {(?signal ,200,2000,4)BesselBandPass}Do !signal-m 
    {(?signal     ,2000,4)BesselHighPass}Do !signal-h 
    {(?signal     , 200,4)BesselLowPass}Do  !signal-l 
    >signal-m !signal
    ?envelope-compress Do !signal-m
    >signal-h !signal
    ?envelope-compress Do !signal-h
    >signal-l !signal
    ?envelope-compress Do !signal-l
    (
        ({>signal-l pcnt+50}Do,10),
        ({>signal-m pcnt+30}Do,0),
        ({>signal-h pcnt+20}Do,0)
    )MixAt Normalise 
}!do-it 

"temp/input.wav" ReadFile ^left ^right

[
    As compression happens in two phases - 0.5 is actually
    a lot - more gets silly - 0.9 makes stuff sound all crunchy
    and old school.
]
0.5 !compress

>left  !signal
?do-it Do !left
>right !signal
?do-it Do !right

((>left Done,>right Done),"temp/done.wav")WriteFile32

To demonstrate, here are two pieces. The first does not use the compressor and the second does (at a setting of 0.5). The first has had some work done on the bass to bring it out, which was not needed on the second as the compressor did it.