takumar's perma-audio webpage

Endless drone and noise from pure ISO C.

Table o' Contents

  1. Who
  2. What
  3. Why
  4. How to listen
  5. How it works
  6. Sketches
  7. Projects
  8. Community

Who

Hi, I'm @takumar! Some of you might also know me as, uhh, Oldersayunkpay. I'm mostly a lurker at Merveilles, but I'm really trying to be more active, and to do more creative and artistic programming. This page is a part of that effort!

What

I am experimenting with primitive software audio synthesis. I'm interested in using minimalistic code to produce soft, warm, organic sounds that you might not expect to come from fewer than 100 lines of code, as opposed to stereotypical bleepy-bloopy chiptune/keygen sounds (which I do appreciate; they're just not my focus here).

I write programs which run forever. You can listen to them for as long as you find it enjoyable. I try to find sounds which are constantly changing and moving, so that they don't get boring too quickly even though in some sense “nothing ever happens”. I find they work well as background music for thinking or working. Maybe some of them could work as sleep aids for some folks, or for meditating.

You might have heard some of my early results on the Elmet Brae 01: The Land Compilation, where it bookends the work of a bunch of actual real musicians!

All of the code on this page and the audio it generates is released to the public domain. I think of what I'm doing here as more like exploration and discovery, rather than designing or composing. I don't have the right to stop you using sine waves! If you use this code or audio in any projects of your own, I would appreciate hearing about it and being credited, thanks!

Why

One motivation for working in this very minimalist DIY-from-scratch fashion is to avoid my audio projects being in any way fundamentally tied to any particular computing platform, and especially not to any particular audio framework or file format which might be ubiquitous today but obscure in 10 or 20 years. No JACK, no PulseAudio, no ALSA, no OSS, no ESD, no MP3, no Ogg Vorbis, no WAV, no AU, nothing like any of that.

Of course you need to use some kind of audio framework to listen to my projects, but the idea is that they all work, they all work equally well, it is extremely fast and easy to switch from one to the other, and they are all invisible in the synthesis code itself. They can (and will!) come and go without consequence, and it will take me no time at all to adapt my work to whatever comes after them.

My programs use nothing but the C standard library and output audio samples directly to stdout, and that's it.

Not only do they avoid using transient frameworks, my projects also require very little CPU power or memory. They run fine on 10-year-old machines with 32-bit CPUs. I haven't had the opportunity to test them yet on 20-year-old machines, but I'm pretty confident they would run fine there, too.

In this way my projects are both “future proof” and “past proof”. They push back against the ever increasing pace of needless obsolescence which characterises just about all of modern computing, both software and hardware. This is part of a broader movement known by the term permacomputing.

How to listen

Compilation

You can compile any of my projects like this with gcc:

gcc project.c -lm -o project

(any other standard-compliant C compiler like clang or tcc will work just as well!)

This creates a binary named project which emits audio samples to stdout. You won't hear anything if you just run it by itself. It will just spew garbage over your terminal. You need to redirect the output to something which will actually play it. There are a great many options here for various platforms. I've listed some below, but there are certainly a whole lot more. If you manage to find other ways that work, please let me know and I'll update the list.

Playing

Most of my projects use a sample rate of 16000 samples per second with signed 16-bit integer samples and a single channel of audio, so the example commands below show how to play that kind of audio, but some projects might differ and need a little tweaking. I'll try to always make this information clear in the individual project listings.

SoX

SoX is a cross-platform package of audio tools which includes a play command that can be used to listen to my projects. SoX runs on a lot of different platforms, including Linux, *BSD and macOS / OS X. At the moment it seems to be the best way to listen to my projects on a Mac.

You can play a project like so:

./project | play --ignore-length -t raw -e signed-integer -b 16 -r 16000 -c 1 -S -

Linux

You can use SoX on Linux (see above), but there are other options too, which might already be installed by default.

If your system uses PulseAudio (like most modern distros), you can use the pacat command like so:

./project | pacat --format=s16ne --rate=16000 --channels=1

If your system does not use PulseAudio but uses ALSA, you can use the aplay command like so:

./project | aplay -t raw -f S16_LE -r 16000

I think on very old pre-ALSA Linux systems you ought to be able to redirect the output to /dev/dsp and it will work, but I'm not sure how you tell the system what sample rate and format to use...

BSD

Of course, SoX remains an option here, too.

On OpenBSD you can use the aucat command to play a project like so:

./project | aucat -h raw -e s16le -r 16000 -i -

I've tested this and it works!

NetBSD doesn't have aucat, but it has an audioplay command which looks like it will do the trick. I haven't personally tested this yet, but the following should work:

./project | audioplay -f -e linear_le -P 16 -s 16000 -c 1

FreeBSD doesn't seem to have a native command-line tool for this? It seems like you can install OpenBSD's sndio system from ports, which should give you the aucat command, which should then work as above. Apparently you can even install ALSA on FreeBSD, in which case I guess the aplay command will work?

Recording

It's also really easy to record the output of any of these programs to standard audio files, so that you can e.g. listen to them on an MP3 player or any other device that can't run them directly. You just need to use ctrl-C to stop recording once the file is long enough. Someday I'll write a little utility program to allow easy termination of programs after a fixed number of minutes, perhaps with a nice fade-in and fade-out.

Recording to MP3 with ffmpeg

./project | ffmpeg -y -f s16le -ar 16000 -ac 1 -i - -codec:a libmp3lame project.mp3

Recording to OGG with ffmpeg

./project | ffmpeg -y -f s16le -ar 16000 -ac 1 -i - -codec:a libvorbis project.ogg

How it works

So far I've just played around with sine waves, produced via numerically controlled oscillators. Each oscillator has an unsigned integer “phasor”. Once per tick of the audio sample rate, a fixed constant term is added to the phasor. The 11 highest-order bits of the result are used as an index into a 2048-element lookup table containing a single cycle of a sine wave, and the indexed value is that oscillator's output for that audio sample. The fixed constant value which is added to the phasor determines the oscillator's frequency. That's it.

There's surprisingly little time-sensitive code involved. The program looks like it just spits samples out to stdout in a tight loop as fast as it possibly can, and the samples will come out faster or slower depending on the CPU speed. What actually happens when you pipe the output to a program which plays the audio data is that the other program's buffer will quickly fill up and blocking IO will effectively put the synthesis program to sleep while the player program “sips” samples from the buffer at the specified rate. As a result, what looks like an infinite tight loop in the synthesis program actually puts only a very low load on the CPU.

The only place that the audio sample rate actually enters into the synthesis code is for converting between actual audio pitches in hertz and the constant step value which is added to an oscillator's phasor. Note that this means that by telling the player program to render a synthesis program's output at double or half the sample rate, you can shift the output up or down by an octave without changing the synthesis program at all. This also means it is easy to write synthesis programs which can be recompiled at any sample rate you like to suit a particular platform - just define the sample rate once as a constant. Be aware, though, that if you set the sample rate too low you will hear aliasing artifacts unless you only play deep bass.
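
The conversion itself is just a scaling (a sketch with my own naming; the real code may differ): a tone at hz completes hz / SAMPLE_RATE of a cycle per sample tick, and the 32-bit phasor wraps exactly once per cycle, so scale that fraction by 2^32.

```c
#include <stdint.h>

#define SAMPLE_RATE 16000

/* Convert a pitch in Hz to a phasor step value. */
static uint32_t hz_to_step(double hz) {
    return (uint32_t)(hz / SAMPLE_RATE * 4294967296.0);   /* 2^32 */
}
```

For example, hz_to_step(110.0) comes out to about 29.5 million. If the player instead renders the stream at 32000 samples per second, twice as many ticks go by per second, so every pitch doubles: that's the octave shift just described.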

The sine wave values in the oscillator lookup tables are stored as floating point values between -1 and 1, and in fact all computations are done in floating point right up until the last moment, when an audio sample is converted to an integer and written out with fwrite(). This has a few advantages. For one, just as you can easily change the sample rate, you can easily change the output format by changing just one small section of code. Depending upon the combination of sample rate and bit depth, you can easily switch the same program between a very clean and pure sounding sine wave output or something noticeably buzzier and crunchier for more of a chiptune vibe. Using floating point oscillator outputs also gives you plenty of fine-grained resolution for, e.g., multiplying one oscillator's output by another's (shifted and scaled from [-1, 1] to [0, 1]) to use it as a sinusoidal envelope, and then using that amplitude-modulated oscillator's output to frequency modulate yet another oscillator by adding it to the tuning constant. All of this could sound quite clunky and discrete if the lookup tables just used unsigned 8-bit integers.

Sketches

Short audio “sketches” which succinctly demonstrate techniques.

White noise with swept low-pass filter (May 2026)

C code (1.4 KB)

This file emits signed 16 bit audio at 16000 samples per second (unlike Major Sunrise which used unsigned).

This sketch uses the Box-Muller transform to sample from a Gaussian distribution, yielding something like white noise. Two simple first-order finite impulse response (FIR) filters perform low-pass filtering to shape the noise. The filter cutoff frequency is swept by a sinusoidal LFO, causing the noise's spectrum to cycle through roughly white, brown and pink noise sounds (probably not meeting the technical definition of any of these, though).

Projects

Major Sunrise (October 2023)

My debut piece, hastily prepared for the Elmet Brae 01 release.

C code (2.3 KB)

This file emits unsigned 16 bit little endian audio at 16000 samples per second.

You can run it on Linux after compiling it with the following command:

./major_sunrise | aplay -t raw -f U16_LE -r 16000

Unlike the released version, the program just runs forever until you kill it. How long can you last?

The basic idea behind the code is the numerically controlled sine wave oscillator setup described in the How it works section above.

Gettin' Cheby wit it (May 2026)

Somehow it took me close to three years to pick this practice up again...

C code (1.6 KB)

This file emits signed 16 bit audio at 16000 samples per second (unlike Major Sunrise which used unsigned).

The setup here is 16 oscillators in four groups of four. The first oscillator in each group is tuned to one of the notes of a major seventh chord rooted at A2 (110, 138.59, 164.81 and 207.65 Hz). The other three oscillators in each group are slightly detuned from their group's frequency by some small fixed quantity (always less than 1 Hz). The oscillators within each group beat against one another, fading in and out as constructive and destructive interference alternately dominate the interaction.

The waveform used for all these oscillators is created by “waveshaping” a pure sine wave by passing it through a sum of two Chebyshev polynomials. Chebyshev polynomials are great for waveshaping because they don't introduce any harmonics higher than the order of the polynomial, so it's easy to avoid aliasing. All those harmonics fade in and out due to detuning as well, giving the overall result a surprisingly rich and constantly changing spectrum. It's hard to believe there's no modulation with LFOs going on here at all!

Community

“Community” is a bit of a grandiose term at this point, but there is at least one other person playing with these ideas and techniques! Fellow Merveillite Caffeine's Heir has dedicated their December Adventure to a project using my Major Sunrise as a jumping off point. They've even taken things mobile with a USB power bank and a Raspberry Pi Zero!
