The video documents building a self‑generating electronic sound sculpture and uses it to explain practical algorithmic composition, synthesis, and hybrid analog–digital design.[youtube]
Concept and composition
- The piece is a sound sculpture that generates music in real time from about 800 lines of microcontroller code, with no external performance input.[youtube]
- He frames this as algorithmic composition: sets of rules and stochastic processes that produce non‑deterministic music within constraints defined by the composer.[youtube]
- He contrasts historical rule‑based counterpoint (four‑part harmony built from strict rules) with modern computer music, where the computer itself performs the algorithm rather than humans interpreting a score.[youtube]
Sound and synthesis fundamentals
- He breaks a musical sound into pitch, duration, amplitude, and timbre, grouping them as spectral, temporal, and dynamic dimensions that an algorithm can control.[youtube]
- He briefly critiques traditional Western notation as a “lattice” that cannot fully represent complex timbres and gestures, referencing Trevor Wishart’s work on sonic morphology.[youtube]
- He outlines subtractive synthesis (filtering partials out of harmonically rich waveforms) versus additive synthesis (building sounds by summing series of partials), noting that additive/FFT resynthesis is beyond this project’s scope.[youtube]
Voices and algorithm design
- A “voice” is defined as everything triggered by a note event; in a typical synth this means oscillators, envelopes, a VCA, and a filter, and multiple voices give polyphony.[youtube]
- His system uses two analog and three digital voices: the analog voices carry the melodic lines, while the digital voices provide auxiliary harmonies tied to the analog behavior.[youtube]
- Two simultaneous melodies are created by filling arrays with random values quantized to notes of a scale; individual values are gradually replaced over time, so melody and harmony evolve continuously. There is no fixed metronome, emphasizing temporal fluidity.[youtube]
Hardware and hybrid analog–digital details
- The analog section uses two CEM3340‑type oscillator ICs, a 3360 VCA, and a 3320 low‑pass filter, with one filter shared by both analog voices to save parts.[youtube]
- A 12‑bit MCP4822 DAC outputs control voltages for pitch, envelopes, cutoff, and resonance, while the Daisy Seed microcontroller’s onboard DACs provide the pulse‑width‑modulation control for the analog square waves.[youtube]
- He adopts a 480 mV‑per‑octave standard (instead of 1 V/octave) to fit within the DAC’s roughly 4 V range, then stores scales as linear millivolt values (e.g., semitone 40, whole tone 80, minor third 120 … octave 480) in arrays.[youtube]
- The digital oscillators use the same scale arrays but convert the linear values to exponential frequency in code, while the analog oscillators rely on their internal linear‑to‑exponential converters.[youtube]
Signal flow and finishing
- The combined analog signal is attenuated, digitized by the microcontroller, processed with overdrive and a stereo “wobbly” delay (which also compensates for each voice having only a single oscillator), then sent to the amplifier and the stereo outputs.[youtube]
- Visually, the sculpture is built with clean, open‑air, “3D” soldering, emphasizing its role as both circuitry and physical artwork.[youtube]
- The video ends with a showcase of the completed piece in action and a brief invitation to his Patreon for the code and schematics.[youtube]