🔊 FDN Reverb - Ripples of Space and Time
This post was inspired by a conversation between me and a colleague about the difference between reverb pedals and room impulse responses. That discussion led me to revisit how reverb is represented, simulated, and implemented.
That effort also resulted in a high-performance tool: FDN-Reverb on GitHub.
1. What Is Reverb?
Reverb is what happens when sound reflects repeatedly in a space and gradually decays.
In real life, it arises from:
- reflections off floors, walls, and ceilings
- multiple bounces that build up echo density
- the merging of reflections into a smooth “tail”
Digital reverberation aims to simulate or create the sense of space.
Demo Music:
Here is a MIDI-rendered version of “Prelude” from the Final Fantasy Games, which does not use any reverb.
And below is the output processed by FDN-Reverb.
2. RIR-based Reverb
A Room Impulse Response (RIR) captures how a room responds to a short impulse (e.g., a clap or finger snap). It is the acoustic fingerprint of the space.
If you clap in a room and record the response, the waveform you capture — the direct sound plus all reflections — is that room’s RIR.
An RIR can be used to “play back” the sound of a specific space via time-domain convolution.

Mathematically:

```
y[n] = Σ h[k] * x[n - k],   k = 0 → len(h) - 1
```

Where:
- `x[n]` = dry input signal
- `h[n]` = impulse response (RIR)
- `y[n]` = output with reverb
The reverb tail length equals the RIR length. The sound quality depends entirely on the recorded RIR.
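As a minimal sketch, the convolution sum above can be computed directly with NumPy. The RIR values here are made up for illustration:

```python
import numpy as np

def convolve_reverb(x, h):
    # Direct form of y[n] = Σ h[k] * x[n - k];
    # output length = len(x) + len(h) - 1
    return np.convolve(x, h)

# Toy RIR: direct sound plus two decaying reflections (hypothetical values)
h = np.array([1.0, 0.0, 0.5, 0.25])
x = np.array([1.0, 0.0, 0.0])  # an impulse as the "dry" input
y = convolve_reverb(x, h)      # for an impulse, the output replays h
```

Note how the tail length of `y` is exactly `len(x) + len(h) - 1`: the reverb cannot ring longer than the recorded RIR.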
2.1. Efficient FFT Implementation
By the convolution theorem:
```
FFT(y) = FFT(x) * FFT(h)
y = IFFT( FFT(x) * FFT(h) )
```
Real-time convolution reverb engines implement this using Overlap-Add or Partitioned Convolution to process audio in blocks with minimal latency.
✅ Key points:
- Uses a fixed RIR (recorded from a real space)
- Reverb length = RIR length
- Latency = block size / sample rate (typically 1–5 ms)
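A whole-signal (non-partitioned) sketch of the FFT route, assuming NumPy; a real-time engine would instead split the input into blocks with Overlap-Add:

```python
import numpy as np

def fft_convolve(x, h):
    # Convolution theorem: zero-pad both signals to the full linear-
    # convolution length, multiply the spectra, and transform back.
    n = len(x) + len(h) - 1
    Y = np.fft.rfft(x, n) * np.fft.rfft(h, n)
    return np.fft.irfft(Y, n)

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 0.0, 0.5])
y = fft_convolve(x, h)  # matches np.convolve(x, h)
```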
3. Algorithmic Reverb
Core idea: Algorithmic Reverb simulates or synthesizes a space in real time, without using recorded RIRs.
Characteristics:
- 🎛️ Fully parametric — you can control size, diffusion, damping, modulation
- 🧊 Reverb tail can be infinite or frozen (IIR feedback)
- ✨ Can sound natural or creative depending on tuning
- 🕙 Real-time adjustable — that’s why it’s used in pedals and synths
- 🧙 Not tied to any real room — it creates imaginary acoustics
3.1. Feedback Delay Network (FDN)
Modern algorithmic reverbs almost universally rely on the Feedback Delay Network (FDN) architecture. FDN is the mathematical heart of most reverb pedals (e.g., Strymon BigSky, Eventide, HX Stomp) and plugins.
FDN models how energy recirculates inside a virtual acoustic space through multiple delay paths and feedback coupling.
Specifically, FDN is an interconnected network of delay lines with feedback through a mixing matrix A.
Mathematically: y[n] = A * y[n - delay] + g_in * x[n]
Expanded form: y[n] = g_fb * H * y[n - delay] + g_in * x[n]
Where:
- `y[n]`: Output from delay lines at sample n
- `A`: Feedback matrix (`A = g_fb * H`)
- `H`: Orthogonal Hadamard matrix (satisfies `H^T * H = I`)
- `g_fb`: Feedback gain, a scaling factor (0~1) that controls reverb tail length and stability
- `y[n - delay]`: Previous samples from delay lines (with different delay lengths)
- `g_in`: Input gain (distributed across delay lines, `g_in = 1/N`)
- `x[n]`: Input signal

Assuming there are 8 delay lines:
```
            +--------------------------------+
            |                                |
x[n] → +→ [delay1] → y1[n] →----------------+
       |                                     |
       +→ [delay2] → y2[n] →----------------+
       |              ...                    |
       +→ [delay8] → y8[n] →----------------+
       ↑
       feedback matrix A
```

Processing Logic:
- Each delay line produces its own output y_i[n].
- These individual outputs are combined into a vector y[n].
- The feedback matrix A mixes these outputs to generate the new feedback input.
- The current external input x[n] is added to this feedback signal to form the next delay-line input samples.
- All delay lines are updated synchronously.
- The final output is a linear combination of the elements of y[n].
Output Mixing:
The audio signal we actually hear (e.g., through speakers or saved to a file) must be a one-dimensional time series. However, inside the FDN, we have an N-dimensional state vector y[n] representing the outputs of all delay lines.
To obtain a single (or stereo) audio output, we define an output mixing vector c that linearly combines the internal states into a scalar output by a dot product: `y_out[n] = cᵀ · y[n]`.
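Putting the pieces together, here is a minimal, unoptimized FDN sketch in NumPy. The delay lengths, feedback gain, and the all-ones mixing vector `c` are illustrative choices, not the tuned values a real pedal would use:

```python
import numpy as np

def hadamard(n):
    # Orthonormal Hadamard matrix H (H^T @ H = I); n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def fdn_reverb(x, delays=(149, 211, 263, 293), g_fb=0.85):
    """Minimal FDN: y[n] = A @ y[n - delay] + g_in * x[n], with A = g_fb * H."""
    N = len(delays)
    A = g_fb * hadamard(N)                # feedback matrix
    g_in = np.full(N, 1.0 / N)            # input gain spread across the lines
    c = np.ones(N)                        # output mixing vector (dot product)
    bufs = [np.zeros(d) for d in delays]  # circular delay buffers
    idx = [0] * N                         # per-line read/write positions
    out = np.zeros(len(x))
    for n in range(len(x)):
        # Read the delayed outputs y_i[n - D_i] from each line
        y = np.array([bufs[i][idx[i]] for i in range(N)])
        out[n] = c @ y                    # scalar output sample
        v = A @ y + g_in * x[n]           # feedback mix plus new input
        for i in range(N):
            bufs[i][idx[i]] = v[i]        # write the next sample into each line
            idx[i] = (idx[i] + 1) % delays[i]
    return out

x = np.zeros(512)
x[0] = 1.0                 # feed an impulse to see the synthetic RIR emerge
out = fdn_reverb(x)        # silent until the shortest delay line wraps around
```

Using mutually coprime (here prime) delay lengths avoids overlapping echo patterns; the sample-by-sample Python loop is what the C++ backend of the linked repo would replace.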
3.2. FDN as a Synthetic RIR Generator
FDNs do not play back a recorded room. Instead, they generate a virtual acoustic field in real time — controllable in size, diffusion, and modulation.
If you fix all FDN parameters and do not use modulation:
- delay lengths
- feedback matrix A
- damping
- feedback gain
then the FDN becomes a Linear Time-Invariant (LTI) system.
For a fixed configuration, the FDN produces a deterministic impulse response h[n] — a synthetic RIR that can be used just like a recorded one.
You could, in principle, convolve any dry signal with that impulse response:
```
y[n] = (x * h)[n]
```
and get the same output as running it through the fixed FDN.
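To illustrate this with something smaller than a full FDN, here is a one-line feedback comb standing in for any fixed LTI reverb: its measured impulse response, convolved with a dry signal, reproduces the system's direct output.

```python
import numpy as np

def comb(x, D=4, g=0.5):
    # A single feedback delay line: y[n] = g * y[n - D] + x[n]
    # (stands in for a fixed, unmodulated FDN, which is likewise LTI)
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (g * y[n - D] if n >= D else 0.0) + x[n]
    return y

L = 64
impulse = np.zeros(L)
impulse[0] = 1.0
h = comb(impulse)                    # the system's synthetic "RIR"
x = np.random.default_rng(0).standard_normal(L)
same = np.allclose(comb(x), np.convolve(x, h)[:L])  # True: identical outputs
```

The truncation to `[:L]` works because, within the first `L` samples, only the first `L` samples of the (infinite) impulse response can contribute.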
3.3. 🧊 The Stable “Freeze” Configuration
To maintain a constant total energy (neither growing nor decaying), the FDN must satisfy: g_fb = 1 and g_in = 0. This effectively closes the system into a lossless feedback loop:
```
y[n] = A * y[n - D] = A * A * y[n - 2D]
```
In this state:
- The feedback matrix A is orthogonal, so it perfectly preserves energy.
- No new input x[n] is injected (g_in = 0).
- The energy already inside the delay lines continues to circulate indefinitely.
Going one step further, if the delay D is constant:

```
y[n] = A^k * y[n - kD]
```
3.3.1. What does the power of A result in?
The answer: the powers of A never converge to a fixed matrix. Each application rotates the vector y[n] while preserving its total energy (magnitude), so the state stays on an N-dimensional hypersphere, where N is the number of delay lines.
In general, y[n] does not equal y[n - m] for any m, which means the signal does not freeze into a constant vector, and the scalar playback output is not constant either: it is the sum of the coordinates of a vector moving on a hypersphere of fixed radius. That is why, in freeze mode, the reverb tail sustains forever yet still feels slightly “alive”. And since the vector's magnitude never decays, there will always be non-zero values in the output waveform, so the reverb tail rings forever.
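The norm-preserving rotation can be checked with the smallest orthogonal A, a 2-D rotation matrix (the angle is chosen arbitrarily):

```python
import numpy as np

# A 2-D rotation is the simplest orthogonal feedback matrix: applying it
# repeatedly moves y around a circle (a 2-D "hypersphere") without ever
# changing its magnitude, so the frozen tail circulates instead of decaying.
theta = 0.7  # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
y = np.array([1.0, 0.0])
for _ in range(1000):
    y = A @ y
    assert abs(np.linalg.norm(y) - 1.0) < 1e-9  # energy is preserved
```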
3.4. Modulated FDN: Evolving Spaces in Time
🎛 What “modulation” means in a reverb context

In FDN-based (or Schroeder-type) algorithmic reverbs, modulation refers to slow, time-varying changes in delay times or filter coefficients, typically driven by low-frequency oscillators (LFOs). These micro-variations break up the static interference patterns (comb-filter resonances) that would otherwise produce metallic ringing or tonal coloration.
In other words: Modulation makes the internal delay lines breathe — introducing slight fluctuations that smear phase relationships and create a richer, more natural tail.
Which means: The result is a diffuse, lush, chorused reverb tail — like what you hear in professional-grade reverb hardware and pedals (Lexicon, Strymon, Eventide).
3.4.1 📖 Theoretical view
When delay times and feedback matrix are time-varying, the FDN becomes linear time-varying (LTV).
Therefore, it no longer has a fixed impulse response h[n]; its effective “RIR” evolves continuously.
Mathematically: y[n] = A(t) * y[n − D(t)] + g_in * x[n], and the impulse response h[n, t] depends on time.
One example of a time-varying delay: for each delay line i with base delay D_i,

```
D_i(t) = D_i + ΔD_i * sin(2π * f_i * t + ϕ_i)
```
where
- ΔD_i = modulation depth (samples or ms)
- f_i = modulation rate (Hz)
- ϕ_i = phase offset
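A sketch of this per-sample delay modulation, with hypothetical depth and rate values; `t` is in samples and `sr` is an assumed sample rate:

```python
import numpy as np

def modulated_delay(D, depth, rate, phase, t, sr=48000):
    # D_i(t) = D_i + ΔD_i * sin(2π * f_i * t + ϕ_i),
    # with t converted from samples to seconds via the sample rate
    return D + depth * np.sin(2.0 * np.pi * rate * t / sr + phase)

# Example: base delay of 100 samples, ±3-sample wobble at 2 Hz (made-up values)
d0 = modulated_delay(100.0, 3.0, 2.0, 0.0, 0)  # sin(0) = 0, so d0 == 100.0
```

In a real FDN the resulting fractional delay values would be read with interpolation (e.g. linear or allpass) to avoid zipper noise.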
3.4.2 🎵 Musical view
- Small modulation → smooth, animated tail without pitch wobble.
- Large modulation → chorused or dreamy “cloud” effects (e.g., Strymon Cloudburst).
- Randomized modulation → “air movement” without obvious periodicity.
This is why many reverb pedals expose mod_depth and mod_rate — they directly control how “alive” or “static” the virtual space feels.
They’re not just replaying a space, they’re continuously reshaping one.
4. Convolution vs FDN — Summary
| Property | Convolution Reverb | FDN Algorithmic Reverb |
|---|---|---|
| Model type | FIR | IIR + feedback matrix |
| Tail length | Fixed (RIR) | Adjustable / infinite |
| Control freedom | Very limited | Highly dynamic |
| Sound character | Recreates real rooms | Creates imaginary rooms |
| Typical use | Studio post-production | Pedals, synths, real-time FX |
One-Sentence Summary
Convolution reverb plays a real room.
FDN reverb creates a virtual one.
Reverb pedals are real-time, parameterized FDNs that let musicians sculpt entire spaces on stage.
🔗 Code Implementation
📘 A high-performance Python implementation with C++ optimizations:
FDN-Reverb – High-Performance Feedback Delay Network Reverb (PyTorch + C++)
Processes a 2-minute stereo track in ≈ 2.4 seconds on an M4 MacBook Pro, with adjustable feedback gain, damping, wet/dry mix, and modulation depth.