Ferroelectric Memristor Synapses & Crossbar Learning
Hardware-emergent unsupervised learning via STDP · Boyn et al. (2017), Nat. Commun. 8:14736
© 2026 Theodore P. Pavlic · MIT License

A ferroelectric tunnel junction (FTJ) acts as a memristor: its conductance (the synaptic strength) changes based on the relative timing (Δt) of spikes from the neurons it connects. When an input spike arrives at the synapse just before the post-synaptic neuron fires, the superimposed voltages briefly exceed the switching threshold, strengthening the synapse. A spike arriving in the reverse order, just after the neuron has already fired, weakens it. This Spike-Timing-Dependent Plasticity (STDP) emerges from ferroelectric domain physics alone: no explicit learning rule, no external controller.
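The timing dependence above can be sketched as a function ΔG(Δt). The exponential window and the amplitude/time-constant parameters below are common modeling assumptions, not the measured ferroelectric curve from Boyn et al.; only the sign convention follows the text (Δt > 0 means the input spike came first, so the synapse strengthens).

```python
import numpy as np

def delta_G(dt_ns, A_plus=1.5, A_minus=1.2, tau_ns=100.0):
    """Simplified exponential STDP window (illustrative parameters).
    dt_ns > 0: input spike preceded the output spike -> potentiation.
    dt_ns < 0: input spike followed the output spike -> depression."""
    if dt_ns > 0:
        return A_plus * np.exp(-dt_ns / tau_ns)    # ΔG > 0, strengthen
    elif dt_ns < 0:
        return -A_minus * np.exp(dt_ns / tau_ns)   # ΔG < 0, weaken
    return 0.0
```

The change decays as |Δt| grows: near-coincident spikes produce the largest conductance update, and widely separated spikes leave the junction untouched.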

FTJ Crossbar Array — full structure
A crossbar array places FTJ memristors at every column–row intersection. Columns carry pre-synaptic spike inputs (one per pixel); rows are the output neurons — each neuron fires when the sum of currents through its row's FTJ synapses reaches threshold, producing a spike output to the right. Each FTJ junction is one synapse — 45 synapses total in the 9×5 array of Tab 2.
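The row-current summation described above is exactly an analog matrix–vector product: each row current is the sum of conductance × voltage over that row's synapses (Kirchhoff's current law). A minimal sketch, with made-up conductances, input pattern, and firing threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(5, 9))   # FTJ conductances: 5 output rows x 9 input columns
v_in = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # spikes on the 9 input columns

I_row = G @ v_in          # each row sums I = G * V over its 9 synapses
fired = I_row >= 2.5      # a neuron fires once its row current crosses threshold (assumed value)
```

The crossbar computes this product in a single analog step; the threshold comparison is what turns accumulated row current into an output spike.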
One FTJ Junction — where STDP acts
The highlighted FTJ is the synapse through which the input on that column reaches the neuron on the highlighted row. When enough current accumulates along the row, that neuron fires — producing an output spike to the right. The FTJ conductance G controls how much that particular input contributes to the neuron's activation (its synaptic weight). G changes by ΔG based on timing: ΔG > 0 if the input spike arrives before the neuron fires (that input helped trigger it — potentiation), and ΔG < 0 if the neuron had already fired when the input spike arrives shortly after (the input was too late to contribute — depression). These changes make the neuron more or less sensitive to that input in the future. The lateral inhibition that prevents different row neurons from converging on the same input pattern requires additional circuitry not shown in this simple array.
Synapse Diagram  —  drag Δt slider to explore
Interactive slider: Δt ranges from output-first (Δt < 0) to input-first (Δt > 0); the readout shows the resulting FTJ conductance change ΔG (e.g. Δt = +200 ns gives ΔG = +1.29 µS, synapse strengthened).
STDP Learning Curve  (after Boyn et al. 2017)
Voltage Waveforms at FTJ Junction
Legend: Vinput · Voutput · VFTJ = Vinput − Voutput · dotted lines mark the switching thresholds ±Vth
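The switching condition these waveforms illustrate can be checked numerically: neither spike alone crosses ±Vth, but their superposition at the junction does. The waveform shape, amplitudes, timing, and threshold below are assumptions for illustration, not the measured pulses of Boyn et al.

```python
import numpy as np

def spike(t, t0, width=50.0, amp=1.0):
    """Toy spike: a +amp pulse followed by a -amp tail (assumed waveform shape)."""
    v = np.zeros_like(t)
    v[(t >= t0) & (t < t0 + width)] = amp
    v[(t >= t0 + width) & (t < t0 + 2 * width)] = -amp
    return v

t = np.arange(0.0, 500.0, 1.0)      # time axis in ns
v_in = spike(t, t0=100.0)           # pre-synaptic spike
v_out = spike(t, t0=150.0)          # post-synaptic spike, 50 ns later
v_ftj = v_in - v_out                # voltage actually dropped across the junction
V_th = 1.5                          # switching threshold (assumed value)

# Only the superposition exceeds ±V_th, so only paired spikes switch the FTJ:
switches = bool(np.any(np.abs(v_ftj) > V_th))
```

Because VFTJ is the difference of the two waveforms, an isolated spike never reaches the threshold; the conductance only changes when the two spikes overlap within the STDP window.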

A 9×5 crossbar of FTJ memristors connects 9 input neurons (one per pixel, labeled 1–9 in reading order) to 5 output neurons. Repeated presentation of noisy versions of 3 patterns drives STDP weight updates. Combined with winner-take-all lateral inhibition, different outputs spontaneously specialize for different input patterns — unsupervised clustering emerging entirely from hardware physics. With 5 output neurons but only 3 patterns, the competitive dynamics cause 3 neurons to specialize (one per pattern) while the remaining 2 either duplicate a specialization or stay unspecialized, replicating Fig. 4b of Boyn et al. (2017).
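The training dynamics just described can be sketched as a small simulation. Everything here is an assumption for illustration: the three pattern shapes, the learning rate, the pixel-flip noise level, and the simplified binary STDP update (potentiate active inputs on the winning row, depress inactive ones) stand in for the measured ferroelectric conductance changes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three 3x3 binary patterns, unrolled in reading order (assumed shapes, not the paper's)
patterns = np.array([
    [1, 0, 0, 0, 1, 0, 0, 0, 1],   # diagonal
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # vertical bar
    [1, 1, 1, 0, 0, 0, 0, 0, 0],   # horizontal bar
], dtype=float)

G = rng.uniform(0.3, 0.7, size=(5, 9))   # 45 FTJ conductances, random start
eta, noise_p = 0.05, 0.1                 # learning rate and pixel-flip probability (assumed)

for step in range(2000):
    x = patterns[rng.integers(3)].copy()
    flip = rng.random(9) < noise_p        # noisy presentation: random pixel flips
    x[flip] = 1 - x[flip]
    winner = int(np.argmax(G @ x))        # winner-take-all stands in for lateral inhibition
    # STDP on the winning row only: inputs active just before the output spike are
    # potentiated (Δt > 0); inactive inputs are depressed (Δt < 0)
    G[winner] += eta * np.where(x > 0.5, 1.0, -1.0)
    G[winner] = np.clip(G[winner], 0.0, 1.0)   # conductance is physically bounded
```

After training, the rows that win each clean pattern should carry high conductance on that pattern's active pixels and low conductance elsewhere, while the surplus rows duplicate a specialization or stay unspecialized.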

Three Input Patterns to Learn  —  pixel labels 1–9 mark reading-order position in the 3×3 image
Weights persist — watch the network relearn
The 9 labeled pixels are unrolled row-by-row (reading order) into the 9 input columns of the crossbar below. Pixel 1 (top-left) → column 1; … pixel 9 (bottom-right) → column 9. The same labels appear above the crossbar columns and inside the learned weight images so you can track exactly which pixel each weight corresponds to.
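The pixel-to-column mapping described above is a row-major flatten. A one-line sketch, using the diagonal pattern as an example input:

```python
import numpy as np

img = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1]])   # 3x3 pattern: pixel 1 top-left ... pixel 9 bottom-right

v_in = img.flatten()          # row-major ("reading order") unroll: pixel k -> column k
# v_in[0] drives crossbar column 1, v_in[8] drives column 9
```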
Crossbar Array  —  9 input columns × 5 output rows = 45 FTJ memristors  ·  output activity bars aligned right
Now presenting: pixels 1–9 in reading order → input columns below
● darker circle = higher FTJ conductance (stronger synaptic weight) ● arc color = winning output neuron — matches current input pattern (lateral inhibition)
Learned Receptive Fields  —  weight image per output neuron
Each panel shows the 9 synaptic weights of one output neuron as a 3×3 image (pixel labels match the column labels above). Dark = strong connection. As training converges, each panel should resemble one of the 3 input patterns. With 5 neurons and only 3 patterns, 2 neurons are "surplus" — they may duplicate a pattern or remain unspecialized, which is exactly the competitive equilibrium reported in Fig. 4b of Boyn et al. (2017). Colored cells = confirmed specialization. Border = current winner.
Pattern Specialization  —  cumulative wins per output neuron, by input pattern presented
Each column of bars shows how often that output neuron fired when each pattern was presented. Early training: all three colors mixed (no specialization). At convergence: one color dominates per column, and the pattern letter appears below it.
Recognition Rate Over Training