Hebbian Learning & Competitive Clustering
Unsupervised cluster discovery via correlation-based weight updates
© 2026 Theodore P. Pavlic · MIT License

Hebbian learning ("neurons that fire together, wire together") strengthens the connection between two neurons in proportion to their simultaneous activity. Applied to a competitive winner-take-all output layer, it produces unsupervised clustering: the winning neuron's weights drift toward whichever input pattern triggered it, eventually specializing for one cluster. Pure Hebbian updating is unstable, however: one neuron monopolizes all patterns unless a stabilizing constraint is added. Two routes to that fix are compared below. The 2D explorer uses the same three modes as the 9-pixel demo in Tab ②.

① Hebbian Updating with Lateral Inhibition

For the output neuron j with the largest dot product between its weights and the inputs (the "winning" neuron), update each weight wij from input i to neuron j by the Hebbian rule:

wij[k+1] ← wij[k] + η · xi[k] · Nj[k]

where η is the learning rate and the output Nj[k] is 1 if the winning dot product is positive (and 0 otherwise).

To implement lateral inhibition (LI), no other output neuron is updated, and each "loser" neuron ℓ has Nℓ = 0.

Because nothing constrains weight growth, this mode collapses: the first neuron to win a slight majority of patterns grows its weights without bound and eventually wins everything. In the explorer, vectors are soft-capped at magnitude 1.6 to keep them visible.
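A minimal sketch of one Mode ① update in Python (NumPy; the function name and matrix layout are illustrative, not taken from the widget's source):

```python
import numpy as np

def hebbian_li_step(W, x, eta=0.1):
    """One Mode-1 step: pure Hebbian update of the winner; lateral
    inhibition keeps every loser's output (and update) at zero.

    W   : (n_outputs, n_inputs) weight matrix, modified in place
    x   : (n_inputs,) input pattern
    eta : learning rate
    """
    j = int(np.argmax(W @ x))            # winner = largest dot product
    N_j = 1.0 if W[j] @ x > 0 else 0.0   # winner fires only if its dot product is positive
    W[j] += eta * x * N_j                # Hebbian step; losers (N = 0) are untouched
    return j
```

With no bound on ‖W[j]‖, repeated wins make the same row's dot products ever larger, which is exactly the collapse described above.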

② Hebbian + LI + Normalization

As in Mode ①, the winning output neuron will be updated (and the others will not), but the result will then be normalized. That is:

wij[k+1] ← wij[k] + η · xi[k] · Nj[k]  (Hebbian step, all i∈{1,…,n})
wij[k+1] ← wij[k+1] / ‖(w1j,…,wnj)‖  (normalize to unit length)

Dividing by the L2 norm restores unit magnitude after every update, placing the winner's weight vector on the unit sphere. Since the magnitude is fixed, strengthening weights toward one cluster's direction necessarily rotates them away from other directions: the competition is over direction, not growth. The arrows converge to point at distinct cluster centers.

As with the implementation of LI in Mode ①, each loser neuron ℓ will have N=0.
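The same step with the Mode ② normalization added (a sketch under the same illustrative assumptions as the Mode ① sketch; NumPy only):

```python
import numpy as np

def hebbian_norm_step(W, x, eta=0.1):
    """One Mode-2 step: Hebbian update of the winner, then rescale the
    winner's weight vector back onto the unit sphere."""
    j = int(np.argmax(W @ x))          # winner = largest dot product
    if W[j] @ x > 0:                   # winner fires (N_j = 1)
        W[j] += eta * x                # Hebbian step
        W[j] /= np.linalg.norm(W[j])   # normalize to unit length
    return j
```

Because ‖W[j]‖ is reset to 1 after every win, a neuron can only change direction, never magnitude; rotating toward one cluster is automatically paid for by rotating away from the rest.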

③ Hebbian + LI + Anti-Hebbian Losers

As in Mode ①, the winning output neuron will be updated (and the others will not), but losers will also be updated using an anti-Hebbian rule. That is, for the winner neuron j:

wij[k+1] ← wij[k] + η · xi[k] · Nj[k]  (Hebbian step, all i∈{1,…,n})

and for each "loser" neuron ℓ:

wiℓ[k+1] ← wiℓ[k] − α · η · xi[k]  (anti-Hebbian step, all i∈{1,…,n})

where α is the anti-Hebbian rate (0 < α ≤ 1; here α=0.4), which controls how strongly losers are depressed relative to the winner's potentiation. Note that the depression is applied to the loser regardless of whether it would have been activated without the LI. As in Modes ① and ②, each loser neuron ℓ will have N=0.

Where Mode ② limits growth by rescaling, Mode ③ applies direct competitive depression: each loser's weights are pushed away from the current input, weakening its response to similar patterns in the future (cf. the competitive-learning analysis of Rumelhart & Zipser, 1985). This mode is less likely than Mode ② normalization to yield one specialist per cluster, because depression can destroy a loser's ability to compete before it ever specializes.
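One Mode ③ step, again as a hedged sketch (function name is illustrative; the default α = 0.4 matches the value quoted above):

```python
import numpy as np

def hebbian_antihebb_step(W, x, eta=0.1, alpha=0.4):
    """One Mode-3 step: Hebbian update of the winner plus anti-Hebbian
    depression of every loser, regardless of the loser's own activity."""
    j = int(np.argmax(W @ x))           # winner = largest dot product
    losers = np.arange(W.shape[0]) != j
    if W[j] @ x > 0:                    # winner fires (N_j = 1)
        W[j] += eta * x                 # potentiate the winner
    W[losers] -= alpha * eta * x        # push every loser's weights away from x
    return j
```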

The stabilizing constraint in Modes ② and ③ serves the same functional role as the synaptic normalization used in the memristor widget — preventing any one neuron from claiming unlimited weight budget. Modes ② and ③ are two physically different routes to the same outcome.
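A self-contained sketch of the explorer's full loop running Mode ② on three Gaussian clusters (the cluster centers, noise level, step count, and deterministic initialization below are illustrative, not the widget's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

# three Gaussian clusters in 2D, as in the explorer (centers illustrative)
centers = np.array([[1.0, 0.0], [-0.5, 0.9], [-0.5, -0.9]])
data = np.vstack([c + 0.1 * rng.standard_normal((50, 2)) for c in centers])

# three output neurons; distinct unit vectors chosen for reproducibility
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

eta = 0.1
for _ in range(2000):
    x = data[rng.integers(len(data))]   # one random dot per step
    j = int(np.argmax(W @ x))           # winner-take-all
    if W[j] @ x > 0:                    # winner fires
        W[j] += eta * x                 # Hebbian step
        W[j] /= np.linalg.norm(W[j])    # Mode-2 normalization

# unit directions of the true cluster centers, for comparison
dirs = centers / np.linalg.norm(centers, axis=1, keepdims=True)
```

After training, each column of `W @ dirs.T` has one entry near 1: every cluster has acquired a specialist neuron, mirroring the arrows converging on cluster centers in the explorer.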
2D Geometric Explorer — 3 clusters · 3 output neurons
Colored arrows = weight vectors (N1–N3). Dots = a fixed sample from 3 Gaussian clusters — each step one dot is randomly selected (highlighted) and used to update the winning neuron's weights. The dashed circle = unit circle (‖w‖=1) — in Mode ②, each vector is normalized to land exactly on it after every win.
Weight Vectors — current values
Network Architecture — 2 inputs, 3 output neurons, 6 weights
wij = weight from input xi to output neuron Nj. Each step the winning neuron (highest w·x) updates all its incoming weights; losers do not (or are anti-Hebbian depressed in Mode ③).

The same 9-pixel patterns used in the memristor widget are clustered here with a conventional ANN and Hebbian weight updates — no spike timing, no ferroelectric physics. Selecting Mode ① demonstrates the collapse that motivates the other two modes. Modes ② and ③ both converge, through different mechanisms. Switching between modes and comparing to the memristor widget illustrates that the same clustering behavior can arise from very different physical substrates.

Learning Rule
Hebbian + normalization: winner's weight vector rescaled to conserve total synaptic budget. Specialization emerges from the trade-off — strengthening one pattern's connections weakens the rest.
Input Patterns — pixel labels 1–9 in reading order
Weights persist — watch relearning
Pixels unrolled row-by-row: pixel 1 (top-left) → input 1  …  pixel 9 (bottom-right) → input 9. Same labels appear in the weight matrix column headers below.
Weight Matrix — 9 inputs × 5 output neurons  ·  cell intensity = synaptic weight  ·  highlighted row = current winner
■ darker cell = stronger synaptic weight  ·  cell value = weight (0–1) ● arc = current output activity  ·  colored cells = confirmed specialization
Learned Receptive Fields — weight image per output neuron
Each panel shows the 9 synaptic weights of one output neuron as a 3×3 image (pixel labels match the column labels above). Dark = strong connection. As training converges, each panel should resemble one of the 3 input patterns. With 5 neurons and only 3 patterns, 2 neurons are "surplus" — they may duplicate a pattern or remain unspecialized. Colored cells = confirmed specialization. Border = current winner.
Pattern Specialization — cumulative wins per output neuron, by pattern
Letter appears when one pattern clearly dominates a neuron (>45% wins, unique owner). In Mode ①, no letter will appear — all wins concentrate on one neuron.
Recognition Rate Over Training