Training loop
input S = antigen set
output M = memory cells
▷ one slot in M per ag ∈ S (canonical CLONALG)
generate random B-cell population
▷ one generation = one sweep over all ag ∈ S
▷ here G = ∞ (the loop runs continuously)
repeat for G generations :
for all antigens ag ∈ S :
▷ Phase 1: evaluate, select, clone, mutate
compute f(b, ag) ∀ b ∈ B
▷ uses the evolved (b, rB) pair — see the affinity card
Ab* ← top-k B cells by f(·, ag) ← k by user
for rank i, parent bi ∈ Ab* :
ni ← round(β × Nab / i) ← rank-proportional cloning
σi ← σ0 · exp(−f(bi, ag)) ← inv. proportional mutation
σr,i ← σ0 · rB · exp(−f(bi, ag)) ← scales with current rB
Ci ← ni clones of bi; for each clone c ∈ Ci :
c ← c + N(0, σi²) ← additive zero-mean Gaussian
c.rB ← c.rB + N(0, σr,i²) ← rB mutated independently
re-evaluate f(c, ag) ∀ c ∈ Ci
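Phase 1 can be sketched in Python. The exp-of-distance affinity and the names `phase1` / `affinity` are illustrative assumptions for this sketch, not part of the visualizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def affinity(b, ag):
    # toy affinity (assumption): higher when the cell is closer to the antigen
    return np.exp(-np.linalg.norm(b - ag))

def phase1(B, rB, ag, k=5, beta=1.0, sigma0=0.5):
    """Evaluate, select top-k parents, clone rank-proportionally, hypermutate."""
    f = np.array([affinity(b, ag) for b in B])
    order = np.argsort(f)[::-1][:k]            # indices of top-k parents, best first
    N_ab = len(B)
    clones, clone_rB = [], []
    for rank, idx in enumerate(order, start=1):
        n_i = round(beta * N_ab / rank)        # rank-proportional clone count
        sigma_i = sigma0 * np.exp(-f[idx])     # mutation shrinks as affinity grows
        sigma_ri = sigma0 * rB[idx] * np.exp(-f[idx])   # rB step scales with current rB
        for _ in range(n_i):
            clones.append(B[idx] + rng.normal(0.0, sigma_i, size=B[idx].shape))
            clone_rB.append(rB[idx] + rng.normal(0.0, sigma_ri))
    return np.array(clones), np.array(clone_rB)
```

Note that rank 1 alone contributes β·Nab clones, so the cloud is dominated by the best parent.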
▷ Phase 2: update B, memory, and diversity
▷ B and all clones compete on affinity; only the Nab fittest survive
B ← top-Nab of (B ∪ C1 ∪ … ∪ Ck) ← full affinity competition
if f(best c, ag) > f(M[ag], ag) :
M[ag] ← best c ← M only ever improves
replace the d lowest-f cells in B with random cells ← d by user; maintains diversity
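Phase 2 can be sketched the same way. This assumes a clone array already produced by a Phase-1 step; rB handling is omitted for brevity, and the toy exp-of-distance affinity is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def affinity(b, ag):
    # toy affinity (assumption): higher when the cell is closer to the antigen
    return np.exp(-np.linalg.norm(b - ag))

def phase2(B, clones, ag, M, d=2):
    """Survivor selection over B ∪ C, per-antigen memory update, random replacement."""
    N_ab, dim = B.shape
    pool = np.vstack([B, clones])
    f_pool = np.array([affinity(x, ag) for x in pool])
    survivors = pool[np.argsort(f_pool)[::-1][:N_ab]]   # top-Nab of B ∪ C
    # the memory slot for this antigen only ever improves
    best = clones[np.argmax([affinity(c, ag) for c in clones])]
    key = tuple(ag)
    if key not in M or affinity(best, ag) > affinity(M[key], ag):
        M[key] = best
    # diversity: overwrite the d lowest-affinity survivors with random cells
    f_s = np.array([affinity(x, ag) for x in survivors])
    survivors[np.argsort(f_s)[:d]] = rng.normal(size=(d, dim))
    return survivors, M
```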
Recognition (deployment)
for new sample x :
m* ← arg max_{m ∈ M} f(x, m)
s.t. dist(x, m) ≤ m.rB
if m* found :
label x as class(m*)
else :
return NO MATCH
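A minimal recognition sketch, assuming each memory entry stores a (cell vector, rB, label) triple and the toy exp-of-distance affinity used above:

```python
import numpy as np

def recognize(x, memory):
    """memory: list of (cell, rB, label) triples -- assumed layout."""
    # keep only memory cells whose recognition radius rB covers x
    matches = [(m, r, lab) for m, r, lab in memory
               if np.linalg.norm(x - m) <= r]
    if not matches:
        return "NO MATCH"
    # among covering cells, pick the highest-affinity one
    best_cell, _, label = max(matches,
                              key=lambda t: np.exp(-np.linalg.norm(x - t[0])))
    return label
```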
What this visualizer shows
Each generation sweeps through all antigens in sequence.
Phase 1 (first Step click): the active antigen glows;
top-k parents are selected and their clone cloud appears.
Phase 2 (second Step click): best clones replace parents;
memory M is updated; d lowest-affinity cells are replaced with randoms.
B cells are coloured by their nearest antigen only during the pass
— at rest all cells render neutrally, since CLONALG tracks no cell-to-antigen ownership.
Relationship to GAs & ES — key similarities and differences
Affinity = fitness — same role, different name; both drive selection toward better solutions
Clonal expansion ≈ fitness-proportionate selection + replication — more clones for higher-affinity cells, analogous to roulette-wheel; unlike GA, clones are copies of a single parent (no crossover)
No crossover — purely mutation-driven; structurally similar to (μ+λ)-ES but run independently per antigen each generation
Hypermutation rate ∝ 1/f — mutation tightens as affinity grows, the inverse of early GA intuition; compare to self-adaptive ES where σ also shrinks near the optimum via a separate strategy parameter
Per-antigen loop = multi-modal structure — each antigen is a separate peak to cover; the outer loop over S is analogous to Unit 4 niching, maintaining diversity across multiple targets simultaneously
Memory set M = elitism per antigen — M is the output, not just the population; analogous to an archive in multi-objective EA (Unit 3) but indexed by class rather than by objective trade-off
Random replacement = diversity maintenance — prevents full convergence; analogous to random immigrants in GAs
Can follow NSA — NSA learns the self/non-self boundary (one-class); CLONALG then learns to distinguish which non-self class, using NSA’s rejected regions as the anomaly search space
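A tiny numeric illustration of the cloning and hypermutation contrasts above (the β, Nab, σ0 values are arbitrary):

```python
import numpy as np

# Rank-proportional cloning (CLONALG): the clone count depends only on the
# parent's rank i, not on raw fitness magnitude -- unlike GA roulette-wheel
# selection, which samples in proportion to fitness itself.
beta, N_ab = 1.0, 10
clone_counts = [round(beta * N_ab / i) for i in range(1, 6)]

# Inverse-affinity hypermutation: the mutation step shrinks as affinity f
# grows, echoing self-adaptive ES step sizes contracting near an optimum.
sigma0 = 0.5
sigmas = [sigma0 * np.exp(-f) for f in (0.1, 0.5, 0.9)]
```

The clone schedule is front-loaded (the rank-1 parent alone gets β·Nab clones), while the σ schedule falls monotonically with affinity.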