Artificial Immune Systems

© 2026 Theodore P. Pavlic · MIT License
[Feature-space canvas — 2D projection. Axes: feature dimension 1 × feature dimension 2. Legend: self data · self-tolerance zone · candidate (evaluating) · rejected (overlaps self) · mature detector · test point (color = zone).]
Parameters
self points: 8
self-tolerance radius rs: 0.070
detector radius min rmin: 0.040
detector radius max rmax: 0.120
max live detectors (drag to ∞): 25
lifespan (steps; drag to ∞): 250

Controls
speed slider · run / step buttons

Display
self-tolerance halos · detector age fade · axis labels
Status
Press ▶ Run to begin generating candidate detectors, or use Step to advance one at a time. Drag a test point onto the canvas to classify a new observation.
Counters: live detectors 0 · censored 0 · generated 0 · non-self coverage
Detector Representation & Censoring Criterion

As in the CLONALG demo, anomaly classifiers are modelled here as abstract circular detectors parameterised by a center b and a recognition radius rB. Unlike CLONALG, however, NSA does not evolve these parameters: candidates are generated randomly and accepted or rejected by a binary censoring test. There is no affinity function, no selection pressure, and no hypermutation — only the constraint that a mature detector must not overlap any self-tolerance zone.

Censoring geometry
[Diagram: a self point with tolerance radius rs; an accepted detector (b, rB) clear of the zone; a rejected detector whose ball overlaps it.]
Censoring criterion
accept  (b, rB)  if  ∀ s ∈ S : dist(b, s) ≥ rs + rB
reject  (b, rB)  if  ∃ s ∈ S : dist(b, s) < rs + rB
rs = self-tolerance radius (same for all self points in this demo)  ·  rB drawn from [rmin, rmax] uniformly at random
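The criterion reduces to a one-line predicate over the self set. A minimal Python sketch (the function name `censor` and the sample self points are illustrative; the radii match the demo panel, rs = 0.070 and rB ∈ [0.040, 0.120]):

```python
import math

def censor(b, rB, self_points, rs):
    """Accept candidate (b, rB) iff its recognition ball clears every
    self-tolerance zone: dist(b, s) >= rs + rB for all s in S."""
    return all(math.dist(b, s) >= rs + rB for s in self_points)

S = [(0.5, 0.5), (0.3, 0.7)]                 # illustrative self points
print(censor((0.9, 0.1), 0.12, S, 0.07))     # well clear of self -> True
print(censor((0.52, 0.48), 0.04, S, 0.07))   # overlaps a self zone -> False
```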
Relationship to Evolution Strategies (ES without positive selection)
Mutation-only ES under negative selection — each new candidate is a random draw (mutation without a parent), and the censoring test is a binary selection step. Over time, especially with long detector lifetimes, the surviving population reflects the selection pressure exerted by the self-point distribution — large rB are penalised by their higher probability of hitting a self-tolerance zone; small and intermediate rB are shaped by their local context near self clusters.
Negative selection, not positive — unlike CLONALG, there is no fitness gradient pulling detectors toward a target. The population structure emerges entirely from what survives censoring, not from what is rewarded. This is analogous to purifying selection in population genetics — variation is generated blindly; the environment removes the unfit.
Long lifetimes ≈ sessile clonal accumulation — with lifespan set to ∞ and max detectors set to ∞, detectors accumulate indefinitely without dispersal. The surviving population tiles the non-self space in a way shaped entirely by the geometry of the self distribution — a “mutation-only” process whose output distribution is carved by negative selection.
b and rB are both constrained — centers b are sampled uniformly within the feature space bounds; rB is drawn from [rmin, rmax]. These bounds are the only free parameters the user controls — everything else is determined by the censoring geometry.
Algorithm Reference — Negative Selection
Training (online/continuous form)
given: self set S, self-tolerance radius rs
output: detector set D, capacity Nd

repeat for G iterations :
    draw random candidate (b, rB)
    if ∀ s ∈ S : dist(b, s) ≥ rs + rB :
        add (b, rB) to D                        ← mature detector
        if |D| > Nd : evict oldest              ← circular buffer
    else discard                                ← censored (overlaps self)
    optionally expire detectors after τ steps   ← additional turnover

▷ G = 1: classical offline NSA — generate Nd detectors once, then deploy
▷ G = ∞: continuous form (implemented here)
▷ G > 1 forces turnover via the Nd cap → coverage shifts each iteration ("random patrol"); no other finite G is typically useful
Detection (online)
for each new sample x :
    if ∃ d ∈ D : dist(x, d.b) < d.rB : flag x as NON-SELF / ANOMALY
    else : accept x as SELF / NORMAL
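The training and detection loops above can be sketched together in Python. This is a minimal sketch under the demo's panel values; the function names, the unit-square bounds, and the fixed random seed are illustrative assumptions, and the capped deque plays the role of the circular buffer:

```python
import math
import random
from collections import deque

def run_nsa(self_points, rs, rmin, rmax, Nd, G, bounds=(0.0, 1.0), seed=0):
    """Continuous-form NSA sketch: draw random candidates, censor them
    against the self set, and keep at most Nd mature detectors
    (oldest evicted first)."""
    rng = random.Random(seed)
    D = deque()
    lo, hi = bounds
    for _ in range(G):
        b = (rng.uniform(lo, hi), rng.uniform(lo, hi))  # random center
        rB = rng.uniform(rmin, rmax)                    # random radius
        if all(math.dist(b, s) >= rs + rB for s in self_points):
            D.append((b, rB))       # mature detector
            if len(D) > Nd:
                D.popleft()         # evict oldest
        # else: discard -- censored (overlaps a self-tolerance zone)
    return list(D)

def classify(x, D):
    """Flag x as non-self if any mature detector recognises it."""
    return "NON-SELF" if any(math.dist(x, b) < rB for b, rB in D) else "SELF"

S = [(0.5, 0.5)]
D = run_nsa(S, rs=0.07, rmin=0.04, rmax=0.12, Nd=25, G=500)
print(len(D), classify((0.5, 0.5), D))
```

Note that a self point can never be flagged: any detector containing it would have violated the censoring test and been discarded during training.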
Relationship to GAs
No fitness function — selection is binary (pass/fail censoring), not graded; there is no objective to optimize
No crossover, no population — detectors are generated independently; there is no recombination or generational loop
Negative selection — keeps what does not match a template, the inverse of GA positive/fitness-proportionate selection
One-class learning — trains only on normal (self) data; anomaly labels are never required

Note on dimensionality: This 2D projection is a pedagogical simplification. In practice, feature spaces are high-dimensional (dozens to hundreds of features), and random detector coverage becomes exponentially sparse — a fundamental challenge for real-valued immune algorithms that motivates structured detector generation strategies.
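The sparsity claim can be quantified with the exact volume of a d-ball: one detector of radius r covers at most π^(d/2) r^d / Γ(d/2 + 1) of the unit hypercube, a fraction that collapses as d grows. A small sketch (radius 0.12 is the demo's rmax; boundary clipping is ignored, so this is an upper bound on one detector's coverage):

```python
import math

def ball_volume_fraction(r, d):
    """Volume of a d-ball of radius r as a fraction of the unit
    hypercube [0,1]^d: pi^(d/2) * r^d / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)

for d in (2, 5, 10, 20):
    print(f"d={d:2d}  coverage per detector <= {ball_volume_fraction(0.12, d):.3e}")
```

At d = 2 a single detector can cover a few percent of the space; by d = 10 its share is below one part in a billion, which is why random generation alone does not scale.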