[Scatter plot legend: Class A — inner · Class B — outer]
RBFNN Explorer
© 2026 Theodore P. Pavlic · MIT License

Latent space view
Background shading shows a linear classifier fit on this 2D projection. In RBFNN mode the classes become linearly separable after the RBF transformation — a direct illustration of what the kernel trick exploits. In linear mode the raw input space is shown: same interlocked structure, same failure.
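The point above can be checked numerically. The sketch below (an illustration, not the Explorer's own code; the dataset, center-selection rule, and σ = 1 are assumptions) builds two interlocked ring classes, fits a least-squares linear classifier once on the raw 2D inputs and once on Gaussian RBF features, and compares accuracies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two interlocked classes: Class A on an inner ring, Class B on an outer ring.
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
radius = np.where(np.arange(n) < n // 2, 1.0, 3.0)   # inner vs. outer
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
X += rng.normal(scale=0.1, size=X.shape)
y = np.where(np.arange(n) < n // 2, -1.0, 1.0)

def rbf_features(X, centers, sigma=1.0):
    """phi_k(x) = exp(-||x - c_k||^2 / (2 sigma^2)) for each center c_k."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def linear_fit_accuracy(F, y):
    """Least-squares linear classifier (with bias) on feature matrix F."""
    A = np.column_stack([F, np.ones(len(F))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean(np.sign(A @ w) == y)

centers = X[rng.choice(n, size=10, replace=False)]    # centers drawn from data
acc_raw = linear_fit_accuracy(X, y)                   # raw input space: fails
acc_rbf = linear_fit_accuracy(rbf_features(X, centers), y)
print(acc_raw, acc_rbf)   # RBF features make the rings (near-)separable
```

Because the rings are radially symmetric, no half-plane in the raw space can do much better than chance, while the distance-based RBF features encode exactly the radial structure a linear separator needs.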

Concepts

Single Neuron Perceptron
No hidden layer — direct weighted sum
[Diagram: inputs x₁, x₂ and bias +1 feed a weighted sum (w₁, w₂, b) through f(·) to the output ŷ]
ŷ = f(w₁x₁ + w₂x₂ + b)
  • f(z) = sign(z)  for classification
  • f(z) = z  (identity) for regression
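The single neuron above is a few lines of code. A minimal sketch (the function name and the hand-picked AND weights are illustrative assumptions, not the Explorer's code):

```python
import numpy as np

def perceptron_predict(x, w, b, task="classification"):
    """Single neuron: y_hat = f(w1*x1 + w2*x2 + b)."""
    z = np.dot(w, x) + b
    if task == "classification":
        return np.sign(z)   # f(z) = sign(z)
    return z                # f(z) = z (identity) for regression

# Hand-picked weights realizing logical AND on {-1, +1} inputs:
# z = x1 + x2 - 1 is positive only when both inputs are +1.
w = np.array([1.0, 1.0])
b = -1.0
print(perceptron_predict(np.array([1.0, 1.0]), w, b))    # +1
print(perceptron_predict(np.array([-1.0, 1.0]), w, b))   # -1
```

Linearly separable problems like AND are exactly what this unit can solve; the interlocked rings in the plot above are not among them.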
Radial Basis Function Network
Gaussian hidden layer lifts data to separable space
[Diagram: inputs x₁, x₂ feed three Gaussian units φ₁, φ₂, φ₃, whose weighted sum (w₁, w₂, w₃) passes through f(·) to the output ŷ]
φₖ(x) = exp(−‖x − cₖ‖² / 2σ²)
  • f(z) = sign(z)  for classification
  • f(z) = z  (identity) for regression
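A useful consequence of this architecture: once the centers cₖ and width σ are fixed, the output ŷ = f(Σₖ wₖφₖ(x)) is linear in the weights, so the wₖ can be found by least squares in closed form. A minimal sketch of the forward pass and that fit (function names, the toy two-point dataset, and σ = 1 are illustrative assumptions):

```python
import numpy as np

def rbf_hidden(X, centers, sigma=1.0):
    """Gaussian hidden layer: phi_k(x) = exp(-||x - c_k||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_output_weights(X, y, centers, sigma=1.0):
    # Output layer is linear in w, so least squares recovers w_1..w_K directly.
    w, *_ = np.linalg.lstsq(rbf_hidden(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, w, sigma=1.0, task="classification"):
    z = rbf_hidden(X, centers, sigma) @ w   # y_hat = f(sum_k w_k phi_k(x))
    return np.sign(z) if task == "classification" else z

# Toy usage: one Gaussian center per class, labels -1 / +1.
X = np.array([[0.0, 0.0], [3.0, 0.0]])
y = np.array([-1.0, 1.0])
w = fit_output_weights(X, y, centers=X)
print(rbf_predict(X, X, w))   # recovers the training labels
```

Choosing the centers (e.g. by sampling training points or k-means) is the only nonlinear part of training; everything after the hidden layer is an ordinary linear model.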