RBFNN Explorer
© 2026 Theodore P. Pavlic · MIT License
Class A — inner · Class B — outer

Latent space view
In RBFNN mode, the K fixed centers are projected to 2D via PCA of the K-dimensional activation space, and the decision-boundary weights are fit with a least-squares solver. In SVM mode, all N = 90 training points are treated as candidate centers; the max-margin solver retains only the S support vectors (the K slider snaps to S) and discards the rest. The feature-space canvas and accuracy bar always reflect the active mode.
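The RBFNN-mode pipeline above — Gaussian activations at K fixed centers, a least-squares readout, and a PCA projection of the K-dimensional activation space down to 2D — can be sketched as follows. The two-ring data, center choice, and σ are illustrative stand-ins for the app's actual dataset and controls, not its implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-ring data standing in for the N = 90 training points
# (inner ring = class A = -1, outer ring = class B = +1)
n = 90
theta = rng.uniform(0, 2 * np.pi, n)
radius = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
y = np.where(radius < 2.0, -1.0, 1.0)

# K fixed RBF centers (sampled from the data) and a shared width sigma
K, sigma = 8, 1.0
centers = X[rng.choice(n, K, replace=False)]

# K-dimensional activations: phi[i, k] = exp(-||x_i - c_k||^2 / (2 sigma^2))
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * sigma**2))

# Least-squares solve for the output weights (bias folded in as a ones column)
A = np.c_[Phi, np.ones(n)]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
acc = np.mean(np.sign(A @ w) == y)

# PCA of the activation space: project the K-dim activations to 2D
Z = Phi - Phi.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
latent2d = Z @ Vt[:2].T  # each training point's latent-space-view coordinate
print(latent2d.shape, acc)
```

The SVM-mode variant would differ only in how centers are chosen: a max-margin solver over all N candidate centers, keeping the S support vectors.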

Concepts

Single Layer Perceptron (SLP)
No hidden layer — direct weighted sum
[Diagram: inputs x₁, x₂ with weights w₁, w₂ and a bias b (fed by a fixed −1 input) sum into the activation f(·), producing the output ŷ]
ŷ = f(w₁x₁ + w₂x₂ − b)
  • f(z) = sign(z)  binary classification (hard threshold; non-differentiable)
  • f(z) = 1/(1+e⁻ᶻ)  Bernoulli / binary probability (differentiable — use for training)
  • f(z) = exp(z)  Poisson / Gamma regression (log link)
  • f(z) = z  linear (Gaussian) regression
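The SLP forward pass and the four activation choices listed above can be sketched directly. The function and dictionary names here are illustrative, not part of the app:

```python
import numpy as np

# The four output activations f(.) from the list above
activations = {
    "sign": np.sign,                            # hard threshold; non-differentiable
    "sigmoid": lambda z: 1 / (1 + np.exp(-z)),  # Bernoulli / binary probability
    "exp": np.exp,                              # Poisson / Gamma log link
    "identity": lambda z: z,                    # linear (Gaussian) regression
}

def slp(x1, x2, w1, w2, b, f="sigmoid"):
    """Single-layer perceptron: y_hat = f(w1*x1 + w2*x2 - b)."""
    z = w1 * x1 + w2 * x2 - b
    return activations[f](z)

# On the decision boundary (z = 0) the sigmoid gives exactly 0.5
print(slp(1.0, 1.0, 0.5, 0.5, 1.0, "sigmoid"))  # → 0.5
```

Note that only the differentiable activations support gradient-based training; sign(z) is typically swapped in only at prediction time.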
Radial Basis Function Network (RBFNN)
Gaussian hidden layer lifts data to separable space
[Diagram: inputs x₁, x₂ feed three Gaussian hidden units φ₁, φ₂, φ₃; their outputs, weighted by w₁, w₂, w₃ with bias w₀ (fed by a fixed −1 input), sum into the activation f(·), producing the output ŷ]
φₖ(x) = exp(−‖x − cₖ‖² / 2σ²)
  • f(z) = sign(z)  binary classification (hard threshold; non-differentiable)
  • f(z) = 1/(1+e⁻ᶻ)  Bernoulli / binary probability (differentiable — use for training)
  • f(z) = exp(z)  Poisson / Gamma regression (log link)
  • f(z) = z  linear (Gaussian) regression
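The "lifts data to separable space" claim can be made concrete with XOR, the classic dataset that no SLP can separate in input space. Two Gaussian hidden units make it linearly separable; the center placement and σ below are an illustrative sketch, not the app's settings:

```python
import numpy as np

# XOR: not linearly separable in the raw (x1, x2) input space
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

# Two Gaussian hidden units, centered on the two "same-class" corners
centers = np.array([[0., 0.], [1., 1.]])
sigma = 0.7
d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * sigma**2))  # phi_k(x) = exp(-||x - c_k||^2 / 2 sigma^2)

# Linear readout on the lifted features, weights via least squares
A = np.c_[Phi, np.ones(len(X))]     # bias folded in as a ones column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = np.sign(A @ w)
print(pred)  # → [-1.  1.  1. -1.], matching y
```

In the lifted (φ₁, φ₂) space the two corners near a center map close to (1, ·) or (·, 1) while the mixed corners land near (0.36, 0.36), so a single line separates the classes.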