Background shading shows a linear classifier fit on this 2D projection. In RBFNN mode the classes become linearly separable after the RBF transformation, a direct illustration of what the kernel trick exploits. In linear mode the raw input space is shown: same interlocked structure, same failure.
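The effect described above can be reproduced in a few lines. The sketch below is a minimal illustration, not the demo's actual code: it builds an interlocked dataset (an inner disk surrounded by a ring), fits a plain logistic regression on the raw 2D coordinates (which fails, as in linear mode), then fits the same linear model on Gaussian RBF features computed at a handful of centers (which succeeds, as in RBFNN mode). The number of centers and the width parameter `gamma` are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interlocked 2D classes: class 0 is an inner disk, class 1 a surrounding ring.
n = 200
r = np.concatenate([rng.uniform(0.0, 1.0, n),    # inner radii
                    rng.uniform(2.0, 3.0, n)])   # outer radii
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def fit_linear(F, y, lr=0.1, steps=2000):
    """Plain logistic regression via gradient descent (no regularization)."""
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the log loss
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(F, y, w, b):
    return np.mean(((F @ w + b) > 0) == y)

# Linear mode: a linear classifier on the raw coordinates stays near chance,
# because no straight line separates a disk from the ring around it.
w, b = fit_linear(X, y)
raw_acc = accuracy(X, y, w, b)

# RBFNN mode: map each point to Gaussian bumps around a few data points,
# then fit the SAME linear model in that feature space.
centers = X[rng.choice(len(X), 10, replace=False)]
gamma = 0.5  # arbitrary kernel width for this illustration
sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
Phi = np.exp(-gamma * sq_dists)
w2, b2 = fit_linear(Phi, y)
rbf_acc = accuracy(Phi, y, w2, b2)

print(f"raw 2D accuracy: {raw_acc:.2f}, RBF-feature accuracy: {rbf_acc:.2f}")
```

The classifier itself never changes; only the representation does, which is exactly the point the caption makes about the kernel trick.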