How I built a real-time neural network visualization with handwriting recognition, and the mathematical insights that emerged along the way
I wanted to build something that would make neural networks intuitive rather than just mathematically correct. Most explanations focus on equations without showing how networks actually learn and adapt in practice.
Neural networks are dynamic systems that evolve and adapt. Seeing a network learn to distinguish between a circle and spiral, or recognize handwritten digits, demonstrates how complex behavior emerges from simple computational rules.
This playground started with two classic problems that demonstrate non-linear classification. As I built it, I realized I needed a more direct connection between human input and machine learning. That's when I added handwriting recognition.
The mathematics behind neural networks is straightforward. At its core, we're doing repeated matrix multiplications with non-linear transformations. The complexity emerges from how these simple operations combine.
Each layer transforms its input through a simple formula. For layer $l$:

$$a^{(l)} = \sigma\left(z^{(l)}\right), \qquad z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}$$

where $W^{(l)}$ and $b^{(l)}$ are the layer's weights and biases, $a^{(l-1)}$ is the previous layer's output, and $\sigma$ is the activation function.
This simple operation, repeated across layers, can approximate any continuous function. The activation function is crucial. Without it, we'd just have a linear transformation, regardless of how many layers we stack.
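To make that concrete, here's a minimal sketch of one layer's forward pass in TypeScript. It assumes a `weights[k][j]` layout, where `k` indexes the output neuron and `j` the input neuron, which matches the convention the visualization code later in this post uses:

```typescript
// One layer's forward pass: a = σ(W·x + b).
// weights[k][j] connects input neuron j to output neuron k.
function layerForward(
  weights: number[][],
  biases: number[],
  input: number[],
  activation: (z: number) => number
): number[] {
  return weights.map((row, k) => {
    // Weighted sum z_k = Σ_j w_kj · x_j + b_k
    const z = row.reduce((sum, w, j) => sum + w * input[j], biases[k]);
    return activation(z);
  });
}
```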
I implemented three activation functions, each with different characteristics:
Sigmoid: Smooth, bounded, but suffers from vanishing gradients. Great for understanding, less great for deep networks.
Tanh: Zero-centered, which helps with convergence. My go-to choice for these problems.
ReLU: Simple, fast, and surprisingly effective. The workhorse of modern deep learning.
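For reference, here's how these three functions, along with the derivatives backpropagation needs, can be implemented. This is a straightforward sketch, not necessarily the exact code in the playground:

```typescript
// The three activations and their derivatives, with each derivative
// expressed in terms of the activation's output (which is what
// backpropagation has on hand during the backward pass).
const activations = {
  sigmoid: {
    f: (z: number) => 1 / (1 + Math.exp(-z)),
    df: (a: number) => a * (1 - a),     // σ'(z) = σ(z)(1 − σ(z))
  },
  tanh: {
    f: (z: number) => Math.tanh(z),
    df: (a: number) => 1 - a * a,       // tanh'(z) = 1 − tanh²(z)
  },
  relu: {
    f: (z: number) => Math.max(0, z),
    df: (a: number) => (a > 0 ? 1 : 0), // undefined at 0; 0 is a safe choice
  },
};
```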
Backpropagation is the chain rule applied systematically. The network learns by computing how each weight contributes to the final error and adjusting it accordingly: the error signal propagates backward through the network, telling each parameter how to improve.
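Concretely, for a single weight $w_{kj}^{(l)}$ (connecting neuron $j$ in layer $l-1$ to neuron $k$ in layer $l$), the chain rule splits the gradient into local factors, and gradient descent steps against it with learning rate $\eta$:

$$\frac{\partial E}{\partial w_{kj}^{(l)}} = \frac{\partial E}{\partial a_k^{(l)}} \cdot \frac{\partial a_k^{(l)}}{\partial z_k^{(l)}} \cdot \frac{\partial z_k^{(l)}}{\partial w_{kj}^{(l)}}, \qquad w_{kj}^{(l)} \leftarrow w_{kj}^{(l)} - \eta \, \frac{\partial E}{\partial w_{kj}^{(l)}}$$

The last two factors are purely local: $\partial a_k^{(l)} / \partial z_k^{(l)} = \sigma'(z_k^{(l)})$ and $\partial z_k^{(l)} / \partial w_{kj}^{(l)} = a_j^{(l-1)}$. Only the first factor has to be passed backward from the layers above.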
I started with two problems that demonstrate why neural networks are necessary. Linear classifiers fail on these, but neural networks can learn the complex decision boundaries.
The circle problem is simple: classify points as inside or outside a circle. This requires a curved decision boundary, which is impossible with linear models.
```typescript
circle: {
  generateData(): TrainingData[] {
    const data: TrainingData[] = [];
    for (let i = 0; i < 100; i++) {
      const x = Math.random() * 2 - 1; // Random point in [-1, 1]
      const y = Math.random() * 2 - 1;
      // The magic: classify based on distance from origin
      const target = x * x + y * y < 0.5 ? [1] : [0];
      data.push({ input: [x, y], target });
    }
    return data;
  }
}
```
A linear classifier can only draw straight lines. But the optimal decision boundary for this problem is a circle—a fundamentally curved shape. The network needs to learn that the relationship $x^2 + y^2 < 0.5$ defines the boundary.
The spiral problem is more challenging. Two spirals wind around each other, creating a complex decision boundary that requires multiple hidden layers to learn.
```typescript
spiral: {
  generateData(): TrainingData[] {
    const data: TrainingData[] = [];
    const n = 100;
    const maxRadius = 1;

    for (let i = 0; i < n; i++) {
      for (let j = 0; j < 2; j++) { // Two spirals
        const r = (i / n) * maxRadius;
        const theta = (i / n) * 4 * Math.PI + j * Math.PI; // Offset by π

        const x = r * Math.cos(theta);
        const y = r * Math.sin(theta);

        data.push({
          input: [x, y],
          target: [j] // Spiral 0 or Spiral 1
        });
      }
    }
    return data;
  }
}
```
The two classes are completely intertwined. There's no simple geometric shape that can separate them. The network must learn to recognize the spiral pattern itself, which requires sophisticated pattern recognition.
Training on these problems shows how the decision boundary evolves from random noise to the correct shape, sometimes oscillating before converging to the right pattern.
The circle and spiral problems were useful for demonstrating concepts, but I wanted something more interactive that would connect the mathematics to direct human input. That's when I decided to add handwriting recognition.
Moving from 2-input problems to handwriting meant jumping from 2 dimensions to 784 (28×28 pixels). This is a qualitative change that brings new challenges like the curse of dimensionality and much larger parameter spaces.
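To put the jump in scale into numbers: a single fully connected layer from the 784 pixel inputs to, say, 64 hidden units (a hypothetical size, purely for illustration) already carries $784 \times 64 + 64 = 50{,}240$ parameters, orders of magnitude more than the 2-input problems require.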
For handwriting, I needed a network that could handle 784 inputs and output 3 classes (digits 1, 2, and 3). After experimentation, I settled on a compact fully connected architecture mapping the 784 pixel inputs down to the 3 output classes.
Building the drawing canvas required capturing mouse and touch events, converting them to a 28×28 pixel grid, and providing real-time feedback. I added anti-aliasing to make the drawings look more natural and improve recognition accuracy.
```typescript
const drawPixel = (x: number, y: number, intensity: number = 1.0) => {
  const newPixels = [...pixels];
  const index = y * 28 + x;

  // Set the main pixel
  newPixels[index] = Math.min(1.0, newPixels[index] + intensity);

  // Add anti-aliasing to neighboring pixels
  const neighbors = [
    { dx: -1, dy: 0, factor: 0.3 },
    { dx: 1, dy: 0, factor: 0.3 },
    { dx: 0, dy: -1, factor: 0.3 },
    { dx: 0, dy: 1, factor: 0.3 },
    // Diagonal neighbors with less influence
    { dx: -1, dy: -1, factor: 0.1 },
    { dx: 1, dy: -1, factor: 0.1 },
    { dx: -1, dy: 1, factor: 0.1 },
    { dx: 1, dy: 1, factor: 0.1 },
  ];

  neighbors.forEach(({ dx, dy, factor }) => {
    const nx = x + dx;
    const ny = y + dy;
    if (nx >= 0 && nx < 28 && ny >= 0 && ny < 28) {
      const nIndex = ny * 28 + nx;
      newPixels[nIndex] = Math.min(1.0, newPixels[nIndex] + intensity * factor);
    }
  });

  setPixels(newPixels);
};
```
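Feeding `drawPixel` requires mapping pointer coordinates from the on-screen canvas to the 28×28 grid. A minimal sketch, assuming pointer events unify the mouse and touch handling (the names here are illustrative):

```typescript
// Map a mouse/touch position on the canvas to 28×28 grid coordinates.
const toGrid = (e: PointerEvent, canvas: HTMLCanvasElement) => {
  const rect = canvas.getBoundingClientRect();
  const x = Math.floor(((e.clientX - rect.left) / rect.width) * 28);
  const y = Math.floor(((e.clientY - rect.top) / rect.height) * 28);
  // Clamp so fast strokes at the edge never index out of bounds
  return {
    x: Math.min(27, Math.max(0, x)),
    y: Math.min(27, Math.max(0, y)),
  };
};
```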
The network predicts what you're drawing in real-time. Every stroke updates the prediction, creating immediate feedback between human input and the machine learning model.
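Wiring that up is just a forward pass after every stroke, roughly like this (`network.forward` and `setPrediction` are assumed names, standing in for the playground's actual API):

```typescript
// After each stroke, run the 784 pixel values through the network
// and surface the most likely digit. Class indices map to digits 1–3.
const updatePrediction = (pixels: number[]) => {
  const output = network.forward(pixels); // e.g. [0.1, 0.7, 0.2]
  const best = output.indexOf(Math.max(...output));
  setPrediction({ digit: best + 1, confidence: output[best] });
};
```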
I started with synthetic training data, but quickly realized that real handwriting is much more varied and messy. So I built a data collection system that lets users draw their own digits and contribute to the training set. The network learns from actual human handwriting, not idealized examples.
Making neural network training visual and interactive presented unique challenges. I needed to balance educational value with performance, creating something that was both informative and smooth to interact with.
Running neural network training in the browser while maintaining 60fps visualization required careful optimization. Every training step needs to update multiple canvases, recalculate decision boundaries, and maintain responsive user interaction.
I use requestAnimationFrame to create a smooth training loop. Each frame performs one training step, updates the visualizations, and schedules the next frame. This creates the illusion of continuous learning while keeping the browser responsive.
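The pattern looks roughly like this (`trainStep`, `drawNetwork`, and `drawDecisionBoundary` are stand-ins for the playground's actual functions):

```typescript
let running = true;

const trainLoop = () => {
  if (!running) return;

  network.trainStep();    // one gradient-descent step on the training set
  drawNetwork();          // redraw weights and activations
  drawDecisionBoundary(); // resample the 2D output grid

  // Yield to the browser, then continue on the next frame
  requestAnimationFrame(trainLoop);
};

requestAnimationFrame(trainLoop);
```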
Each visualization component uses HTML5 Canvas with careful attention to performance. The key insight was to handle device pixel ratio properly and minimize unnecessary redraws.
```typescript
const render = () => {
  const rect = canvas.getBoundingClientRect();
  const dpr = window.devicePixelRatio || 1;

  // Set actual canvas size accounting for device pixel ratio
  canvas.width = rect.width * dpr;
  canvas.height = rect.height * dpr;
  ctx.scale(dpr, dpr);

  // Clear with background color
  ctx.fillStyle = '#0a0a0a';
  ctx.fillRect(0, 0, rect.width, rect.height);

  // Lay the neurons out on a simple grid: layers spread horizontally,
  // neurons within a layer spread vertically
  const nodeX = (layer: number) =>
    ((layer + 1) / (network.layers.length + 1)) * rect.width;
  const nodeY = (layer: number, index: number) =>
    ((index + 1) / (network.layers[layer] + 1)) * rect.height;

  // Draw connections with weight-based opacity
  for (let i = 0; i < network.layers.length - 1; i++) {
    for (let j = 0; j < network.layers[i]; j++) {
      for (let k = 0; k < network.layers[i + 1]; k++) {
        const weight = network.weights[i][k][j];
        const opacity = Math.min(Math.abs(weight), 1);

        ctx.strokeStyle = weight > 0
          ? `rgba(255, 255, 255, ${opacity * 0.5})`
          : `rgba(115, 115, 115, ${opacity * 0.5})`;
        ctx.lineWidth = Math.min(Math.abs(weight) * 3, 4);

        // Connection from neuron j in layer i to neuron k in layer i + 1
        ctx.beginPath();
        ctx.moveTo(nodeX(i), nodeY(i, j));
        ctx.lineTo(nodeX(i + 1), nodeY(i + 1, k));
        ctx.stroke();
      }
    }
  }
};
```
For the 2D problems, I render the decision boundary by sampling the network's output across a grid of points. This creates a beautiful visualization of how the network "sees" the input space, but it's computationally expensive. I had to find the right balance between resolution and performance.
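In outline, the sampling looks like this; `network.forward` is again an assumed name for the forward pass, and `resolution` is the knob that trades boundary sharpness for frame rate:

```typescript
// Sample the network over a grid covering the input space [-1, 1]²
// and shade each cell by the predicted class probability.
const drawDecisionBoundary = (
  ctx: CanvasRenderingContext2D,
  size: number,   // canvas size in CSS pixels
  resolution = 50 // grid cells per axis
) => {
  const cell = size / resolution;
  for (let gx = 0; gx < resolution; gx++) {
    for (let gy = 0; gy < resolution; gy++) {
      // Center of this cell, mapped to input space [-1, 1]²
      const x = ((gx + 0.5) / resolution) * 2 - 1;
      const y = ((gy + 0.5) / resolution) * 2 - 1;
      const [p] = network.forward([x, y]); // single output in [0, 1]

      // Shade the cell by the network's confidence
      ctx.fillStyle = `rgba(255, 255, 255, ${p.toFixed(3)})`;
      ctx.fillRect(gx * cell, gy * cell, cell, cell);
    }
  }
};
```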
Building this playground taught me several important lessons about neural networks, web performance, and the intersection of education and technology.
Immediate feedback significantly improves learning. When you can see the network's decision boundary evolving in real-time, or watch your handwriting being recognized as you draw, abstract concepts become concrete and understandable.
Implementing neural networks in TypeScript was surprisingly pleasant. The type system caught many bugs early, especially around matrix dimensions and data flow. The performance was adequate for educational purposes, though I wouldn't recommend it for production ML workloads.
Making the hyperparameters adjustable revealed how sensitive neural networks can be. A learning rate that's too high causes wild oscillations. Too few neurons and the network can't learn complex patterns. Too many and it overfits quickly.
Running neural networks in the browser taught me about the delicate balance between educational value and performance: do one small training step per animation frame, handle the device pixel ratio correctly, and trade decision-boundary resolution for frame rate.
Building this neural network playground was both a technical challenge and a journey of discovery. Here are the key insights that emerged from the process.
Interactive learning is more effective than passive reading. Seeing a decision boundary evolve, or watching your handwriting get recognized in real-time, creates intuitive understanding that equations alone cannot provide.
When someone sees the spiral decision boundary form, or watches their handwritten digit get correctly classified, neural networks stop being mysterious black boxes and become understandable tools.
If I were to rebuild this, I'd consider using WebGL for better performance, implement more sophisticated data augmentation, and add support for convolutional layers. The core insight that immediate visual feedback improves learning would remain central to the design.
The best educational tools let you experience concepts rather than just explaining them. Neural networks are about pattern recognition and learning from data. Making that process visible and interactive builds intuition that goes beyond memorizing formulas.
Experience the neural network playground firsthand. Train networks on different problems, draw your own digits, and watch the learning process unfold in real-time.
Launch Neural Network Playground

Building this neural network playground was a journey of discovery, both technical and educational. The intersection of mathematics, visualization, and human interaction continues to be an interesting area to explore, and I hope this documentation provides useful insights.