Neural Network Playground Documentation

Overview

The Neural Network Playground is an interactive tool for visualizing and understanding how neural networks learn. It provides real-time visualization of network architecture, weights, and training progress.

Features

Parameters

Problem Type

Choose which classification problem the network should learn. Each problem has its own training-data generator (see Problem Generators below).

Hidden Layers

Number of layers between input and output (1-5)

More layers allow the network to learn more complex patterns but may be harder to train.

Neurons per Layer

Number of neurons in each hidden layer (2-16)

More neurons increase the network's capacity but require more training data.

Learning Rate

Controls how much the network adjusts its weights during training (0.001-1.0)

Lower values provide more stable training but slower convergence.
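To illustrate that trade-off (this is a toy example, not the playground's code), consider one gradient-descent step on the one-dimensional loss (w - 3)^2, whose gradient is 2(w - 3). The learning rate directly scales how far each step moves:

```javascript
// Illustrative only: one gradient-descent step on loss(w) = (w - 3)^2.
// The learning rate scales the size of every weight adjustment.
function gradientStep(w, learningRate) {
    const gradient = 2 * (w - 3);       // derivative of the toy loss
    return w - learningRate * gradient; // move against the gradient
}

// A small rate moves cautiously toward the minimum at w = 3...
const slow = gradientStep(0, 0.01);  // 0 -> 0.06
// ...while a rate of 1.0 overshoots straight past it.
const fast = gradientStep(0, 1.0);   // 0 -> 6
```

Repeated small steps converge steadily; the large step bounces across the minimum, which is why high learning rates can make training unstable.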

Activation Function

Choose which activation function each neuron applies to its weighted sum.
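The original list of options is not preserved on this page; as a sketch, the common choices and their derivatives look like this (which of these the playground actually offers is an assumption, and the sigmoid/tanh derivatives are written in terms of the activated output y, matching how the training code applies them):

```javascript
// A representative set of activation functions; the exact options in the
// playground's UI are an assumption. Derivatives for sigmoid and tanh take
// the already-activated output y, as the training code expects.
const activations = {
    sigmoid: {
        fn: x => 1 / (1 + Math.exp(-x)),   // squashes to (0, 1)
        derivative: y => y * (1 - y)
    },
    tanh: {
        fn: x => Math.tanh(x),             // squashes to (-1, 1)
        derivative: y => 1 - y * y
    },
    relu: {
        fn: x => Math.max(0, x),           // zero for negative inputs
        derivative: y => (y > 0 ? 1 : 0)
    }
};
```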

Neural Network Implementation Details

Network Architecture

The network is implemented as a series of fully-connected layers. Each layer contains neurons that connect to every neuron in the subsequent layer. The basic structure is:


class NeuralNetwork {
    constructor(layers) {
        this.weights = [];
        this.biases = [];
        
        // Initialize weights and biases between layers
        for (let i = 0; i < layers.length - 1; i++) {
            const weightsMatrix = new Matrix(layers[i + 1], layers[i]);
            weightsMatrix.randomize(); // Initialize with random values
            this.weights.push(weightsMatrix);
            
            const biasMatrix = new Matrix(layers[i + 1], 1);
            biasMatrix.randomize();
            this.biases.push(biasMatrix);
        }
    }
}
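The snippets on this page rely on a small Matrix linear-algebra helper that is not shown. A minimal sketch of the operations they assume (fromArray, toArray, randomize, matrix product, transpose, element-wise add/multiply, and map), offered as an illustration rather than the playground's actual implementation:

```javascript
// Minimal Matrix helper covering the operations the snippets assume.
// This is a sketch, not the playground's actual implementation.
class Matrix {
    constructor(rows, cols) {
        this.rows = rows;
        this.cols = cols;
        this.data = Array.from({ length: rows }, () => new Array(cols).fill(0));
    }

    randomize() {  // uniform values in [-1, 1]
        this.map(() => Math.random() * 2 - 1);
    }

    static fromArray(arr) {  // column vector from a plain array
        const m = new Matrix(arr.length, 1);
        arr.forEach((v, i) => (m.data[i][0] = v));
        return m;
    }

    toArray() {
        return this.data.flat();
    }

    static multiply(a, b) {  // matrix product a * b
        const result = new Matrix(a.rows, b.cols);
        for (let i = 0; i < result.rows; i++) {
            for (let j = 0; j < result.cols; j++) {
                let sum = 0;
                for (let k = 0; k < a.cols; k++) sum += a.data[i][k] * b.data[k][j];
                result.data[i][j] = sum;
            }
        }
        return result;
    }

    static subtract(a, b) {  // element-wise a - b, returns a new Matrix
        const result = new Matrix(a.rows, a.cols);
        result.data = a.data.map((row, i) => row.map((v, j) => v - b.data[i][j]));
        return result;
    }

    static transpose(m) {
        const result = new Matrix(m.cols, m.rows);
        for (let i = 0; i < m.rows; i++)
            for (let j = 0; j < m.cols; j++) result.data[j][i] = m.data[i][j];
        return result;
    }

    static map(m, fn) {  // like map below, but returns a new Matrix
        const result = new Matrix(m.rows, m.cols);
        result.data = m.data.map((row, i) => row.map((v, j) => fn(v, i, j)));
        return result;
    }

    add(other) {  // element-wise, in place
        this.map((v, i, j) => v + other.data[i][j]);
    }

    multiply(other) {  // element-wise (Hadamard) or scalar, in place
        if (other instanceof Matrix) this.map((v, i, j) => v * other.data[i][j]);
        else this.map(v => v * other);
    }

    map(fn) {  // apply fn(value, row, col) to every entry, in place
        for (let i = 0; i < this.rows; i++)
            for (let j = 0; j < this.cols; j++)
                this.data[i][j] = fn(this.data[i][j], i, j);
    }
}
```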

Forward Propagation

During forward propagation, input data flows through the network layer by layer. Each neuron computes a weighted sum of its inputs plus a bias, then applies an activation function:


forward(input_array) {
    let current = Matrix.fromArray(input_array);
    
    for (let i = 0; i < this.weights.length; i++) {
        // Weighted sum: weights * inputs + bias
        current = Matrix.multiply(this.weights[i], current);
        current.add(this.biases[i]);
        
        // Apply activation function
        current.map(this.activation_function);
    }
    
    return current.toArray();
}

Backpropagation

The backpropagation process updates weights and biases to minimize error. For each training example it:

  1. Calculates the error at the output layer
  2. Computes each layer's gradients from that error and the activation derivative
  3. Updates that layer's weights and biases
  4. Propagates the error back to the previous layer and repeats

train(input_array, target_array) {
    // Forward pass, storing every layer's activations for the backward pass
    let activations = [Matrix.fromArray(input_array)];
    for (let i = 0; i < this.weights.length; i++) {
        let layer = Matrix.multiply(this.weights[i], activations[i]);
        layer.add(this.biases[i]);
        layer.map(this.activation_function);
        activations.push(layer);
    }

    // Error at the output layer: targets - outputs
    let errors = Matrix.subtract(Matrix.fromArray(target_array),
                                 activations[activations.length - 1]);

    // Backpropagate layer by layer, from output back to input
    for (let i = this.weights.length - 1; i >= 0; i--) {
        // Gradient: activation derivative (in terms of the layer's output),
        // scaled element-wise by the error and by the learning rate
        let gradients = Matrix.map(activations[i + 1], this.activation_function_derivative);
        gradients.multiply(errors);              // element-wise
        gradients.multiply(this.learning_rate);  // scalar

        // Weight deltas use the previous layer's activations, not the raw inputs
        let weight_deltas = Matrix.multiply(gradients, Matrix.transpose(activations[i]));
        this.weights[i].add(weight_deltas);
        this.biases[i].add(gradients);

        // Carry the error back through this layer's weights
        errors = Matrix.multiply(Matrix.transpose(this.weights[i]), errors);
    }
}

Problem Generators

Training data is generated using problem-specific generators. For example, the circle problem:


class CircleProblem {
    generatePoint() {
        // Sample a random point in the unit disk (uniform in radius)
        const r = Math.random();                   // radius in [0, 1]
        const theta = Math.random() * 2 * Math.PI; // angle in [0, 2π)

        return {
            x: r * Math.cos(theta),
            y: r * Math.sin(theta),
            label: r < 0.5 ? 1 : 0  // inside the r = 0.5 circle = 1, outside = 0
        };
    }
}
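The training loop further down calls problem.generateBatch(batchSize). A sketch of how a generator could wrap generatePoint into network-ready batches (the input/target field names are assumptions chosen to match that loop):

```javascript
// Sketch: wrap a problem's generatePoint() into batches shaped for
// network.train(). The input/target field names are assumptions.
function generateBatch(problem, batchSize) {
    const batch = [];
    for (let i = 0; i < batchSize; i++) {
        const p = problem.generatePoint();
        batch.push({
            input: [p.x, p.y],  // 2-D coordinates as network input
            target: [p.label]   // class label as the expected output
        });
    }
    return batch;
}
```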

Visualization System

The visualization system uses two HTML5 canvases: one for the network diagram and one for the decision-boundary heat map.


class NetworkVisualizer {
    constructor(networkCanvas, resultCanvas) {
        this.networkCtx = networkCanvas.getContext('2d');
        this.resultCtx = resultCanvas.getContext('2d');
    }

    drawNetwork(network) {
        // Draw neurons as circles, layer by layer
        for (let layer = 0; layer < network.layers.length; layer++) {
            for (let neuron = 0; neuron < network.layers[layer]; neuron++) {
                this.drawNeuron(layer, neuron);
            }
        }

        // Draw connections, with line thickness based on weight magnitude
        for (let i = 0; i < network.weights.length; i++) {
            this.drawConnections(network.weights[i], i);
        }
    }

    drawDecisionBoundary(network) {
        // Sample the canvas on a grid; width, height, and resolution
        // come from the page's configuration
        for (let x = 0; x < width; x += resolution) {
            for (let y = 0; y < height; y += resolution) {
                const input = normalizeCoordinates(x, y);
                const output = network.forward(input);

                // Color the grid cell by the network's output
                const color = getHeatmapColor(output);
                this.resultCtx.fillStyle = color;
                this.resultCtx.fillRect(x, y, resolution, resolution);
            }
        }
    }
}
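drawDecisionBoundary leans on helpers that are not shown here (normalizeCoordinates, getHeatmapColor) plus width, height, and resolution from the page. A sketch of what they could look like, assuming network inputs in [-1, 1] and a simple two-color heat map (the specific ranges and colors are assumptions):

```javascript
// Sketches of the helpers drawDecisionBoundary assumes; the exact canvas
// size, input range, and colors are assumptions, not the playground's values.
const width = 400, height = 400, resolution = 10;

// Map canvas pixels to the network's input range, here assumed to be [-1, 1].
function normalizeCoordinates(px, py) {
    return [(px / width) * 2 - 1, (py / height) * 2 - 1];
}

// Map a network output in [0, 1] to a CSS color between the two class colors.
function getHeatmapColor(output) {
    const t = Math.min(Math.max(output[0], 0), 1);  // clamp to [0, 1]
    const r = Math.round(255 * t);                  // class 1 leans red
    const b = Math.round(255 * (1 - t));            // class 0 leans blue
    return `rgb(${r}, 100, ${b})`;
}
```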

Real-time Training Loop

The training process runs in an animation loop for smooth visualization:


function trainingLoop() {
    if (isTraining) {
        // Generate batch of training data
        const batch = problem.generateBatch(batchSize);
        
        // Train on batch
        for (const point of batch) {
            network.train(point.input, point.target);
        }
        
        // Update visualization
        visualizer.drawNetwork(network);
        visualizer.drawDecisionBoundary(network);
        
        // Update stats
        updateStats(network.getError(), network.getAccuracy());
        
        // Continue loop
        requestAnimationFrame(trainingLoop);
    }
}
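Because the loop re-schedules itself only while isTraining is true, start/stop control just flips that flag and kicks the loop off. A sketch of a small controller (the real page presumably wires this to UI buttons; the structure here is an assumption):

```javascript
// Sketch of start/stop control for the animation-driven training loop.
// trainingLoop re-schedules itself only while isTraining is true, so
// stopping is just a matter of clearing the flag.
const controller = {
    isTraining: false,
    start(loop) {
        if (this.isTraining) return;  // already running; don't double-schedule
        this.isTraining = true;
        loop();                       // kick off the first frame
    },
    stop() {
        this.isTraining = false;      // the loop exits on its next frame
    }
};
```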

Interactive Features

User interactions are handled through event listeners:


resultCanvas.addEventListener('click', (event) => {
    const rect = resultCanvas.getBoundingClientRect();
    const x = event.clientX - rect.left;
    const y = event.clientY - rect.top;
    
    // Add training point at click location
    const point = {
        x: normalizeCoordinate(x),
        y: normalizeCoordinate(y),
        label: currentLabel  // Based on selected class
    };
    
    trainingData.push(point);
    visualizer.drawPoint(point);
});

Optimization Techniques

Several optimization techniques are implemented to improve training; the two shown here are Xavier/Glorot weight initialization and an adaptive learning-rate schedule:


// Xavier/Glorot initialization: scale initial weights by the layer's
// fan-in and fan-out so activations neither explode nor vanish
function initializeWeights(inputSize, outputSize) {
    const variance = 2.0 / (inputSize + outputSize);
    return Matrix.random(outputSize, inputSize)
        .multiply(Math.sqrt(variance));
}

// Adaptive learning rate: shrink the rate when error rises,
// grow it when error falls
function adaptLearningRate(error_history) {
    if (error_history.length < 2) return;

    const current_error = error_history[error_history.length - 1];
    const previous_error = error_history[error_history.length - 2];

    if (current_error > previous_error) {
        learning_rate *= 0.95;  // Decrease if error increases
    } else {
        learning_rate *= 1.05;  // Increase if error decreases
    }

    // Clamp to the range exposed by the Learning Rate parameter
    learning_rate = Math.min(Math.max(learning_rate, 0.001), 1.0);
}