MNIST Demo

Description

This demo trains a convolutional neural network on the MNIST (Modified National Institute of Standards and Technology) handwritten digits dataset and shows the training process in your browser. We will use the architecture known as LeNet, a deep convolutional neural network known to work well on handwritten digit classification tasks.

More precisely, we will use mlpack's neural network framework to build a modified version of the network, replacing the sigmoid activation functions with Rectified Linear Unit (ReLU) activation functions. The basic network structure consists of a convolution layer followed by a pooling layer, then another convolution layer followed by a pooling layer, and finally one densely connected layer.
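
Below is a minimal sketch of how such a network could be assembled with mlpack's FFN class. It uses mlpack 3.x-style layer names; the exact constructor signatures vary across mlpack versions, and the feature-map count of the second convolution layer is an assumption rather than a detail taken from the demo.

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack::ann;

// LeNet-style network with ReLU activations, as described above.
FFN<NegativeLogLikelihood<>, RandomInitialization> BuildModel()
{
  FFN<NegativeLogLikelihood<>, RandomInitialization> model;

  // First convolution: 1 input map -> 8 feature maps, 5x5 kernel,
  // stride 1, no padding, on 28x28 MNIST images (output: 24x24x8).
  model.Add<Convolution<>>(1, 8, 5, 5, 1, 1, 0, 0, 28, 28);
  model.Add<ReLULayer<>>();
  // 2x2 max pooling with stride 2 (output: 12x12x8).
  model.Add<MaxPooling<>>(2, 2, 2, 2, true);

  // Second convolution: 8 -> 12 feature maps (the count 12 is an
  // assumption), 5x5 kernel (output: 8x8x12).
  model.Add<Convolution<>>(8, 12, 5, 5, 1, 1, 0, 0, 12, 12);
  model.Add<ReLULayer<>>();
  // 2x2 max pooling with stride 2 (output: 4x4x12).
  model.Add<MaxPooling<>>(2, 2, 2, 2, true);

  // One densely connected layer mapping the flattened features to the
  // 10 digit classes; log-softmax pairs with the NLL loss.
  model.Add<Linear<>>(4 * 4 * 12, 10);
  model.Add<LogSoftMax<>>();

  return model;
}

Training then reduces to model.Train(trainX, trainY), where trainX holds the flattened 784-dimensional images column-wise and trainY the class labels (in mlpack 3.x, NegativeLogLikelihood expects labels starting from 1).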

Confusion matrix

[Rows are the actual digit classes 0-9 and columns the predicted classes 0-9; the rightmost column reports per-class TPR (true positive rate), the bottom row per-class PPV (positive predictive value), and the corner cell overall ACC (accuracy). All cells update live while the demo trains.]
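
For reference, the statistics in that table follow the standard definitions: TPR for a digit is the fraction of its actual occurrences predicted correctly (computed along a row), PPV is the fraction of predictions of that digit that were correct (computed along a column), and ACC is the fraction of all samples on the diagonal. A small Armadillo sketch (the helper names are illustrative, not part of the demo):

#include <armadillo>

// C is a 10x10 confusion matrix where C(a, p) counts samples with
// actual class a and predicted class p.
arma::vec TPR(const arma::mat& C)
{
  // Diagonal counts divided by row sums (actual-class totals).
  arma::vec d = C.diag();
  return d / arma::sum(C, 1);
}

arma::rowvec PPV(const arma::mat& C)
{
  // Diagonal counts divided by column sums (predicted-class totals).
  arma::rowvec d = arma::vec(C.diag()).t();
  return d / arma::sum(C, 0);
}

double ACC(const arma::mat& C)
{
  // Correct predictions over all samples.
  return arma::trace(C) / arma::accu(C);
}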
Confusion Samples

[Populated with sample digits while the demo runs.]

Example predictions

[Populated with sample test digits and their predicted labels while the demo runs.]
[Per-layer visualization panels: input; convolution (24x24x8); pooling; convolution; pooling; softmax. Each panel shows activations (plus weights and gradients for the convolution layers) together with their max/min values, all updated live during training.]
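
Those max/min statistics can also be reproduced outside the browser. FFN exposes all trainable parameters as one flattened Armadillo matrix, so the aggregate weight range of the whole model is easy to print; per-layer access differs across mlpack versions, so this sketch only shows the aggregate.

#include <iostream>
#include <mlpack/methods/ann/ffn.hpp>

using namespace mlpack::ann;

// Given the `model` built in the earlier sketch, print the extremes of
// its flattened parameter vector.
void PrintWeightStats(const FFN<NegativeLogLikelihood<>, RandomInitialization>& model)
{
  const arma::mat& params = model.Parameters();
  std::cout << "max weight: " << params.max()
            << ", min: " << params.min() << std::endl;
}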