Backpropagation
In this notebook, we’ll implement a quick version of the backpropagation algorithm for a simple two-node network.
Steps to backpropagation
We outlined four steps to perform backpropagation:
- Choose random initial weights.
- Train the neural network on given input and output data.
- Update the weights.
- Repeat steps 2 & 3 many times.
Let’s now apply these steps to an example data set.
Load example data
The training data are stored in `backpropagation_example_data.csv`. Download these data and load them:
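A minimal loading sketch follows, assuming the CSV’s first column holds the inputs and its second column the outputs; the file’s actual column names aren’t shown here, so we select columns by position:

```python
import numpy as np
import pandas as pd

# Load the example data. We assume the first column holds the inputs
# and the second column the outputs; the column names are unknown,
# so we select columns by position.
df = pd.read_csv("backpropagation_example_data.csv")
in_true = df.iloc[:, 0].to_numpy()   # true inputs to the hidden network
out_true = df.iloc[:, 1].to_numpy()  # true outputs of the hidden network
```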
Here we acquire two variables:

- `in_true`: the true input to the hidden two-node neural network
- `out_true`: the true output of the hidden two-node neural network

The two-node neural network is hidden because we don’t know the weights (`w[0]` and `w[1]`). Instead, we only observe the pairs of inputs and outputs to this hidden neural network.
Let’s look at some of these data:
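For example, printing the first few (input, output) pairs, continuing the sketch above:

```python
# Show the first five (input, output) pairs:
# column 1 = in_true, column 2 = out_true.
print(np.column_stack([in_true, out_true])[:5])
```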
These data were created by sending the inputs (`in_true`, the first column above) into a two-node neural network to produce the outputs (`out_true`, the second column above).
Again, we do not know the weights of this network … that’s what we’d like to find.
To do so, we’ll use these data to train a neural network through backpropagation.
For training, first define two useful functions:
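The two functions from the original notebook aren’t shown here; given the gradient expression in the Challenges below, a natural pair is the sigmoid activation and a feedforward pass through the two-node network. A sketch:

```python
def sigmoid(x):
    # Logistic sigmoid activation.
    return 1 / (1 + np.exp(-x))

def feedforward(w, s0):
    # Send input s0 through the two-node network with weights w.
    # Returns the activity at each stage: the output s2,
    # the first node's activity s1, and the input s0.
    s1 = sigmoid(w[0] * s0)  # activity of node 1
    s2 = sigmoid(w[1] * s1)  # activity of node 2 = network output
    return s2, s1, s0
```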
Now, train the neural network with these (`in_true`, `out_true`) data.
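A sketch of a training loop, assuming gradient descent on the cost \(C = (out - target)^2 / 2\); the learning rate, iteration count, and random initialization below are illustrative choices, not values from the original notebook:

```python
alpha = 0.1            # learning rate (an illustrative choice)
K = 15000              # number of training iterations (an illustrative choice)
w = np.random.rand(2)  # step 1: choose random initial weights

for k in range(K):
    # Step 2: send a training input through the network.
    i = k % len(in_true)  # cycle through the training data
    target = out_true[i]
    s2, s1, s0 = feedforward(w, in_true[i])
    out = s2

    # Step 3: update the weights by gradient descent on
    # the cost C = (out - target)**2 / 2.
    dC_dw1 = (out - target) * s2 * (1 - s2) * s1
    dC_dw0 = (out - target) * s2 * (1 - s2) * w[1] * s1 * (1 - s1) * s0
    w[0] -= alpha * dC_dw0
    w[1] -= alpha * dC_dw1
    # Step 4: repeat.

print("Estimated weights:", w)
```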
Challenges
- Use the chain rule to verify the expression \(\dfrac{dC}{dw_0} = (out-target)\,s_2(1-s_2)\,w_1\,s_1(1-s_1)\,s_0\) (a sketch of this derivation appears after this list).
- Complete the code above to determine the weights (`w[0]` and `w[1]`) of the hidden two-node neural network.
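For reference, here is a sketch of the chain-rule decomposition behind the first challenge, assuming the cost \(C = \frac{1}{2}(out - target)^2\), the feedforward relations \(s_1 = \sigma(x_1)\) with \(x_1 = w_0 s_0\) and \(s_2 = \sigma(x_2)\) with \(x_2 = w_1 s_1\), and the identity \(\sigma'(x) = \sigma(x)(1 - \sigma(x))\):

\[
\frac{dC}{dw_0}
= \frac{dC}{ds_2}\,\frac{ds_2}{dx_2}\,\frac{dx_2}{ds_1}\,\frac{ds_1}{dx_1}\,\frac{dx_1}{dw_0}
= (out - target)\; s_2(1-s_2)\; w_1\; s_1(1-s_1)\; s_0.
\]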