- Start with a neural network (single-layer or multi-layer).
- Present a training sample and compute the activation values of all units, layer by layer, until the activation values of the output-layer units are computed.
- Compare the computed output with the training sample's target output to compute the error.
- Having computed the error, the objective is to re-adjust the weights so the error decreases, i.e., so the computed values move closer to the target values.
- Once done with one training sample, continue with the next training sample.
- Use the learning rate to choose slow or fast learning. Slow learning (a small learning rate) is generally better, because gradient descent then takes smaller, more stable steps and is less likely to overshoot a minimum.
- Continue this learning process until, ideally, a state is reached where the weights no longer change much.
- Freeze the weights; the network has now learned the function.
- Compute the error of the output layer, and then apportion (backpropagate) that error to the hidden layers.
- Having the error of the hidden layer, use the two-layer learning algorithm to readjust the weights (the weight adjustment formula). Multi-layer neural networks work better than single-layer ones.
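The steps above can be sketched as a small training loop. This is a minimal illustration, not the lecture's exact algorithm: the two-layer sizes, the sigmoid activation, the learning rate, and the toy XOR data are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy training samples (XOR) -- an illustrative assumption.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for the hidden and output layers.
W1 = rng.normal(size=(2, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output
lr = 0.5  # learning rate: smaller values learn more slowly but more stably

for epoch in range(5000):
    # Forward pass: compute activations layer by layer up to the output.
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # output-layer activations

    # Compare computed output with the target output to get the error.
    err = y - out

    # Backpropagation: apportion the output error to the hidden layer.
    d_out = err * out * (1 - out)          # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)     # hidden-layer delta

    # Weight adjustment (gradient descent step).
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_h

# Error should have decreased substantially from its initial value.
mse = float(np.mean(err ** 2))
print("final MSE:", mse)
```

After enough epochs the weights stop changing much and can be frozen; at that point the network has (approximately) learned the function represented by the training samples.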
Lec-2 Artificial Neuron Model and Linear Regression
Lec-3 Gradient Descent Algorithm