
Data Mining Notes – Artificial Neural Network (ANN)

Algorithm outline:
-------------------
(from video)

  1. Start with a neural network (single or multiple layers).
  2. Present a training sample and compute the activation values of all units, layer by layer, until we have the activation values of the output-layer units.
  3. Compare the computed output with the training sample's output to compute the error.
  4. Having computed the error, re-adjust the weights so the error decreases and the computed values move closer to the real values.
  5. When done with one training sample, continue with the next one.
  6. Use the learning rate to choose slow or fast learning. Slow learning is generally better: gradient descent with small steps moves reliably toward a minimum, while large steps can overshoot it.
  7. Continue learning until, ideally, we reach a state where the weights no longer change much.
  8. Freeze the weights; we have now learned the function.
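The steps above can be sketched as a per-sample training loop for a single linear unit (the delta rule, i.e. stochastic gradient descent). The variable names and the toy data here are illustrative assumptions, not from the notes:

```python
# Minimal sketch of the outline above for one linear unit.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (assumed for illustration): learn y = 2*x1 + 3*x2.
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0])

w = np.zeros(2)          # step 1: start with a (single-layer) network
b = 0.0
lr = 0.05                # step 6: learning rate (small = slow, stable)

for epoch in range(50):  # step 7: repeat until the weights stabilize
    for xi, yi in zip(X, y):    # step 5: one training sample at a time
        out = xi @ w + b        # step 2: compute the activation value
        err = yi - out          # step 3: compare with the sample's output
        w += lr * err * xi      # step 4: re-adjust weights to reduce error
        b += lr * err

# Step 8: freeze the weights -- we have learned the function.
print(np.round(w, 2), round(b, 2))  # prints [2. 3.] 0.0
```

Raising `lr` makes each correction larger (faster learning) but risks overshooting the minimum, which is the trade-off step 6 refers to.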

Backpropagation

  1. Compute the error of the output layer, then apportion that error back to the hidden layers.
  2. Having the error of a hidden layer, use the two-layer learning algorithm (the weight-adjustment formula) to re-adjust its weights. Multi-layer neural networks can learn functions, such as XOR, that a single-layer network cannot.
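A minimal sketch of this apportioning for a 2-4-1 sigmoid network on the XOR data (the classic task a single layer cannot learn). The network sizes, variable names, and the finite-difference sanity check are my own assumptions, not from the notes:

```python
# One backward pass: output-layer error apportioned to the hidden layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2, X):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # output-layer activations
    return h, out

def loss(out, y):
    return 0.5 * np.mean((out - y) ** 2)

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input  -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

h, out = forward(W1, b1, W2, b2, X)
# Step 1: error (delta) at the output layer, then apportion it to the
# hidden layer back through the weights W2.
d_out = (out - y) * out * (1 - out) / len(X)
d_hid = (d_out @ W2.T) * h * (1 - h)
# Step 2: these deltas give the weight adjustments for each layer.
gW2, gW1 = h.T @ d_out, X.T @ d_hid

# Sanity check: the apportioned gradient matches a numerical estimate
# obtained by nudging one weight and re-running the forward pass.
eps = 1e-6
W1[0, 0] += eps; _, out_p = forward(W1, b1, W2, b2, X)
W1[0, 0] -= 2 * eps; _, out_m = forward(W1, b1, W2, b2, X)
W1[0, 0] += eps  # restore
numeric = (loss(out_p, y) - loss(out_m, y)) / (2 * eps)
print(abs(numeric - gW1[0, 0]) < 1e-8)  # prints True
```

In a full training run these gradients would be subtracted from `W1` and `W2` (scaled by the learning rate) once per pass, exactly as in the single-layer loop above.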

Video

Lec-2 Artificial Neuron Model and Linear Regression

Lec-3 Gradient Descent Algorithm

Backpropagation
