Data Mining Notes – Artificial Neural Network (ANN)

Algorithm outline:
——————–
(from video)

  1. Start with neural network (single or multiple layer).
  2. Present a training sample and compute the activation values of all units, layer by layer, until the activation values of the output-layer units are computed.
  3. Compare the computed output with the training sample's target output to compute the error.
  4. Having computed the error, re-adjust the weights so that the error decreases and the computed values move closer to the target values.
  5. When done with one training sample, continue with the next one.
  6. Use the learning rate to choose slow or fast learning. Slow learning is generally better, because smaller gradient-descent steps are less likely to overshoot a minimum.
  7. Continue training until, ideally, we reach a state where the weights no longer change much.
  8. Freeze the weights; the network has now learned the function.
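The steps above can be sketched for the simplest case, a single linear unit trained one sample at a time. The data and learning rate below are made-up toy values, not from the lecture; the target function y = 2*x1 + 3*x2 is an arbitrary choice for illustration.

```python
import numpy as np

# Toy training data: learn y = 2*x1 + 3*x2 (hypothetical example)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.5, 1.5]])
y = np.array([8.0, 7.0, 15.0, 5.5])

rng = np.random.default_rng(0)
w = rng.normal(size=2)       # step 1: start with initial (random) weights
b = 0.0
lr = 0.01                    # step 6: learning rate controls step size

for epoch in range(2000):    # step 7: repeat until the weights settle
    for xi, yi in zip(X, y): # step 5: one training sample at a time
        out = w @ xi + b     # step 2: compute the unit's activation
        err = out - yi       # step 3: compare output with the target
        w -= lr * err * xi   # step 4: adjust weights to reduce the error
        b -= lr * err

# step 8: training done, w and b now approximate the target function
```

After enough passes the weights converge near (2, 3) with bias near 0, because the toy data is noise-free.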

Backpropagation

  1. Compute the error of the output layer, then apportion that error back to the hidden layers.
  2. Having the error of each hidden layer, apply the two-layer weight-adjustment formula to re-adjust the weights. Multi-layer neural networks can learn functions that single-layer networks cannot.
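A minimal sketch of these two steps, assuming sigmoid units, mean-squared error, and batch gradient descent (the XOR data, hidden size, learning rate, and epoch count are illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: XOR, which a single-layer network cannot represent
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit (arbitrary sizes)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

for epoch in range(5000):
    # forward pass: activations layer by layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # step 1: output-layer error, then apportion it to the hidden layer
    d_out = (out - y) * out * (1 - out)   # error at the output units
    d_h = (d_out @ W2.T) * h * (1 - h)    # error apportioned backward

    # step 2: weight-adjustment formula (gradient descent update)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The hidden-layer error `d_h` is exactly the output error pushed back through the output weights, which is the "apportioning" in step 1; after that, each layer updates its weights with the same gradient-descent rule as a single-layer network.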

Video

Lec-2 Artificial Neuron Model and Linear Regression

Lec-3 Gradient Descent Algorithm

Backpropagation

Author: Trijito Santoso

I’m Trijito Santoso, a Seventh-Day Adventist, a medical technology and computer science graduate, and a software developer. I shifted from medical technology to computer science because I love to create things (designs, software, articles, anything), and being a software developer lets me create things every day. I graduated from Northeastern University, Boston, with a Master of Science in Computer Science. My resume is available on my LinkedIn.
