Thursday, 8 January 2015

Neural Network Illustrated – Step by Step

I1 and I2 are the inputs, scaled to [-1, 1] or [0, 1] depending on the activation function used.
f() = activation function: Tanh(), Sigmoid(), or any differentiable function.
W = the current neuron's input weights, initialized randomly in [-1, 1].
Wb = bias weight; it is connected to nothing (its input is treated as a constant 1), acts as a threshold, and is initialized the same way as W.
N = the output of the current neuron.
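
Taken together, these definitions mean each neuron computes N = f(I1*W1 + I2*W2 + Wb). Here is a minimal Python sketch of that forward pass, assuming Tanh as the activation and a single neuron with two inputs; the name forward_neuron is mine, not from the post:

    import math
    import random

    def forward_neuron(I1, I2, W1, W2, Wb, f=math.tanh):
        # Weighted sum of the inputs; the bias weight Wb is "connected
        # to nothing", i.e. its input is treated as a constant 1.
        net = I1 * W1 + I2 * W2 + Wb * 1.0
        # N, the neuron's output, is the activation applied to that sum.
        return f(net)

    # Weights (including the bias weight) are initialized randomly in [-1, 1].
    W1, W2, Wb = (random.uniform(-1.0, 1.0) for _ in range(3))
    N = forward_neuron(0.5, -0.25, W1, W2, Wb)
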
Error Back Propagation starts here (Training)

O = the output neuron's previous output.
E = the error for the current neuron.
T = the output neuron's desired output.
f'(N) is the derivative of the activation function; N is the neuron's previous output.
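
These symbols match the standard back-propagation rules: for an output neuron, E = (T - O) * f'(N); for a hidden neuron, E is f'(N) times the sum of each downstream neuron's error times the connecting weight; and each weight is then updated as W = W + Lr * E * I, where Lr is the learning rate (the Lr readers ask about in the comments) and I is the input carried by that weight. A minimal Python sketch under those assumptions, with Tanh as f so that f'(N) = 1 - N^2; the function names are mine:

    def tanh_derivative(N):
        # f'(N) when f is tanh and N is the neuron's previous OUTPUT:
        # d/dx tanh(x) = 1 - tanh(x)^2 = 1 - N^2
        return 1.0 - N * N

    def output_error(T, O):
        # Error for an output neuron: desired output minus actual output,
        # scaled by the activation's slope at that output (here N = O).
        return (T - O) * tanh_derivative(O)

    def hidden_error(N, downstream_errors, downstream_weights):
        # Error for a hidden neuron: the errors of the neurons it feeds,
        # weighted by the connecting weights, then scaled by f'(N).
        back = sum(e * w for e, w in zip(downstream_errors, downstream_weights))
        return back * tanh_derivative(N)

    def update_weight(W, Lr, E, I):
        # Lr is the learning rate; I is the input this weight carries
        # (a constant 1 for the bias weight Wb).
        return W + Lr * E * I
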
Comments:

  1. Thank you! I've been struggling to understand backpropagation for a while... your illustrations made it all clear to me.

  2. The term Lr in the last equations, where does it come from?
