Thursday, 8 January 2015

Published January 08, 2015, with 10 comments

Neural Network Illustrated – Step by Step

I1 and I2 are the inputs, scaled to [-1, 1] or [0, 1] depending on the activation function used.
f() = Activation function: Tanh(), Sigmoid(), or any other differentiable function.
W = The current neuron's input weights, initialized randomly in [-1, 1].
Wb = Bias weight, connected to no input (equivalently, a weight on a constant input of 1), used as a threshold; initialized the same as W.
N = The output of the current neuron.
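The figures that originally illustrated these definitions are missing here, so as a stand-in, here is a minimal Python sketch of the forward pass they describe: N = f(sum of W*I + Wb). The names forward, inputs, weights, and bias_weight are my own, chosen to match the symbols above.

    import math
    import random

    def forward(inputs, weights, bias_weight):
        # N = f(sum(W_i * I_i) + Wb). The bias weight acts as a learned
        # threshold, equivalent to a weight on a constant input of 1.
        activation = sum(w * i for w, i in zip(weights, inputs)) + bias_weight
        return math.tanh(activation)  # f() = Tanh(); Sigmoid() also works

    # Two inputs scaled to [-1, 1]; weights initialized randomly in [-1, 1].
    I = [0.5, -0.25]
    W = [random.uniform(-1, 1) for _ in I]
    Wb = random.uniform(-1, 1)
    N = forward(I, W, Wb)  # the neuron's output, in (-1, 1) for Tanh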

Error Back-Propagation starts here (Training)

O = The output neuron's previous output (from the forward pass).
E = Error for the current neuron.
T = The output neuron's desired (target) output.
f'(N) is the derivative of the activation function, where N is the neuron's previous output.
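The equations themselves were in figures that are missing here. As a stand-in, here is a minimal Python sketch of one training step for an output neuron in the usual textbook formulation these symbols suggest: E = T - O, a delta of E * f'(N), and each weight nudged by Lr * delta * input. Lr, which a commenter asks about below, conventionally denotes the learning rate; the function names are my own.

    def tanh_derivative(n):
        # For f = Tanh, f'(N) can be written using the neuron's own
        # previous output: f'(N) = 1 - N^2.
        return 1.0 - n * n

    def train_output_neuron(target, output, inputs, weights, bias_weight, lr):
        # E = T - O: error for the output neuron.
        error = target - output
        # Delta: the error scaled by the slope of the activation, f'(N).
        delta = error * tanh_derivative(output)
        # Weight update: W += Lr * delta * I, with Lr the learning rate.
        new_weights = [w + lr * delta * i for w, i in zip(weights, inputs)]
        # The bias weight sees a constant input of 1.
        new_bias = bias_weight + lr * delta
        return new_weights, new_bias, delta

The returned delta is what would be propagated back to the previous layer, each hidden neuron receiving it scaled by the weight that connects it to the output.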

10 comments:

  1. Thank you! I've been struggling with trying to understand backpropagation for a while... your illustrations made it all clear to me.

  2. I just stumbled upon your blog and wanted to say that I have really enjoyed reading your blog posts. Anyway, I'll be subscribing to your feed and I hope you post again soon. A fantastic presentation, very open and informative. You have beautifully presented your thoughts in this blog post.

  3. The term Lr in the last equations, where is it coming from?

  4. Wow, absolutely fantastic blog. I am very glad to have such useful information.

  5. I'm going to read this, and I'll be sure to come back; thanks for sharing. This article gives the light in which we can observe the reality. It is a very nice one and gives in-depth information. Thanks for this nice article.

  6. I also wrote an article on a similar subject; you will find it at write what you think.

  7. Thank you. Reading the back-propagation method above, I saw that the small arrows indicate the % error at each node.
    The error is not only direct from the external output; since the inputs come from previous nodes, each node responds differently according to its defined ratio, and if the % error is greater than in previous conditions, new node targets are created by those ratios.
    Different inputs provide different results, but the similarity ratios of some inputs are close.
    For example, next-alphabet prediction will return an error if the start is not correct (auto-correct), even if the rest is all correct, since attention needs to be updated backwards (in, a, u, un, etc.).
