I1, I2 = The inputs, scaled to [-1, 1] or [0, 1] depending on the activation function used.
f() = Activation function: Tanh(), Sigmoid(), or any differentiable function.
W = The current neuron's input weights, initialized randomly in [-1, 1].
Wb = Bias weight, connected to nothing and used as a threshold; initialized the same as W.
N=The output of the current neuron.
O = The output neuron's previous output.
E = Error for the current neuron.
T = The output neuron's desired output.
f'(N) is the derivative of the activation function; here N is the neuron's previous output.
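The glossary above can be tied together in a small worked example. This is a minimal sketch, not the post's exact code: it assumes the usual delta-rule update for a single neuron, N = f(I1*W1 + I2*W2 + Wb) and E = (T - N) * f'(N), with each weight nudged by Lr * E * input. The variable names mirror the glossary; the initial values, the target T, and the loop count are illustrative choices.

```python
import math

Lr = 0.5                      # learning rate (the Lr term in the last equations)
I1, I2 = 0.0, 1.0             # inputs, scaled to [0, 1]
W1, W2 = 0.3, -0.2            # input weights, normally random in [-1, 1]
Wb = 0.1                      # bias weight, acts as a threshold

def f(x):                     # activation function (Tanh here)
    return math.tanh(x)

def f_prime(n):               # derivative of Tanh, written in terms of the output N
    return 1.0 - n * n

T = 1.0                       # desired output for this training example

for _ in range(100):
    N = f(I1 * W1 + I2 * W2 + Wb)   # forward pass: the neuron's output
    E = (T - N) * f_prime(N)        # error term for this neuron
    W1 += Lr * E * I1               # each weight moves by Lr * error * its input
    W2 += Lr * E * I2
    Wb += Lr * E                    # the bias "input" is implicitly 1

print(round(N, 3))                  # N ends up close to the target T
```

Because Tanh saturates near 1, f'(N) shrinks as N approaches T, so the final steps of convergence are slow; that is one reason Lr is left for you to tune.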
Thank you! I've been struggling with trying to understand backpropagation for a while... your illustrations made it all clear to me.
Where does the term Lr in the last equations come from?
It's the learning speed (learning rate), set by yourself.
Thx!
Thank you. Reading the backpropagation method above, I saw that the small arrows indicate the %error at each node.
The error comes not only directly from outside: since the inputs arrive from previous nodes, each responds differently according to its defined ratio, and if the %error exceeds previous conditions, new target ratios are created.
Different inputs produce different results, but the similarity ratios of some inputs are close.
For example, next-alphabet prediction will return an error if the start is not correct (auto-correct), even if the rest is all correct, since attention needs to be updated going back (in, a, u, un, etc.).