Neural network back propagation

27 Dec 2017

Demonstration of back propagation on neural networks

$x$ input vector of the network (fed to the hidden layer)
$D$ number of features of $x$
$z$ value of the hidden layer
$M$ number of units in the hidden layer
$K$ number of output classification classes
$a$ input (pre-activation) of the last layer
$y$ output of the last layer
$t$ target classification output, with values in $[0,1]$
$W_{dm}$ matrix of weights from the input to the hidden layer $z$
$b$ bias of the hidden layer $z$
$V_{mk}$ matrix of weights from the hidden layer to the output $y$
$c$ bias of the output layer $y$
$f(x)$ is the activation function of the hidden neurons [$sigmoid(x)$, $tanh(x)$, $reLU(x)$]
$g(x)$ is the activation function of the output neurons [$sigmoid(x)$, $softmax(x)$, $linear(x)$]

basic network example
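The derivation below assumes a single hidden layer network: $D$ inputs $x$, $M$ hidden units $z$ with activation $f$ (sigmoid in the last step), and $K$ outputs $y$ with $g = softmax$, trained against one-hot targets $t$.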

From the forward propagation formulas:
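For one sample, with $d$, $m$ and $k$ indexing input features, hidden units and output classes:

$$z_m = f\Big(\sum_{d=1}^{D} W_{dm}\, x_d + b_m\Big), \qquad a_k = \sum_{m=1}^{M} V_{mk}\, z_m + c_k, \qquad y_k = g(a_k)$$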

Back propagation searches the maximum of $\ln L$ (the log-likelihood) by gradient ascent:
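For one-hot targets the log-likelihood of a sample is $\ln L = \sum_{k} t_k \ln y_k$, and each weight is moved a small step $\eta$ (the learning rate) in the direction of its gradient (and similarly for the biases $b$ and $c$):

$$V_{mk} \leftarrow V_{mk} + \eta\, \frac{\partial \ln L}{\partial V_{mk}}, \qquad W_{dm} \leftarrow W_{dm} + \eta\, \frac{\partial \ln L}{\partial W_{dm}}$$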

From the derivative of the softmax:
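The softmax derivative $\partial y_k / \partial a_j = y_k(\delta_{kj} - y_j)$, combined with $\partial \ln L / \partial y_k = t_k / y_k$ and $\sum_k t_k = 1$, collapses to a simple error term at the output:

$$\frac{\partial \ln L}{\partial a_k} = t_k - y_k, \qquad \frac{\partial \ln L}{\partial V_{mk}} = (t_k - y_k)\, z_m, \qquad \frac{\partial \ln L}{\partial c_k} = t_k - y_k$$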

From the forward propagation formulas:
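The same forward propagation formula for $a_k$ gives the factor that carries the error back from the output to the hidden layer:

$$a_k = \sum_{m=1}^{M} V_{mk}\, z_m + c_k \;\;\Rightarrow\;\; \frac{\partial a_k}{\partial z_m} = V_{mk}$$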

From the chain rule of partial derivatives:
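Each hidden weight $W_{dm}$ influences every output through $z_m$, so the contributions of all $K$ classes are summed:

$$\frac{\partial \ln L}{\partial W_{dm}} = \sum_{k=1}^{K} \frac{\partial \ln L}{\partial a_k}\, \frac{\partial a_k}{\partial z_m}\, \frac{\partial z_m}{\partial W_{dm}}$$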

From the derivative of the sigmoid:
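Assuming $f = sigmoid$, $f'(u) = f(u)(1 - f(u))$, so $\partial z_m / \partial W_{dm} = z_m(1 - z_m)\, x_d$ and the three factors combine into the full hidden-layer gradient:

$$\frac{\partial \ln L}{\partial W_{dm}} = \sum_{k=1}^{K} (t_k - y_k)\, V_{mk}\, z_m(1 - z_m)\, x_d, \qquad \frac{\partial \ln L}{\partial b_m} = \sum_{k=1}^{K} (t_k - y_k)\, V_{mk}\, z_m(1 - z_m)$$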


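The same gradients in code: a minimal NumPy sketch, not a tuned implementation, assuming a batch of samples in the rows of `X`, one-hot targets `T`, sigmoid hidden units and a softmax output. Variable names follow the symbols above; `softmax`, `sigmoid`, `forward` and `backprop_step` are helper names chosen just for this example.

```python
import numpy as np

def softmax(a):
    # numerically stable softmax over the class axis
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(X, W, b, V, c):
    # X: (N, D), W: (D, M), b: (M,), V: (M, K), c: (K,)
    Z = sigmoid(X.dot(W) + b)   # hidden layer values z
    Y = softmax(Z.dot(V) + c)   # output probabilities y
    return Z, Y

def backprop_step(X, T, W, b, V, c, eta=0.05):
    # one gradient-ascent step on the log-likelihood sum(T * ln Y)
    Z, Y = forward(X, W, b, V, c)
    dA = T - Y                        # dlnL/da_k  = t_k - y_k
    dV = Z.T.dot(dA)                  # dlnL/dV_mk = z_m (t_k - y_k)
    dc = dA.sum(axis=0)               # dlnL/dc_k
    dZ = dA.dot(V.T) * Z * (1 - Z)    # chain rule + sigmoid derivative
    dW = X.T.dot(dZ)                  # dlnL/dW_dm
    db = dZ.sum(axis=0)               # dlnL/db_m
    W += eta * dW; b += eta * db
    V += eta * dV; c += eta * dc
    return W, b, V, c

# toy check: D=2 features, M=3 hidden units, K=2 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
T = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]   # one-hot targets
W, b = rng.normal(size=(2, 3)), np.zeros(3)
V, c = rng.normal(size=(3, 2)), np.zeros(2)
for _ in range(200):
    W, b, V, c = backprop_step(X, T, W, b, V, c)
```

Each step moves the weights in the direction that increases the log-likelihood, which is exactly the update rule derived above.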
