Artificial Neural Network


How does training happen in an ANN?

Training in ML Algorithms:

Consider Linear / Logistic Regression models, where training aims at finding the optimum values of the coefficients / weights of the linear model (y = WX + B, where W denotes the coefficients and B denotes the intercept of the line)

Gradient Descent is one of the optimization techniques for finding the optimum weights of a model using an iterative approach:

        W := W - Learning rate * derivative of Loss w.r.t. W

        B := B - Learning rate * derivative of Loss w.r.t. B

W = coefficients of the linear model

B = Intercept of the linear model

Learning rate = Tunable hyperparameter that controls the step size of each update

Loss function = Scalar measure of the gap between predicted and actual outputs (e.g. mean squared error)

Once we find the optimum W and B values, we can perform predictions in ML through the newly trained model.
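The update rules above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a mean-squared-error loss and hypothetical toy data generated from y = 3x + 2; the variable names (W, B, lr) mirror the formulas above and are not from any particular library.

```python
import numpy as np

# Hypothetical toy data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.01, size=100)

W = np.zeros(1)   # coefficients of the linear model
B = 0.0           # intercept of the linear model
lr = 0.1          # learning rate

for _ in range(500):
    y_pred = X @ W + B
    error = y_pred - y
    # Derivatives of the mean-squared-error loss w.r.t. W and B
    dW = 2 * (X.T @ error) / len(y)
    dB = 2 * error.mean()
    # The iterative updates: W := W - lr * dL/dW, B := B - lr * dL/dB
    W -= lr * dW
    B -= lr * dB

print(W, B)
```

After training, W and B should land close to the true values 3 and 2, and new predictions are just `X_new @ W + B`.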

In the case of neural networks, these coefficients need to be calculated for every node in the entire layered network, so training an ANN includes the tasks below:

  1. Forward Propagation - for calculating the activation functions (outputs) of each node.
  2. Back Propagation - for finding the derivatives of the loss w.r.t. the weights.
One pass of Forward + Back Propagation over the training data (1 epoch), repeated under Gradient Descent (iterative optimization), completes the training of an ANN model.
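The two steps can be sketched together for a tiny network with one hidden layer. This is a minimal NumPy sketch under stated assumptions: sigmoid activations, a squared loss, and hypothetical AND-gate toy data; the layer sizes and learning rate are illustrative choices, not prescribed by the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Hypothetical toy task: learn the AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

# One hidden layer with 4 nodes, one output node
W1 = rng.normal(0, 1, (2, 4)); B1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); B2 = np.zeros(1)
lr = 1.0

for _ in range(2000):
    # Forward propagation: calculate each node's activation (output)
    H = sigmoid(X @ W1 + B1)      # hidden-layer outputs
    out = sigmoid(H @ W2 + B2)    # network output

    # Back propagation: derivatives of the squared loss w.r.t. each weight
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)

    # Gradient Descent updates, as in W := W - lr * dL/dW above
    W2 -= lr * (H.T @ d_out); B2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_H);   B1 -= lr * d_H.sum(axis=0)

loss = ((out - y) ** 2).mean()
print(loss)
```

Each loop iteration is one epoch here (the whole 4-row dataset is one batch); after training, the outputs round to the correct AND-gate targets.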

Beyond the weight updates, the iterative optimization also needs to loop over different numbers of nodes per layer and different numbers of layers, to find the architecture that gives the optimum prediction.
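That architecture search can be sketched as a simple loop over candidate layer shapes, training one model per candidate and keeping the one with the best validation score. This sketch assumes scikit-learn's `MLPClassifier` and its `make_moons` toy dataset; the candidate architectures listed are arbitrary examples.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical toy dataset, split into train and validation sets
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = -1.0, None
# Loop over candidate architectures: (nodes per layer, ..., per hidden layer)
for arch in [(4,), (16,), (4, 4), (16, 16)]:
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)                # forward + back prop + gradient descent
    score = model.score(X_val, y_val)    # validation accuracy for this shape
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, best_score)
```

Picking the winner on a held-out validation set (rather than the training set) is what keeps this search from simply rewarding the biggest network.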
