Adaptive linear neurons and the convergence of learning

We will take a look at another type of single-layer neural network: the ADAptive LInear NEuron (Adaline). Adaline was published by Bernard Widrow and his doctoral student Ted Hoff only a few years after Frank Rosenblatt’s perceptron algorithm, and it can be considered an improvement on the latter (B. Widrow et al., Adaptive "Adaline" neuron using chemical "memistors", Technical Report 1553-2, Stanford Electron. Labs, Stanford, CA, October 1960). The Adaline algorithm is particularly interesting because it illustrates the key concept of defining and minimizing a cost function, which lays the groundwork for the more advanced machine learning algorithms for classification that we will discuss in future blog posts.

The key difference between the Adaline rule (also known as the Widrow-Hoff rule) and Rosenblatt’s perceptron is that the weights are updated based on a linear activation function rather than a unit step function as in the perceptron. In Adaline, this linear activation function o(z) is simply the identity function of the net input, so that o(wᵀx) = wᵀx.
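
To make this concrete, here is a minimal NumPy sketch of the net input and the identity activation. The function names are illustrative, and the bias is assumed to be folded into the weight vector as w0 (paired with a constant feature of 1):

```python
import numpy as np

def net_input(X, w):
    """Net input z = w^T x for each row of X (bias assumed folded into w as w0)."""
    return np.dot(X, w)

def activation(z):
    """Adaline's linear activation: simply the identity of the net input."""
    return z
```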

While the linear activation function is used for learning the weights, a quantizer, which is similar to the unit step function that we have seen before, can then be used to predict the class labels, as illustrated in the following figure:

[Figure: Adaline neural network schematic]
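
To make the quantizer step concrete, here is a minimal sketch; the threshold of 0.0 and the class labels {1, -1} are conventional assumptions rather than details stated in the figure:

```python
import numpy as np

def quantizer(z):
    """Threshold the continuous activation to obtain class labels,
    analogous to the unit step function used by the perceptron."""
    return np.where(z >= 0.0, 1, -1)
```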

If we compare the preceding figure to the illustration of the perceptron algorithm that we saw in the earlier blog post, the difference is that we now use the continuous-valued output from the linear activation function to compute the model error and update the weights, rather than the binary class labels.
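
To tie these pieces together, the sketch below shows how the continuous-valued output might drive the weight updates, assuming the standard Adaline formulation: a sum-of-squared-errors cost minimized by batch gradient descent. The function name fit_adaline, the separate bias term b, and the hyperparameters eta and n_iter are illustrative choices, not details from the original post:

```python
import numpy as np

def fit_adaline(X, y, eta=0.01, n_iter=50):
    """Batch gradient-descent sketch of the Adaline learning rule.

    The update uses the continuous output of the linear activation
    (the identity of the net input), not the thresholded class labels.
    """
    w = np.zeros(X.shape[1])   # feature weights
    b = 0.0                    # bias unit
    for _ in range(n_iter):
        output = np.dot(X, w) + b   # linear activation: identity of the net input
        errors = y - output         # continuous-valued model error
        # Gradient step on a sum-of-squared-errors cost
        w += eta * X.T.dot(errors)
        b += eta * errors.sum()
    return w, b

# Usage sketch on a tiny toy dataset (class labels in {-1, 1})
X = np.array([[-2.0, -1.0], [-1.0, -0.5], [1.0, 0.5], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])
w, b = fit_adaline(X, y)
labels = np.where(np.dot(X, w) + b >= 0.0, 1, -1)  # quantizer produces the final labels
```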
