Perceptron Learning Rule

I am new to machine learning and data science. By spending some time online I was able to understand the perceptron learning rule fairly well, but I am still clueless about how to apply it to a set of data. For example, we may have the following values of $x_1$, $x_2$ and $d$ respectively:

\begin{align}(0.6,\ 0.9,\ 0)\\ (-0.9,\ 1.7,\ 1)\\ (0.1,\ 1.4,\ 1)\\ (1.2,\ 0.9,\ 0)\end{align}

I can't think of how to begin.

I think we need to follow these rules.

$$w_i \leftarrow w_i + \Delta w_i$$ $$\Delta w_i = \eta\,(d - y)\,x_i$$ $$y = \begin{cases} 1 & \text{if } \sum_i w_i x_i \ge 0 \\ 0 & \text{otherwise} \end{cases}$$ $$x_0\ (\text{bias input}) = 1$$

where $d$ is the target value, $y$ is the output value, $\eta$ is the learning rate and $x_i$ is the $i$-th input value.

Any help is appreciated. Thanks!



You don't want to apply those rules by hand. You have a matrix of observations, and you multiply it by a matrix of weights in which, initially, all the weights are assigned randomly. The weight matrix is laid out so that each column represents a neuron and each row holds the weight that multiplies one of your features; if you have $k$ layers, you have $k$ weight matrices.

At each iteration, the DL framework (you don't want to do this manually) compares the real target to your prediction and computes a cost function: how far is the prediction from the real value? The derivative of that cost is then computed with respect to the weights, and it can be shown that the direction of steepest descent is obtained by updating the weights as $w_i \leftarrow w_i + \Delta w_i$, where $\Delta w_i = \eta\,(d - y)\,x_i$. This continues for the number of iterations you defined. I hope this gives you a hint of how it works.
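To make the rule concrete, here is a minimal sketch that applies it directly to the four points from the question. The learning rate $\eta = 0.1$, the zero initial weights, the constant bias input $x_0 = 1$ and the epoch limit are all assumptions for illustration, not values given in the question. Starting from $w = (0, 0, 0)$, the first point $(0.6, 0.9)$ with target $d = 0$ yields $y = 1$ (since $\sum_i w_i x_i = 0 \ge 0$), so the first update is $\Delta w = 0.1 \cdot (0 - 1) \cdot (1, 0.6, 0.9) = (-0.1, -0.06, -0.09)$.

```python
import numpy as np

# Training data from the question: columns are x1, x2; d is the target.
X = np.array([[ 0.6, 0.9],
              [-0.9, 1.7],
              [ 0.1, 1.4],
              [ 1.2, 0.9]])
d = np.array([0, 1, 1, 0])

# Prepend a constant bias input x0 = 1, so w[0] acts as the bias weight.
X = np.hstack([np.ones((len(X), 1)), X])

eta = 0.1                  # learning rate (assumed)
w = np.zeros(X.shape[1])   # initial weights (assumed zero)

for epoch in range(100):   # epoch limit (assumed)
    errors = 0
    for x, target in zip(X, d):
        y = 1 if np.dot(w, x) >= 0 else 0  # step activation
        w += eta * (target - y) * x        # perceptron update rule
        errors += int(y != target)
    if errors == 0:        # stop once every point is classified correctly
        break

print(w)
```

If the data is linearly separable, the loop stops once every point is classified correctly; if it is not, the updates keep cycling forever, which is one reason frameworks prefer differentiable losses and gradient descent.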
