Gradient descent diverges to infinity
I have manually created a random data set around some mean value, and I have tried to use gradient-descent linear regression to recover this simple mean value.
I did exactly what the manual describes, yet for some reason my predictor coefficients go off to infinity, even though the same approach worked for another case.
Why, in this case, can it not predict a simple value of 1.4?
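To be explicit, the update rule I am trying to implement is the usual mean-squared-error gradient step (this is what D_m and D_c in the code below are meant to compute):

$$m \leftarrow m - L\cdot\frac{-2}{n}\sum_{i=1}^{n} x_i\,(y_i-\hat y_i), \qquad c \leftarrow c - L\cdot\frac{-2}{n}\sum_{i=1}^{n} (y_i-\hat y_i), \qquad \hat y_i = m\,x_i + c.$$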
clear all;
n = 10000;
t = 1.4;                                  % target mean value
sigma_R = t*0.001;
min_value_t = t - sigma_R;
max_value_t = t + sigma_R;
y_data = min_value_t + (max_value_t - min_value_t) * rand(n,1);   % noisy samples around t
x_data = [1:10000]';

m = 0;                                    % slope
c = 0;                                    % intercept
L = 0.0001;                               % learning rate
epochs = 1000;                            % iterations

for i = 1:epochs
    y_pred = m.*x_data + c;                           % current prediction
    D_m = (-2/n)*sum(x_data.*(y_data - y_pred));      % gradient w.r.t. m
    D_c = (-2/n)*sum((y_data - y_pred));              % gradient w.r.t. c
    m = m - L*D_m;
    c = c - L*D_c;
end
plot(x_data,y_data,'.')
hold on;
grid;
plot(x_data,y_pred)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Question update: Hello, I have tried to rewrite your code in MATLAB, the language I am more familiar with. My N×2 feature matrix of the form [1, x_data] is called x_mat. I followed every step in converting the code, but both entries of theta come out as NaN. Where did I go wrong?
%% start Matlab code
n = 1000;
t = 1.4;
sigma_R = t*0.001;
min_value_t = t - sigma_R;
max_value_t = t + sigma_R;
y_data = min_value_t + (max_value_t - min_value_t) * rand(n,1);
x_data = [1:1000];
L = 0.0001;               % learning rate
%plot(x_data,y_data);
itter = 1000;

theta_0 = 0;
theta_1 = 0;
theta = [theta_0; theta_1];

itter = 1000;
for i = 1:itter
    onss = ones(1,1000);
    x_mat = [onss; x_data]';
    pred = x_mat*theta;
    residuals = (pred - y_data);
    for k = 1:2                                % start theta loop
        partial = 2*dot(residuals, x_mat(:,k));
        theta(k) = theta(k) - L*partial;
    end                                        % end theta loop
end                                            % end iteration loop
%% end matlab code
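Written out, the step inside the k-loop is meant to be

$$\theta_k \leftarrow \theta_k - L \cdot 2\sum_{i=1}^{n} r_i\,X_{ik}, \qquad r = X\theta - y,$$

where $X$ is x_mat and $y$ is y_data (unlike the first version, this one sums the residuals without the $1/n$ factor).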
Topic matlab gradient-descent predictive-modeling algorithms machine-learning
Category Data Science