Different results on every run of a neural network?

I have written a simple neural network (an MLPRegressor) to fit a few data frame columns. To find an optimal architecture, I wrapped the hyperparameters in a function so I could check whether training converges to a pattern. However, every time I run the model it gives me a different result than the previous run, and I do not know why. Since it is difficult to make the question reproducible, I cannot post the data, but I can post the architecture of the network:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def MLP(): # After 50
    nn=30
    nl=25
    a=2
    s=0
    learn=2
    learn_in=4.22220046e-05
    max_i=1000
    return nn, nl, a, s, learn, learn_in, max_i



def process(df):
    y = df.iloc[:,-1]
    X = df.drop(columns=['col3'])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=27)
    return X_train, X_test, y_train, y_test

def MLPreg(x_train, y_train):
    nn, nl, a, s, learn, learn_in, max_i = MLP()
    act=['identity', 'logistic', 'relu','tanh'] #'identity'=Linear
    activ=act[a]
    sol=['lbfgs', 'sgd', 'adam']
    solv=sol[s]
    l_r=['constant','invscaling','adaptive']
    lr=l_r[learn]
    model = MLPRegressor(hidden_layer_sizes=(nl,nn), activation=activ, solver=solv, alpha=0.00001, batch_size='auto', 
    learning_rate=lr, learning_rate_init=learn_in, power_t=0.5, max_iter=max_i, shuffle=True, random_state=None,
    tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, 
    validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)
#     model = MLPRegressor(max_iter = 7000)
#     param_list = {'hidden_layer_sizes': [(10,), (50,)], 'activation': ['identity', 'tanh', 'relu'],
#                   'solver': ['lbfgs', 'sgd', 'adam'], 'alpha': [0.00005, 0.0005]}
#     gridsearchcv = GridSearchCV(estimator=model, param_grid=param_list)


    model.fit(x_train, y_train)
    return model
```

Topic rmse scikit-learn neural-network python

Category Data Science


Some difference is expected because you set random_state=None and shuffle=True in your model. This means the weights are initialized randomly and the training data is shuffled in a different order on each run. For reproducible results, set random_state to an integer. See the scikit-learn documentation for the random_state parameter.
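As a minimal sketch of the fix (using synthetic toy data in place of your data frame, and the same activation, solver, and layer sizes your MLP() function selects), fitting twice with the same integer random_state produces identical models:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical toy data standing in for the original data frame.
rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = X[:, 0] + 2 * X[:, 1]

def make_model(seed):
    # Fixing random_state pins the weight initialization (and, for
    # stochastic solvers, the data shuffling) to the same sequence.
    return MLPRegressor(hidden_layer_sizes=(25, 30), activation='relu',
                        solver='lbfgs', max_iter=1000, random_state=seed)

m1 = make_model(42).fit(X, y)
m2 = make_model(42).fit(X, y)

# Same seed, same data: the predictions match exactly.
assert np.allclose(m1.predict(X), m2.predict(X))
```

Note that your process() function already does this for the split (random_state=27 in train_test_split); the model itself needs the same treatment.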
