Why would one cross-validate the random_state value?
Still learning about machine learning, I've stumbled across a Kaggle kernel (link) that I cannot understand.
Here are lines 72 and 73:
parameters = {'solver': ['lbfgs'],
              'max_iter': [1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000],
              'alpha': 10.0 ** -np.arange(1, 10),
              'hidden_layer_sizes': np.arange(10, 15),
              'random_state': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}
clf = GridSearchCV(MLPClassifier(), parameters, n_jobs=-1)
As you can see, the random_state parameter is being tested across 10 values.
What is the point of doing this?
If one model performs better with a particular random_state, does it make any sense to reuse that value for other models?
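To make it concrete, here is a minimal, runnable sketch of what I understand the search to be doing, with the grid trimmed down so it finishes quickly; the make_classification data is just a placeholder I'm assuming, since I don't have the kernel's actual dataset here:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for the kernel's dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

parameters = {'solver': ['lbfgs'],
              'max_iter': [1000, 1500, 2000],      # trimmed grid for speed
              'alpha': 10.0 ** -np.arange(1, 4),
              'hidden_layer_sizes': np.arange(10, 12),
              'random_state': [0, 1, 2]}           # the part I'm asking about

clf = GridSearchCV(MLPClassifier(), parameters, n_jobs=-1)
clf.fit(X, y)

# The "best" seed is reported like any other hyperparameter.
print(clf.best_params_['random_state'], clf.best_score_)

So the seed that happens to score best on this data ends up in best_params_ alongside the genuine hyperparameters, which is exactly what confuses me.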
Topic mlp randomized-algorithms scikit-learn python
Category Data Science