nnet in caret. Bootstrapping or cross-validation?
I want to train a shallow neural network with one hidden layer using nnet in caret. In trainControl, I set method = "cv" to perform 3-fold cross-validation. A snippet of the code and the results summary are below.
myControl <- trainControl(## 3-fold CV
                          method = "cv",
                          number = 3)

nnGrid <- expand.grid(size = seq(1, 10, 3),
                      decay = c(0, 0.2, 0.4))

set.seed(1234)
nnetFit <- train(choice ~ .,
                 data = db,
                 method = "nnet",
                 maxit = 1000,
                 tuneGrid = nnGrid,
                 trainControl = myControl)
I have a few doubts:
The results (attached below) suggest that it performed bootstrapping (25 reps), not cross-validation (I was expecting the model to be fitted only three times, for 3-fold CV, for each set of hyperparameters).
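Is this because train() expects the control object via trControl rather than trainControl, so my control object was silently absorbed by ... and the default bootstrap resampling used instead? A minimal sketch of the call I suspect I should have written:

nnetFit <- train(choice ~ .,
                 data = db,
                 method = "nnet",
                 maxit = 1000,
                 tuneGrid = nnGrid,
                 trControl = myControl)  # trControl, not trainControl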
I just want to be 100% sure whether the model used the original data for training, without any pre-processing such as centering and scaling.
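For comparison, a minimal sketch of how I would request centering and scaling explicitly via train()'s preProcess argument; my understanding is that leaving it unset (as I did) applies no transformation, matching the "No pre-processing" line in the output. The name nnetFit_pp is just illustrative:

nnetFit_pp <- train(choice ~ .,
                    data = db,
                    method = "nnet",
                    preProcess = c("center", "scale"),  # explicit pre-processing
                    maxit = 1000,
                    tuneGrid = nnGrid,
                    trControl = myControl)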
I used verboseIter = FALSE in trainControl, but it still prints all the iterations.
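If I understand correctly, verboseIter only toggles caret's resampling progress log; the per-iteration output likely comes from nnet itself, whose trace argument defaults to TRUE. A sketch of my assumption that passing trace = FALSE through train()'s ... would silence it:

nnetFit <- train(choice ~ .,
                 data = db,
                 method = "nnet",
                 maxit = 1000,
                 trace = FALSE,  # forwarded to nnet::nnet(); suppresses its printout
                 tuneGrid = nnGrid,
                 trControl = myControl)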
Are other libraries such as neuralnet or mxnet better than nnet, and can I replace it with them here?
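For instance, a rough sketch (assuming a recent neuralnet version that accepts "." in formulas) of what I imagine an equivalent one-hidden-layer fit would look like in neuralnet; nnFit is just an illustrative name:

library(neuralnet)

nnFit <- neuralnet(choice ~ .,
                   data = db,
                   hidden = 7,              # one hidden layer with 7 units
                   act.fct = "logistic",    # the default activation
                   linear.output = FALSE)   # logistic output for classification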
I want to be sure whether nnet uses a sigmoid activation function in the hidden layer.
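From my reading of the nnet documentation, the hidden units always use the logistic (sigmoid) activation, 1 / (1 + exp(-x)), and only the output activation is configurable (logistic by default, linear with linout = TRUE, or softmax with softmax = TRUE). A one-line sketch of that function:

sigmoid <- function(x) 1 / (1 + exp(-x))
sigmoid(0)  # 0.5 at the origin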
Can someone please advise?
No pre-processing
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 3492, 3492, 3492, 3492, 3492, 3492, ...
Resampling results across tuning parameters:
size decay Accuracy Kappa
1 0.0 0.4947424 -0.002382083
1 0.2 0.5686601 0.141749447
1 0.4 0.5711497 0.143637446
4 0.0 0.5076199 0.022765002
4 0.2 0.7333516 0.468625768
4 0.4 0.7253675 0.452584882
7 0.0 0.5002912 0.006079340
7 0.2 0.7440360 0.488933678
7 0.4 0.7676500 0.536547080
10 0.0 0.5064281 0.013648966
10 0.2 0.7668795 0.535370693
10 0.4 0.7566465 0.513652332
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were size = 7 and decay = 0.4.