How do I analyse the accuracy and loss graphs from the model training history?
I want to understand how to analyse the loss and accuracy (or any metric) graphs that are plotted from the model training history. Here's my graph:
What can we say from the slope of the graph? Does it matter? As you can see, the validation and training loss and accuracy are pretty much the same for most of the training. What does this mean? Usually the validation accuracy is higher than the training accuracy at the beginning, but we don't see that here. Am I doing something wrong? (The validation and training datasets are different.)
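For reference, the curves are plotted from the History object returned by model.fit, roughly like this (a minimal sketch; the 'accuracy'/'val_accuracy' key names assume the defaults in recent TensorFlow/Keras versions):

import matplotlib.pyplot as plt

# 'history' is the History object returned by model.fit(...)
# Default Keras history keys: 'loss', 'val_loss', 'accuracy', 'val_accuracy'
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))

ax_loss.plot(history.history['loss'], label='training loss')
ax_loss.plot(history.history['val_loss'], label='validation loss')
ax_loss.set_xlabel('epoch')
ax_loss.set_ylabel('binary cross-entropy')
ax_loss.legend()

ax_acc.plot(history.history['accuracy'], label='training accuracy')
ax_acc.plot(history.history['val_accuracy'], label='validation accuracy')
ax_acc.set_xlabel('epoch')
ax_acc.set_ylabel('accuracy')
ax_acc.legend()

plt.tight_layout()
plt.show()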
For reference, I am doing binary classification with a neural network using the following code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# input_shape is the per-sample feature shape (37 features); the batch
# dimension is not part of it. batch_size here is just the width of the
# first layer and is defined earlier in the script.
model.add(Dense(batch_size, input_shape=(37,)))
model.add(Dense(256, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
# Single sigmoid unit for binary classification
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
The data contains both categorical and continuous variables.
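The model is trained on the encoded features with a held-out validation set, roughly like this (a sketch only; df, num_cols, cat_cols, the 'target' column, and the epoch/batch-size values are placeholders, not my exact pipeline):

from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Placeholder names: df is the raw DataFrame, num_cols / cat_cols list the
# continuous and categorical feature columns, 'target' is the binary label.
preprocess = ColumnTransformer(
    [
        ('num', StandardScaler(), num_cols),                        # scale continuous features
        ('cat', OneHotEncoder(handle_unknown='ignore'), cat_cols),  # one-hot encode categoricals
    ],
    sparse_threshold=0,  # force a dense array so Keras can consume it directly
)
X = preprocess.fit_transform(df[num_cols + cat_cols])
y = df['target'].values

# Separate validation split so val_loss / val_accuracy come from unseen data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, stratify=y)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=50,       # placeholder value
    batch_size=32,   # placeholder value
)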