R: Model gives the same test accuracy and ROC AUC - how is this possible?

I've trained a simple MLP on my data set (unfortunately I cannot share any details):

library(keras)

model <- keras_model_sequential()
model %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(dim(train_x)[[2]])) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 1, activation = "sigmoid")


model %>% compile(
  loss = "binary_crossentropy",
  optimizer = optimizer_adam(learning_rate = 0.00005),
  metrics = c("accuracy")
)

history <- model %>% fit(
  train_x, train_y,
  epochs = 15, batch_size = 64,
  validation_split = 0.2, shuffle = TRUE
)

test_preds <- model %>% predict(test_x) %>% `>`(0.5) %>% k_cast("int32") %>% data.matrix()
(test_acc <- MLmetrics::Accuracy(test_preds, test_y))
(roc <- MLmetrics::AUC(test_preds, test_y))

This gives me:

> (test_acc <- MLmetrics::Accuracy(test_preds, test_y))
[1] 0.77375
> (roc <- MLmetrics::AUC(test_preds, test_y))
[1] 0.77375

How can it be that accuracy and AUC give exactly the same value? Might this be an indication that the model predicts values close to either 0 or 1 rather than close to the decision boundary at 0.5?
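
For reference, a minimal sketch of how one could check this hypothesis by looking at the raw sigmoid outputs rather than the thresholded labels (assuming the same `model`, `test_x` and `test_y` objects as above; `MLmetrics::AUC` takes a vector of scores as `y_pred`, so it can also be called on the unthresholded probabilities for comparison):

library(keras)

# raw sigmoid outputs in [0, 1], before thresholding at 0.5
test_probs <- model %>% predict(test_x)

# distribution of the predicted probabilities: values piled up near 0 and 1
# would support the "far from the decision boundary" idea
summary(as.vector(test_probs))
hist(as.vector(test_probs), breaks = 50, main = "Predicted probabilities")

# AUC on the probabilities (scores) instead of the 0/1 predictions,
# for comparison with the thresholded version above
MLmetrics::AUC(y_pred = as.vector(test_probs), y_true = test_y)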

Tags: keras, r
