Improve model accuracy in multi-classification problem

I use an MLP to classify three classes A, B, C. The loss function is categorical cross-entropy and the optimiser is Adam. To estimate the model's performance I use 10-fold cross-validation. On average I get a 60% accuracy score, but I need it to be higher. The confusion matrix I get for classes A, B, C is the following:

          Class A  Class B  Class C
Class A     14440     8118    11229
Class B      6045    21863     5879
Class C      6207     4264    23315
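For reference, the overall accuracy and per-class recall can be read directly off this matrix. A minimal NumPy sketch, assuming the usual convention of rows = actual class and columns = predicted class:

```python
import numpy as np

# 3-class confusion matrix from the post
# (assumption: rows = actual class, columns = predicted class)
cm = np.array([[14440,  8118, 11229],   # actual A
               [ 6045, 21863,  5879],   # actual B
               [ 6207,  4264, 23315]])  # actual C

accuracy = np.trace(cm) / cm.sum()     # correct predictions / all predictions
recall = np.diag(cm) / cm.sum(axis=1)  # per-class recall (row-wise)

print(f"accuracy = {accuracy:.3f}")    # ~0.59, matching the reported ~60%
print(f"recall (A, B, C) = {recall.round(3)}")
```

Under this reading, class A's recall (~0.43) is well below B's (~0.65) and C's (~0.69).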

The amount of data points I have for each class is sufficiently large and balanced (the ratio is 1:1:1). I wanted to see how the model would fare if, instead of three classes, it classified only two. Each time, I remove from the dataset the data points of the class I won't be using. When I train it to classify between classes B and C, I get around an 80% accuracy score and the following confusion matrix:

          Class B  Class C
Class B     26456     7331
Class C      6255    27531
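The two-class setup described above amounts to masking out one class before training. A minimal NumPy sketch of that filtering step (the array names, shapes and label encoding are placeholders, not the actual dataset):

```python
import numpy as np

# Placeholder dataset: X holds features, y holds labels 0=A, 1=B, 2=C
rng = np.random.default_rng(0)
X = rng.random((300, 8))
y = np.repeat([0, 1, 2], 100)

# Drop class A entirely, keeping only the B-vs-C data points
mask = np.isin(y, [1, 2])
X_bc, y_bc = X[mask], y[mask]

print(X_bc.shape)  # (200, 8): class A's points removed
```

The same mask with `[0, 2]` or `[0, 1]` gives the A-vs-C and A-vs-B subsets.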

However, when I train the model to classify between classes A and C, I get around 69.5% and the following confusion matrix:

          Class A  Class C
Class A     22180    11607
Class C      9659    24127

For classes A and B, I get around a 72% accuracy score and the following confusion matrix:

          Class A  Class B
Class A     23971     9816
Class B      9616    24170

In all cases, the precision, recall and F1 scores are roughly equal to the accuracy score.
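As a sanity check, the macro-averaged precision, recall and F1 can be recomputed from the 3-class matrix above (a NumPy sketch, again assuming rows = actual class, columns = predicted class):

```python
import numpy as np

cm = np.array([[14440,  8118, 11229],
               [ 6045, 21863,  5879],
               [ 6207,  4264, 23315]])

precision = np.diag(cm) / cm.sum(axis=0)  # per predicted class (columns)
recall    = np.diag(cm) / cm.sum(axis=1)  # per actual class (rows)
f1 = 2 * precision * recall / (precision + recall)

# All three macro averages land near the ~59% accuracy
print(precision.mean().round(3), recall.mean().round(3), f1.mean().round(3))
```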

  • Could class A be the reason I get low accuracy when classifying all three classes?
  • Could it be that some data points from class A are too similar to data points from class B, while others are too similar to data points from class C?

    If so, what could I do to improve the score, given that I can't improve the dataset?

Topic mlp multiclass-classification accuracy neural-network classification

Category Data Science
