How to calculate a single accuracy for a model with multiple outputs in Keras?

Consider the following, rather simple, model:

input_layer = Input(shape=(6,), name='input')

x = layers.Dense(128, activation='relu', name='dense_1')(input_layer)
x = layers.Dense(1024, activation='relu', name='dense_2')(x)
x = layers.Dense(5120, activation='relu', name='dense_3')(x)

a_out = layers.Dense(10, activation='softmax', name='a_out')(x)
b_out = layers.Dense(20, activation='softmax', name='b_out')(x)
c_out = layers.Dense(30, activation='softmax', name='c_out')(x)

model = models.Model(input_layer, [a_out, b_out, c_out])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                loss={'a_out': 'sparse_categorical_crossentropy',
                      'b_out': 'sparse_categorical_crossentropy',
                      'c_out': 'sparse_categorical_crossentropy'},
                metrics=['accuracy'])

This model takes a tensor of shape (6,) and produces three different categorical outputs, a_out, b_out, and c_out. The metric the model reports is accuracy, but since there are 3 different outputs it reports an accuracy for each one separately, resulting in 3 different numbers.
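For instance, here is a minimal sketch (with made-up random data, just to illustrate what Keras reports); the per-output metric names follow the output layer names:

import numpy as np

x_dummy = np.random.rand(32, 6)
y_dummy = {'a_out': np.random.randint(0, 10, 32),
           'b_out': np.random.randint(0, 20, 32),
           'c_out': np.random.randint(0, 30, 32)}

history = model.fit(x_dummy, y_dummy, epochs=1, verbose=0)
print(history.history.keys())
# dict_keys(['loss', 'a_out_loss', 'b_out_loss', 'c_out_loss',
#            'a_out_accuracy', 'b_out_accuracy', 'c_out_accuracy'])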

Now, I want to wrap this model with keras_tuner and find the best combination of layer sizes (as an example). The problem is that kt decides which combination is best by looking at one single metric. The logical way to aggregate the three accuracies into one is to average them, but I don't see how I can do that!

If I try to introduce my own custom metric to Keras, like this:

def overall_accuracy(y_true, y_pred):
    # Labels are integer (sparse) class indices, matching the sparse losses above.
    return tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                loss={'a_out': 'sparse_categorical_crossentropy',
                      'b_out': 'sparse_categorical_crossentropy',
                      'c_out': 'sparse_categorical_crossentropy'},
                metrics=[overall_accuracy])

I'm still bound by the same limitation as before: my overall_accuracy function is called once per output, so it never gets the chance to average the 3 accuracies. And if I introduce my own custom callback to be called at the end of each epoch, I can calculate the average accuracy there, but it is not considered a metric and thus cannot be used by kt (or at least that's what I know).

So, the question is, how do you calculate one single accuracy when your model outputs multiple values?

Topic multi-output keras accuracy

Category Data Science


For anyone else who might be facing the same question, here's a solution. It's basically answered here, but I'm going to repeat the answer for the sake of completeness.

You can use a callback and combine the already calculated metrics into a new one. This means that the metric for each individual output should already be calculated via an entry in the metrics argument of the compile method.

from tensorflow.keras.callbacks import Callback

class CombinedMetric(Callback):
    """Average the per-output accuracies into single 'accuracy' / 'val_accuracy' log entries."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # The per-output metric names follow the output layer names ('a_out', 'b_out', 'c_out').
        logs['accuracy'] = (logs['a_out_accuracy'] +
                            logs['b_out_accuracy'] +
                            logs['c_out_accuracy']) / 3
        logs['val_accuracy'] = (logs['val_a_out_accuracy'] +
                                logs['val_b_out_accuracy'] +
                                logs['val_c_out_accuracy']) / 3

The good news here is that the new metric is accessible to keras-tuner and can be used to tune your model.
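As a rough usage sketch (the hypermodel, the search settings, and the data placeholders below are assumptions for illustration, not part of the original answer), you pass the callback to the tuner's search call and use the averaged key as the objective:

import keras_tuner as kt
from tensorflow.keras import layers, models, Input

def build_model(hp):
    # Hypothetical hypermodel: tunes only the first hidden layer size as an example.
    units = hp.Int('dense_1_units', min_value=64, max_value=512, step=64)
    inputs = Input(shape=(6,), name='input')
    h = layers.Dense(units, activation='relu')(inputs)
    a = layers.Dense(10, activation='softmax', name='a_out')(h)
    b = layers.Dense(20, activation='softmax', name='b_out')(h)
    c = layers.Dense(30, activation='softmax', name='c_out')(h)
    m = models.Model(inputs, [a, b, c])
    m.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return m

tuner = kt.RandomSearch(
    build_model,
    # 'val_accuracy' is the key written into the logs by CombinedMetric above.
    objective=kt.Objective('val_accuracy', direction='max'),
    max_trials=10)

# x_train, y_train, x_val, y_val are placeholders; the label arguments should be
# dicts keyed by the output names ('a_out', 'b_out', 'c_out').
tuner.search(x_train, y_train,
             validation_data=(x_val, y_val),
             epochs=5,
             callbacks=[CombinedMetric()])

Spelling the objective out explicitly with direction='max' avoids relying on keras_tuner to infer the direction for a metric it did not compute itself.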
