Inverting a matrix using a convolutional neural network

Just as a fun exercise, I am trying to invert a matrix, say of size 28x28 (or even 5x5), with a neural network. The way I approached this (quite naively) is as follows:

  1. I built a fully convolutional network with about 8 layers and ReLU activations (not sure if this is the right choice).

  2. I feed in an input $X$ and get an output $Y = NN(X)$, where $X$ and $Y$ are both of the same dimension, say $n\times n$, and $NN$ is the conv net.

  3. Now, I write a custom loss function, $\mathrm{MSE}(YX - I)$, where $I$ is the identity matrix, $YX$ is the matrix product of the output and the input, and the mean squared error is averaged across the batch.

  4. For training, I generate 1000 random matrices and feed them in as inputs. There is no output label, since the loss function doesn't need one. Shouldn't this ideally work? I can't seem to get the loss to converge. Is there a math flaw here? Is MSE not the right metric here? (A sketch of this setup follows the list.)
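To make steps 3 and 4 concrete, here is a minimal NumPy sketch of the objective itself. The shift by $nI$ in the data generation is an assumption on my part (plain Gaussian matrices can be nearly singular, which makes the inverse arbitrarily large); the check at the end confirms the loss is essentially zero when $Y$ is the true inverse, so the objective is at least mathematically well-posed:

```
import numpy as np

n = 5
rng = np.random.default_rng(0)

# 1000 random training matrices; adding n*I pushes the spectrum away from
# zero so every sample is comfortably invertible (an assumed design choice).
X = rng.standard_normal((1000, n, n)).astype("float32") + n * np.eye(n, dtype="float32")

I = np.eye(n, dtype="float32")

def mse_inverse_loss(X, Y):
    # Mean squared deviation of the batched product YX from the identity,
    # averaged over all matrix entries and over the batch.
    return np.mean((Y @ X - I) ** 2)

Y_true = np.linalg.inv(X)           # batched inverse, shape (1000, n, n)
print(mse_inverse_loss(X, Y_true))  # ~0: the loss is minimised at the true inverse
```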

My custom loss function in TF-Keras looks like this:

```
import tensorflow as tf
from tensorflow import keras

def custom_loss(I):
    # y_true carries the input matrix X; y_pred is the network output Y.
    # Penalise the deviation of the product X @ Y from the identity I.
    def loss(y_true, y_pred):
        return keras.backend.mean(
            keras.backend.square(tf.matmul(y_true, y_pred) - I), axis=-1)
    return loss
```
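For completeness, here is a hedged sketch of how this loss might be wired into a small fully convolutional model, reusing the `custom_loss` defined above. The layer counts and widths are illustrative assumptions, not the 8-layer net from the question. The key wiring detail is that `y_true` is not a real label: the input matrices are passed a second time as the "targets" so the loss can form the product with the prediction:

```
import numpy as np
import tensorflow as tf
from tensorflow import keras

n = 5
X = np.random.randn(1000, n, n).astype("float32") + n * np.eye(n, dtype="float32")

# Fully convolutional model: (n, n, 1) "image" in, (n, n) matrix out.
inp = keras.Input(shape=(n, n, 1))
h = keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
h = keras.layers.Conv2D(64, 3, padding="same", activation="relu")(h)
out = keras.layers.Conv2D(1, 3, padding="same")(h)  # linear head: inverse entries can be negative
out = keras.layers.Reshape((n, n))(out)
model = keras.Model(inp, out)

model.compile(optimizer="adam", loss=custom_loss(tf.eye(n)))

# X plays both roles: the network input and, via y_true, the matrix
# that gets multiplied against the prediction inside the loss.
model.fit(X[..., None], X, epochs=50, batch_size=32)
```

Note the linear (activation-free) last layer: with a ReLU on the output, the network could never produce the negative entries that an inverse generally contains.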

Tags: matrix, linear-algebra, convolutional-neural-network, loss-function, neural-network

Category: Data Science
