How to take advantage of a known covariance matrix between the y_train variables in a Bayesian FCNN used for regression

I am a newbie with Python and I am facing an issue applying a Bayesian neural network to fit some data (x, y). I was able to build a simple Bayesian fully connected neural network with TensorFlow Probability:

import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

tfd = tfp.distributions

def normal_exp(params):
  # First output is the mean; the exponential keeps the scale positive
  return tfd.Normal(loc=params[:, 0:1], scale=tf.math.exp(params[:, 1:2]))

def NLL(y, distr):
  return -distr.log_prob(y)

inputs = Input(shape=(1,))
hidden = Dense(200, activation='relu')(inputs)
hidden = Dropout(0.1)(hidden, training=True)  # training=True keeps dropout active at inference (MC dropout)
hidden = Dense(500, activation='relu')(hidden)
hidden = Dropout(0.1)(hidden, training=True)
hidden = Dense(500, activation='relu')(hidden)
hidden = Dropout(0.1)(hidden, training=True)
hidden = Dense(200, activation='relu')(hidden)
hidden = Dropout(0.1)(hidden, training=True)
params_mc = Dense(2)(hidden)
dist_mc = tfp.layers.DistributionLambda(normal_exp, name='normal_exp')(params_mc)

model_mc = Model(inputs=inputs, outputs=dist_mc)
model_mc.compile(Adam(learning_rate=0.0002), loss=NLL)

It works, but it doesn't take advantage of the known errors on the output variable y (which I have stored in a third column of my data), nor of the known covariance matrix between the y values. How can I modify this small FCNN to use that information?

Topic bayesian-networks

Category Data Science
