I have a regression problem where I need to predict three dependent variables ($y$) based on a set of independent variables ($x$): $$ (y_1, y_2, y_3) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + u. $$ To solve this problem, I would prefer to use tree-based models (e.g. gradient boosting or random forest), since the independent variables ($x$) are correlated and the problem is non-linear with an ex-ante unknown parameterization. I know that I could use sklearn's MultiOutputRegressor() …
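A minimal sketch of that route, with made-up data (shapes and hyperparameters are illustrative only); note that RandomForestRegressor handles multi-output targets natively, so the wrapper is mainly needed for boosting estimators:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.multioutput import MultiOutputRegressor

    # Made-up data: 500 samples, 10 features, 3 targets.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = np.column_stack([X[:, 0] * X[:, 1], np.sin(X[:, 2]), X[:, 3] ** 2])

    # The wrapper fits one independent GradientBoostingRegressor per target column.
    model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=200))
    model.fit(X, y)
    pred = model.predict(X)   # shape (500, 3)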
I am training a DNN with CNN layers in Keras. I can write an EarlyStopping criterion based on val_loss, but because of minor oscillations in val_loss I would rather monitor the average validation loss over the last n epochs, with a patience setting. How can I do this in Keras?
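One way to do this is a custom callback; a sketch (the class name and defaults below are invented, not a built-in Keras API):

    import numpy as np
    import tensorflow as tf

    class MovingAverageEarlyStopping(tf.keras.callbacks.Callback):
        """Stop when the mean val_loss over the last n epochs has not
        improved for `patience` consecutive epochs (invented helper)."""

        def __init__(self, n=5, patience=10):
            super().__init__()
            self.n, self.patience = n, patience
            self.losses, self.best, self.wait = [], np.inf, 0

        def on_epoch_end(self, epoch, logs=None):
            self.losses.append(logs['val_loss'])
            if len(self.losses) < self.n:
                return  # not enough epochs yet to form an average
            avg = np.mean(self.losses[-self.n:])
            if avg < self.best:
                self.best, self.wait = avg, 0
            else:
                self.wait += 1
                if self.wait >= self.patience:
                    self.model.stop_training = True

It is passed to fit() like any other callback: model.fit(..., callbacks=[MovingAverageEarlyStopping(n=5, patience=10)]).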
We are trying to replace an existing physical model (8 inputs / 7 outputs) with artificial neural networks. The physics behind the existing model is mainly the thermodynamics of humid air for air conditioning, with some turbomachinery involved, which most likely yields complex functions between inputs and outputs. One approach has already been tried: single-output neural networks (10 NNs with the same number of hidden layers but different hyperparameters such as batch size, number of epochs, optimizer, etc.). Then a form of stacking ensemble was used: every …
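For comparison, a single multi-output network is a common alternative to per-output models; a minimal sketch, with layer sizes chosen arbitrarily:

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(8,))             # the 8 physical inputs
    h = layers.Dense(64, activation='relu')(inputs)
    h = layers.Dense(64, activation='relu')(h)
    outputs = layers.Dense(7)(h)                    # the 7 outputs, linear activation for regression
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')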
I have a sample time-series dataset of shape (23, 14291), a pivot-table count over 24 hours for some users. After pre-processing, I have a dataset of shape (23, 200). I filtered out columns/features that do not have a time-series nature, keeping the meaningful ones either via PCA (retaining those that carry a high share of the variance) or via the correlation matrix (excluding highly correlated columns/features). I took advantage of MultiOutputRegressor() and predicted all columns for a certain range of time …
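A sketch of how such a setup can be wired together in one pipeline (the data below are synthetic stand-ins; the PCA setting and estimator choice are assumptions):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(23, 200))   # pre-processed features, as in the question
    y_train = rng.normal(size=(23, 5))     # hypothetical multi-column target

    pipe = make_pipeline(
        PCA(n_components=0.95),            # keep components explaining 95% of the variance
        MultiOutputRegressor(GradientBoostingRegressor(random_state=0)),
    )
    pipe.fit(X_train, y_train)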
I am learning about multi-label classification. It is for cases like this:

    Year  Actor   Budget     | Genre
    ------------------------------------------------
    2004  Tom C.  40,000,000 | Action, Drama
    2016  Mel G.  54,000,000 | Comedy, Action, Family
    2021  Eva K.   3,000,000 | Comedy, Romance

I saw an example using MultiOutputClassifier, but I do not see the value of using this classifier, as the models still work without it, without any problem. Here is the example; you will see that at line …
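For context on when the wrapper does matter: estimators such as LogisticRegression reject a 2-D target on their own, and MultiOutputClassifier fits one clone per label column. A small sketch with random data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    Y = rng.integers(0, 2, size=(100, 3))   # three binary labels per sample

    # LogisticRegression alone raises an error on a 2-D y; the wrapper handles it.
    clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
    print(clf.predict(X[:2]).shape)   # (2, 3)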
I have a model of the following structure. It has 6 outputs: given an image, the model predicts the classes of 6 different components of the image. The metrics I used are: As you can see, it outputs an overall combined loss and separate losses for the different outputs, but there is no combined accuracy score. What I want is a combined accuracy score, which counts a sample as correct only if all of its output labels are correct. How can I …
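One way to compute such an exact-match score outside of compile(), on held-out data; model, x_val and y_true below are assumed to exist, with the model returning a list of six softmax outputs and y_true a matching list of six integer-label arrays:

    import numpy as np

    preds = model.predict(x_val)                      # list of six (n_samples, n_classes_i) arrays
    pred_labels = [p.argmax(axis=1) for p in preds]

    # A sample counts as correct only if all six predicted labels match.
    all_correct = np.all([pl == yt for pl, yt in zip(pred_labels, y_true)], axis=0)
    exact_match_accuracy = all_correct.mean()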
I have a round-based game played on a grid map with multiple units that I would like to control in some fashion using a neural network (NN). All of the units are moved at once. Each unit can move in any of the grid map directions: $up$, $down$, $left$ and $right$. So if we have $n$ units, the output policy vector of the NN should have $4^n$ entries that represent probabilities, one for each move. Note that one move represents actions …
Consider the following, rather simple, model:

    input_layer = Input(shape=(6,), name="input")
    x = layers.Dense(128, activation='relu', name="dense_1")(input_layer)
    x = layers.Dense(1024, activation='relu', name="dense_2")(x)
    x = layers.Dense(5120, activation='relu', name="dense_3")(x)

    a_out = layers.Dense(10, activation='softmax', name='a_out')(x)
    b_out = layers.Dense(20, activation='softmax', name='b_out')(x)
    c_out = layers.Dense(30, activation='softmax', name='c_out')(x)

    model = models.Model(input_layer, [a_out, b_out, c_out])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss={'a_out': 'sparse_categorical_crossentropy',
                        'b_out': 'sparse_categorical_crossentropy',
                        'c_out': 'sparse_categorical_crossentropy'},
                  metrics=['accuracy'])

This model takes in a tensor of shape (6,) and outputs three different categorical values: a_out, b_out, and c_out. The metric that the model reports is …
I'm working with data that has multiple variables which could be predicted; nonetheless, I need to predict just one, which is directly correlated with all of the others. Would it make sense to build a NN that predicts the others first, creating a set of output layers connected to the hidden layers, and then connect the desired output to them, as in the sketch below? Or would it be better to just have all of the outputs on the same 'level'?
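A sketch of the hierarchical variant being described, with invented sizes; the auxiliary predictions are concatenated with the hidden features and feed the final head:

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(10,))
    h = layers.Dense(64, activation='relu')(inputs)

    # Auxiliary outputs: the other correlated variables.
    aux = layers.Dense(4, name='aux_outputs')(h)

    # Final target conditioned on both the hidden features and the auxiliary predictions.
    main = layers.Dense(1, name='main_output')(layers.Concatenate()([h, aux]))

    model = tf.keras.Model(inputs, [aux, main])
    model.compile(optimizer='adam', loss='mse')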
I am following the MultiOutputClassifier technique to predict roles (the data have been transformed to numeric values, so that's not a concern). I want to use .predict_proba() and extract the top 4 values along with their classes (the output variables are role0, role1, role2, role3, role4, role5). I am not sure how to do this. In a single-output analysis we use this to extract the top n values for each observation:

    proba_k = k.predict_proba(xval)
    best_n = np.argsort(-proba_k, axis=1)[:, :4]

My concern is to predict the top 4 roles in role0. …
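The catch is that MultiOutputClassifier.predict_proba returns a list with one (n_samples, n_classes) array per output column, so the argsort has to be applied per output; a sketch, assuming k is the fitted classifier from the question:

    import numpy as np

    proba_list = k.predict_proba(xval)   # one array per output (role0, role1, ...)

    for name, est, proba in zip(['role0', 'role1', 'role2', 'role3', 'role4', 'role5'],
                                k.estimators_, proba_list):
        idx = np.argsort(-proba, axis=1)[:, :4]   # top-4 class indices per sample
        print(name, est.classes_[idx])            # indices mapped back to class labels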
I'm working on a psychometric study, based on a survey. All variables are on a 7-point scale, so they are treated as continuous. I have this dataset: 600 features, 100 predicted variables, and 100 survey answers so far. We are stuck running the survey because 700 questions is really way too much. Surprising? We would like to select 100 of the 600 features. We ran Cronbach's alpha, low-variance filtering, and correlation analysis to remove problematic features. There are still …
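One programmatic filter that often complements those checks: drop one feature from every pair whose absolute correlation exceeds a threshold (the 0.9 below is an arbitrary choice, and the data here are a random stand-in for the real survey):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 8, size=(100, 600)))   # items on a 1-7 scale

    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # upper triangle only
    to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
    reduced = df.drop(columns=to_drop)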
What would an ideal TensorFlow/Keras architecture look like if the target is a multi-output regression whose values add up to one? Toy example: TV channels. You work for a big TV station and your boss wants you to anticipate the market shares of the five biggest channels based on features like:

    - The weather (categorical)
    - The temperature (numerical)
    - Holiday (yes/no, binary)
    - The prime-time program of Channel 1 (categorical)
    - The prime-time program of Channel 2 (categorical)
    - ...
    - The prime-time program of Channel 20 …
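One natural fit, sketched below with invented feature counts: a single softmax head over the five channels, so the predicted shares sum to one by construction, trained against the observed share vectors with a distribution-comparing loss such as KL divergence (or categorical cross-entropy on soft targets):

    import tensorflow as tf
    from tensorflow.keras import layers

    n_features, n_channels = 25, 5   # invented sizes

    inputs = tf.keras.Input(shape=(n_features,))
    h = layers.Dense(64, activation='relu')(inputs)
    shares = layers.Dense(n_channels, activation='softmax')(h)  # sums to 1 by construction

    model = tf.keras.Model(inputs, shares)
    model.compile(optimizer='adam', loss='kl_divergence')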
I am training a model with multiple categorical inputs/outputs. Some samples have only partial labels. Currently I drop samples with any missing label, but I am wasting a lot of data; I would like to train using all the labels I have. If I pass the partially labelled samples for training, I get an error message. My model looks like this:

    Model: "model_5"
    __________________________________________________________________________________________________
    Layer (type)              Output Shape       Param #   Connected to
    ==================================================================================================
    input_6 (InputLayer)      [(None, 15654)]    0
    __________________________________________________________________________________________________
    u_type_manual_1 (Dense)   …
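One common workaround is a masked loss: encode missing labels with a sentinel and zero out their contribution. A sketch for one categorical output head (the -1 sentinel is an assumption; one such loss per output would go into compile()):

    import tensorflow as tf

    MISSING = -1  # hypothetical sentinel marking an absent label

    def masked_sparse_ce(y_true, y_pred):
        # Sparse categorical cross-entropy that ignores samples labelled MISSING.
        y_true = tf.reshape(tf.cast(y_true, tf.int32), [-1])
        mask = tf.not_equal(y_true, MISSING)
        safe = tf.where(mask, y_true, tf.zeros_like(y_true))   # placeholder for masked rows
        loss = tf.keras.losses.sparse_categorical_crossentropy(safe, y_pred)
        loss = tf.where(mask, loss, tf.zeros_like(loss))
        n_valid = tf.reduce_sum(tf.cast(mask, tf.float32))
        return tf.reduce_sum(loss) / tf.maximum(n_valid, 1.0)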
I have a vector of length $n \gt 4$ which has exactly 4 targets, for example [0, 0, 0, 1, 0, 1, 0, 1, 1]. I would like to know how I can modify the softmax function for this case. Usually it is normalized so that all probabilities sum to 1; in my case, all the probabilities need to sum to 4. Is there any way to do this?
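If the only hard requirement is that the outputs sum to 4, scaling a standard softmax by 4 achieves it, though individual entries can then exceed 1; a small sketch:

    import numpy as np

    def scaled_softmax(z, total=4.0):
        e = np.exp(z - z.max())        # subtract max for numerical stability
        return total * e / e.sum()     # entries sum to `total` instead of 1

    z = np.random.default_rng(0).normal(size=9)
    print(scaled_softmax(z).sum())     # 4.0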
I have a model like the following, with multiple outputs, and I want to change its output names:

    class MyModel(Model):
        def __init__(self):
            super(MyModel, self).__init__()
            self.layer_one = Dense(1, name='output_name_one')
            self.layer_two = Dense(1, name='output_name_two')

        def call(self, inputs):
            output_name_one = self.layer_one(inputs)
            output_name_two = self.layer_two(inputs)
            return output_name_one, output_name_two

Keras automatically sets the output names to output_1, output_2, .... How can I change the output names to my desired names?
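One workaround in TF 2.x, assuming the goal is named keys for losses and metrics: return a dict from call(), whose keys then act as the output names:

    def call(self, inputs):
        return {'output_name_one': self.layer_one(inputs),
                'output_name_two': self.layer_two(inputs)}

compile() can then address the outputs as loss={'output_name_one': ..., 'output_name_two': ...}.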
When I train my model it has a two-dimensional output, (None, 1), corresponding to the time series I'm trying to predict. But whenever I load the saved model in order to make predictions, it has a three-dimensional output, (None, 40, 1), where 40 corresponds to the n_steps required to fit the Conv1D network. What is wrong? Here is the code:

    df = np.load('Principal.npy')

    # Conv1D
    #model = load_model('ModeloConv1D.h5')
    model = autoencoder_conv1D((2, 20, 17), n_steps=40)
    …
I currently have a neural network that takes in 3 numbers as inputs and outputs 3 numbers. I've attached a picture of the network below, and my code is accessible through the following Google Colab notebook: https://colab.research.google.com/drive/1q0Lvw4p_vxogmAu8QpYn5HRxAojuEtTY?usp=sharing Firstly, as you can see, the network consists of a "main branch" (i.e. the first two layers) that is connected to three "sub-branches" corresponding to the three outputs. My understanding of how multi-output back-propagation works is that the model first does a …
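For intuition on the shared part: the gradient reaching the "main branch" is the sum of the gradients contributed by each branch's loss, which a short check makes visible (shapes and losses below are invented):

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(3,))
    trunk = layers.Dense(8, activation='relu', name='trunk')(inputs)
    out_a = layers.Dense(1, name='out_a')(trunk)
    out_b = layers.Dense(1, name='out_b')(trunk)
    model = tf.keras.Model(inputs, [out_a, out_b])

    x = tf.random.normal((4, 3))
    ya, yb = tf.random.normal((4, 1)), tf.random.normal((4, 1))

    with tf.GradientTape(persistent=True) as tape:
        pa, pb = model(x)
        loss_a = tf.reduce_mean((pa - ya) ** 2)
        loss_b = tf.reduce_mean((pb - yb) ** 2)
        total = loss_a + loss_b

    w = model.get_layer('trunk').kernel
    g_total = tape.gradient(total, w)                              # gradient of the combined loss
    g_sum = tape.gradient(loss_a, w) + tape.gradient(loss_b, w)    # sum of per-branch gradients
    print(tf.reduce_max(tf.abs(g_total - g_sum)).numpy())          # ~0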
I've been searching for about three hours and I can't find an answer to a very simple question. I have a time-series prediction problem. I am trying to use a Keras LSTM model (with a Dense layer at the end) to predict multiple outputs over multiple timesteps, using multiple inputs and a moving window. I want to do sequence-to-sequence prediction, where my model is trained on the output of every timestep, not just the last one. What shape should my …
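A shape sketch for that setup, with invented dimensions (a window of 10 timesteps, 4 input features, 2 output variables per timestep); return_sequences=True is what makes the model emit a prediction for every timestep:

    import tensorflow as tf
    from tensorflow.keras import layers

    n_steps, n_in, n_out = 10, 4, 2   # invented sizes

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_steps, n_in)),          # X: (samples, n_steps, n_in)
        layers.LSTM(32, return_sequences=True),         # one hidden state per timestep
        layers.TimeDistributed(layers.Dense(n_out)),    # y: (samples, n_steps, n_out)
    ])
    model.compile(optimizer='adam', loss='mse')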
Given a hypothetical dataset {S} with 100 X feature variables and 10 predicted Y variables:

    X1 ... X100 | Y1 ... Y10
    -------------------------
     1  ...   2 |  3 ...   4
     4  ...   3 |  2 ...   1

Let's say I want to improve the accuracy of Y1, and I am prepared to constrain or remove input variables in order to do so. How would I go about finding the culprits that make Y1 more variable than needed? E.g. I find that X49 adds the biggest …
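One way to hunt for such culprits is permutation importance on a model fitted to Y1 alone; the data below are a synthetic stand-in where one column is made to drive Y1:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 100))
    y1 = 2 * X[:, 48] + rng.normal(size=500)   # pretend X49 drives Y1

    X_tr, X_te, y_tr, y_te = train_test_split(X, y1, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

    # How much does test error grow when a single column is shuffled?
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    print(np.argsort(-result.importances_mean)[:5])   # most influential columns (48 = X49)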