I have time series data with the following properties:

input shape: (num_timesteps, num_features)
output shape: (num_timesteps, num_outputs)

I reshape it to batch form:

input shape: (num_batches, num_timesteps_in_batch, num_features)
output shape: (num_batches, num_timesteps_in_batch, num_outputs)

I have a stateful RNN in Keras:

```python
modelinput = Input(batch_shape=(num_batches, None, num_features))
prediction = GRU(10, return_sequences=True, stateful=True)(modelinput)
model = Model(inputs=modelinput, outputs=prediction)
```

After training (which works fine) I would like to predict on a sequence without cutting the data, so with input shape (num_timesteps, num_features). How can I do that? I thought …
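A common approach (a sketch, assuming the training model above and an uncut array `data` of shape (num_timesteps, num_features)) is to build a second model with batch size 1 and an unconstrained timestep dimension, copy the trained weights into it, and predict on the whole sequence at once:

```python
import numpy as np
from tensorflow.keras.layers import Input, GRU
from tensorflow.keras.models import Model

# Prediction twin of the training model: batch of 1, arbitrary sequence length
pred_input = Input(batch_shape=(1, None, num_features))
pred_output = GRU(10, return_sequences=True, stateful=True)(pred_input)
pred_model = Model(inputs=pred_input, outputs=pred_output)

# Transfer the trained weights, then feed the uncut sequence in one go
pred_model.set_weights(model.get_weights())
full_prediction = pred_model.predict(data[np.newaxis, :, :])  # (1, num_timesteps, 10)
```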
Hope you're all doing well! I am working on Automatic Speech Recognition in Python with the LibriSpeech dataset. After preprocessing the audio data and applying MFCC featurization, I append everything into a list and get a shape of (14174,). Each sample has a different length but the same number of features, for example:

```python
print(X[0].shape)
print(X[12000].shape)
>> (615, 13)
>> (301, 13)
```

Now when I feed the data into my network with an Input layer defined …
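One way to get a single dense tensor out of variable-length sequences (a sketch, assuming `X` is the list of (timesteps, 13) MFCC arrays) is to pad them to a common length and mask the padding:

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Pad every sequence to the length of the longest one:
# the result is a dense array of shape (14174, max_len, 13)
X_padded = pad_sequences(X, padding='post', dtype='float32', value=0.0)

# A Masking layer at the start of the model then tells downstream
# layers to ignore the zero-padded timesteps:
# model.add(tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 13)))
```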
I have an input tensor of shape (32, 256, 256, 256). In this shape, 32 is the batch size, the second 256 is the number of channels, and the image size is 256 x 256. I want to apply pooling in order to convert the tensor to shape (32, 32, 256, 256). In PyTorch, if I try to apply pooling, then the last two dimensions of the shape, related to the image, change, but not the …
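One trick (a sketch under the shapes above) is to treat the channel axis as the depth of a 3-D volume, so a pooling kernel of (8, 1, 1) reduces the channels without touching the image dimensions:

```python
import torch
import torch.nn.functional as F

x = torch.randn(32, 256, 256, 256)  # (batch, channels, H, W)

# Insert a dummy channel axis, pool over the old channel axis as "depth",
# then drop the dummy axis: (32, 256, 256, 256) -> (32, 32, 256, 256)
pooled = F.max_pool3d(x.unsqueeze(1), kernel_size=(8, 1, 1)).squeeze(1)
print(pooled.shape)  # torch.Size([32, 32, 256, 256])
```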
I am building an LSTM autoencoder to denoise signals, and it will take more than 1 feature as its input. I have set up the model's Encoder part as follows, which works for single-feature inputs (i.e. sequences with just one feature):

```python
class Encoder(nn.Module):
    def __init__(self, seq_len, n_features, num_layers=1, embedding_dim=64):
        super(Encoder, self).__init__()
        self.seq_len = seq_len
        self.n_features = n_features
        self.num_layers = num_layers
        self.embedding_dim = embedding_dim
        self.hidden_dim = 2 * embedding_dim

        # input: batch_size, seq_len, features
        self.lstm1 = nn.LSTM(
            input_size=self.n_features,
            hidden_size=self.hidden_dim,
            num_layers=self.num_layers,
            batch_first=True
        ) …
```
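For what it's worth, the LSTM as declared already accepts multi-feature input. A minimal sketch of exercising it with three features, assuming the class above compiles as shown (its first layer is called directly here, since the rest of the definition is cut off):

```python
import torch
import torch.nn as nn

enc = Encoder(seq_len=100, n_features=3)  # three input features
x = torch.randn(16, 100, 3)               # (batch, seq_len, features)
out, (h_n, c_n) = enc.lstm1(x)            # out: (16, 100, 128), since hidden_dim = 2 * 64
```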
I have been looking through the internet and I am a bit confused. They seem to do the same thing, so why do we need both a resize and a reshape, and which one is better?
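Assuming the question is about NumPy's reshape vs. resize, a minimal sketch of the difference:

```python
import numpy as np

a = np.arange(6)             # [0 1 2 3 4 5]

# reshape: same data, new shape; the total size must match exactly
b = a.reshape(2, 3)          # fine: 6 elements -> 2 x 3
# a.reshape(4, 2)            # ValueError: cannot reshape array of size 6 into shape (4,2)

# np.resize: may change the total size, repeating the data to fill
c = np.resize(a, (4, 2))     # [[0 1] [2 3] [4 5] [0 1]]
```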
I'm trying to use SVR to predict a certain feature. I create the model with the following code:

```python
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

X = data
# this is the outcome variable
y = data.iloc[:, 10].values

sc_X = StandardScaler()
sc_y = StandardScaler()
X2 = sc_X.fit_transform(X)
y = sc_y.fit_transform(y.reshape(-1, 1))

# my_custom_kernel looks at certain columns of X2 / scaled data
my_regressor = SVR(kernel=my_custom_kernel)
my_regressor = my_regressor.fit(X2, y)
```

After creating the model, I want to test it to …
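For testing, the same scalers have to be applied in reverse. A sketch, where `new_data` is a hypothetical held-out feature matrix:

```python
# Scale the new inputs with the scaler fitted on the training features
new_scaled = sc_X.transform(new_data)

# Predict in scaled space, then map back to the original units of y
pred_scaled = my_regressor.predict(new_scaled)
pred = sc_y.inverse_transform(pred_scaled.reshape(-1, 1))
```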
I'm having a problem with reshaping a DataFrame. After doing this:

```python
train_dane_rnn = np.reshape(train_dane, (train_dane.shape[0], train_dane.shape[1], 1))
test_dane_rnn = np.reshape(test_dane, (test_dane.shape[0], test_dane.shape[1], 1))
```

I'm getting this error:

```
ValueError: Must pass 2-d input. shape=(15129, 10, 1)
```
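The error is raised by pandas: after reshaping, NumPy tries to wrap the 3-D result back into a DataFrame, which can only be 2-D. A sketch of the usual fix, assuming `train_dane` is a DataFrame: convert to a plain array first and keep the result as an array:

```python
# Work on the underlying NumPy array and keep the 3-D result as an array
train_arr = train_dane.to_numpy()
train_dane_rnn = train_arr.reshape(train_arr.shape[0], train_arr.shape[1], 1)
```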
I have some problems with layer construction in Keras. Let me explain the whole problem: I have a feature matrix with dimensions 2023 (rows) x 65 (features), and I tried to build a CNN with Conv1D as the first layer. My code is:

```python
def cnn_model():
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
    model.add(Dropout(0.25))
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
    model.add(Dropout(0.25))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae'])
    model.fit(X, Y, epochs=100, batch_size=64, verbose=0)
    model.evaluate(X, Y)
    return model

scoring = make_scorer(score_func=pearson)
# evaluate model with standardized …
```
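Note that Conv1D expects 3-D input of shape (samples, steps, channels). A sketch of adapting the 2023 x 65 matrix, assuming `X` holds it:

```python
import numpy as np

# Give the 2-D feature matrix a channel axis of size 1:
# (2023, 65) -> (2023, 65, 1)
X = np.asarray(X).reshape(2023, 65, 1)

# The first Conv1D layer can then declare the per-sample shape:
# Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(65, 1))
```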
My dataset shape is (8968, 1024). In order to use it as input for an LSTM, I converted it to 3D:

```python
samples = np.asarray(samples).reshape(1, 8968, 1024)
```

Model:

```python
input = layers.Input(shape=(1024,))
model = tf.keras.Sequential()
model.add(layers.Bidirectional(LSTM(256, return_sequences=True, activation='relu'), input_shape=(8968, 1024)))
model.add(layers.Bidirectional(LSTM(128, activation='relu')))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(num_classes, activation=activation))
```

However, when running the code below I get an error:

```python
def train_model(X, y, fname,  # path where to save the model
                activation='softmax', epochs=1, optimizer='adam',
                num_hidden=64, batch_size=128):
    X, labels = shuffle(X, y)
    X_train, X_test, y_train, y_test = train_test_split(X, …
```
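One thing worth checking (a sketch, under the assumption that each of the 8968 rows is meant to be one sample): reshaping to (1, 8968, 1024) creates a single sample with 8968 timesteps, whereas the usual LSTM layout is (samples, timesteps, features):

```python
import numpy as np

# 8968 samples, each a sequence of 1024 values with one feature per step
samples = np.asarray(samples).reshape(8968, 1024, 1)
# with the first layer declaring input_shape=(1024, 1)
```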
I have a dataset of cancer and non-cancer patients and would like to prepare it for classification. Each sample has 4 columns and 1298 rows, and the total number of samples is 68, so my X_train shape is (68, 1298, 4) and my Y_train shape is (68,). Now, if I reshape the data into a 2D array, how can I tell the model to separate these 68 samples? My question is: how should I reshape the dataset and how should be …
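For a classifier that needs 2-D input, one standard move (a sketch under the shapes above) is to flatten each sample, which keeps the 68 samples as separate rows:

```python
import numpy as np

# (68, 1298, 4) -> (68, 5192): one flattened row per patient
X_2d = X_train.reshape(X_train.shape[0], -1)

# Row i of X_2d still pairs with label Y_train[i], so the model
# sees 68 independent samples
```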
I had read the post "panda grouping by month with transpose", and it gave me the nearest answer to my question but not the complete solution. How would I get something like the reverse output? My target is: I have a pivoted df with a grouped text variable like the one in the second pic, and the dates are my columns. But I would like to get the dates grouped by type, with the text variable values as my new columns. It …
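Without the pictures this can only be a rough sketch of the usual melt-then-repivot pattern, assuming the pivoted frame `df` has the grouped variable as an index named `type`, dates as columns, and the text as values (all names hypothetical):

```python
# Back to long form: one row per (type, date, text) triple
long = df.reset_index().melt(id_vars='type', var_name='date', value_name='text')

# Repivot the other way: dates as rows, the text variable's types as columns
reshaped = long.pivot(index='date', columns='type', values='text')
```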
I'm trying to understand how to add my validation data to my LSTM. At the moment I'm loading the train and test sets in the following way. First of all, I load my time series from a directory, where they have a 2D shape (#values, #n_features = 30):

```python
self.train = np.load(os.path.join("data", "train", "X0train_s30.npy"))
self.test = np.load(os.path.join("data", "test", "X0test_s30.npy"))

# Shape for LSTM
self.shape_data(self.train)
self.shape_data(self.test, train=False)
```

Then I proceeded with shaping it to prepare the input for the LSTM. Since …
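For the validation part, Keras offers two hooks on fit. A sketch, where `model`, `X_train`, `y_train`, `X_val`, and `y_val` are placeholders:

```python
# Option 1: let Keras carve the validation set out of the training data
model.fit(X_train, y_train, epochs=10, validation_split=0.2)

# Option 2: load and shape a separate validation array, then pass it in
model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))
```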
When I train my model it has a two-dimensional output: it is (none, 1), corresponding to the time series I'm trying to predict. But whenever I load the saved model in order to make predictions, it has a three-dimensional output, (none, 40, 1), where 40 corresponds to the n_steps required to fit the Conv1D network. What is wrong? Here is the code:

```python
df = np.load('Principal.npy')

# Conv1D
#model = load_model('ModeloConv1D.h5')
model = autoencoder_conv1D((2, 20, 17), n_steps=40)
…
```
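A quick way to narrow this down (a sketch, reusing the names from the snippet) is to print both architectures and compare the last layer, since a saved .h5 file stores the full graph, including any return_sequences-style settings:

```python
from tensorflow.keras.models import load_model

# Model rebuilt in code, as used for training
trained = autoencoder_conv1D((2, 20, 17), n_steps=40)
trained.summary()                       # note the final output shape

# Model restored from disk
loaded = load_model('ModeloConv1D.h5')
loaded.summary()                        # compare the final output shape here
```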
I have a dataset with unnecessarily duplicated column variables that I want to condense down. I wish the output weren't so clumsy, and I've already had to do some work to transform it and make it easier to manage. I'm familiar with basic R stuff but not an expert by any means, so please be patient! Each R(1-10) corresponds to a response rating for the question (q1-10). The questions are randomised in order for each trial for each …
Let's say that I have image data with shape $(32, 32, 3)$ per image and $50000$ images, i.e. an array of shape $(50000, 32, 32, 3)$. If I would like to reshape it to $(50000, 3, 32, 32)$, what should I do? I tried np.transpose(0, 3, 1, 2) but it failed. If I would like to print the number $3$ from $(50000, 3, 32, 32)$, what should I do?
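np.transpose needs the array as its first argument (or to be called as a method on the array). A sketch with a dummy array of the stated shape:

```python
import numpy as np

data = np.zeros((50000, 32, 32, 3))                 # channels-last layout

# Move the channel axis to position 1: (50000, 32, 32, 3) -> (50000, 3, 32, 32)
channels_first = np.transpose(data, (0, 3, 1, 2))   # or data.transpose(0, 3, 1, 2)

# The "3" is the size of axis 1 of the new array
print(channels_first.shape[1])                      # -> 3
```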
This is something I can't achieve with the reshape2 library for R. I have the following data:

```
   zone code literal
1:    A   14    bicl
2:    B   14    bicl
3:    B   24   calso
4:    A   51    mara
5:    B   51    mara
6:    A  125     gan
7:    A  143    carc
8:    B  143    carc
```

i.e. each zone has 4 codes with its corresponding literal. I would like to transform it to a dataset with one column for each of the four codes …
I give Keras an input of shape input_shape=(500,). For some reason, I would like to decompose the input vector into two vectors of respective shapes input_shape_1=(300,) and input_shape_2=(200,). I want to do this within the definition of the model, using the Functional API. In a way, I would like to perform slicing on a tf.Tensor object. Help is welcome!
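A sketch with a Lambda layer doing the slicing inside the graph (the layer names are mine):

```python
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

inp = Input(shape=(500,))

# Slice the 500 features into a 300-vector and a 200-vector
part1 = Lambda(lambda t: t[:, :300], name='first_300')(inp)   # (None, 300)
part2 = Lambda(lambda t: t[:, 300:], name='last_200')(inp)    # (None, 200)

model = Model(inputs=inp, outputs=[part1, part2])
model.summary()
```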
I want to make a predictor using a Keras LSTM model. I have a sequence of places visited, and the task is to predict the last destination. I went through different examples, but it seems I am not able to shape the input properly. I am stuck on how to prepare the data in my program to give it to the LSTM model. Here is a minimal code sample related to my problem:

```python
input_csv = 'input.csv'
max_features = 6

df = pd.read_csv(input_csv)
df.head()
#Cafe …
```
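For what it's worth, one common layout (only a sketch; `places` and the `place_id` column are hypothetical stand-ins for the CSV's contents): each training sample is a window of previously visited places, and the target is the place that follows it:

```python
import numpy as np

places = df['place_id'].to_numpy()     # hypothetical integer-encoded place column
window = 5

# Sliding windows: X[i] holds 5 consecutive places, y[i] the next one
X = np.array([places[i:i + window] for i in range(len(places) - window)])
y = places[window:]

# For an LSTM, X reshapes to (samples, timesteps, 1);
# with an Embedding layer it can stay (samples, timesteps)
X_lstm = X.reshape(X.shape[0], window, 1)
```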
I currently have the following table:

```
user_id    HR(segment_name)    observations
123        1                   0.9
234        0                   0.78
567        0                   0.99
789        1                   0.89
```

Now I would like to convert this table to look like the table below:

```
user_id    segment    HR(segment_values)    observations
123        HR         1                     0.9
234        HR         0                     0.78
567        HR         0                     0.99
789        HR         1                     0.89
```

I probably need to use a pivot table to convert the first table to the second table. Any …
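A sketch with pandas melt rather than a pivot table, assuming the first table is a DataFrame `df` with columns user_id, HR, and observations:

```python
# Unpivot the HR column: its name becomes the 'segment' value,
# its contents become 'segment_values'
long = df.melt(id_vars=['user_id', 'observations'],
               value_vars=['HR'],
               var_name='segment',
               value_name='segment_values')

# Reorder to match the target layout
long = long[['user_id', 'segment', 'segment_values', 'observations']]
```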