Stacking models using keras.layers.Concatenate with different input shapes

I have concatenated two models that use different inputs. The first model takes input of shape (1, 33); the second takes a feature set of dimension (1, 1024). I have a mapping function that converts the (1, 33) data to (1, 1024). What changes do I need to make for this stacked model to work, and what is the appropriate way to feed test input to it?
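A minimal sketch of one way to wire this up, assuming the leading 1 in (1, 33) and (1, 1024) is the batch dimension (the Dense widths and the sigmoid head are placeholder assumptions): build a two-input functional model, merge the branches with keras.layers.Concatenate, and pass a list of both arrays at predict time.

```python
# Sketch: two-input functional model joined by Concatenate.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inp_a = keras.Input(shape=(33,), name="raw_features")      # per-sample shape (33,)
inp_b = keras.Input(shape=(1024,), name="mapped_features") # per-sample shape (1024,)

branch_a = layers.Dense(64, activation="relu")(inp_a)
branch_b = layers.Dense(64, activation="relu")(inp_b)

merged = layers.Concatenate()([branch_a, branch_b])
out = layers.Dense(1, activation="sigmoid")(merged)  # placeholder head

model = keras.Model(inputs=[inp_a, inp_b], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# At test time, supply both inputs as a list (or a dict keyed by input name);
# the second array would come from the mapping function in practice.
x_a = np.random.rand(5, 33).astype("float32")
x_b = np.random.rand(5, 1024).astype("float32")
preds = model.predict([x_a, x_b])
```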
Category: Data Science

Feature Selection using Stacking Ensemble?

I want to combine some estimators, such as Logistic Regression, Gaussian NB and K-Nearest Neighbors, for feature selection. I tried to use the StackingClassifier() estimator to do that, but this estimator has no feature_importances_ attribute. Is there another method to select features that combines those classifiers? Thank you in advance :)
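StackingClassifier indeed exposes no feature_importances_. One model-agnostic alternative, sketched below on an assumed synthetic dataset, is sklearn.inspection.permutation_importance, which works with any fitted estimator, including a stacked ensemble.

```python
# Sketch: permutation importance on a fitted StackingClassifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gnb", GaussianNB()),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)

# Shuffle each column on held-out data and measure the drop in score;
# larger drops indicate more important features for the whole ensemble.
result = permutation_importance(stack, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```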
Category: Data Science

How to use SMOTE in Stacking in SKLearn?

I have a data set X, y and split it into train and test data: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, stratify=y, random_state=10). To handle the imbalanced data, I want to use SMOTE and then apply classification algorithms. However, I am going to use stacking as my classification method. I would be thankful to know when I should apply SMOTE: should I use it when defining the lower-level (base) classifiers or the higher-level (meta) classifier? level0 = list() oversample = …
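One common pattern, sketched below with assumed base learners, is to put SMOTE and the whole stack into an imbalanced-learn Pipeline: oversampling then happens only at fit time on the training split, and the test data is never resampled.

```python
# Sketch: SMOTE ahead of a stacking ensemble via an imblearn Pipeline.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Illustrative imbalanced data in place of the asker's X, y.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=10)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=10)

level0 = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True)),
]
stack = StackingClassifier(estimators=level0,
                           final_estimator=LogisticRegression(max_iter=1000))

# The sampler runs only during fit, so predictions on X_test use real data.
pipe = Pipeline([("oversample", SMOTE(random_state=10)), ("stack", stack)])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```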
Category: Data Science

Feature importance difference in two similar machine learning models

Situation 1: I have trained a text classification model (Model 1) that gives me the probability of the true class as X. I have also trained a classification model (Model 2) using only the categorical and numeric data. Both models predict the same true class; only the features differ. I then used a random forest classifier on the probabilities returned by Model 1 and Model 2 (taking them as input features) and got similar performance metrics (accuracy, precision, recall). feature …
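A hedged sketch of the setup described, with stand-in models (logistic regression and Gaussian NB, plus a synthetic split of features, are all assumptions): the two models' predicted probabilities form a two-column feature matrix for the random forest, whose feature_importances_ then compare the two models' contributions.

```python
# Sketch: random forest meta-classifier over two models' probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-ins for "Model 1" (text features) and "Model 2" (tabular features),
# each trained on a disjoint half of the columns.
m1 = LogisticRegression(max_iter=1000).fit(X_tr[:, :10], y_tr)
m2 = GaussianNB().fit(X_tr[:, 10:], y_tr)

meta_X_tr = np.column_stack([m1.predict_proba(X_tr[:, :10])[:, 1],
                             m2.predict_proba(X_tr[:, 10:])[:, 1]])
meta_X_te = np.column_stack([m1.predict_proba(X_te[:, :10])[:, 1],
                             m2.predict_proba(X_te[:, 10:])[:, 1]])

rf = RandomForestClassifier(random_state=0).fit(meta_X_tr, y_tr)
print("accuracy:", rf.score(meta_X_te, y_te))
print("importances (Model 1 vs Model 2):", rf.feature_importances_)
```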
Category: Data Science

Stacking: How to best treat base learner?

With stacking, several (diverse) base learners are used to predict the dependent variable, $\hat{y}_{b,m} = \beta_{b,m} X$, on a hold-out set, where $m = 1, \dots, n$ indexes the base learner models. In a second step, these predictions are used as explanatory variable(s) in a meta learner, $y = \beta_1 X + \beta_2 \hat{y}_b + u$. I wonder how best to treat $\hat{y}_{b,m}$ in practice. There are basically two options: use each base learner's prediction $\hat{y}_{b,m}$ as a separate feature (column) in the meta learner model. …
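A minimal sketch of the first option, with assumed estimators (Ridge and k-NN as base learners, a linear meta learner): each base learner's out-of-fold prediction becomes its own column in the meta learner's design matrix, which plays the role of the hold-out set without a separate split.

```python
# Sketch: one column per base-learner prediction in the meta learner.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10, random_state=0)

base_learners = [Ridge(), KNeighborsRegressor()]

# Out-of-fold predictions keep each learner's own training fit from
# leaking into the meta step.
oof = np.column_stack([cross_val_predict(m, X, y, cv=5)
                       for m in base_learners])

# Meta learner y = beta_1 X + beta_2 yhat + u, with one column per yhat_{b,m}.
meta = LinearRegression().fit(np.column_stack([X, oof]), y)
print(meta.coef_)  # last n coefficients weight the base learners
```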
Category: Data Science
