Aggregated probability based on multiple predictions on independent samples using the same classifier

I have a conceptual question about interpreting the aggregation of a machine learning classifier's predictions. Let's assume I have trained a binary classifier and it was validated with an accuracy of 70% (the dataset is always balanced). If this accuracy seems too low to me, and I want to improve it without retraining or adjusting the classifier, would the following idea be valid? The classifier predicts three independent samples (each with probability 0.7 of a correct prediction), and the predictions are aggregated by majority vote: if at least 2 of the 3 predictions are for class 1, the final prediction is class 1; otherwise it is class 0.
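As a quick sanity check of the arithmetic behind this idea (a sketch, assuming the three predictions really are independent and each correct with probability 0.7), the probability that the majority vote is correct follows a binomial calculation:

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a strict majority of n independent predictions,
    each correct with probability p, is correct (n odd)."""
    k_min = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# P(correct) = P(2 of 3 correct) + P(3 of 3 correct)
#            = 3 * 0.7^2 * 0.3 + 0.7^3 = 0.441 + 0.343 = 0.784
print(majority_vote_accuracy(0.7, 3))
```

So under the independence assumption, the aggregated accuracy rises from 0.70 to about 0.784. In practice the gain is usually smaller, because predictions from the same classifier tend to be correlated rather than independent.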

I'm not sure what this procedure is called in proper data science terminology. Can anyone please let me know whether the idea is valid, and give me the technical term so I can find further information on it in the literature?

Thanks in advance!

Topic binary-classification prediction probability machine-learning
