(Throughout this answer, I will assume the classes are balanced. If that is not the case, $0.5$ is likely to be a poor threshold. As the links in my comments describe, thresholds are arguably overrated anyway.)
The good news is that this situation would be unusual, so it is unlikely to matter.
If it does come up, you have a few options. First, if the model is that uncertain about class membership, I wonder whether you have any business making a classification at all; the discrete decision might be to go collect more data. Second, the classifications have some kind of associated misclassification cost. If it is bad to call a $1$ a $0$ but awful to call a $0$ a $1$, then you would be wary of classifying such a point as a $1$. You can apply similar logic to how much you “profit” from each type of correct classification.
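To make the cost-based reasoning concrete, here is a minimal sketch. The cost values are hypothetical, chosen only to illustrate the asymmetric case where calling a $0$ a $1$ is much worse than the reverse:

```python
def expected_costs(p1, cost_0_as_1, cost_1_as_0):
    """p1 is the model's probability that the point belongs to class 1.

    If we predict 1, we are wrong with probability (1 - p1), and that
    mistake calls a 0 a 1. Symmetrically, if we predict 0, we are wrong
    with probability p1, calling a 1 a 0.
    """
    cost_if_predict_0 = p1 * cost_1_as_0
    cost_if_predict_1 = (1 - p1) * cost_0_as_1
    return cost_if_predict_0, cost_if_predict_1

# At p1 = 0.5 with asymmetric (made-up) costs, the decisions no longer tie:
c0, c1 = expected_costs(0.5, cost_0_as_1=10.0, cost_1_as_0=1.0)
# c0 = 0.5, c1 = 5.0, so predicting 0 is the cheaper decision.
```

Even though the model is maximally uncertain, the asymmetric costs break the tie and dictate the classification.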
If the misclassification costs are identical and the correct-classification profits are identical, then I say it doesn’t matter how you classify the point. Over the long haul, you will be wrong half the time, and both types of mistakes, which will happen equally often, incur equal costs. Likewise, you will be right half the time, with both ways of being right happening equally often and yielding the same profit. Your expected loss or gain is the same no matter how you classify the point.
This can be formalized through decision theory and expected loss.
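As a sketch (my notation, not anything from the question): let $p = P(Y=1 \mid x)$, and let $c_{01}$ be the cost of calling a $0$ a $1$ and $c_{10}$ the cost of calling a $1$ a $0$. The expected losses of the two decisions are

$$\mathbb{E}[L \mid \hat{y}=1] = (1-p)\,c_{01}, \qquad \mathbb{E}[L \mid \hat{y}=0] = p\,c_{10}.$$

At $p=0.5$ with $c_{01}=c_{10}=c$, both sides equal $0.5\,c$, so the decisions tie, exactly as argued above. More generally, the expected-loss-minimizing rule predicts $1$ whenever $(1-p)\,c_{01} < p\,c_{10}$, i.e., whenever $p > c_{01}/(c_{01}+c_{10})$, which recovers the $0.5$ threshold only in the symmetric-cost case.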