Let's think about this problem with an example. Suppose that you're trying to predict fraudulent credit card transactions. Since genuine transactions are far more frequent than fraudulent ones (imagine 1% of transactions are fraudulent), the probability of a randomly selected transaction being fraudulent is going to be really small, right?
Now suppose that we have a set of 10,000 transactions and we have a way of calculating the probability of fraud for each one of them. If we compute these probabilities and make a density plot of them, we'll observe a very skewed distribution: lots of transactions will have small fraud probabilities.
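Just to illustrate, here's a quick simulation (the numbers and the `beta` score distributions are made up for illustration, not taken from your data):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 10_000

# Simulate 10,000 transactions where ~1% are fraudulent
is_fraud = rng.random(n) < 0.01

# Fake "model probabilities": frauds score somewhat higher on average,
# but most values stay near zero because fraud is rare
probs = rng.beta(a=1, b=99, size=n)                       # baseline, mean ~0.01
probs[is_fraud] = rng.beta(a=2, b=20, size=is_fraud.sum())  # frauds, mean ~0.09

plt.hist(probs, bins=50, density=True)
plt.xlabel("Predicted probability of fraud")
plt.ylabel("Density")
plt.title("Most transactions have tiny fraud probabilities")
plt.show()
```

The histogram piles up near zero, which is exactly the skew you'd expect.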
That's what's happening in your example. Your model is estimating the probability of an event that is probably rare (I take this from your observation that the dataset is imbalanced, 70/30). Therefore, you don't need to "fix" your distribution.
However, if you're using this probability as a score, for example, it may be easier to work with a less skewed distribution. In that case, you can apply a monotonic transformation to your probabilities. That type of transformation changes the distribution of your variable (the probability estimate from the model) without changing the ordering, e.g., if an observation A has a smaller probability than an observation B, the score after the transformation will maintain this A < B ordering.
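A common choice of monotonic transformation is the logit (log-odds), which spreads out probabilities near 0 and 1. A minimal sketch:

```python
import numpy as np

def logit(p, eps=1e-9):
    """Log-odds transform; eps guards against p being exactly 0 or 1."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

probs = np.array([0.001, 0.01, 0.05, 0.30, 0.90])
scores = logit(probs)

# The ordering is preserved: if p_A < p_B then logit(p_A) < logit(p_B)
assert np.all(np.argsort(probs) == np.argsort(scores))
print(scores)  # [-6.91, -4.60, -2.94, -0.85,  2.20] (approximately)
```

Any strictly increasing function (log, rank transform, etc.) would work the same way; the choice only affects the shape of the resulting score distribution, not the ranking.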