The mean and standard deviation of my sampled data don't match the input parameters I provided

I have a log-normal mean and a log-normal standard deviation. After converting them to the underlying normal distribution's parameters mu and sigma, I sampled from the log-normal distribution. However, when I take the mean and standard deviation of the sampled data, I don't get back the values I originally plugged in. This only happens when the log-normal mean is much smaller than the log-normal standard deviation; otherwise it works. How do I prevent this from happening and recover the input parameters?
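For reference, the conversion in the code below relies on the standard log-normal moment identities

    mean = exp(mu + sigma^2 / 2)
    variance = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) = (exp(sigma^2) - 1) * mean^2

which, inverted for a target mean m and standard deviation s, give

    sigma^2 = ln(1 + (s/m)^2)
    mu = ln(m) - sigma^2 / 2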

import numpy as np

m = 1.46578E-07   # target log-normal mean
siglog = 1.51     # target log-normal standard deviation

sigma = np.sqrt(np.log(1 + (siglog / m)**2))  # std of the underlying normal
mu = np.log(m) - sigma**2 / 2                 # mean of the underlying normal

x = np.random.lognormal(mu, sigma, 1000000)
print(np.mean(x), np.std(x))
[out]: 3.867912662470812e-08 1.0677187655685002e-05
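
As a sanity check (a minimal sketch, not part of the original post), the conversion itself can be verified analytically: the closed-form moments implied by mu and sigma reproduce m and siglog exactly, both by hand and via scipy.stats.lognorm. That suggests the mismatch above comes from estimating the moments of an extremely heavy-tailed distribution (sigma is about 5.7 here) from a finite sample, not from the conversion.

import numpy as np
from scipy import stats

m = 1.46578E-07
siglog = 1.51
sigma = np.sqrt(np.log(1 + (siglog / m)**2))
mu = np.log(m) - sigma**2 / 2

# Closed-form log-normal moments for underlying parameters mu, sigma
mean_theory = np.exp(mu + sigma**2 / 2)
std_theory = mean_theory * np.sqrt(np.exp(sigma**2) - 1)
print(mean_theory, std_theory)          # recovers m and siglog

# Cross-check with scipy's parameterization: s = sigma, scale = exp(mu)
dist = stats.lognorm(s=sigma, scale=np.exp(mu))
print(dist.mean(), dist.std())          # same analytical values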

