How to generate 12 independent random weights which all add up to one

I'm using Palisade's @Risk software with a triangular distribution to generate 12 random weights which must add up to one, but I get a lot of negative numbers. Is there a straightforward way to set this up?

Tags: distribution, randomized-algorithms, weighted-data



I am afraid I do not know the software you mention, but I can show you the principle and suggest why you may be getting negative numbers.

I will do this in Python, making use of the numerical library numpy.

I import numpy and generate 12 random integers between 0 and 9 (the upper limit of 10 is exclusive):

In [1]: import numpy as np; samples = np.random.randint(0, 10, 12)              

In [2]: samples                                                                 
Out[2]: array([8, 5, 8, 4, 4, 7, 2, 2, 0, 5, 9, 1])

To scale the values so that their sum equals 1, we can do the following. First, sum up all the values:

In [3]: total = np.sum(samples)                                                 

Now simply divide each value by the sum (the division happens element-wise, individually for each entry of samples):

In [4]: normalised = samples/total                                              

In [5]: normalised                                                              
Out[5]: 
array([0.14545455, 0.09090909, 0.14545455, 0.07272727, 0.07272727,
       0.12727273, 0.03636364, 0.03636364, 0.        , 0.09090909,
       0.16363636, 0.01818182])

We can see that the result does indeed sum to 1:

In [6]: np.sum(normalised)                                                      
Out[6]: 1.0
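
The same recipe carries over to continuous draws. Since your question mentions a triangular distribution, here is a minimal sketch using NumPy's triangular sampler; the parameters left=0, mode=0.5 and right=1 are illustrative assumptions, not values from your model. Because the support starts at 0, no raw sample can be negative, so none of the normalised weights can be negative either:

# Draw 12 samples from a triangular distribution whose support is
# nonnegative (left=0), so no raw sample can be negative.
raw = np.random.triangular(left=0.0, mode=0.5, right=1.0, size=12)

# Dividing by the total then gives nonnegative weights that sum to 1.
weights = raw / np.sum(raw)

This sidesteps the negative-weight problem entirely, because normalising nonnegative values can never produce a negative weight.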

What you may have is a set of samples containing some negative numbers, like the following array samples_neg, which holds ten integers ranging from -5 to +9:

In [7]: samples_neg = np.random.randint(-5, 10, 10)                             

In [8]: samples_neg                                                             
Out[8]: array([ 4,  7,  0,  4, -3,  0,  9, -3,  9,  3])

We can follow the same recipe as before, summing the values and dividing each value by the sum:

In [9]: total_neg = np.sum(samples_neg)                                        

In [10]: normalised_neg = samples_neg / total_neg                               

We see that the result this time includes negative values, as you mentioned:

In [11]: normalised_neg                                                         
Out[11]: 
array([ 0.13333333,  0.23333333,  0.        ,  0.13333333, -0.1       ,
        0.        ,  0.3       , -0.1       ,  0.3       ,  0.1       ])

However, this still satisfies your original constraint that the values sum to 1:

In [12]: np.sum(normalised_neg)                                                 
Out[12]: 0.9999999999999999        # this is 1, within rounding errors of floating point values
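
As an aside, if you need to verify such a sum programmatically, comparing within a tolerance is safer than testing for exact equality with 1.0; a small sketch using numpy's np.isclose:

# Floating-point sums rarely equal 1.0 exactly; compare within a tolerance.
np.isclose(np.sum(normalised_neg), 1.0)   # evaluates to True here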

A suggestion would be to first rescale the values into the range [0, 1] and only afterwards re-weight them so that their sum is 1; a sketch of this follows below.
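
Here is a minimal sketch of that two-step approach, reusing samples_neg from above. Shifting by the minimum is one common way to map values into a nonnegative range; it is my assumption here, not something prescribed by your software:

# Shift so the smallest value becomes 0, then scale by the range to land in [0, 1].
# (This assumes the samples are not all identical, otherwise the range is 0.)
shifted = samples_neg - samples_neg.min()
scaled = shifted / (samples_neg.max() - samples_neg.min())

# Re-weight so the values sum to 1. Note that the range scaling cancels out in
# this division, so the shift alone would already be enough.
weights = scaled / scaled.sum()

Bear in mind that shifting changes the relative proportions of the weights compared to the raw samples; if you want to preserve the shape of the sampled distribution, it is cleaner to sample from a distribution with nonnegative support in the first place, as in the triangular example above.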
