How to extract the sample split (values) of decision tree leaves (terminal nodes) using the h2o library
Sorry for a long story, but it is a long story. :)
I am using the h2o library for Python to build a decision tree and to extract the decision rules from it. I train on data whose labels take TRUE and FALSE values. My final goal is to extract the significant paths (leaves) of the tree, i.e. those where the number of TRUE cases significantly exceeds that of FALSE ones.
treemodel = H2OGradientBoostingEstimator(ntrees=3, max_depth=maxDepth, distribution="bernoulli")
treemodel.train(x=somedata.names[1:], y=somelabel.names[0], training_frame=somedata)
dtree = H2OTree(model=treemodel, tree_number=0, tree_class=False)
def predict_leaf_node_assignment(self, test_data, type="Path"):
    if not isinstance(test_data, h2o.H2OFrame):
        raise ValueError("test_data must be an instance of H2OFrame")
    assert_is_type(type, None, Enum("Path", "Node_ID"))
    j = h2o.api("POST /3/Predictions/models/%s/frames/%s" % (self.model_id, test_data.frame_id),
                data={"leaf_node_assignment": True, "leaf_node_assignment_type": type})
    return h2o.get_frame(j["predictions_frame"]["name"])
dfResLabH2O.leafs = predict_leaf_node_assignment(dtree, test_data=dfResLabH2O, type="Path")
In scikit-learn there is an option to explore the leaves via tree_.value. I understand there is no such option in h2o. Instead, h2o offers an option to return the predictions at the leaves.
When I run dtree.predictions, I am getting pretty weird results:

dtree.predictions
Out[32]: [0.0, -0.020934915, 0.0832189, -0.0151052615, -0.13453846, -0.0039859135, 0.2931017, 0.0836743, -0.008562919, -0.12405087, -0.02181114, 0.06444048, -0.01736593, 0.13912177, 0.10727943]
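From what I have read, these leaf values may be contributions in log-odds space rather than probabilities: for distribution="bernoulli", each tree shifts the log-odds, so a negative value pushes the prediction toward one class rather than indicating anything invalid. If that reading is right, a probability only appears after summing the model's initial prediction and one leaf contribution per tree, then applying the sigmoid. A minimal sketch of that interpretation (the numbers are just illustrative, picked from the output above; the function name is my own):

```python
import math

def leaf_values_to_probability(init_logodds, leaf_contributions):
    """Combine a prior (initial prediction, in log-odds) with one leaf
    contribution per tree, then map the total to a probability with the
    logistic (sigmoid) function."""
    logodds = init_logodds + sum(leaf_contributions)
    return 1.0 / (1.0 + math.exp(-logodds))

# Illustrative: prior 0.0 plus two leaf contributions from the list above.
# The total log-odds is positive, so the result is above 0.5.
p = leaf_values_to_probability(0.0, [0.2931017, -0.13453846])
```

If this is correct, it would explain why the raw leaf values are not bounded to [0, 1].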
My questions (somebody has already asked this before, but no clear answer has been provided so far):
What is the meaning of negative predictions? I expected to get a proportion p of TRUE to ALL or FALSE to ALL cases, where 0 ≤ p ≤ 1. Is there anything wrong with my model? I ran the same data through scikit-learn and there I can point out the significant paths and extract the rules.
For positive values: is it the TRUE-to-ALL or the FALSE-to-ALL proportion? I am guessing it is FALSE, since I specified tree_class=False, but I am not sure.
Is there any method or solution for h2o trees that reveals the sample size of a given leaf and the [n1, n2] counts for TRUE and FALSE cases respectively, similar to what scikit-learn provides?
I found in some forums the function predict_leaf_node_assignment (quoted above), which aims to predict on a dataset and return the leaf node assignment (only for tree-based models), but it returns no output and I cannot find any example of how to use it.
The bottom line: I'd like to extract the sample-size values of a leaf and the specific path to it, obtaining the [n1, n2] counts or valid proportions.
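To make the goal concrete, this is roughly what I am after, assuming I can get the per-row leaf assignments (e.g. path strings) and move the frames to pandas with as_data_frame(). The leaf_path strings and labels below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical leaf assignments (path strings) and labels, as they might
# look after converting the H2O frames to pandas.
df = pd.DataFrame({
    "leaf_path": ["LRL", "LRL", "RRL", "RRL", "RRL", "LLR"],
    "label":     [True,  False, True,  True,  True,  False],
})

# Per-leaf sample size and TRUE/FALSE counts, in the spirit of
# scikit-learn's tree_.value.
counts = (df.groupby("leaf_path")["label"]
            .agg(n="size", n_true="sum"))
counts["n_false"] = counts["n"] - counts["n_true"]
counts["p_true"] = counts["n_true"] / counts["n"]
```

With a table like counts, picking the "significant" leaves would just be filtering on p_true and n.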
I'll appreciate any kind of help and suggestions. Thank you.
Topic h2o prediction data decision-trees python
Category Data Science