p-value and effect size

Is it correct to say that the lower the p-value, the greater the difference between the means of the two groups in a t-test?

For example, suppose I apply a t-test to two groups of measurements A and B, and then to two groups of measurements B and C, and I find that the p-value in the first case is lower than in the second. Could one possible interpretation be that the difference between the means of groups A and B is greater than the difference between the means of groups B and C?

Topic: p-value, inference, descriptive-statistics, statistics

Category: Data Science


A p-value can't be used to compare t-statistics, whether they come from two different pairs of groups or from the same two groups over time. The p-value simply tells you whether the computed t-statistic is statistically significant at a chosen level, say 0.05, 0.02, or 0.15. Because it is so often misinterpreted, the p-value gets debated unnecessarily; it is an inferential statistic, nothing more.
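As an illustration of that point, here is a minimal sketch (assuming SciPy is available; the data are simulated) of what an independent-samples t-test actually reports: a t-statistic and a p-value to compare against a chosen alpha, not a measure of how far apart the means are.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=10.0, scale=2.0, size=30)  # simulated group A measurements
b = rng.normal(loc=11.0, scale=2.0, size=30)  # simulated group B measurements

# ttest_ind returns the t-statistic and the two-sided p-value.
t_stat, p_value = stats.ttest_ind(a, b)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at {alpha}: {p_value < alpha}")
```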


No.

What you refer to (the difference between the means of groups A and B) is actually the effect size, and it is not something the p-value measures.

The situation is nicely summarized in the (highly recommended) paper Using Effect Size—or Why the P Value Is Not Enough (emphasis mine):

Why Report Effect Sizes?

The effect size is the main finding of a quantitative study. While a P value can inform the reader whether an effect exists, the P value will not reveal the size of the effect. In reporting and interpreting studies, both the substantive significance (effect size) and statistical significance (P value) are essential results to be reported.

Why Isn't the P Value Enough?

Statistical significance is the probability that the observed difference between two groups is due to chance. If the P value is larger than the alpha level chosen (eg, .05), any observed difference is assumed to be explained by sampling variability. With a sufficiently large sample, a statistical test will almost always demonstrate a significant difference, unless there is no effect whatsoever, that is, when the effect size is exactly zero; yet very small differences, even if significant, are often meaningless. Thus, reporting only the significant P value for an analysis is not adequate for readers to fully understand the results.

In other words, the p-value reflects our confidence that the effect indeed exists (and it's not due to chance), but it says absolutely nothing about its magnitude (size).
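To make this concrete, here is a small simulation sketch (assuming NumPy and SciPy; the groups and parameters are invented for illustration, not the A/B/C groups from the question). A tiny true difference measured on a very large sample typically produces a far smaller p-value than a large true difference measured on a small sample, while an effect-size measure such as Cohen's d ranks the two comparisons the other way around.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(42)

# Comparison 1: large true mean difference (1.0), small sample (n = 20 per group).
small_n_x = rng.normal(10.0, 2.0, size=20)
small_n_y = rng.normal(11.0, 2.0, size=20)

# Comparison 2: tiny true mean difference (0.05), huge sample (n = 100,000 per group).
large_n_x = rng.normal(10.00, 2.0, size=100_000)
large_n_y = rng.normal(10.05, 2.0, size=100_000)

for label, (x, y) in {
    "large effect, n=20": (small_n_x, small_n_y),
    "tiny effect, n=100000": (large_n_x, large_n_y),
}.items():
    t, p = stats.ttest_ind(x, y)
    print(f"{label}: p = {p:.2e}, Cohen's d = {cohens_d(x, y):.3f}")
```

The large-sample comparison comes out "more significant" even though its effect size is roughly twenty times smaller, which is exactly why the p-value cannot be read as a measure of the difference between the means.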

In fact, the practice of focusing on the p-values instead of the effect size has been the source of much controversy and the subject of fierce criticism lately; see the (again, highly recommended) book The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives.

Related threads at Cross Validated may also be useful.
