Distorted Statistics and Performance Tests: Part 2

The second major problem with statistical tests as performed in finance and economics is a fundamental misunderstanding of inferential statistics. Researchers perform studies based on models and attribute great importance to variables that turn out to be statistically significant. Yet most researchers misstate what those results actually mean.
A typical statistical analysis in investment research uses regression techniques to explore whether one variable is caused by another, such as whether Federal Reserve announcements affect market returns in the days after they are officially made public. The regression is usually performed at some level of confidence, such as 99% (i.e., a significance level of 1%). If a variable is found to be statistically significant, it is deemed very likely to be important. If a 99% confidence level is used, analysts usually claim that it is 99% likely that the variable is important. That is not true.
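As a concrete illustration, here is a minimal Python sketch of the kind of regression just described. The data are simulated, and the variable names (returns, post_fed) are hypothetical placeholders rather than part of any actual study:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: 2,500 daily returns, with a dummy variable
# flagging the small fraction of days that follow Fed announcements.
n_days = 2500
returns = rng.normal(loc=0.0003, scale=0.01, size=n_days)  # simulated daily returns
post_fed = (rng.random(n_days) < 0.03).astype(float)       # ~3% of days flagged

# Regress returns on the post-announcement dummy.
X = sm.add_constant(post_fed)
result = sm.OLS(returns, X).fit()

# A p-value below 0.01 on the dummy would be called
# "significant at the 99% confidence level".
print(result.pvalues[1])
```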
For example, assume that a hedge fund analyst performs a test, at a 99% level of confidence, of whether stock returns in the days after a Fed announcement are higher than on other days of the year. Suppose the test generates a statistically significant result indicating that returns are higher in the days immediately after Fed meetings at a confidence level of 99%. The incorrect conclusion often drawn from such a result is that there is a 99% probability that Fed announcements are followed by higher returns. In fact, the analyst has no reasonable basis for that conclusion, even though most researchers draw it anyway. It is a false belief rooted in an erroneous interpretation of the statistics.
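One way to see why the 99% interpretation fails is a back-of-the-envelope Bayes calculation. The inputs below (the share of tested hypotheses that are actually true, and the test's power) are purely illustrative assumptions, not figures from this article:

```python
# Illustrative assumptions (not from the article):
prior_true = 0.05  # fraction of tested hypotheses that are actually true
power      = 0.80  # P(significant result | effect is real)
alpha      = 0.01  # P(significant result | no effect), the 1% significance level

# Bayes' rule: P(effect is real | significant result)
p_sig = power * prior_true + alpha * (1 - prior_true)
p_real_given_sig = power * prior_true / p_sig
print(f"{p_real_given_sig:.1%}")  # ~80.8%, not 99%
```

Under these assumed numbers, a significant result reflects a real effect only about 81% of the time, and the figure falls further if true effects are rarer among the hypotheses being tested.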
To explain this important concept carefully, the next part in this series describes two types of errors: Type I errors and Type II errors. It takes some work to understand them, but doing so leads to the important benefit of understanding what these statistical tests do and do not mean.
So if being statistically significant at the 99% level does not mean that there is a 99% chance that the relationship is true, what does it mean? It means that if we assume the model is correctly specified (a big "if" that we usually have little reason to believe) and that there is in fact no relationship, there is only a 1% chance that a result this extreme would occur by chance, as the simulation sketch below illustrates. The next part in this series explores this concept in greater detail.
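A short simulation makes the correct reading concrete: if we generate data in which there is truly no effect and run the same test many times, roughly 1% of the runs still come out significant at the 99% level. That 1% is what the significance level controls; it says nothing about the probability that a given significant finding is real. A minimal sketch with simulated data and hypothetical sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_trials, alpha = 10_000, 0.01
false_positives = 0

for _ in range(n_trials):
    # Both groups are drawn from the same distribution,
    # so the null hypothesis is true by construction.
    post_fed_days = rng.normal(0.0, 0.01, size=80)
    other_days    = rng.normal(0.0, 0.01, size=2400)
    _, p = stats.ttest_ind(post_fed_days, other_days)
    if p < alpha:
        false_positives += 1

# ~0.01: the expected rate of spurious "significant" results under the null
print(false_positives / n_trials)
```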
