
How significant is significance?
by John Savin
Anyone reading many corporate PR releases will keep tripping over the word significant. What a great word. Significant implies that something is, well, significant: really important, something that supports or increases the company valuation.
Oddly though, these significant events are rarely accompanied by any quantitative analysis, at least from the company. For example, revenues may be significantly higher (although costs are always significantly lower). In a clinical trial, treated patients may do “significantly” better. Disease markers might be “significantly” lower. This may be significantly true, but without quantification how do we know? So significant has become a meaningless PR adjective, alongside “compelling”.
There is another aspect: the significantly tougher concept of statistical significance.
Some companies are incredibly well-versed in the arcane toolbox of mathematical tests of significance. One US-Australian company, for example, regularly announced many statistically significant, and compelling, early findings, derived using a range of valid methods; sadly, the FDA rarely agreed.
Basically, for a small trial (and most biotech trials are small) of under 500 patients in the control and treated groups with a simple yes/no outcome, Fisher's exact test is robust. It is also easy to check online, as there are free calculators. By convention, and it is a convention rather than a mathematical rule, the p value has to be under 0.05 using a two-tailed test. Then there is the “multiple shots on goal” aspect: unearthing a “significant” p value by testing outcome after outcome. But is that sort of statistical digging legitimate? That is for another blog.
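For anyone who prefers to check such a claim directly rather than rely on an online calculator, here is a minimal sketch in Python using SciPy's Fisher exact test. The 2x2 table of responders and non-responders is invented purely for illustration, not taken from any real trial.

```python
# Minimal sketch: Fisher's exact test on a hypothetical 2x2 outcome table.
# The counts below are invented for illustration only.
from scipy.stats import fisher_exact

# Rows: treated arm, control arm; columns: responders, non-responders
table = [[30, 70],   # treated: 30 of 100 patients responded
         [18, 82]]   # control: 18 of 100 patients responded

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-tailed p = {p_value:.4f}")

# By convention (not mathematical rule), the result is only described as
# "statistically significant" if the two-tailed p value is below 0.05.
```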