R - Multiple testing methods
I want to simulate the effect of different kinds of multiple testing corrections, such as Bonferroni, Fisher's LSD, Duncan, Dunn-Šidák, Newman-Keuls, Tukey, etc., on an ANOVA.
I guess I should run a regular ANOVA, then take the relevant p-values and recalculate them using p.adjust. But I don't get how the p.adjust function works. Can you give me some insight into p.adjust()?
When running:
> p.adjust(c(0.05,0.05,0.1),"bonferroni") # [1] 0.15 0.15 0.30
could you explain what this output means?
Thanks for your answer. I already know a bit of that, but I still don't understand the output of p.adjust. I'd expect that
p.adjust(0.08,'bonferroni',n=10)
returns 0.008, not 0.8. Doesn't n=10 mean that I'm doing 10 comparisons? And isn't 0.08 the "original alpha" (I mean the threshold I'd use to reject the null hypothesis if I had one single comparison)?
You'll have to read up on each multiple testing correction technique, and on whether it controls the false discovery rate (FDR) or the family-wise error rate (FWER). (Thanks @thelatemail for pointing out that the abbreviations should be expanded.)
The Bonferroni correction controls the FWER by setting the significance level alpha to alpha/n, where n is the number of hypotheses tested in a typical multiple comparison (here n=3).
Let's say you are testing at 5% alpha, meaning that if a p-value is < 0.05, you reject the null. With n=3, the Bonferroni correction divides alpha by 3: 0.05/3 ≈ 0.0167, and you check whether the p-values are < 0.0167.
Equivalently (as is straightforward to see), instead of checking pval < alpha/n, you can move n to the other side: pval * n < alpha. Alpha keeps its original value. So the p-values are multiplied by 3 and then compared against alpha = 0.05, for example.
Therefore, the output you obtain is the FWER-controlled p-value: if it is < alpha (5%, say), you reject the null; otherwise you retain the null hypothesis.
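This multiply-and-cap behaviour is exactly what p.adjust does for the Bonferroni method, and it also resolves the follow-up example, where 0.08 is treated as a p-value (not as alpha) and multiplied by n = 10:

```r
p <- c(0.05, 0.05, 0.1)
p.adjust(p, "bonferroni")   # 0.15 0.15 0.30, i.e. p * 3
pmin(1, p * length(p))      # the same values, computed by hand

# A single p-value with n = 10 comparisons:
# 0.08 * 10 = 0.8 (not 0.008, and capped at 1 if it went above)
p.adjust(0.08, "bonferroni", n = 10)   # 0.8
```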
For each of these tests, there are different procedures to control false positives arising from multiple testing. Wikipedia may be a good starting point to learn about the other tests and how they control false positives.
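As a sketch of the workflow described in the question (using base R's built-in PlantGrowth data purely for illustration): pairwise.t.test applies p.adjust internally via its p.adjust.method argument, while Tukey's HSD is a separate procedure with its own function:

```r
# One-way ANOVA followed by corrected pairwise comparisons
fit <- aov(weight ~ group, data = PlantGrowth)
summary(fit)  # overall ANOVA table

# Pairwise t-tests: uncorrected vs Bonferroni-corrected p-values
pairwise.t.test(PlantGrowth$weight, PlantGrowth$group, p.adjust.method = "none")
pairwise.t.test(PlantGrowth$weight, PlantGrowth$group, p.adjust.method = "bonferroni")

# Tukey's HSD uses its own dedicated function on the fitted model
TukeyHSD(fit)
```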
In general, the output of p.adjust gives you a multiple-testing-corrected p-value. In the case of Bonferroni, it is the FWER-controlled p-value; in the case of the BH method, it is the FDR-corrected p-value (otherwise called the q-value).
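For comparison, here are the same three p-values under both methods; BH is visibly less conservative than Bonferroni:

```r
p <- c(0.05, 0.05, 0.1)
p.adjust(p, "BH")          # 0.075 0.075 0.100: the q-values
p.adjust(p, "bonferroni")  # 0.150 0.150 0.300: more conservative
```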
Hope this helps a bit.