
Multiple Testing

Gary W. Oehlert

School of Statistics

University of Minnesota

January 28, 2016

Background

Suppose that you had a 20-sided die. Nineteen of the sides are labeled 0 and one of the sides is labeled 1.

You roll the die once. What is the chance of getting a 1? Easy, 5%.

Now roll the die 20 times. What is the chance of getting at least one 1?

1 − .95^20 = .642

Roll it 100 times, and the probability of at least one 1 is now 1 − .95^100 = .994

Doing a 5% level test when the null is true is like rolling the die. You have a 5% chance of rejecting that true null, just like one roll of the die.

Now do 20 tests at the 5% level, with the null true every time. The chance of one or more nulls being rejected is .642. With 100 tests of true nulls, the chance of making at least one false rejection is a virtual certainty.
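A quick check of these numbers (a minimal sketch; the 5% level and the counts 20 and 100 come from the text above):

```python
# Probability of at least one false rejection among k independent
# 5%-level tests of true nulls: 1 - 0.95^k.
for k in (1, 20, 100):
    print(k, round(1 - 0.95 ** k, 3))
# 1 0.05
# 20 0.642
# 100 0.994
```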

That is the essence of the multiple testing problem: how do you control error rates when you do lots of tests?

Data snooping Things are even worse if you don’t just do lots of tests but instead snoop in the data to find something that looks interesting, and then test that interesting looking thing.

In this case, your chance of rejecting the null in that single test is very high, even if the null is true and what you detected is just random variation.

It takes a heavy, blunt instrument of a procedure to keep error rates under control in that situation.

Notation We have several null hypotheses H01, H02,..., H0k.

H0 is the overall or combined null hypothesis that all of the other nulls are true: H0 = H01 ∩ H02 ∩ · · · ∩ H0k. Ei is the type I error rate for the ith test; E is the type I error rate for the combined null.

Errors This is errors as in mistakes.

Declaring a true null to be false is a Type I error. This is a false positive, declaring something to be happening when it is not.

Failing to reject a false null1 is a Type II error. This is a false negative, saying something is not happening when, in fact, something is happening.

1 In ye olde days one would say “accept the null,” but I prefer “fail to reject.”

Reality/State of nature
Decision          Null correct      Null false
Fail to reject    True negative     False negative
Reject            False positive    True positive

Reality/State of nature
Decision          Null correct      Null false
Fail to reject                      Type II error
Reject            Type I error

The general approach in classical statistics is to control the probability of a type I error (E), and among procedures that control that error choose one that makes the type II error rate low.

That’s pretty well defined for a single hypothesis, but working with multiple hypotheses requires a bit more. Consider this table.

Numbers of decisions

Reality/State of nature
Decision          Null correct      Null false
Fail to reject         A                 B
Reject                 C                 D

In practice, we will never know these counts, but we can work with them theoretically.

Error rates The per comparison error rate ignores the multiple testing issue.

Here you just do a separate test for each null hypothesis ignoring all of the other tests.

Per comparison error control is

P[reject H0i | H0i true] ≤ E for each i

In effect, we have k different tables with Ai, Bi, Ci, and Di, one for each test.

The per experiment (experimentwise) error rate assumes that the combined null H0 is true. Because we assume that all nulls are true, Bi = Di = 0 for all tables (sub-hypotheses). Or, in terms of the total number of false rejections C = Σi Ci, experimentwise control is

P[C > 0 | H0 true] ≤ E

Let F = C/(C+D) (or zero when C+D=0). This is the false discovery fraction—the fraction of rejections that are incorrect.

Controlling the FDR is making sure

E[F] = E[ C/(C + D) ] ≤ E

so the expected fraction of false rejections is at most E. Note that the more correct rejections you make, the more false rejections FDR lets you make.
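A small simulation can make the table counts and F concrete. This is a hypothetical sketch: the number of tests, which nulls are false, and the Beta(0.1, 1) shape for p-values under false nulls are illustration choices of mine, and the tests are done at the per comparison level E.

```python
import numpy as np

rng = np.random.default_rng(1)

k, k0, E = 200, 150, 0.05            # k tests, first k0 nulls true (hypothetical)
# p-values: Uniform(0,1) under true nulls; Beta(0.1, 1) is an assumed,
# stochastically small distribution for p-values under false nulls.
p = np.concatenate([rng.uniform(size=k0), rng.beta(0.1, 1.0, size=k - k0)])

reject = p <= E                       # per comparison testing at level E
null_true = np.arange(k) < k0

C = np.sum(reject & null_true)        # false rejections
D = np.sum(reject & ~null_true)       # correct rejections
F = C / (C + D) if (C + D) > 0 else 0.0
print(C, D, round(F, 3))              # realized false discovery fraction
```

Repeating this many times and averaging F estimates E[F], the quantity the FDR criterion bounds.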

The strong familywise error rate also allows for the possibility that some of the H0i are false, but unlike the FDR it cuts you no slack for making correct rejections. SFER control is

P[reject any H0i | H0i true] ≤ E

–  –  –

Compare this carefully with the experimentwise error rate.

If we are forming multiple confidence intervals instead of just testing, then simultaneous confidence intervals satisfy

P[ every interval covers its true parameter value simultaneously ] ≥ 1 − E

The coverage rate of individual intervals within a simultaneous confidence interval procedure will typically be larger than 1 − E.

(In effect, SFER only requires simultaneous confidence intervals for null values, so this requires more than SFER.)

I have described the error rates from weakest (per comparison) to strongest (simultaneous CIs). If a procedure controls one rate, it will also control the weaker rates.

If a procedure controls an error rate at E, it controls the weaker error rates at (something usually less than) E.

The stronger the type I error rate, the harder it is to see differences that are really there.

As you control stronger and stronger type I error rates, you make more and more type II errors.

Review:

Per comparison hardly cares how many incorrect rejections in total.

Per experiment doesn’t want you to make an incorrect rejection, but if you make one correct rejection, then it doesn’t care how many incorrect ones you make.

FDR gives you some slack; for example, for every 19 correct rejections it gives you a pass on one incorrect rejection.





SFER doesn’t care how many correct rejections you make, it still doesn’t want you to make an incorrect rejection.

Simultaneous confidence intervals not only push you to get the nulls right and the non-nulls right; you also have to be able to say where all the parameter values are.

Suppose that we have done a genomic assay on 30 women, 15 with breast cancer and 15 without. We have gene expression data on 5,000 genes.

If we just had three genes in mind and didn’t care about the others, we might use a per comparison error rate.

If we were primarily interested in whether there is some genetic influence, but want to cast a wide net for potential genetic markers if there is a genetic component, then we might use an experimentwise method.

If we don’t want to be bombarded with a lot of genes incorrectly identified as active but can work with a limited percentage of false positives, then FDR would do the trick.

If we want to have a controlled probability of making any false statement that a gene is involved in breast cancer, then we control the SFER.

If we want to be able to estimate expression on all of the genes with simultaneous coverage, then we need a simultaneous confidence interval method.

Search your soul to find the weakest type I error rate that is compatible with the kind of inference you wish to make. Then choose a procedure that controls that error rate.

It’s a Goldilocks problem where you need to balance the types of errors.

There are many different procedures, particularly pairwise comparison procedures, and people argue for their favorites. My philosophy is to argue about the type I error rate to be controlled, and then choose the corresponding procedure.

Scheffé

–  –  –

The Scheffé procedure will control the strong familywise error rate for arbitrarily many contrasts, including contrasts suggested by the data.

To test a contrast H0: Σi ci µi = 0, compute

F = (Σi ci ȳi•)² / [ (g − 1) MSE Σi ci²/ni ]

and compute the p-value from an F distribution with g−1 and N−g df. (This “F” is the square of the t statistic for the contrast, divided by g−1.) For a confidence interval use

Σi ci ȳi• ± sqrt( (g − 1) FE(g−1, N−g) ) × sqrt( MSE Σi ci²/ni )

For example, if g = 5, N − g = 20, and E = .05, then the usual t-based multiplier for the interval would be 2.08, but the Scheffé-based multiplier is 3.386 (equivalent to a t with E = .0029).
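A quick check of these multipliers (a minimal sketch using scipy; g = 5, N − g = 20, and E = .05 are the numbers in the example above):

```python
from scipy.stats import f, t

g, df_error, E = 5, 20, 0.05

# Ordinary t multiplier for a single pre-planned contrast
print(round(t.ppf(1 - E / 2, df_error), 3))                         # 2.086

# Scheffé multiplier: sqrt((g - 1) * F_{E; g-1, N-g})
print(round(((g - 1) * f.ppf(1 - E, g - 1, df_error)) ** 0.5, 3))   # 3.386
```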

Bonferroni Our second general procedure is Bonferroni. Bonferroni works for K pre-planned tests, so it does not work for data snooping.

The tests can be of any type, of mixed type, independent or dependent, they just have to be tests.

Bonferroni says divide your overall error E into K parts: E1, E2, ..., EK with Σi Ei = E (usually Ei = E/K). Run test i of H0i at the Ei error level. This will control the strong familywise error rate.

If you are doing confidence intervals, compute the ith interval with coverage 1 − Ei.

Then you will have simultaneous confidence intervals with coverage 1 − E.

Another way to think of this is to do your tests and multiply the p-values by K. If any of them still look small, then reject.
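A minimal sketch of that p-value adjustment (the example p-values are made up):

```python
import numpy as np

def bonferroni_adjust(pvals):
    """Multiply each p-value by the number of tests K, capping at 1.
    Reject H0i at overall level E when the adjusted p-value is <= E."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * len(p), 1.0)

# Four pre-planned tests, overall E = 0.05: only the first is rejected.
print(bonferroni_adjust([0.004, 0.020, 0.300, 0.800]))
# [0.016 0.08  1.    1.  ]
```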

The advantage of Bonferroni is that it is dead easy and widely applicable.

The disadvantage of Bonferroni is that in many special cases there are better procedures that control the same error rate.

Better in this case means fewer type II errors or shorter confidence intervals, all while still controlling the error of interest.

Fiber percent example.

Studentized range Before moving on, we need a new distribution called the Studentized range. Suppose H0 : µ1 = µ2 = · · · = µg (the single mean model) is true. Look at the distribution of

( maxi ȳi• − mini ȳi• ) / sqrt(MSE/n)

(with n observations in each group). Under H0 this follows the Studentized range distribution with parameters g (number of means) and N − g (error degrees of freedom).

It is possible to replace the F test comparing the separate means model to the single mean model with a test based on the Studentized range. They usually, but not always, agree.

Pairwise comparisons Pairwise comparisons are simple comparisons of the mean of one treatment group to

the mean of another treatment group:

–  –  –

These comparisons are an obvious thing to do, and there are lots of procedures out there to do them. We will work on them according to the error rate that they control.

Introduce new labels on the sample means so that ȳ(1)• is the smallest and ȳ(g)• is the largest.

From ȳ(1)• to ȳ(g)• is a stretch of g means.

From ȳ(2)• to ȳ(g)• is a stretch of g−1 means.

From ȳ(2)• to ȳ(4)• is a stretch of 3 means.

Step-down methods look at pairwise comparisons starting with the most extreme pair and working in. When you get to a pair whose equality of means cannot be rejected, then you do not reject equality for every pair of means included in the stretch.

Step-down methods can only declare a stretch of means significantly different (i.e., the ends are different) if the stretch exceeds its critical minimum and every stretch containing the stretch also exceeds its critical minimum.

So failure to reject the null that the treatments corresponding to ȳ(2)• and ȳ(4)• have equal means implies that we must fail to reject the comparisons between (2) and (3) as well as (3) and (4).

The step-down stopping rule is only needed if the critical minimum difference for rejecting the null gets smaller as the stretches get shorter. If they all stay the same, then failure to reject the endpoints of a stretch of means implies that you will not reject any stretch within.

A couple of the forthcoming methods are real, genuine step-down methods (SNK and REGWR). A couple have constant sized critical minima (LSD and HSD). However, we will talk about them all as step-down because we can frame them together that way.
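As a sketch of that common framing (hypothetical code; crit(k) stands for whatever critical minimum a particular method assigns to a stretch of k means):

```python
def step_down_significant(ordered_means, crit):
    """Decide which stretches (i, j) of the ordered means are declared
    significantly different: a stretch counts only if it exceeds its own
    critical minimum crit(k) AND every longer stretch containing it does too."""
    g = len(ordered_means)
    sig = {}
    for k in range(g, 1, -1):                      # longest stretches first
        for i in range(g - k + 1):
            j = i + k - 1
            exceeds = ordered_means[j] - ordered_means[i] > crit(k)
            containing_ok = all(sig[(i2, j2)]
                                for i2 in range(i + 1)
                                for j2 in range(j, g)
                                if j2 - i2 > j - i)
            sig[(i, j)] = exceeds and containing_ok
    return sig
```

Indices here are 0-based, so ȳ(1)•, ..., ȳ(g)• in the text correspond to ordered_means[0], ..., ordered_means[g−1].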

Consider the difference ȳ(j)• − ȳ(i)• with j > i. This is a stretch of j − i + 1 means. (Let j − i + 1 = k, i.e., k is the stretch length.) The critical value, often called the “significant difference,” for a comparison is

qEk(k, N − g) × sqrt( (MSE/2) (1/n(i) + 1/n(j)) )

where qEk(k, N − g) is the upper Ek percent point of the Studentized range for a stretch of k means.

The mysterious Ek in REGWR is Ek = E for k = g, g−1 and Ek = kE/g for k < g − 1.

In general, N − g is replaced by the degrees of freedom of the MSE.

LSD and PLSD are usually formulated using t distributions (i.e., use t and get rid of the √2).

LSD is least significant difference. It protects the per comparison error rate.

PLSD is Protected LSD. Do the ANOVA F test first. If it rejects, then proceed with LSD. If it fails to reject, then say no differences. The F-test protects experimentwise error rate.

SNK is Student-Newman-Keuls. I am pretty sure that it protects FDR, but I have failed to prove it.

REGWR is Ryan-Einot-Gabriel-Welsch range test. It protects SFER.

HSD is the Honest significant difference (also called the Studentized range procedure or the Tukey W). It produces simultaneous confidence intervals (as difference plus or minus significant difference).
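For HSD, the Studentized range quantile is available in scipy (scipy.stats.studentized_range, SciPy 1.7+). A minimal sketch of the significant difference above, with Ek = E and the full stretch length k = g for every comparison:

```python
import numpy as np
from scipy.stats import studentized_range

def hsd_significant_difference(E, g, df_error, mse, n_i, n_j):
    """Tukey HSD allowance for comparing two treatment means:
    q_E(g, df_error) * sqrt((MSE/2) * (1/n_i + 1/n_j))."""
    q = studentized_range.ppf(1 - E, g, df_error)
    return q * np.sqrt((mse / 2.0) * (1.0 / n_i + 1.0 / n_j))

# Example: g = 5 groups, 20 error df, MSE = 1, 5 observations per group.
print(round(hsd_significant_difference(0.05, 5, 20, 1.0, 5, 5), 3))
```

Two means are declared different when they differ by more than this allowance; the same allowance, added and subtracted, gives the simultaneous confidence intervals.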

Visualization Write treatment labels so means are in increasing order, then draw a line under treatments that are not significantly different.

C A B

–  –  –

There are many, many other procedures, but beware.

There is a procedure called Duncan’s New Multiple Range test. Some people like it because it finds lots of differences.

It finds lots of differences because it does not control any of our type I error rates including, believe it or not, the per comparison error rate.

I keep away.

Cheese inoculants example.

Compare to control Sometimes we have a control treatment, and all we really want to do is compare each treatment to control, but not the non-control treatments to each other.2 Should you want to do this, there is a procedure called Dunnett’s Significant Difference that will give you simultaneous confidence intervals or control SFER. Comparing treatment g to the other treatments, use

ȳi• − ȳg• ± dE(g − 1, ν) sqrt( MSE (1/ni + 1/ng) )

You get dE (g − 1, ν) from the two-sided Dunnett’s table.

2 Actually, I almost always want to compare the new treatments with each other as well, so I don’t wind up doing this very often.
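If a Dunnett table is not handy, the two-sided critical value can be approximated by simulation. A minimal Monte Carlo sketch under equal sample sizes (the function name and the equal-n assumption are mine, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def dunnett_crit_mc(g, n, nu, E=0.05, reps=200_000):
    """Monte Carlo estimate of the two-sided Dunnett critical value
    d_E(g - 1, nu): g groups with group g the control, n observations
    per group, nu error degrees of freedom."""
    # Under H0 (sigma = 1): group means are independent N(0, 1/n).
    means = rng.normal(0.0, np.sqrt(1.0 / n), size=(reps, g))
    # MSE ~ chi^2_nu / nu, independent of the means.
    mse = rng.chisquare(nu, size=reps) / nu
    se = np.sqrt(mse * (2.0 / n))
    # |t| statistics comparing each new treatment to the control.
    tmax = np.max(np.abs(means[:, : g - 1] - means[:, [g - 1]]) / se[:, None], axis=1)
    # d_E is the upper E quantile of the maximum |t|.
    return np.quantile(tmax, 1 - E)
```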

For a one-sided test, say with new treatments yielding higher than control as the alternative, declare treatment i better than control when

ȳi• − ȳg• > d′E(g − 1, ν) sqrt( MSE (1/ni + 1/ng) )

where d′E(g − 1, ν) comes from the one-sided Dunnett table.

If you are really wedded to just comparing new treatments to control, design with ng/ni ≈ √(g − 1); this allocation gives the best overall results. For example, with four new treatments (g − 1 = 4) the control should get about twice the replication of each new treatment.

Compare to best Here is something that I think is very useful. We can use Dunnett to identify the group of treatments that distinguishes itself as best.

Best subset (assuming bigger is better) is all i such that for any j ≠ i:

ȳi• > ȳj• − d′E(g − 1, ν) sqrt( MSE (1/ni + 1/nj) )

Best subset is all treatments not significantly less than the highest mean using a one-sided Dunnett allowance.

The probability that the truly best treatment is in this group is at least 1 − E.
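A minimal sketch of that selection rule (hypothetical numbers; the allowance d′E(g − 1, ν) sqrt(MSE(1/ni + 1/nj)) is passed in as a single precomputed value, which assumes equal sample sizes):

```python
import numpy as np

def best_subset(means, allowance):
    """Indices of treatments not significantly below the largest sample mean,
    using a one-sided Dunnett allowance supplied by the caller."""
    means = np.asarray(means, dtype=float)
    return np.flatnonzero(means > means.max() - allowance)

# Hypothetical group means and allowance:
print(best_subset([10.2, 11.5, 13.0, 12.4], 2.1))   # -> [1 2 3]
```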


