
Data Mining: Avoiding False Discoveries

Lecture Notes for Chapter 10

Introduction to Data Mining, 2nd Edition

by

Tan, Steinbach, Karpatne, Kumar

Outline

Statistical Background

Significance Testing

Hypothesis Testing

Multiple Hypothesis Testing

Motivation

An algorithm applied to a set of data will usually produce some result(s)

There have been claims that the results reported in more than 50% of published papers are false. (Ioannidis)

Results may simply be due to random variation

Any particular data set is a finite sample from a larger population

Often significant variation among instances in a data set or heterogeneity in the population

Unusual events or coincidences do happen, especially when looking at lots of events

For this and other reasons, results may not replicate, i.e., generalize to other samples of data

Results may not have domain significance

Finding a difference that makes no difference

Data scientists need to help ensure that the results of data analysis are not false discoveries, i.e., results that are not meaningful or reproducible

Statistical Testing

Statistical approaches are used to help avoid many of these problems

Statistics has well-developed procedures for evaluating the results of data analysis

Significance testing

Hypothesis testing

Domain knowledge, careful data collection and preprocessing, and proper methodology are also important

Bias and poor quality data

Fishing for good results

Reporting how analysis was done

Ultimate verification lies in the real world

Probability and Distributions

Variables are characterized by a set of possible values

Called the domain of the variable

Examples:

True or False for binary variables

Subset of integers for variables that are counts, such as number of students in a class

Range of real numbers for variables such as weight or height

A probability distribution function describes the relative frequency with which the values are observed

Call a variable with a distribution a random variable

Probability and Distributions ..

For a discrete variable we define a probability distribution by the relative frequency with which each value occurs

Let X be a variable that records the outcome of flipping a fair coin: heads (1) or tails (0)

P(X =1) = P(X =0) = 0.5 (P stands for “probability”)

If f is the distribution of X, then f(0) = f(1) = 0.5

A probability distribution function has the following properties

Minimum value 0, maximum value 1

Sums to 1, i.e., ∑ₖ f(k) = 1

Binomial Distribution

Number of heads in a sequence of n coin flips

Let R be the number of heads

R has a binomial distribution with parameters n and p:

P(R = k) = C(n, k) p^k (1 − p)^(n − k)

What is P(R = k), given n = 10 and p = 0.5?

k P(R= k)
0 0.001
1 0.01
2 0.044
3 0.117
4 0.205
5 0.246
6 0.205
7 0.117
8 0.044
9 0.01
10 0.001
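The table can be reproduced with a minimal Python sketch (scipy.stats is assumed here; it is not part of the original slides):

```python
# Minimal sketch: binomial probabilities for n = 10 flips of a fair coin.
from scipy.stats import binom

n, p = 10, 0.5
for k in range(n + 1):
    # P(R = k) = C(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(binom.pmf(k, n, p), 3))
# k = 5 gives 0.246; k = 0 and k = 10 each give 0.001
```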

Probability and Distributions ..

For a continuous variable we define a probability distribution by using a density function

Probability of any specific value is 0

Only intervals of values have non-zero probability

Examples: P (X > 3), P(X < -3), P (-1 < X < 1)

If f is the density function of X, then P(X > 3) = ∫ f(x) dx, integrated from 3 to ∞

Probability density has the following properties

Minimum value 0

Integrates to 1, i.e., ∫ f(x) dx = 1, where the integral is taken over the domain of X

Gaussian Distribution

The Gaussian (normal) distribution is the most commonly used

f(x) = (1 / (√(2π) σ)) e^(−(x − µ)² / (2σ²))

where µ and σ are the mean and standard deviation of the distribution

The standard normal distribution has µ = 0 and σ = 1, i.e., N(0, 1)

http://www.itl.nist.gov/div898/handbook/eda/section3/eda3661.htm

http://www.itl.nist.gov/div898/handbook/index.htm
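The density formula above translates directly into code; a minimal sketch (scipy is used only to cross-check the hand-rolled version):

```python
# Minimal sketch: evaluate the Gaussian density f(x) defined above.
import math
from scipy.stats import norm

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = 1 / (sqrt(2*pi) * sigma) * exp(-(x - mu)**2 / (2 * sigma**2))
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

print(gaussian_pdf(0.0))               # 0.3989..., the peak of N(0, 1)
print(norm.pdf(0.0, loc=0, scale=1))   # same value via scipy
```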

Statistical Testing

Make inferences (decisions) about the validity of a result

For statistical inference (testing), we need two things:

A statement that we want to disprove

Called the null hypothesis (H0)

The null hypothesis is typically a statement that the result is merely due to random variation

It is the opposite of what we would like to show

A random variable, R, called a test statistic, for which we can determine a distribution assuming H0 is true

The distribution of R under H0 is called the null distribution

The value of R is obtained from the result and is typically numeric

Examples of Null Hypotheses

A coin or a die is a fair coin or die.

The difference between the means of two samples is 0

The purchase of a particular item in a store is unrelated to the purchase of a second item, e.g., the purchases of bread and milk are unconnected

The accuracy of a classifier is no better than random

Significance Testing

Significance testing was devised by the statistician Fisher

Only interested in whether the null hypothesis is true

Significance testing was intended only for exploratory analyses of the null hypothesis in the preliminary stages of a study

For example, to refine the null hypothesis or modify future experiments

For many years, significance testing has been a key approach for justifying the validity of scientific results

Introduced the concept of p-value, which is widely used and misused

How Significance Testing Works

Analyze the data to obtain a result

For example, data could be from flipping a coin 10 times to test its fairness

The result is expressed as a value of the test statistic, R

For example, let R be the number of heads in 10 flips

Compute the probability of seeing the current value of R, or something more extreme, assuming H0 is true

This probability is known as the p-value of the test statistic

How Significance Testing Works …

If the p-value is sufficiently small, we say that the result is statistically significant

We say we reject the null hypothesis, H0

A threshold on the p-value is called the significance level, α

Often the significance level is 0.01 or 0.05

If the p-value is not sufficiently small, we say that we fail to reject the null hypothesis

Sometimes we say that we accept the null hypothesis, but a high p-value does not necessarily imply the null hypothesis is true


Example: Testing a coin for fairness

H0: P(X =1) = P(X =0) = 0.5

Define the test statistic, S, to be the number of heads in 10 flips

Set the significance level to be 0.05

The number of heads has a binomial distribution

For which values of S would you reject H0?

k P(S = k)
0 0.001
1 0.01
2 0.044
3 0.117
4 0.205
5 0.246
6 0.205
7 0.117
8 0.044
9 0.01
10 0.001
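One way to answer this question is to compute the two-sided p-value for each possible outcome; a minimal sketch (scipy assumed):

```python
# Minimal sketch: two-sided p-values for k heads in 10 flips of a fair coin.
from scipy.stats import binom

n, alpha = 10, 0.05
for k in range(6, n + 1):
    # two-sided: P(S >= k) + P(S <= n - k), using the symmetry of p = 0.5
    p_value = binom.sf(k - 1, n, 0.5) + binom.cdf(n - k, n, 0.5)
    print(k, round(p_value, 3), "reject H0" if p_value < alpha else "fail to reject")
# Only k = 9 or 10 (and, by symmetry, k = 0 or 1) give p < 0.05
```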

One-sided and Two-sided Tests

“More extreme” can be interpreted in different ways

For example, an observed value of the test statistic, R, can be considered extreme if

it is greater than or equal to a certain value, b,

smaller than or equal to a certain value, a, or

outside a specified interval, [a, b].

The first two cases are “one-sided tests” (right-tailed and left-tailed, respectively),

The last case results in a “two-sided test.”
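For a normally distributed test statistic, the corresponding critical values can be computed with the inverse CDF; a small sketch (a standard normal statistic under H0 is assumed):

```python
# Minimal sketch: 5% critical values for one-sided and two-sided tests,
# assuming the test statistic R is standard normal under H0.
from scipy.stats import norm

alpha = 0.05
print(norm.ppf(1 - alpha))      #  1.645: right-tailed, reject if R >= 1.645
print(norm.ppf(alpha))          # -1.645: left-tailed, reject if R <= -1.645
print(norm.ppf(1 - alpha / 2))  #  1.960: two-sided, reject if |R| >= 1.960
```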

One-sided and Two-sided Tests …

Example of one-tailed and two-tailed tests for a test statistic that is normally distributed, at a roughly 5% significance level.


Neyman-Pearson Hypothesis Testing

Devised by statisticians Neyman and Pearson in response to perceived shortcomings in significance testing

Explicitly specifies an alternative hypothesis, H1

Significance testing cannot quantify how an observed result supports H1

Define an alternative distribution which is the distribution of the test statistic if H1 is true

We define a critical region for the test statistic

If the value of R falls in the critical region, we reject H0

We may or may not accept H1 if H0 is rejected

The significance level, α, is the probability of the critical region under H0

Hypothesis Testing …

Type I Error (α): Error of incorrectly rejecting the null hypothesis for a result.

It is equal to the probability of the critical region under H0, i.e., it is the same as the significance level, α.

Formally, α = P(R ∈ Critical Region | H0)

Type II Error (β): Error of falsely calling a result not significant when the alternative hypothesis is true.

It is equal to the probability of observing test statistic values outside the critical region under H1

Formally, β = P(R ∉ Critical Region | H1).

Hypothesis Testing …

Power: the probability of the critical region under H1, i.e., power = 1 − β.

Power indicates how effective a test will be at correctly rejecting the null hypothesis.

Low power means that many results that actually show the desired pattern or phenomenon will not be considered significant and thus will be missed.

Thus, if the power of a test is low, then it may not be appropriate to ignore results that fall outside the critical region.

Example: Classifying Medical Results

The value of a blood test is used as the test statistic, R, to identify whether a patient has a particular disease or not.

For healthy patients (null hypothesis), R has the distribution N(40, 5); for patients with the disease (alternative hypothesis), R has the distribution N(60, 5)

With the critical region R ≥ 50:

α = P(R ≥ 50 | µ = 40, σ = 5) = 0.023

β = P(R < 50 | µ = 60, σ = 5) = 0.023

Power = 1 − β = 0.977

See figures on the next page
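These numbers can be verified directly; a minimal sketch (the R ≥ 50 critical region is inferred from the stated values rather than given explicitly):

```python
# Minimal sketch: alpha, beta, and power for the blood-test example.
from scipy.stats import norm

threshold = 50                                # assumed critical region: R >= 50
alpha = norm.sf(threshold, loc=40, scale=5)   # P(R >= 50 | healthy)  = 0.023
beta  = norm.cdf(threshold, loc=60, scale=5)  # P(R <  50 | diseased) = 0.023
power = 1 - beta                              # 0.977
print(round(alpha, 3), round(beta, 3), round(power, 3))
```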

Hypothesis Testing: Effect Size

Many times we can find a result that is statistically significant but not significant from a domain point of view

A drug that lowers blood pressure by one percent

Effect size measures the magnitude of the effect or characteristic being evaluated, and is often the magnitude of the test statistic.

Brings in domain considerations

The desired effect size impacts the choice of the critical region, and thus the significance level and power of the test

Effect Size: Example Problem

Consider several new treatments for a rare disease, each with a particular probability of success. If we only have a sample size of 10 patients, what effect size will be needed to clearly distinguish a new treatment from the baseline, which is 60% effective?

P(R successes out of 10) for various success probabilities, p(X = 1):

R     p = 0.60   p = 0.70   p = 0.80   p = 0.90
0     0.0001     0.0000     0.0000     0.0000
1     0.0016     0.0001     0.0000     0.0000
2     0.0106     0.0014     0.0001     0.0000
3     0.0425     0.0090     0.0008     0.0000
4     0.1115     0.0368     0.0055     0.0001
5     0.2007     0.1029     0.0264     0.0015
6     0.2508     0.2001     0.0881     0.0112
7     0.2150     0.2668     0.2013     0.0574
8     0.1209     0.2335     0.3020     0.1937
9     0.0403     0.1211     0.2684     0.3874
10    0.0060     0.0282     0.1074     0.3487
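One way to read the table: reject the baseline (p = 0.6) when R ≥ 9, which gives α ≈ 0.046, and then ask how often each candidate treatment would clear that bar. A minimal sketch (the R ≥ 9 critical region is one illustrative choice, not from the slides):

```python
# Minimal sketch: power of the test "reject H0: p = 0.6 if R >= 9 out of 10".
from scipy.stats import binom

n, critical = 10, 9
print(round(binom.sf(critical - 1, n, 0.60), 3))   # alpha = 0.046
for p in [0.70, 0.80, 0.90]:
    power = binom.sf(critical - 1, n, p)           # P(R >= 9 | p)
    print(p, round(power, 3))
# p = 0.7 -> 0.149, p = 0.8 -> 0.376, p = 0.9 -> 0.736:
# only a large effect size (p near 0.9) is reliably distinguishable with n = 10
```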

Multiple Hypothesis Testing

Arises when multiple results are produced and multiple statistical tests are performed

The tests studied so far are for assessing the evidence for the null (and perhaps alternative) hypothesis for a single result

A regular statistical test does not suffice

For example, getting 10 heads in a row for a fair coin is unlikely in a single experiment:

probability = (1/2)^10 ≈ 0.001

But, for 10,000 such experiments we would expect 10 such occurrences
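The arithmetic behind this claim, as a short sketch:

```python
# Minimal sketch: 10 heads in a row is rare once, but expected in 10,000 tries.
p_single = 0.5 ** 10                       # ~0.001 for one experiment
m = 10_000
print(m * p_single)                        # ~9.8 expected occurrences
print(1 - (1 - p_single) ** m)             # ~0.9999: almost certain to happen
```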

Summarizing the Results of Multiple Tests

The following confusion table defines how results of multiple tests are summarized

We assume the results fall into two classes, + and –, which follow the alternative and null hypotheses, respectively.

The focus is typically on the number of false positives (FP), i.e., the results that belong to the null distribution (– class) but are declared significant (+ class).

Family-wise Error Rate

By family, we mean a collection of related tests

The family-wise error rate (FWER) is the probability of observing even a single false positive (type I error) in an entire set of m results.

FWER = P(FP > 0).

Suppose your significance level is 0.05 for a single test

Probability of no error for one test is 1 − 0.05 = 0.95

Probability of no error for m independent tests is 0.95^m

FWER = P(FP > 0) = 1 − 0.95^m

If m = 10, FWER = 1 − 0.95^10 = 0.40
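A sketch of the calculation (independent tests assumed):

```python
# Minimal sketch: FWER = 1 - (1 - alpha)^m for m independent tests.
alpha = 0.05
for m in [1, 10, 50, 100]:
    print(m, round(1 - (1 - alpha) ** m, 2))
# m = 10 gives 0.40; by m = 100 the FWER is essentially 1
```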

Bonferroni Procedure

The goal of FWER control is to ensure that FWER < α, where α is often 0.05

Bonferroni Procedure:

m results are to be tested

Require FWER < α

Set the significance level for every test to be α/m

If m = 10 and α = 0.05, then the significance level for each test is 0.05/10 = 0.005
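A sketch showing that the corrected level does keep the FWER below α (independent tests assumed):

```python
# Minimal sketch: Bonferroni-corrected tests keep FWER <= alpha.
alpha, m = 0.05, 10
per_test = alpha / m                      # 0.005 per test
print(round(1 - (1 - per_test) ** m, 4))  # 0.0489 < 0.05
```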

Example: Bonferroni versus Naïve approach

Naïve approach is to evaluate statistical significance for each result without adjusting the significance level.

The figure below shows the family-wise error rate (FWER) curves for the naïve approach and the Bonferroni procedure as a function of the number of results, m.

False Discovery Rate

FWER-controlling procedures seek a low probability of obtaining any false positives

Not the appropriate tool when the goal is to allow some false positives in order to get more true positives

The false discovery rate (FDR) measures the rate of false positives, which are also called false discoveries

Q = FP / Ppred if Ppred > 0 (and Q = 0 if Ppred = 0),

where Ppred is the number of predicted positives

If we know FP, the number of actual false positives, then FDR = FP / Ppred

Typically we don’t know FP in a testing situation

Thus, FDR = E(Q), the expected value of Q

Benjamini-Hochberg Procedure

An algorithm to control the false discovery rate (FDR)

This procedure first orders the p-values from smallest to largest

Then it uses a separate significance level for each test

FDR Example: Picking a stockbroker

Suppose we have a test for determining whether a stockbroker makes profitable stock picks. This test, applied to an individual stockbroker, has a significance level α = 0.05. We use the same value for our desired false discovery rate.

Normally, we would set the desired FDR higher, e.g., 10% or 20%

The following figure compares the naïve approach, Bonferroni, and the BH FDR procedure with respect to the false discovery rate for various numbers of tests, m. One third of the samples were drawn from the alternative distribution.

FDR Example: Picking a stockbroker …

The following figure compares the naïve approach, Bonferroni, and the BH FDR procedure with respect to the power for various numbers of tests, m. One third of the samples were drawn from the alternative distribution.

Comparison of FWER and FDR

FWER is appropriate when it is important to avoid any error.

But an FWER procedure such as Bonferroni makes many type II errors and thus has poor power

An FWER approach has a very low false discovery rate

FDR is appropriate when it is important to identify positive results, i.e., those belonging to the alternative distribution.

By construction, the false discovery rate is controlled at the desired level by an FDR procedure such as the BH approach

An FDR approach also has good power

[Figure: Distribution of the test statistic R for the null hypothesis (leftmost density curve) and the alternative hypothesis (rightmost density curve), R from 20 to 80. The shaded region in the right subfigure is α.]

[Figure: The same pair of density curves. The shaded region in the left subfigure is β; the shaded region in the right subfigure is the power.]

Confusion table for summarizing multiple hypothesis testing results:

                        Declared significant     Declared not significant    Total
                        (+ prediction)           (– prediction)
H1 true (actual +)      True Positive (TP)       False Negative (FN),        Positives (m1)
                                                 type II error
H0 true (actual –)      False Positive (FP),     True Negative (TN)          Negatives (m0)
                        type I error
Total                   Positive Predictions     Negative Predictions        m
                        (Ppred)                  (Npred)

[Figure: Family-wise error rate (FWER) versus the number of results, m (0 to 100), for the naïve approach and the Bonferroni procedure. The naïve curve climbs toward 1 as m grows; the Bonferroni curve stays at or below 0.05.]

Benjamini-Hochberg (BH) FDR algorithm.

1: Compute p-values for the m results.

2: Order the p-values from smallest to largest (p1 to pm ).

3: Compute the significance level for pi as αi = (i/m) × α.

4: Let k be the largest index such that pk ≤ αk .

5: Reject H0 for all results corresponding to the first k p-values, pi , 1 ≤ i ≤ k.
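A minimal Python sketch of the procedure (numpy assumed; the example p-values are hypothetical):

```python
# Minimal sketch of the Benjamini-Hochberg procedure.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array, True where H0 is rejected."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)                            # step 2: sort the p-values
    thresholds = np.arange(1, m + 1) / m * alpha     # step 3: alpha_i = (i/m) * alpha
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])             # step 4: largest k with p_k <= alpha_k
        reject[order[:k + 1]] = True                 # step 5: reject the first k hypotheses
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042]))
# -> all five rejected: 0.039 exceeds its own threshold (0.03), but step 4
#    keeps it because the fifth p-value (0.042) is below its threshold (0.05)
```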

[Figure: False Discovery Rate as a function of m, the number of stockbrokers (0 to 100), comparing the naïve approach, Bonferroni, and the BH procedure. The target rate of 0.05 is marked.]

[Figure: Expected power (true positive rate) as a function of m, the number of stockbrokers (0 to 100), comparing the naïve approach, Bonferroni, and the BH procedure.]