User:Elleyang/sandbox

Figure 1. Box plot of data from the Michelson–Morley experiment displaying four outliers in the middle column, as well as one outlier in the first column.

In statistics, an outlier is an observation point that is distant from other observations.[1][2] An outlier may be due to variability in the measurement, or it may indicate experimental error; in the latter case, it is sometimes excluded from the data set.[3] An outlier can cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they often indicate either a measurement error or a heavy-tailed distribution in the population. In the former case, one could discard them or use statistics that are robust to outliers; in the latter case, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. Outliers are also frequently caused by a mixture of two distributions, such as two distinct sub-populations, or "correct trials" mixed with "measurement errors"; such data can be modeled by a mixture model.

In most large samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error, flaws in the theory that generated an assumed family of probability distributions, or some observations that are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where certain theories might not be valid. However, in large samples, a small number of outliers is to be expected and not due to any anomalous condition.

Outliers, being the most extreme observations, may include the sample maximum, sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 °C and 25 °C, but the mean temperature will be between 35.5 °C and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean. Naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
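For illustration, a minimal Python sketch of this example; the individual readings are hypothetical values chosen within the stated 20–25 °C range, with one 175 °C oven reading:

```python
import statistics

# nine ordinary objects plus one oven (hypothetical readings)
temperatures = [20.1, 21.0, 21.5, 22.3, 22.9, 23.4, 24.0, 24.6, 25.0, 175.0]

print(statistics.mean(temperatures))    # about 38 °C, pulled upward by the single 175 °C reading
print(statistics.median(temperatures))  # about 23 °C, close to a typical object in the room
```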

Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not.[4] However, the mean is generally a more precise estimator.[5]

Occurrence and causes[edit]

In the case of normally distributed data, the three sigma rule implies that roughly 1 in 22 observations will differ from the mean by twice the standard deviation or more, and 1 in 370 will deviate by three times the standard deviation.[6] In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (see Poisson distribution), and need not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability p) of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can generally be well-approximated by the Poisson distribution with λ = pn. Thus if one takes a normal distribution with cutoff 3 standard deviations from the mean, p is approximately 0.3%, and thus for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.
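A minimal Python sketch of this approximation, using the two-sided 3-sigma probability of roughly 0.0027 (the text rounds λ = np up to 3):

```python
from math import exp, factorial

n = 1000        # number of observations
p = 0.0027      # two-sided probability of exceeding 3 standard deviations under normality
lam = n * p     # Poisson rate, lambda ~= 2.7

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# probability of observing at most five 3-sigma deviations in 1000 samples
print(sum(poisson_pmf(k, lam) for k in range(6)))
```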

Causes[edit]

Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect).

Univariate Detection[edit]

In univariate models, a response variable $y$ is fit to a single explanatory variable $x$.

There is no rigid mathematical definition of what constitutes an outlier, so determining whether an observation is an outlier is ultimately subjective. There are various methods of outlier detection, through graphical or model based methods.[7][8][9][10]

Graphical-based methods commonly include box plots.

Model-based methods assume that the data are from a normal distribution and identify observations which are deemed "unlikely" based on a measure of mean and standard deviation:

Peirce's criterion[edit]

It is proposed to determine in a series of observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) from A Manual of Astronomy 2:558 by Chauvenet.)

Tukey's fences[edit]

Other methods flag observations based on measures such as the interquartile range. For example, if $Q_1$ and $Q_3$ are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range:

$[Q_1 - k(Q_3 - Q_1),\; Q_3 + k(Q_3 - Q_1)]$

for some nonnegative constant $k$. John Tukey proposed this test, where $k = 1.5$ indicates an "outlier", and $k = 3$ indicates data that is "far out".[11]
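A minimal Python sketch of Tukey's fences on a hypothetical sample (the data values below are illustrative only):

```python
import numpy as np

def tukey_fences(data, k=1.5):
    """Flag observations outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lower or x > upper]

sample = [2.1, 2.4, 2.5, 2.7, 2.8, 3.0, 3.1, 9.5]
print(tukey_fences(sample, k=1.5))  # k = 1.5: "outliers"
print(tukey_fences(sample, k=3.0))  # k = 3: "far out" points
```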

In anomaly detection[edit]

In the data mining task of anomaly detection, other approaches are distance-based[12][13] and density-based such as Local Outlier Factor,[14] and most of them use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.[15]

Modified Thompson Tau test[edit]

The modified Thompson Tau test[citation needed] is a method used to determine if an outlier exists in a data set. The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus giving an objective method to determine if a data point is an outlier. Note: Although intuitively appealing, this method appears to be unpublished (it is not described in Thompson (1985)[16]) and one should use it with caution.

How it works: First, a data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula:

$\text{Rejection Region} = \frac{t_{\alpha/2}\,(n-1)}{\sqrt{n}\,\sqrt{n-2+t_{\alpha/2}^{2}}}$

where $t_{\alpha/2}$ is the critical value from the Student $t$ distribution with $n-2$ degrees of freedom, $n$ is the sample size, and $s$ is the sample standard deviation. To determine if a value is an outlier: calculate $\delta = |(X - \text{mean}(X))/s|$. If $\delta$ > Rejection Region, the data point is an outlier; if $\delta \le$ Rejection Region, the data point is not an outlier.

The modified Thompson Tau test is used to find one outlier at a time (largest value of δ is removed if it is an outlier). Meaning, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in a data set.
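A minimal Python sketch of this iterative procedure, with a hypothetical sample; it simply follows the steps described above and, as noted, the method itself should be used with caution:

```python
import numpy as np
from scipy import stats

def modified_thompson_tau(data, alpha=0.05):
    """Iteratively remove the single most deviant point while it exceeds
    the tau rejection region (sketch of the procedure described above)."""
    data = list(data)
    outliers = []
    while len(data) > 2:
        n = len(data)
        mean, s = np.mean(data), np.std(data, ddof=1)
        t = stats.t.ppf(1 - alpha / 2, n - 2)                        # critical value
        tau = t * (n - 1) / (np.sqrt(n) * np.sqrt(n - 2 + t ** 2))   # rejection region
        deltas = [abs(x - mean) / s for x in data]
        i = int(np.argmax(deltas))
        if deltas[i] > tau:
            outliers.append(data.pop(i))   # remove the point and retest with a new average
        else:
            break
    return outliers, data

print(modified_thompson_tau([9.8, 10.1, 10.3, 10.0, 9.9, 10.2, 25.0]))
```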

Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified ($1 - p(y \mid x)$, where $y$ is the assigned class label and $x$ represents the input attribute values for an instance in the training set $t$).[17] Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses $H$:

$IH(\langle x, y \rangle) = \sum_{h \in H} \left(1 - p(y \mid x, h)\right) p(h \mid t)$

Practically, this formulation is infeasible as $H$ is potentially infinite and calculating $p(h \mid t)$ is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subset $L \subset H$:

$IH_L(\langle x, y \rangle) = \frac{1}{|L|} \sum_{j=1}^{|L|} \left(1 - p(y \mid x, g_j(t, \alpha))\right)$

where $g_j(t, \alpha)$ is the hypothesis induced by learning algorithm $g_j$ trained on training set $t$ with hyperparameters $\alpha$. Instance hardness provides a continuous value for determining if an instance is an outlier instance.
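A rough Python sketch of this approximation, using scikit-learn: the three classifiers and the Iris dataset are illustrative choices standing in for the diverse subset $L$, not the specific algorithm set used in the cited paper:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
learners = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)]

# p(y | x, g_j(t, alpha)) estimated out-of-fold for each learner
probs = [cross_val_predict(g, X, y, cv=5, method="predict_proba") for g in learners]

# IH_L(<x, y>) = mean over learners of (1 - p(y | x, g_j))
instance_hardness = np.mean([1 - p[np.arange(len(y)), y] for p in probs], axis=0)
print(np.argsort(instance_hardness)[-5:])  # indices of the hardest, most outlier-like instances
```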

Multivariate Detection[edit]

In multivariate models, a response variable $y$ is fit to multiple explanatory variables $x_1, \ldots, x_k$.

While there is no rigid mathematical definition of what constitutes an outlier, an outlying point is marked by the unusualness of its $x$ values or by the unusualness of its $y$ value conditional on its $x$ values.

These two kinds of unusualness are quantified by leverage and discrepancy, respectively. In linear regression, leverage measures the unusualness of an observation's $x$ values by calculating the distance between its $x$s and the $x$s of the remaining observations. Discrepancy measures the unusualness of an observation's $y$ value conditional on its $x$ values by calculating the observation's residual.

Influence is a value that combines leverage and discrepancy to detect outliers by measuring the "influence" of an observation on the fit of a model. The heuristic formula distinguishing influence, leverage, and discrepancy is: influence = leverage * discrepancy.

<Figure 1>

Cook's Distance[edit]

Cook's distance is commonly used to estimate the influence of a data point while performing a least-squares regression analysis.

In 1977,[18] Cook proposed to measure the "distance" between the predicted least-squares estimate $\hat{y}$ and the predicted least-squares estimate $\hat{y}_{(i)}$ obtained when the $i$th subject is removed. His approach produced a measure independent of the scales of the explanatory variables that relies on the Mahalanobis distance, which corresponds to the number of standard deviations a point is away from the mean of its distribution.

Cook's distance is traditionally expressed as follows:

$D_i = \frac{(\hat{\beta} - \hat{\beta}_{(i)})^{T} X^{T} X (\hat{\beta} - \hat{\beta}_{(i)})}{(k+1)\, s^{2}}$

where $X$ is the design matrix of size $n \times (k+1)$, $n$ is the total number of observations, $k$ is the number of explanatory variables, and $s$ is the estimated residual standard error.

To see Cook's distance directly as a measure of influence, it can alternatively be expressed as:

$D_i = \frac{e_i^{2}}{(k+1)\, s^{2}} \cdot \frac{h_{ii}}{(1 - h_{ii})^{2}}$

or:

$D_i = \frac{r_i^{2}}{k+1} \cdot \frac{h_{ii}}{1 - h_{ii}}$

where $h_{ii}$ is the leverage, $k$ is the number of explanatory variables, and $r_i$ is the standardized residual. Cook's distance is indeed a measure of influence equal to the product of leverage and discrepancy, as $\frac{h_{ii}}{1 - h_{ii}}$ describes the leverage and $r_i^{2}$ describes the discrepancy.

Cook's distance will be large when either the discrepancy or the leverage is large. A data point may be an outlier if it has a large Cook's distance, especially when compared to the Cook's distance of other points in the data set.
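A minimal numpy sketch of this calculation on synthetic data with one injected outlier, using the leverage-and-residual form of the formula above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)
y[0] += 8.0                                       # inject a hypothetical outlier

X = np.column_stack([np.ones(n), x])              # design matrix, k = 1 explanatory variable
k = X.shape[1] - 1
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta                                  # residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T              # hat matrix
h = np.diag(H)                                    # leverages h_ii
s2 = e @ e / (n - k - 1)                          # estimated residual variance

# D_i = e_i^2 / ((k+1) s^2) * h_ii / (1 - h_ii)^2
cooks_d = e ** 2 / ((k + 1) * s2) * h / (1 - h) ** 2
print(np.argmax(cooks_d), cooks_d.max())          # the injected point should dominate
```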

<Figure 2>

Residuals for Diagnosing Outliers in Linear Regression Models[edit]

In linear regression, the observations are modeled according to:

$y = X\beta + \epsilon$, with $\epsilon \sim N(0, \sigma^{2} I)$

where $y$ is a vector of the observed response variables, $X$ is the $n \times (k+1)$ design matrix, $n$ is the total number of observations, $k$ is the number of explanatory variables, and $\epsilon$ is the error. The model makes the following assumptions:

  1. Errors are independent
  2. The expected value of errors is zero
  3. The variance is constant
  4. Errors are normally distributed

Looking at residuals is a common way of detecting outliers. The general formulation of the residual quantifies the unusualness of the response variable $y$ given the explanatory variables $x$.

Residual[edit]

The residual is the difference between the observed value of the response variable for the $i$th subject and the value predicted by the model for the $i$th subject.

The residual for the $i$th subject is denoted:

$e_i = y_i - \hat{y}_i$

where $y_i$ is the observed value of the response variable for the $i$th subject and $\hat{y}_i$ is the predicted value for the $i$th subject.

Under the linear regression model assumptions, the residuals are normally distributed with a distribution of $e \sim N\!\left(0,\ \sigma^{2}\left(I - X(X^{T}X)^{-1}X^{T}\right)\right)$, where $X$ is the design matrix for the linear regression. It is important to note that residuals are correlated and have different variances. Even if the errors under the assumptions of a general linear model have equal variances, the same is not typically true for residuals. Residuals have a variance of $\sigma^{2}\left(I - X(X^{T}X)^{-1}X^{T}\right)$, which is equivalent to $\sigma^{2}(I - H)$, where $H$ is the hat matrix containing leverage values along its diagonal. Therefore, the variance of the residual for the $i$th observation is $\operatorname{Var}(e_i) = \sigma^{2}(1 - h_{ii})$, where $e_i$ is the residual, $\sigma^{2}$ is the error variance, and $h_{ii}$ is the leverage.

Thus, observations with high leverage tend to have smaller residuals. This makes sense intuitively, as these observations can pull the regression surface towards them.

Standardized Residuals[edit]

Standardized residuals are also known as studentized residuals. Standardized residuals are used to compare residuals on the same covariance scale.

The standardized residual of the $i$th subject is denoted:

$r_i = \frac{e_i}{\sqrt{\hat{\sigma}^{2}(1 - h_{ii})}}$

where $\hat{\sigma}^{2}$ is the estimated variance of the response variable, $e_i$ is the residual, and $h_{ii}$ is the leverage of the $i$th subject.

Note that because the numerator and denominator are not independent, standardized residuals do not follow a $t$-distribution.

Predicted Residuals[edit]

Predicted residuals are used to get around the correlation between the error estimates and the residuals. They are calculated from a leave-one-out analysis, a form of cross-validation in which the regression is successively re-fit with one observation left out.

The predicted residual for the $i$th subject is denoted:

$e_{(i)} = y_i - x_i^{T}\hat{\beta}_{(i)}$

where $x_i^{T}$ is the $i$th row of the original design matrix and $\hat{\beta}_{(i)}$ is the linear regression estimate after deleting the $i$th row.

Under the assumptions for a linear model, predicted residuals are normally distributed, with a distribution of $e_{(i)} \sim N\!\left(0,\ \frac{\sigma^{2}}{1 - h_{ii}}\right)$.

Externally Studentized Residuals[edit]

Externally studentized residuals are also known as standardized predicted residuals, or simply studentized residuals. The measure accounts for the different covariance scale in predicted residuals.

The studentized residual for the $i$th subject is denoted:

$t_i = \frac{e_{(i)}}{\sqrt{\hat{\sigma}_{(i)}^{2} / (1 - h_{ii})}}, \qquad \hat{\sigma}_{(i)}^{2} = \frac{\mathrm{RSS}_{(i)}}{n - k - 2}$

where $n$ is the total number of subjects, $k$ is the number of explanatory variables, $e_{(i)}$ is the predicted residual, and $h_{ii}$ is the leverage of the $i$th subject. $\mathrm{RSS}_{(i)}$ is the residual sum of squares and $\hat{\sigma}_{(i)}^{2}$ is the estimated variance of the response variable after deleting the $i$th subject.

The studentized residuals are $t$-distributed with $n - k - 2$ degrees of freedom.
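A minimal numpy sketch computing the residual types defined in this section on synthetic data (the data-generating model is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])      # design matrix
k = X.shape[1] - 1                        # number of explanatory variables
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
h = np.diag(H)                            # leverages h_ii
beta = np.linalg.lstsq(X, y, rcond=None)[0]

e = y - X @ beta                                       # raw residuals
sigma2 = e @ e / (n - k - 1)                           # estimated error variance
standardized = e / np.sqrt(sigma2 * (1 - h))           # standardized residuals
predicted = e / (1 - h)                                # leave-one-out (predicted) residuals
rss = e @ e
sigma2_i = (rss - e ** 2 / (1 - h)) / (n - k - 2)      # error variance with the i-th point deleted
studentized = predicted / np.sqrt(sigma2_i / (1 - h))  # externally studentized residuals
print(np.round(studentized[:5], 3))
```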

Working with outliers[edit]

The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably estimation of covariance matrices.

Residuals[edit]

It only makes sense to work with residuals using a linear regression model in which observations are modeled according to:

$y = X\beta + \epsilon$, with $\epsilon \sim N(0, \sigma^{2} I)$

where $y$ is a vector of the observed response variables, $X$ is the $n \times (k+1)$ design matrix, $n$ is the total number of observations, $k$ is the number of explanatory variables, and $\epsilon$ is the error. The model makes the following assumptions:

  1. Errors are independent
  2. The expected value of errors is zero
  3. The variance is constant
  4. Errors are normally distributed

Under the assumptions stated, the traditional formulation of the residual is a common way of quantifying the unusualness of a response variable given the explanatory variables. However, because the residuals are correlated and have different variances, it often makes sense to work with standardized residuals, predicted residuals, or studentized residuals.

Standardized residuals are used to get around the different variances within the residuals. Predicted residuals are used to get around the correlation within the residuals. Studentized residuals are used to get around both the correlation and different variances within the residuals.

Retention[edit]

Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded. In such cases, the application should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points.

Exclusion[edit]

Deletion of outlier data is a controversial practice frowned upon by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded but it is desirable that the reading is at least verified.

The two common approaches to exclude outliers are truncation (or trimming) and Winsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data.[19] Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in censored data.[20]
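A minimal Python sketch contrasting the two approaches, using SciPy's trimmed mean and Winsorizing utilities on a hypothetical sample with one extreme value:

```python
import numpy as np
from scipy.stats import trim_mean
from scipy.stats.mstats import winsorize

data = np.array([2.0, 2.1, 2.3, 2.2, 2.4, 2.5, 2.1, 9.9])  # hypothetical sample, one extreme value

print(data.mean())                              # raw mean, inflated by 9.9
print(trim_mean(data, 0.2))                     # trimming: discard roughly the top and bottom 20%
print(winsorize(data, limits=(0.2, 0.2)).mean())  # Winsorising: replace extremes with nearest retained values
```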

In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance.[21]

If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.

Non-normal distributions[edit]

The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution,[22] the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.
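A small simulation sketch of this behaviour: the running mean of standard normal samples settles near zero as the sample grows, while the Cauchy running mean keeps jumping (results vary with the random seed):

```python
import numpy as np

rng = np.random.default_rng(42)

for n in (100, 10_000, 1_000_000):
    normal_mean = rng.normal(size=n).mean()          # converges towards 0
    cauchy_mean = rng.standard_cauchy(size=n).mean()  # does not converge
    print(n, round(normal_mean, 3), round(cauchy_mean, 3))
```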

Set-membership uncertainties[edit]

A set membership approach considers that the uncertainty corresponding to the ith measurement of an unknown random vector x is represented by a set Xi (instead of a probability density function). If no outliers occur, x should belong to the intersection of all Xi's. When outliers occur, this intersection could be empty, and we should relax a small number of the sets Xi (as small as possible) in order to avoid any inconsistency.[23] This can be done using the notion of q-relaxed intersection. As illustrated by the figure, the q-relaxed intersection corresponds to the set of all x which belong to all sets except q of them. Sets Xi that do not intersect the q-relaxed intersection could be suspected to be outliers.

Figure 5. q-relaxed intersection of 6 sets for q=2 (red), q=3 (green), q= 4 (blue), q= 5 (yellow).
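A minimal one-dimensional sketch of the q-relaxed intersection: each measurement i is represented by a hypothetical interval X_i, and a grid point is kept if it lies in all but at most q of the intervals; this grid-based approximation is only an illustration, not the interval-analysis algorithms used in the cited work:

```python
import numpy as np

intervals = [(1.0, 3.0), (1.5, 3.5), (2.0, 4.0), (1.8, 3.2), (8.0, 9.0)]  # last set is inconsistent
q = 1

grid = np.linspace(0.0, 10.0, 1001)
counts = sum((lo <= grid) & (grid <= hi) for lo, hi in intervals)
members = grid[counts >= len(intervals) - q]   # points inside all but at most q sets
print(members.min(), members.max())            # approximate q-relaxed intersection

# Sets that do not intersect the q-relaxed intersection are suspected outliers.
for i, (lo, hi) in enumerate(intervals):
    if not np.any((members >= lo) & (members <= hi)):
        print("suspect outlier:", i)
```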

Alternative models[edit]

In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model, or a mixture model.[24][25]

See also[edit]

References[edit]

  1. ^ Grubbs, F. E. (February 1969). "Procedures for detecting outlying observations in samples". Technometrics. 11 (1): 1–21. doi:10.1080/00401706.1969.10490657. An outlying observation, or "outlier," is one that appears to deviate markedly from other members of the sample in which it occurs.
  2. ^ Maddala, G. S. (1992). "Outliers". Introduction to Econometrics (2nd ed.). New York: MacMillan. pp. 88–96 [p. 89]. ISBN 0-02-374545-2. An outlier is an observation that is far removed from the rest of the observations.
  3. ^ Grubbs 1969, p. 1 stating "An outlying observation may be merely an extreme manifestation of the random variability inherent in the data. ... On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value."
  4. ^ Ripley, Brian D. (2004). Robust Statistics.
  5. ^ Mukherjee, Chandan; White, Howard; Wuyts, Marc (1998). Econometrics and Data Analysis for Developing Countries, Vol. 1.
  6. ^ Ruan, Da; Chen, Guoqing; Kerre, Etienne (2005). Wets, G. (ed.). Intelligent Data Mining: Techniques and Applications. Studies in Computational Intelligence Vol. 5. Springer. p. 318. ISBN 978-3-540-26256-5.
  7. ^ Rousseeuw, P; Leroy, A. (1996), Robust Regression and Outlier Detection (3rd ed.), John Wiley & Sons
  8. ^ Hodge, Victoria J.; Austin, Jim (2004), "A Survey of Outlier Detection Methodologies" (PDF), Artificial Intelligence Review, 22 (2): 85–126, doi:10.1023/B:AIRE.0000045502.10941.a9, S2CID 3330313
  9. ^ Barnett, Vic; Lewis, Toby (1994) [1978], Outliers in Statistical Data (3 ed.), Wiley, ISBN 0-471-93094-6
  10. ^ Zimek, A.; Schubert, E.; Kriegel, H.-P. (2012). "A survey on unsupervised outlier detection in high-dimensional numerical data". Statistical Analysis and Data Mining. 5 (5): 363–387. doi:10.1002/sam.11161. S2CID 6724536.
  11. ^ * Tukey, John W (1977). Exploratory Data Analysis. Addison-Wesley. ISBN 0-201-07616-0. OCLC 3058187.
  12. ^ Knorr, E. M.; Ng, R. T.; Tucakov, V. (2000). "Distance-based outliers: Algorithms and applications". The VLDB Journal the International Journal on Very Large Data Bases. 8 (3–4): 237. doi:10.1007/s007780050006. S2CID 11707259.
  13. ^ Ramaswamy, S.; Rastogi, R.; Shim, K. (2000). "Efficient algorithms for mining outliers from large data sets". Proceedings of the 2000 ACM SIGMOD international conference on Management of data - SIGMOD '00. Proceedings of the 2000 ACM SIGMOD international conference on Management of data - SIGMOD '00. pp. 427–438. doi:10.1145/342009.335437. ISBN 1581132174.
  14. ^ Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; Sander, J. (2000). LOF: Identifying Density-based Local Outliers (PDF). Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD. pp. 93–104. doi:10.1145/335191.335388. ISBN 1-58113-217-4.
  15. ^ Schubert, E.; Zimek, A.; Kriegel, H. -P. (2012). "Local outlier detection reconsidered: A generalized view on locality with applications to spatial, video, and network outlier detection". Data Mining and Knowledge Discovery. 28: 190–237. doi:10.1007/s10618-012-0300-z. S2CID 19036098.
  16. ^ Thompson, R. (1985). "A Note on Restricted Maximum Likelihood Estimation with an Alternative Outlier Model". Journal of the Royal Statistical Society, Series B (Methodological), 47 (1): 53–55.
  17. ^ Smith, M.R.; Martinez, T.; Giraud-Carrier, C. (2014). "An Instance Level Analysis of Data Complexity". Machine Learning, 95(2): 225-256.
  18. ^ Fox, John (2016). Applied Regression Analysis and Generalized Linear Models (Third ed.). Los Angeles: SAGE. ISBN 9781452205663. OCLC 894301740.
  19. ^ Wike, Edward L. (2006). Data Analysis: A Statistical Primer for Psychology Students. pp. 24–25. ISBN 9780202365350.
  20. ^ Dixon, W. J. (June 1960). "Simplified estimation from censored normal samples". The Annals of Mathematical Statistics. 31 (2): 385–391. doi:10.1214/aoms/1177705900.
  21. ^ Cook, R. Dennis (Feb 1977). "Detection of Influential Observations in Linear Regression". Technometrics (American Statistical Association) 19 (1): 15–18.
  22. ^ Weisstein, Eric W. Cauchy Distribution. From MathWorld--A Wolfram Web Resource
  23. ^ Jaulin, L. (2010). "Probabilistic set-membership approach for robust regression" (PDF). Journal of Statistical Theory and Practice. 4: 155–167. doi:10.1080/15598608.2010.10411978. S2CID 16500768.
  24. ^ Roberts, S. and Tarassenko, L.: 1995, A probabilistic resource allocating network for novelty detection. Neural Computation 6, 270–284.
  25. ^ Bishop, C. M. (August 1994). "Novelty detection and Neural Network validation". Proceedings of the IEE Conference on Vision, Image and Signal Processing. 141 (4): 217–222. doi:10.1049/ip-vis:19941330.
  • ISO 16269-4, Statistical interpretation of data — Part 4: Detection and treatment of outliers
  • Strutz, Tilo (2010). Data Fitting and Uncertainty - A practical introduction to weighted least squares and beyond. Vieweg+Teubner. ISBN 978-3-8348-1022-9.

External links[edit]