Rank Tests | Vantage Analytics Library

Vantage Analytics Library User Guide

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, Lake, VMware
Product: Vantage Analytics Library
Release Number: 2.2.0
Published: March 2023
Language: English (United States)
Last Update: 2024-01-02
dita:mapPath: ibw1595473364329.ditamap
dita:ditavalPath: iup1603985291876.ditaval
dita:id: zyl1473786378775
Product Category: Teradata Vantage

Rank tests calculate statistics on data ranks rather than the data itself. Therefore, the data must have at least an ordinal scale of measurement. Variables can be continuous, discrete, or a mixture of both. If a variable is nonnumeric, the test ranks it by alphanumeric precedence.

For nonnumeric data that is ordinal and ranked, rank tests may be the most powerful tests available.

Rank tests can also efficiently analyze numeric variables that meet the requirements of parametric tests, such as independent, normally distributed variables.

The Vantage Analytics Library function ranktest can perform the Mann-Whitney or Kruskal-Wallis test, the Wilcoxon test, or the Friedman test with Kendall's Coefficient of Concordance and Spearman's Rho.

Mann-Whitney/Kruskal-Wallis Test

If no independent variable has more than two distinct values, the ranktest function performs the Mann-Whitney test.

If any independent variable has more than two distinct values, the function performs the Kruskal-Wallis test for all variables. Because Kruskal-Wallis is a generalization of Mann-Whitney, Kruskal-Wallis results are valid for all variables, including those with only two distinct values. The only difference between the Mann-Whitney and Kruskal-Wallis tests is how they treat the independent variables.

If independent=true, the function performs a separate, independent test for each independent variable and shows the result of each test.

If independent=false (the default), the function uses all combinations of independent variable values as groups, applying the Kruskal-Wallis test if there are more than two such combinations.
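The selection rule described above can be sketched with SciPy's implementations of the two tests. This illustrates the rule only; it is not the ranktest function's interface, and the sample data is invented.

```python
from scipy import stats

def rank_test(groups):
    """groups: list of samples, one per distinct independent-variable value."""
    if len(groups) == 2:
        # Two distinct values: Mann-Whitney (Wilcoxon two-sample) test
        stat, p = stats.mannwhitneyu(groups[0], groups[1],
                                     alternative="two-sided")
    else:
        # More than two distinct values: Kruskal-Wallis test
        stat, p = stats.kruskal(*groups)
    return stat, p

# Three groups, so the Kruskal-Wallis branch is taken
stat, p = rank_test([[1.2, 3.4, 2.2], [2.0, 4.1, 3.3], [5.0, 4.4, 6.1]])
```

Because Kruskal-Wallis generalizes Mann-Whitney, the two branches agree in spirit: both compare the rank distributions of the groups.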
  • Mann-Whitney test

    The Mann-Whitney test (also called the Wilcoxon two-sample test) is the nonparametric analog of the two-sample T-test. It tests whether two independent groups of sampled data are from the same population (that is, whether the samples have the same distribution function). The Mann-Whitney test makes no assumptions about data distribution. It is an alternative to the independent group T-test. When the data do not meet the assumptions of normality or equality of variance, the Mann-Whitney test is more powerful than the T-test; when they do, it is less so. Unlike the T-test, the Mann-Whitney test provides the same results under any monotonic transformation of the data, so its results are more generalizable.

    The Mann-Whitney test is appropriate when the independent variable is nominal or ordinal and the dependent variable is ordinal (or treated as ordinal). The test assumes that the variable on which to compare the two groups is continuously distributed.

    The null hypothesis is that both samples have the same distribution. The alternative hypotheses are that the distributions differ in either direction (two-tailed test) or in a specific direction (upper-tailed or lower-tailed test). The output is a p-value to compare to the specified threshold to determine whether to reject the null hypothesis.

    Each unique set of values in the groupby columns is called a group-by value set, or GBV set. The function does a separate Mann-Whitney test for each GBV set. The GBV set includes one or more columns (independent variables) whose values define two independent groups of sampled data and a column (dependent variable) whose distribution is of interest.

  • Kruskal-Wallis test

    The Kruskal-Wallis test is the nonparametric analog of the one-way analysis of variance or F-test used to compare three or more independent groups of sampled data. It tests whether multiple samples of data are from the same population (that is, whether the samples have the same distribution function). Unlike the parametric independent group ANOVA (one-way ANOVA), the Kruskal-Wallis test makes no assumptions about the distribution of the data; consequently, it is less powerful than ANOVA when the data actually meet the ANOVA assumptions.

    The function does a separate Kruskal-Wallis test for each GBV set, testing whether all populations are identical.

    The null hypothesis is that all samples have the same distribution. The alternative hypothesis is that the distributions differ. The output for each GBV set is a statistic H and a p-value to compare to the specified threshold to determine whether to reject the null hypothesis for that GBV set.
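The per-GBV-set testing described above can be sketched with pandas and SciPy. The column names ("region", "gender", "spend") and data are hypothetical, chosen only to illustrate one test per group-by value set; they are not part of the ranktest function's interface.

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "region": ["N", "N", "N", "N", "S", "S", "S", "S"],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "spend":  [10.0, 12.5, 9.0, 14.0, 20.0, 18.5, 22.0, 17.0],
})

# Each distinct "region" value defines one GBV set; within it, the
# independent variable "gender" splits the dependent variable "spend"
# into two groups, so the Mann-Whitney test applies.
results = {}
for region, part in df.groupby("region"):
    a = part.loc[part["gender"] == "F", "spend"]
    b = part.loc[part["gender"] == "M", "spend"]
    results[region] = stats.mannwhitneyu(a, b, alternative="two-sided")
```

With more than two groups per GBV set, `stats.kruskal` would replace `stats.mannwhitneyu` in the loop body.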

Wilcoxon Signed Ranks Test

The Wilcoxon Signed Ranks test is an alternative to the T-test for correlated samples, appropriate for data that do not meet these requirements for the T-test:
  • The scale of measurement has the properties of an equal-interval scale (for example, when measures are from a rating scale).
  • Differences between paired values are randomly selected from the source population.
  • The source population has a normal distribution.
For the Wilcoxon Signed Ranks test, data must meet these requirements:
  • The distribution of difference scores is symmetric (which implies an equal-interval scale).
  • Difference scores are mutually independent.
  • Difference scores have the same mean.

The function replaces the original measures with ranks, thereby analyzing only the ordinal relationships. The function computes the sum of the signed ranks, W. When the numbers of positive and negative signs are almost equal (that is, when there is no tendency in either direction), the value of W is near zero and the null hypothesis is not rejected. A large positive or negative sum indicates that the ranks tend in one direction, so there is a difference between the paired cases in that direction.
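The signed-rank sum W described above can be sketched directly with NumPy and SciPy, using invented paired differences (no zero differences and no tied absolute values, so the ranking is unambiguous):

```python
import numpy as np
from scipy.stats import rankdata

# Paired differences (e.g., after - before), invented for illustration
diffs = np.array([0.3, -0.2, 0.5, 0.4, -0.1, 0.6])

ranks = rankdata(np.abs(diffs))      # rank the absolute differences
w = np.sum(np.sign(diffs) * ranks)   # sum of the signed ranks, W
# A value of w near zero means no tendency in either direction
```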

The Wilcoxon test tests whether two samples come from populations with a specific mean or median. The null hypothesis is that the samples come from populations with the same mean or median. The alternative hypothesis is that the samples come from populations with different means or medians (two-tailed test) or that the difference is in a specific direction (upper-tailed or lower-tailed test). The output is a p-value to compare to the specified threshold to determine whether to reject the null hypothesis.
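For comparison, a minimal sketch of the full test with SciPy on invented before/after pairs (SciPy's statistic is the smaller of the positive- and negative-rank sums, not the signed sum W, but the p-value interpretation is the same):

```python
from scipy import stats

# Paired measurements, invented for illustration
before = [7.1, 6.8, 7.5, 8.0, 6.9, 7.3, 7.7, 8.1]
after  = [7.4, 7.0, 7.9, 8.3, 7.2, 7.1, 8.0, 8.5]

# Ranks the paired differences and returns the test statistic and a
# two-sided p-value by default
w, p = stats.wilcoxon(before, after)
```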

Friedman Test with Kendall's Coefficient of Concordance and Spearman's Rho

The Friedman test is an extension of the sign test for several independent samples. It is similar to the two-way analysis of variance, but depends only on the ranks of the observations; in effect, it is a two-way ANOVA on ranks.

The Friedman test is best for six or more treatments. For three or fewer treatments, it is not powerful enough.

The Friedman test tests for treatment differences in a randomized, complete block design. The input data is a set of k-variate random variables called blocks. The data must meet these requirements:
  • The data in the blocks is mutually independent.
  • Within each block, observations are ordinal rankable according to a criterion of interest.
  • All treatments apply to each block.

    When they do not, the block design is incomplete; use another test, such as the Durbin test.

A Friedman test uses rank scores and the F table. (An alternative implementation called the Friedman Statistic uses the chi-squared table.)
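SciPy provides the chi-squared form of the Friedman statistic mentioned above. A minimal sketch on invented block data, where each list holds one treatment's measurements across the same four blocks:

```python
from scipy import stats

# One list per treatment; position i in each list is block i
treatment_1 = [8.0, 7.5, 6.0, 7.0]
treatment_2 = [7.0, 6.5, 5.5, 6.0]
treatment_3 = [9.0, 8.0, 7.0, 8.5]

# Ranks within each block, then tests for treatment differences
# against the chi-squared distribution
chi2, p = stats.friedmanchisquare(treatment_1, treatment_2, treatment_3)
```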

The Friedman test computes the following:
  • Friedman statistic
  • Kendall’s Coefficient of Concordance, W

    W is in the range [0, 1]. The higher the value, the stronger the association. W is 1 if all treatments have the same rank in all blocks. W is 0 if all blocks "disagree perfectly."

  • Spearman's Rho

    Spearman's rho is a measure of the monotonic relationship between two variables. It differs from Pearson's correlation only in that the computations are done after the variables are converted to ranks. Spearman's Rho is 1 if there is perfect agreement among rankings. Disagreement causes rho to be less than 1, sometimes negative.
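Both quantities above can be sketched directly, assuming no tied ranks (all data invented): Kendall's W from its rank-sum formula, and Spearman's rho as Pearson's correlation computed on ranks.

```python
import numpy as np
from scipy import stats

# Kendall's Coefficient of Concordance, W (no ties assumed).
# Rows are blocks; columns are the treatments being ranked.
ranks = np.array([
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [1, 2, 3, 4],
])
k, n = ranks.shape                      # k blocks, n treatments
col_sums = ranks.sum(axis=0)            # rank sum per treatment
s = ((col_sums - k * (n + 1) / 2) ** 2).sum()
w = 12 * s / (k ** 2 * (n ** 3 - n))    # in [0, 1]; 1 = identical rankings

# Spearman's rho equals Pearson's r computed on the ranks
x = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0]
y = [2.0, 1.0, 3.5, 1.2, 6.0, 7.0]
rho, _ = stats.spearmanr(x, y)
r_on_ranks, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))
```

If every block ranked the treatments identically, `w` would be exactly 1.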