
where D(σ²) is the matrix of the diagonal elements of Σ. It follows from eq. 4.10 that:

$$\boldsymbol{\Sigma} = \mathbf{D}(\sigma^2)^{1/2}\; \mathbf{P}\; \mathbf{D}(\sigma^2)^{1/2} \qquad \text{(4.11)}$$
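The same relationship holds for the sample matrices S and R. As a minimal numerical sketch (not from the text; the data matrix Y is invented for illustration), it can be verified with NumPy:

```python
import numpy as np

# Illustrative data matrix: 5 objects (rows) by 3 descriptors (columns); values are made up.
Y = np.array([[2.0, 4.1, 1.3],
              [3.5, 5.0, 2.2],
              [1.8, 3.9, 0.9],
              [4.2, 6.1, 2.8],
              [2.9, 4.8, 1.7]])

S = np.cov(Y, rowvar=False)             # sample dispersion (covariance) matrix
R = np.corrcoef(Y, rowvar=False)        # sample correlation matrix
D_half = np.diag(np.sqrt(np.diag(S)))   # D(s^2)^(1/2): diagonal matrix of standard deviations

# Sample analogue of eq. 4.11: S = D(s^2)^(1/2) R D(s^2)^(1/2)
print(np.allclose(S, D_half @ R @ D_half))   # True
```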

Significance

The theory underlying tests of significance is discussed in Section 1.2. In the case of r, inference about the statistical population is in most instances made through the null hypothesis H₀: ρ = 0. H₀ may also state that ρ has some value other than zero, which would be derived from ecological hypotheses. The general formula for testing correlation coefficients is given in Section 4.5 (eq. 4.39). The Pearson correlation coefficient r_jk involves two descriptors (i.e. y_j and y_k, hence m = 2 when testing a coefficient of simple linear correlation using eq. 4.39), so that ν₁ = 2 − 1 = 1 and ν₂ = n − 2 = ν. The general formula then becomes:

$$F = \frac{\nu\, r_{jk}^2}{1 - r_{jk}^2} \qquad \text{(4.12)}$$

where ν = n − 2. The F statistic is tested against F_α[1, ν].
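As a sketch of how this test works in practice (the data vectors yj and yk are invented for illustration), eq. 4.12 can be computed directly and compared to the F distribution with SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations of two descriptors; the values are invented.
yj = np.array([1.2, 2.4, 3.1, 4.8, 5.0, 6.3, 7.1])
yk = np.array([0.9, 2.0, 2.8, 4.1, 5.2, 5.9, 7.4])

n = len(yj)
nu = n - 2                                # degrees of freedom
r = np.corrcoef(yj, yk)[0, 1]             # Pearson r between the two descriptors

# Eq. 4.12: F = nu * r^2 / (1 - r^2), compared to the critical value F_alpha[1, nu]
F = nu * r**2 / (1 - r**2)
p_value = stats.f.sf(F, 1, nu)            # probability of an F this large under H0: rho = 0
print(F, p_value)
```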

Since the square root of a statistic F[ν₁, ν₂] is a statistic t[ν = ν₂] when ν₁ = 1, r may also be tested using:

$$t = \frac{\sqrt{\nu}\; r_{jk}}{\sqrt{1 - r_{jk}^2}} \qquad \text{(4.13)}$$

The t statistic is tested against the value t_α[ν]. In other words, H₀ is tested by comparing the F (or t) statistic to the value found in a table of critical values of F (or t). Results of tests with eqs. 4.12 and 4.13 are identical. The number of degrees of freedom is ν = (n − 2) because calculating a correlation coefficient requires prior estimation of two parameters, i.e. the means of the two populations (eq. 4.7). H₀ is rejected when the probability corresponding to F (or t) is smaller than a predetermined level of significance (α for a two-tailed test, and α/2 for a one-tailed test; the difference between the two types of tests is explained in Section 1.2). In principle, this test requires that the sample of observations be drawn from a population with a bivariate normal distribution (Section 4.3). Testing for normality and multinormality is discussed in Section 4.7, and normalizing transformations in Section 1.5. When the data do not satisfy the condition of normality, t can be tested by randomization, as shown in Section 1.2.
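To illustrate the equivalence of the two tests, here is a sketch (reusing the invented data from the F-test example above) that computes eq. 4.13 and checks that t² equals F:

```python
import numpy as np
from scipy import stats

# Same invented data as in the F-test sketch above.
yj = np.array([1.2, 2.4, 3.1, 4.8, 5.0, 6.3, 7.1])
yk = np.array([0.9, 2.0, 2.8, 4.1, 5.2, 5.9, 7.4])

nu = len(yj) - 2
r = np.corrcoef(yj, yk)[0, 1]

# Eq. 4.13: t = sqrt(nu) * r / sqrt(1 - r^2), compared to t_alpha[nu]
t = np.sqrt(nu) * r / np.sqrt(1 - r**2)
p_two_tailed = 2 * stats.t.sf(abs(t), nu)

# Equivalence with eq. 4.12: t^2 equals F, so both tests reject (or not) together.
F = nu * r**2 / (1 - r**2)
print(np.isclose(t**2, F))                # True
```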

Test of independence of variables

It is also possible to test the independence of all variables in a data matrix by considering the set of all correlation coefficients found in matrix R. The null hypothesis here is that the p(p − 1)/2 coefficients are all equal to zero, H₀: R = I (unit matrix). According to Bartlett (1954), R can be transformed into an X² (chi-square) test statistic:

$$X^2 = -\left[ n - \frac{2p + 5}{6} \right] \ln |\mathbf{R}| \qquad \text{(4.14)}$$

where ln |R| is the natural logarithm of the determinant of R. This statistic is approximately distributed as χ² with ν = p(p − 1)/2 degrees of freedom. When the probability associated with X² is smaller than the predetermined significance level, the null hypothesis of complete independence of the p descriptors is rejected. In principle, this test requires the observations to be drawn from a population with a multivariate normal distribution (Section 4.3). If the null hypothesis of independence of all variables is rejected, the p(p − 1)/2 correlation coefficients in matrix R may be tested individually; see Box 1.3 about multiple testing.
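As a sketch (the data are simulated, not from the text), Bartlett's statistic and its χ² probability can be computed as follows; the multiplier follows eq. 4.14 as given above:

```python
import numpy as np
from scipy import stats

# Invented example: n = 30 objects, p = 4 descriptors drawn independently,
# so H0 (complete independence) should generally not be rejected.
rng = np.random.default_rng(42)
Y = rng.normal(size=(30, 4))
n, p = Y.shape

R = np.corrcoef(Y, rowvar=False)          # p x p correlation matrix

# Bartlett's statistic (eq. 4.14): X^2 = -[n - (2p + 5)/6] ln|R|
X2 = -(n - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) // 2                     # nu = p(p - 1)/2 degrees of freedom
p_value = stats.chi2.sf(X2, df)
print(X2, df, p_value)
```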

Other correlation coefficients are described in Sections 4.5 and 5.2. Wherever the coefficient of linear correlation must be distinguished from other coefficients, it is referred to as Pearson's r. In other instances, r is simply called the coefficient of linear correlation or correlation coefficient. Table 4.5 summarizes the main properties of this coefficient.
