PANEL Procedure

Tests for Serial Correlation and Cross-Sectional Effects

The presence of cross-sectional effects causes serial correlation in the errors. Therefore, serial correlation is often tested jointly with cross-sectional effects. Joint and conditional tests for both serial correlation and cross-sectional effects have been covered extensively in the literature.

Baltagi and Li Joint LM Test for Serial Correlation and Random Cross-Sectional Effects

Baltagi and Li (1991) derive the LM test statistic, which jointly tests for zero first-order serial correlation and random cross-sectional effects under normality and homoscedasticity. The test statistic is independent of the form of serial correlation, so it can be used with either AR(1) or MA(1) error terms. The null hypothesis is a white-noise component: $H_0^1\colon \sigma_\gamma^2 = 0,\ \theta = 0$ for MA(1) with MA coefficient $\theta$, or $H_0^2\colon \sigma_\gamma^2 = 0,\ \rho = 0$ for AR(1) with AR coefficient $\rho$. The alternative is either a one-way random-effects model (cross-sectional) or first-order AR(1) or MA(1) serial correlation in the errors, or both. Under the null hypothesis, the model can be estimated by pooled estimation (OLS). Denote the residuals as $\hat u_{it}$. The test statistic is

$$\mathit{BL91} = \frac{NT^2}{2(T-1)(T-2)}\left[A^2 - 4AB + 2TB^2\right] \xrightarrow{\;H_0^{1,2}\;} \chi^2(2)$$

where

$$A = \frac{\sum_{i=1}^N \left(\sum_{t=1}^T \hat u_{it}\right)^2}{\sum_{i=1}^N \sum_{t=1}^T \hat u_{it}^2} - 1, \qquad B = \frac{\sum_{i=1}^N \sum_{t=2}^T \hat u_{it}\,\hat u_{i,t-1}}{\sum_{i=1}^N \sum_{t=1}^T \hat u_{it}^2}$$
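As a minimal sketch of how $A$, $B$, and the BL91 statistic can be computed from a balanced panel of pooled-OLS residuals (the function name `bl91` and the $N \times T$ array layout are illustrative assumptions, not PROC PANEL's implementation):

```python
import numpy as np

def bl91(resid):
    """Baltagi-Li (1991) joint LM statistic from an N x T array of
    pooled-OLS residuals (balanced panel). Illustrative helper."""
    N, T = resid.shape
    ssq = np.sum(resid**2)                          # shared denominator of A and B
    A = np.sum(resid.sum(axis=1)**2) / ssq - 1.0    # cross-sectional component
    B = np.sum(resid[:, 1:] * resid[:, :-1]) / ssq  # lag-1 serial component
    # A^2 - 4AB + 2TB^2 is a positive definite quadratic form for T > 2
    return N * T**2 / (2.0 * (T - 1) * (T - 2)) * (A**2 - 4*A*B + 2*T*B**2)

rng = np.random.default_rng(42)
u = rng.standard_normal((100, 8))  # white-noise residuals: H0 should not be rejected
stat = bl91(u)                     # compare to the chi-square(2) critical value
```

Under the null, the statistic should fall below the $\chi^2(2)$ critical value (5.99 at the 5% level); adding a cross-sectional effect to the residuals inflates $A$ and hence the statistic.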

Wooldridge Test for the Presence of Unobserved Effects

Wooldridge (2002, sec. 10.4.4) suggests a test for the absence of an unobserved effect. Under the null hypothesis $H_0\colon \sigma_\gamma^2 = 0$, the errors $u_{it}$ are serially uncorrelated. To test $H_0\colon \sigma_\gamma^2 = 0$, Wooldridge (2002) proposes to test for AR(1) serial correlation. The test statistic that he proposes is

$$W = \frac{\sum_{i=1}^N \sum_{t=1}^{T-1} \sum_{s=t+1}^T \hat u_{it}\,\hat u_{is}}{\left[\sum_{i=1}^N \left(\sum_{t=1}^{T-1} \sum_{s=t+1}^T \hat u_{it}\,\hat u_{is}\right)^2\right]^{1/2}} \rightarrow N(0,1)$$

where $\hat u_{it}$ are the pooled OLS residuals. The test statistic $W$ can detect many types of serial correlation in the error term $u$, so it has power against both the one-way random-effects specification and serial correlation in the error terms.
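A sketch of the computation (the helper name `wooldridge_w` is illustrative; the inner double sum is collapsed with the identity $\sum_{t<s} u_t u_s = \bigl((\sum_t u_t)^2 - \sum_t u_t^2\bigr)/2$ applied per cross section):

```python
import numpy as np

def wooldridge_w(resid):
    """Wooldridge (2002) unobserved-effects statistic from an N x T array
    of pooled-OLS residuals. Illustrative helper, not PROC PANEL's code."""
    # c[i] = sum over t < s of u_it * u_is, via the pairwise-product identity
    c = (resid.sum(axis=1)**2 - np.sum(resid**2, axis=1)) / 2.0
    return c.sum() / np.sqrt(np.sum(c**2))  # asymptotically N(0, 1) under H0
```

A large random effect makes every $c_i$ positive, pushing $W$ far into the right tail, which is how the test gains power against the random-effects alternative.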

Bera, Sosa Escudero, and Yoon Modified Rao’s Score Test in the Presence of Local Misspecification

Bera, Sosa Escudero, and Yoon (2001) point out that standard specification tests, such as the Honda (1985) test described in the section Honda UMP Test and Moulton and Randolph SLM Test, are not valid when they test for either cross-sectional random effects or serial correlation without allowing for the presence of the other effect. They suggest a modified Rao's score (RS) test. With $A$ and $B$ defined as in Baltagi and Li (1991), the test statistic for testing serial correlation under random cross-sectional effects is

$$RS_\rho^* = \frac{NT^2\,(B - A/T)^2}{(T-1)(1 - 2/T)}$$

Baltagi and Li (1991, 1995) derive the conventional RS test when the cross-sectional random effects are assumed to be absent:

$$RS_\rho = \frac{NT^2 B^2}{T-1}$$

Symmetrically, to test for the cross-sectional random effects in the presence of serial correlation, the modified Rao’s score test statistic is

$$RS_\mu^* = \frac{NT\,(A - 2B)^2}{2(T-1)(1 - 2/T)}$$

and the conventional Rao’s score test statistic is given in Breusch and Pagan (1980). The test statistics are asymptotically distributed as $\chi^2(1)$.

Because $\sigma_\gamma^2 > 0$ under the alternative, a one-sided test is expected to be more powerful. The one-sided statistic is obtained by taking the signed square root of the two-sided statistic:

$$RSO_\mu^* = \sqrt{\frac{NT}{2(T-1)(1 - 2/T)}}\,(A - 2B) \rightarrow N(0,1)$$
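The four statistics can be computed together from the same $A$ and $B$ as before. A sketch (the helper name `modified_rs` is illustrative):

```python
import numpy as np

def modified_rs(resid):
    """Rao's score statistics of Bera, Sosa Escudero, and Yoon (2001)
    from an N x T array of pooled-OLS residuals. Illustrative sketch."""
    N, T = resid.shape
    ssq = np.sum(resid**2)
    A = np.sum(resid.sum(axis=1)**2) / ssq - 1.0
    B = np.sum(resid[:, 1:] * resid[:, :-1]) / ssq
    rs_rho_star = N * T**2 * (B - A / T)**2 / ((T - 1) * (1 - 2.0 / T))
    rs_rho      = N * T**2 * B**2 / (T - 1)
    rs_mu_star  = N * T * (A - 2*B)**2 / (2 * (T - 1) * (1 - 2.0 / T))
    # one-sided version: signed square root of rs_mu_star
    rso_mu_star = np.sqrt(N * T / (2 * (T - 1) * (1 - 2.0 / T))) * (A - 2*B)
    return rs_rho_star, rs_rho, rs_mu_star, rso_mu_star
```

Note that squaring the one-sided statistic recovers the two-sided one, which is a useful internal consistency check.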

Baltagi and Li LM Test for First-Order Correlation under Fixed Effects

Baltagi and Li (1995) propose the two-sided LM test statistic for testing a white-noise component in a fixed one-way model ($H_0^5\colon \theta = 0$ or $H_0^6\colon \rho = 0$, given that the $\gamma_i$ are fixed effects):

$$\mathit{BL95} = \frac{NT^2}{T-1}\left(\frac{\sum_{i=1}^N \sum_{t=2}^T \hat u_{it}\,\hat u_{i,t-1}}{\sum_{i=1}^N \sum_{t=1}^T \hat u_{it}^2}\right)^2$$

where $\hat u_{it}$ are the residuals from the fixed one-way model (FIXONE). The LM test statistic is asymptotically distributed as $\chi_1^2$ under the null hypothesis. The one-sided LM test with alternative hypothesis $\rho > 0$ is

$$\mathit{BL95}_2 = \sqrt{\frac{NT^2}{T-1}}\;\frac{\sum_{i=1}^N \sum_{t=2}^T \hat u_{it}\,\hat u_{i,t-1}}{\sum_{i=1}^N \sum_{t=1}^T \hat u_{it}^2}$$

which is asymptotically distributed as standard normal.
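Both statistics share one lag-1 autocorrelation ratio. A sketch of the pair, assuming an $N \times T$ array of within (FIXONE) residuals and an illustrative helper name:

```python
import numpy as np

def bl95(resid):
    """Baltagi-Li (1995) LM statistics from N x T fixed one-way (within)
    residuals. Illustrative sketch, not PROC PANEL's implementation."""
    N, T = resid.shape
    # lag-1 autocorrelation ratio of the residuals
    ratio = np.sum(resid[:, 1:] * resid[:, :-1]) / np.sum(resid**2)
    two_sided = N * T**2 / (T - 1) * ratio**2        # ~ chi-square(1) under H0
    one_sided = np.sqrt(N * T**2 / (T - 1)) * ratio  # ~ N(0, 1), H1: rho > 0
    return two_sided, one_sided
```

As with the one-sided RS statistic, the two-sided statistic equals the square of the one-sided statistic.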

Durbin-Watson Test

Bhargava, Franzini, and Narendranathan (1982) propose a test of serial correlation by using the Durbin-Watson statistic,

$$d_\rho = \frac{\sum_{i=1}^N \sum_{t=2}^T (\hat e_{it} - \hat e_{i,t-1})^2}{\sum_{i=1}^N \sum_{t=1}^T \hat e_{it}^2}$$

where $\hat e_{it}$ are the residuals from the fixed one-way model (FIXONE).

The test statistic $d_\rho$ ranges from 0 to 4, where $d_\rho = 2$ indicates no serial correlation. Values closer to 0 indicate positive serial correlation, and values closer to 4 indicate negative serial correlation. A value of 0 indicates a random walk ($\rho = 1$).
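A sketch of the statistic (the helper name `panel_dw` and array layout are illustrative):

```python
import numpy as np

def panel_dw(resid):
    """BFN Durbin-Watson statistic d_rho from an N x T array of fixed
    one-way (within) residuals. Illustrative sketch."""
    num = np.sum((resid[:, 1:] - resid[:, :-1])**2)  # squared within-unit differences
    return num / np.sum(resid**2)
```

Constant within-unit residuals (the random-walk limit) give $d_\rho = 0$, and sign-alternating residuals push the statistic toward 4.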

The PANEL procedure outputs three Durbin-Watson tests for serial correlation:

  • white noise versus positive correlation: $H_0\colon \rho = 0$ vs. $H_1\colon \rho > 0$

  • random walk versus stationarity: $H_0\colon \rho = 1$ vs. $H_1\colon \rho < 1$

  • white noise versus negative correlation: $H_0\colon \rho = 0$ vs. $H_1\colon \rho < 0$

The first two tests report $d_\rho$ as the test statistic, and the third test reports $4 - d_\rho$, where values of $4 - d_\rho$ close to 0 indicate negative correlation. In finite samples, the mechanics of the Durbin-Watson test produce an indeterminate region, a region of uncertainty about whether to reject the null hypothesis. Because of this ambiguity, all three tests report two p-values. The first test and the third test produce Pr < DWLower and Pr < DWUpper. The second test produces Pr > DWLower and Pr > DWUpper. For more information about the second test, see the section BFN $R_\rho$ Statistic.

For the first and the third test, Pr < DWLower is always greater than or equal to Pr < DWUpper. If Pr < DWLower is less than or equal to the significance level, then the null hypothesis that $\rho = 0$ is rejected. If Pr < DWUpper is greater than or equal to the significance level, then the null hypothesis is accepted. The two p-values converge as $N$ increases.
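The decision rule above can be sketched as a small three-way classifier (an illustrative helper, not part of PROC PANEL; anything between the two thresholds falls in the indeterminate region):

```python
def dw_decision(p_lower, p_upper, alpha=0.05):
    """Bounded-p-value decision for the white-noise tests, where
    p_lower = Pr < DWLower >= p_upper = Pr < DWUpper. Illustrative."""
    if p_lower <= alpha:   # even the larger p-value is below alpha
        return "reject"
    if p_upper >= alpha:   # even the smaller p-value is above alpha
        return "accept"
    return "inconclusive"  # significance level lies between the bounds
```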

Berenblut-Webb Statistic

Bhargava, Franzini, and Narendranathan (1982) also suggest using the Berenblut-Webb statistic, which is a locally most powerful invariant test in the neighborhood of $\rho = 1$. The test statistic is

$$g_\rho = \frac{\sum_{i=1}^N \sum_{t=2}^T \Delta\tilde u_{i,t}^2}{\sum_{i=1}^N \sum_{t=1}^T \hat u_{it}^2}$$

where $\Delta\tilde u_{it}$ are the residuals from the first-difference estimation. The tests for the Berenblut-Webb statistic are the same three tests that are produced for the Durbin-Watson statistic. All three tests produce two p-values, and the interpretation of these p-values is the same as for the Durbin-Watson statistic.
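The statistic is a simple ratio of the two residual sets. A sketch, assuming an $N \times (T-1)$ array of first-difference residuals and an $N \times T$ array of one-way residuals (helper name illustrative):

```python
import numpy as np

def berenblut_webb(dresid, resid):
    """Berenblut-Webb g_rho: summed squared first-difference residuals
    over summed squared one-way residuals. Illustrative sketch."""
    return np.sum(dresid**2) / np.sum(resid**2)
```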

BFN $R_\rho$ Statistic

Bhargava, Franzini, and Narendranathan (1982) suggest using the $R_\rho$ statistic to test whether residuals are from a random walk. You can also use the Durbin-Watson and Berenblut-Webb statistics to test the random walk null hypothesis on the basis of the lower and upper bounds generated for the $R_\rho$ statistic. The null hypothesis is $\rho = 1$, and the alternative hypothesis is $|\rho| < 1$. Let $\mathbf{F}^* = I_N \otimes \mathbf{F}$, where $\mathbf{F}$ is a $(T-1)\times(T-1)$ symmetric matrix that has the following elements:

$$\mathbf{F}_{tt'} = (T - t')\,t/T \quad \text{if } t' \ge t \qquad (t, t' = 1, \ldots, T-1)$$

The test statistic is

$$R_\rho = \frac{\Delta\tilde U'\,\Delta\tilde U}{\Delta\tilde U'\,\mathbf{F}^*\,\Delta\tilde U} = \frac{T \sum_{i=1}^N \sum_{t=2}^T \Delta\tilde u_{i,t}^2}{\sum_{i=1}^N \sum_{t=2}^T (t-1)(T-t+1)\,\Delta\tilde u_{i,t}^2 + 2 \sum_{i=1}^N \sum_{t=2}^{T-1} \sum_{t'=t+1}^T (T-t'+1)(t-1)\,\Delta\tilde u_{i,t}\,\Delta\tilde u_{i,t'}}$$

Bhargava, Franzini, and Narendranathan (1982) generate the upper and lower bounds of $R_\rho$. The statistics $g_\rho$ and $d_\rho$ can be used with the same bounds: they satisfy $R_\rho \le g_\rho \le d_\rho$, and they are equivalent for large panels. Therefore, you can also use the $R_\rho$ statistic to test the white-noise null hypothesis. PROC PANEL produces two p-values for the random walk test: Pr > BFNLower and Pr > BFNUpper. Pr > BFNLower is always less than or equal to Pr > BFNUpper. If Pr > BFNUpper is less than or equal to the significance level, the null hypothesis that $\rho = 1$ is rejected. If Pr > BFNLower is greater than or equal to the significance level, the null hypothesis is accepted.
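A sketch of the quadratic-form version of $R_\rho$, building $\mathbf{F}$ directly from its symmetric closed form $\mathbf{F}_{tt'} = \min(t,t')\,(T - \max(t,t'))/T$ (the helper name `bfn_r` and the $N \times (T-1)$ residual layout are illustrative):

```python
import numpy as np

def bfn_r(dresid):
    """BFN R_rho statistic from an N x (T-1) array of first-difference
    residuals. Illustrative sketch using the matrix form R = d'd / d'F*d."""
    N, Tm1 = dresid.shape
    T = Tm1 + 1
    t = np.arange(1, T)  # indices 1, ..., T-1
    # symmetric F: min(t, t') * (T - max(t, t')) / T
    F = np.minimum.outer(t, t) * (T - np.maximum.outer(t, t)) / T
    num = np.sum(dresid**2)
    den = sum(float(row @ F @ row) for row in dresid)  # block-diagonal F*
    return num / den
```

Because $\mathbf{F}^*$ is block diagonal, the quadratic form is evaluated cross section by cross section; for example, constant first-difference residuals with $T = 4$ give $R_\rho = 12/20 = 0.6$, matching the summation form term by term.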

The p-values that are reported for the Durbin-Watson statistic, the Berenblut-Webb statistic, and the BFN $R_\rho$ statistic are generated by simulating the lower and upper bounds. Bhargava, Franzini, and Narendranathan (1982) use the Imhof routine with numerical integration and provide lower and upper bounds only at the 5% significance level. Modern techniques enable the simulation of bounds at different percentiles, so you can test against different significance levels. Using the bounds and using the p-values to interpret test results are therefore equivalent.

Last updated: June 19, 2025