The presence of cross-sectional effects causes serial correlation in the errors. Therefore, serial correlation is often tested jointly with cross-sectional effects. Joint and conditional tests for both serial correlation and cross-sectional effects have been covered extensively in the literature.
Baltagi and Li (1991) derive the LM test statistic, which jointly tests for zero first-order serial correlation and random cross-sectional effects under normality and homoscedasticity. The test statistic is independent of the form of serial correlation, so it can be used with either AR(1) or MA(1) error terms. The null hypothesis is a white-noise component: $H_0: \sigma_{\mu}^2 = 0,\ \theta = 0$ for MA(1) errors, with the MA coefficient $\theta$, or $H_0: \sigma_{\mu}^2 = 0,\ \rho = 0$ for AR(1) errors, with the AR coefficient $\rho$. The alternative is either a one-way random-effects model (cross-sectional effects), or first-order serial correlation (AR(1) or MA(1)) in the errors, or both. Under the null hypothesis, the model can be estimated by pooled (OLS) estimation. Denote the residuals as $\hat{u}_{it}$. The test statistic is

$$LM_J = \frac{NT^2}{2(T-1)(T-2)}\left(A^2 - 4AB + 2TB^2\right)$$

where

$$A = \frac{\sum_{i=1}^{N}\left(\sum_{t=1}^{T}\hat{u}_{it}\right)^2}{\sum_{i=1}^{N}\sum_{t=1}^{T}\hat{u}_{it}^2} - 1 \qquad B = \frac{\sum_{i=1}^{N}\sum_{t=2}^{T}\hat{u}_{it}\hat{u}_{i,t-1}}{\sum_{i=1}^{N}\sum_{t=1}^{T}\hat{u}_{it}^2}$$

Under the null hypothesis, $LM_J$ is asymptotically distributed as $\chi^2_2$.
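The following sketch computes $LM_J$ from these formulas. It assumes a balanced panel whose pooled OLS residuals are arranged as an $N \times T$ NumPy array; the function name and array layout are illustrative conventions, not part of PROC PANEL.

```python
import numpy as np
from scipy.stats import chi2

def baltagi_li_joint_lm(u):
    """Joint LM test of Baltagi and Li (1991) for H0: no random
    cross-sectional effects and no first-order serial correlation.
    u: N x T array of pooled OLS residuals (balanced panel)."""
    N, T = u.shape
    ssq = np.sum(u**2)
    A = np.sum(u.sum(axis=1)**2) / ssq - 1.0    # cross-sectional-effects term
    B = np.sum(u[:, 1:] * u[:, :-1]) / ssq      # lag-1 serial-correlation term
    lm = N * T**2 / (2.0 * (T - 1) * (T - 2)) * (A**2 - 4*A*B + 2*T*B**2)
    return lm, chi2.sf(lm, 2)                   # statistic and chi-square(2) p-value
```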
Wooldridge (2002, sec. 10.4.4) suggests a test for the absence of an unobserved effect. Under the null hypothesis $H_0: \sigma_c^2 = 0$, the errors $u_{it}$ are serially uncorrelated. To test $H_0$, Wooldridge (2002) proposes to test for AR(1) serial correlation. The test statistic that he proposes is

$$W = \frac{\sum_{i=1}^{N}\sum_{t=1}^{T-1}\sum_{s=t+1}^{T}\hat{u}_{it}\hat{u}_{is}}{\left[\sum_{i=1}^{N}\left(\sum_{t=1}^{T-1}\sum_{s=t+1}^{T}\hat{u}_{it}\hat{u}_{is}\right)^2\right]^{1/2}}$$

where $\hat{u}_{it}$ are the pooled OLS residuals. Under the null hypothesis, $W$ is asymptotically distributed as standard normal. The test statistic $W$ can detect many types of serial correlation in the error term $u$, so it has power against both the one-way random-effects specification and serial correlation in the error terms.
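A sketch of the computation, under the same assumed residual layout: for each cross section, the double sum over $t < s$ follows from the identity $\sum_{t<s}\hat{u}_{it}\hat{u}_{is} = \frac{1}{2}\left[\left(\sum_t \hat{u}_{it}\right)^2 - \sum_t \hat{u}_{it}^2\right]$.

```python
import numpy as np
from scipy.stats import norm

def wooldridge_unobserved_effects(u):
    """Wooldridge (2002) test for the absence of an unobserved effect.
    u: N x T array of pooled OLS residuals."""
    # w_i = sum over t < s of u_it * u_is, one value per cross section
    w = 0.5 * (u.sum(axis=1)**2 - np.sum(u**2, axis=1))
    W = w.sum() / np.sqrt(np.sum(w**2))
    return W, 2 * norm.sf(abs(W))   # statistic and two-sided normal p-value
```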
Bera, Sosa Escudero, and Yoon (2001) point out that the standard specification tests, such as the Honda (1985) test described in the section Honda UMP Test and Moulton and Randolph SLM Test, are not valid when they test for either cross-sectional random effects or serial correlation without considering the presence of the other effect. They suggest a modified Rao's score (RS) test. When $A$ and $B$ are defined as in Baltagi and Li (1991), the test statistic for testing serial correlation under random cross-sectional effects is

$$RS_{\rho}^{*} = \frac{NT^2\left(B - A/T\right)^2}{(T-1)\left(1 - 2/T\right)}$$

Baltagi and Li (1991, 1995) derive the conventional RS test when the cross-sectional random effects are assumed to be absent:

$$RS_{\rho} = \frac{NT^2 B^2}{T-1}$$

Symmetrically, to test for the cross-sectional random effects in the presence of serial correlation, the modified Rao's score test statistic is

$$RS_{\mu}^{*} = \frac{NT\left(A - 2B\right)^2}{2(T-1)\left(1 - 2/T\right)}$$

and the conventional Rao's score test statistic, $RS_{\mu} = NTA^2/\left(2(T-1)\right)$, is given in Breusch and Pagan (1980). The test statistics are asymptotically distributed as $\chi^2_1$.

Because $\sigma_{\mu}^2 \geq 0$, one-sided tests are expected to be more powerful. The one-sided test statistics can be derived by taking the signed square roots of the two-sided statistics:

$$RSO_{\rho}^{*} = \sqrt{\frac{NT^2}{(T-1)(1-2/T)}}\left(B - \frac{A}{T}\right) \qquad RSO_{\rho} = \sqrt{\frac{NT^2}{T-1}}\,B$$

$$RSO_{\mu}^{*} = \sqrt{\frac{NT}{2(T-1)(1-2/T)}}\left(A - 2B\right) \qquad RSO_{\mu} = \sqrt{\frac{NT}{2(T-1)}}\,A$$

Each one-sided statistic is asymptotically distributed as standard normal.
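The following sketch collects the two-sided and one-sided statistics under the same assumptions (balanced panel, $N \times T$ pooled OLS residuals; the function name and dictionary keys are illustrative):

```python
import numpy as np

def rao_score_family(u):
    """Modified (BSY 2001) and conventional Rao's score statistics.
    u: N x T array of pooled OLS residuals."""
    N, T = u.shape
    ssq = np.sum(u**2)
    A = np.sum(u.sum(axis=1)**2) / ssq - 1.0
    B = np.sum(u[:, 1:] * u[:, :-1]) / ssq
    c = 1.0 - 2.0 / T
    return {
        # two-sided statistics, each asymptotically chi-square(1)
        "RS_rho_star": N * T**2 * (B - A/T)**2 / ((T - 1) * c),
        "RS_rho":      N * T**2 * B**2 / (T - 1),
        "RS_mu_star":  N * T * (A - 2*B)**2 / (2 * (T - 1) * c),
        "RS_mu":       N * T * A**2 / (2 * (T - 1)),
        # one-sided signed square roots, each asymptotically N(0, 1)
        "RSO_rho_star": np.sqrt(N * T**2 / ((T - 1) * c)) * (B - A/T),
        "RSO_rho":      np.sqrt(N * T**2 / (T - 1)) * B,
        "RSO_mu_star":  np.sqrt(N * T / (2 * (T - 1) * c)) * (A - 2*B),
        "RSO_mu":       np.sqrt(N * T / (2 * (T - 1))) * A,
    }
```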
Baltagi and Li (1995) propose the two-sided LM test statistic for testing a white-noise component in a fixed one-way model ($H_0: \theta = 0$ or $H_0: \rho = 0$, given that the $\mu_i$ are fixed effects):

$$LM = \frac{NT^2\tilde{B}^2}{T-1} \qquad \tilde{B} = \frac{\sum_{i=1}^{N}\sum_{t=2}^{T}\tilde{v}_{it}\tilde{v}_{i,t-1}}{\sum_{i=1}^{N}\sum_{t=1}^{T}\tilde{v}_{it}^2}$$

where $\tilde{v}_{it}$ are the residuals from the fixed one-way model (FIXONE). The LM test statistic is asymptotically distributed as $\chi^2_1$ under the null hypothesis. The one-sided LM test with alternative hypothesis $H_1: \rho > 0$ (or $\theta > 0$) is

$$LM^{*} = \sqrt{\frac{NT^2}{T-1}}\,\tilde{B}$$

which is asymptotically distributed as standard normal.
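A sketch under the same conventions, with the FIXONE residuals supplied as an $N \times T$ array:

```python
import numpy as np

def baltagi_li_95(v):
    """Baltagi-Li (1995) serial-correlation tests in a fixed one-way model.
    v: N x T array of FIXONE residuals."""
    N, T = v.shape
    B = np.sum(v[:, 1:] * v[:, :-1]) / np.sum(v**2)
    lm_two_sided = N * T**2 * B**2 / (T - 1)         # asymptotically chi-square(1)
    lm_one_sided = np.sqrt(N * T**2 / (T - 1)) * B   # asymptotically N(0, 1)
    return lm_two_sided, lm_one_sided
```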
Bhargava, Franzini, and Narendranathan (1982) propose a test of serial correlation that uses the panel Durbin-Watson statistic

$$d = \frac{\sum_{i=1}^{N}\sum_{t=2}^{T}\left(\tilde{v}_{it} - \tilde{v}_{i,t-1}\right)^2}{\sum_{i=1}^{N}\sum_{t=1}^{T}\tilde{v}_{it}^2}$$

where $\tilde{v}_{it}$ are the residuals from the fixed one-way model (FIXONE). The test statistic ranges from 0 to 4, where $d = 2$ indicates no serial correlation. Values closer to 0 indicate positive serial correlation, and values closer to 4 indicate negative serial correlation. A value of 0 indicates a random walk.
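The statistic itself is straightforward to compute from the $N \times T$ array of FIXONE residuals (a sketch; the p-values require the simulated bounds described below):

```python
import numpy as np

def bfn_durbin_watson(v):
    """Panel Durbin-Watson statistic of Bhargava, Franzini, and
    Narendranathan (1982). v: N x T array of FIXONE residuals."""
    return np.sum(np.diff(v, axis=1)**2) / np.sum(v**2)
```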
The PANEL procedure outputs three Durbin-Watson tests for serial correlation: a test against positive serial correlation ($H_0: \rho = 0$ versus $H_1: \rho > 0$), a test of the random-walk null hypothesis, and a test against negative serial correlation ($H_0: \rho = 0$ versus $H_1: \rho < 0$). The first two tests report $d$ as the test statistic, and the third test reports $4 - d$, where values of $4 - d$ close to 0 indicate negative correlation. In finite samples, the mechanics of the Durbin-Watson test produce an indeterminate region, which is a region of uncertainty about whether to reject the null hypothesis. Because of this ambiguity, all three tests report two p-values. The first test and the third test produce Pr < DWLower and Pr < DWUpper. The second test produces Pr > DWLower and Pr > DWUpper. For more information about the second test, see the section BFN R Statistics.
For the first and the third test, Pr < DWLower is always greater than or equal to Pr < DWUpper. If Pr < DWLower is less than or equal to the significance level, then the null hypothesis that $\rho = 0$ is rejected. If Pr < DWUpper is greater than or equal to the significance level, then the null hypothesis is accepted. If the significance level falls between the two p-values, the test is inconclusive. The two p-values get closer as $N$ increases.
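This decision rule can be summarized in a small helper (illustrative only; PROC PANEL reports the two p-values directly):

```python
def bounds_decision(p_lower, p_upper, alpha=0.05):
    """Decision rule for a bounds-based test such as the panel
    Durbin-Watson test, where p_lower >= p_upper."""
    if p_lower <= alpha:
        return "reject"
    if p_upper >= alpha:
        return "accept"
    return "inconclusive"   # alpha falls in the indeterminate region
```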
Bhargava, Franzini, and Narendranathan (1982) also suggest using the Berenblut-Webb statistic, which is a locally most powerful invariant test in the neighborhood of $\rho = 1$. The test statistic is

$$g = \frac{\sum_{i=1}^{N}\sum_{t=2}^{T}\hat{e}_{it}^2}{\sum_{i=1}^{N}\sum_{t=1}^{T}\tilde{v}_{it}^2}$$

where $\hat{e}_{it}$ are the residuals from the first-difference estimation and $\tilde{v}_{it}$ are the residuals from the fixed one-way model (FIXONE). The tests for the Berenblut-Webb statistic are the same as the three tests that are produced for the Durbin-Watson statistic. All three tests produce two p-values, and the interpretation of these p-values is the same as that for the Durbin-Watson statistic.
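A sketch under the stated definition, with the first-difference residuals as an $N \times (T-1)$ array and the FIXONE residuals as an $N \times T$ array:

```python
import numpy as np

def berenblut_webb(e_fd, v):
    """Berenblut-Webb g statistic: sum of squared first-difference
    residuals over sum of squared FIXONE residuals."""
    return np.sum(e_fd**2) / np.sum(v**2)
```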
Bhargava, Franzini, and Narendranathan (1982) suggest using the R statistic to test whether the residuals are from a random walk. You can also use the Durbin-Watson and Berenblut-Webb statistics to test the random-walk null hypothesis on the basis of the lower bound and upper bound that are generated for the R statistic. The null hypothesis is $H_0: \rho = 1$, and the alternative hypothesis is $H_1: \rho < 1$. The R statistic is a ratio of quadratic forms in the residuals that is constructed from $I_N \otimes G$, where $G$ is a symmetric matrix; see Bhargava, Franzini, and Narendranathan (1982) for the elements of $G$ and the exact form of the statistic.
Bhargava, Franzini, and Narendranathan (1982) generate the upper and lower bounds of the R statistic. The statistics $g$ and $d$ can be used with the same bounds. They satisfy an ordering inequality (see Bhargava, Franzini, and Narendranathan 1982), and they are equivalent for large panels. Therefore, you can also use the R statistic to test the white-noise null hypothesis. PROC PANEL produces two p-values for the random-walk test: Pr > BFNLower and Pr > BFNUpper. Pr > BFNLower is always smaller than or equal to Pr > BFNUpper. If Pr > BFNUpper is less than or equal to the significance level, the null hypothesis that $\rho = 1$ is rejected. If Pr > BFNLower is greater than or equal to the significance level, the null hypothesis is accepted.
The p-values that are reported for the Durbin-Watson statistic, the Berenblut-Webb statistic, and the BFN R statistic are generated by simulating the lower bounds and upper bounds. Bhargava, Franzini, and Narendranathan (1982) use the Imhof routine with numerical integration and provide lower bounds and upper bounds only at the 5% significance level. Modern techniques enable you to simulate lower bounds and upper bounds at different percentiles, so you can test against different significance levels. In short, interpreting the test results by using the bounds and by using the p-values are essentially equivalent.