6.2. Estimation of partial regression coefficients

The mathematics behind the estimation of the OLS estimators in the multiple regression case is very similar to that of the simple model, and the idea is the same, but the formulas for the sample estimators are slightly different. The sample estimators for model (6.2) are given by the following expressions:

b0 = Ȳ − b1X̄1 − b2X̄2 (6.8)

b1 = (SY1/S1² − rY2 r12 SY/S1) / (1 − r12²) (6.9)

b2 = (SY2/S2² − rY1 r12 SY/S2) / (1 − r12²) (6.10)
where SY1 is the sample covariance between Y and X1, r12 is the sample correlation between X1 and X2, rY2 is the sample correlation between Y and X2, SY is the sample standard deviation for Y, and S1 is the sample standard deviation for X1. Observe the similarity between the sample estimators of the multiple regression model and those of the simple regression model. The intercept is just an extension of the estimator for the simple regression model, incorporating the additional variable. The two partial regression slope coefficients are slightly more involved, but possess an interesting property. In the case of (6.9) we have that

b1 = SY1/S1² when r12 = 0

That is, if the correlation between the two explanatory variables is zero, the multiple regression coefficients coincide with the sample estimators of the simple regression model. However, if the correlation between X1 and X2 equals one (or minus one), the estimators are not defined, since that would lead to division by zero, which is meaningless. High correlation between explanatory variables is referred to as a collinearity problem and will be discussed further in Chapter 11. Equations (6.8)-(6.10) can be generalized further to include more parameters. When doing that, all pairwise correlation coefficients are included in the sample estimators, and in order for them to coincide with those of the simple model, they all have to be zero.
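As a minimal sketch of how these expressions work, assuming NumPy and simulated data (the data-generating process and all numbers are purely illustrative), the estimators (6.8)-(6.10) can be computed from sample covariances, correlations and standard deviations and checked against an ordinary least-squares fit:

```python
# Sketch: compute (6.8)-(6.10) from sample moments and compare with OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # correlated regressors
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Sample moments used in (6.8)-(6.10)
s_y1 = np.cov(y, x1)[0, 1]                    # cov(Y, X1)
s_y2 = np.cov(y, x2)[0, 1]                    # cov(Y, X2)
s_y, s_1, s_2 = np.std(y, ddof=1), np.std(x1, ddof=1), np.std(x2, ddof=1)
r_12 = np.corrcoef(x1, x2)[0, 1]
r_y1 = np.corrcoef(y, x1)[0, 1]
r_y2 = np.corrcoef(y, x2)[0, 1]

b1 = (s_y1 / s_1**2 - r_y2 * r_12 * s_y / s_1) / (1 - r_12**2)   # (6.9)
b2 = (s_y2 / s_2**2 - r_y1 * r_12 * s_y / s_2) / (1 - r_12**2)   # (6.10)
b0 = y.mean() - b1 * x1.mean() - b2 * x2.mean()                  # (6.8)

# Cross-check against OLS computed by least squares
X = np.column_stack([np.ones(n), x1, x2])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round([b0, b1, b2], 4), np.round(beta_ols, 4))
```

Setting the correlation between x1 and x2 to zero in the simulation makes b1 collapse to SY1/S1², the simple regression estimator, as described above.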

The measure of fit in the multiple regression case follows the same definition as for the simple regression model, with the exception that the coefficient of determination is no longer the square of the simple correlation coefficient, but instead the square of what is called the multiple correlation coefficient.

In multiple regression analysis, we have a set of variables X1, X2, ... that are used to explain the variability of the dependent variable Y. The multivariate counterpart of the coefficient of determination R2 is the coefficient of multiple determination. The square root of the coefficient of multiple determination is the coefficient of multiple correlation, R, sometimes just called the multiple R. The multiple R can only take positive values, as opposed to the simple correlation coefficient, which can take both negative and positive values. In practice this statistic is of minor importance, even though it is reported in output generated by software such as Excel.
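As a minimal sketch, assuming NumPy and simulated data (purely illustrative), the coefficient of multiple determination can be computed as 1 − RSS/TSS and the multiple R as its square root; with an intercept in the model, the multiple R also equals the simple correlation between Y and the fitted values:

```python
# Sketch: coefficient of multiple determination and the multiple R.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ b

rss = np.sum((y - y_hat) ** 2)        # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)     # total sum of squares
r_squared = 1 - rss / tss             # coefficient of multiple determination
multiple_r = np.sqrt(r_squared)       # multiple R: always non-negative

# The multiple R equals the correlation between Y and its fitted values
print(round(multiple_r, 4), round(np.corrcoef(y, y_hat)[0, 1], 4))
```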

6.3. The joint hypothesis test

An important application of multiple regression analysis is the possibility to test several parameters simultaneously. Assume the following multiple regression model:

Y = B0 + B1X1 + B2X2 + B3X3 + U (6.11)

Using this model we may test the following hypotheses:

(a) H0: B1 = 0 against H1: B1 ≠ 0
(b) H0: B1 = B2 = 0 against H1: at least one of B1 and B2 is different from zero
(c) H0: B1 = B2 = B3 = 0 against H1: at least one of B1, B2 and B3 is different from zero
The first hypothesis concerns a single parameter test and is carried out in the same way here as in the simple regression model. We will therefore not go through those steps again, but instead focus on the simultaneous tests given by hypotheses (b) and (c).

6.3.1. Testing a subset of coefficients

The hypothesis given by (b) represents the case of testing a subset of coefficients in a regression model that contains several (more than two) explanatory variables. In this example we choose to test B1 and B2, but it could of course be any other group of coefficients included in the model. Let us start by rephrasing the hypothesis, with the emphasis on the alternative hypothesis:

H0: B1 = B2 = 0
H1: B1 ≠ 0 and/or B2 ≠ 0
It is often believed that in order to reject the null hypothesis, both (all) coefficients need to be different from zero. That is just wrong. It is important to understand that the complement of the null hypothesis in this situation is represented by the case where at least one of the coefficients is different from zero.

Whenever working with tests of several parameters simultaneously we cannot use the standard t-test; instead we should use an F-test. An F-test is based on a test statistic that follows the F-distribution. We would like to know whether the restricted model stated under the null hypothesis is sufficient, or whether the model under the alternative hypothesis offers a significant improvement in fit. So we are basically testing two specifications against each other, which are given by:

Model according to the null hypothesis: Y = B0 + B3 X3 + U (6.12)

Model according to the alternative hypothesis: Y = B0 + B1X1 + B2 X2 + B3 X3 + U (6.13)

A way to compare these two models is to see how different their RSS (Residual Sum of Squares) are from each other. We know that the better the fit of a model, the smaller its RSS. When looking at specification (6.12) you should think of it as a restricted version of the full model given by (6.13), since two of the parameters are forced to zero. In (6.13), on the other hand, the two parameters are free to take any value the data allows them to take. Hence, the two specifications generate a restricted RSS (RSSR) obtained from (6.12) and an unrestricted RSS (RSSU) obtained from (6.13). In practice this means that you have to run each model separately using the same data set, collect the RSS-values from each regression, and then calculate the test value.

The test value can be obtained from the test statistic (test function) given by the following formula:

F = [(RSSR − RSSU)/df1] / [RSSU/df2] ~ F(df1, df2) (6.14)

where df1 and df2 refer to the degrees of freedom for the numerator and the denominator respectively. The degrees of freedom for the numerator is simply the difference between the degrees of freedom of the two residual sums of squares. Hence df1 = (n − k1) − (n − k2) = k2 − k1, where k1 is the number of parameters in the restricted model and k2 is the number of parameters in the unrestricted model, while df2 = n − k2. In this case we have that k2 − k1 = 2.
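As a minimal sketch of this procedure, assuming NumPy and simulated data (the data-generating process and coefficient values are purely illustrative), the restricted and unrestricted specifications can be estimated separately, their residual sums of squares collected, and the test value formed according to (6.14):

```python
# Sketch: joint F-test of B1 = B2 = 0 using restricted and unrestricted RSS.
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    return np.sum(resid ** 2)

rng = np.random.default_rng(2)
n = 1000
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * x1 + 0.5 * x2 + 0.3 * x3 + rng.normal(size=n)

ones = np.ones(n)
X_r = np.column_stack([ones, x3])            # restricted model (6.12): B1 = B2 = 0
X_u = np.column_stack([ones, x1, x2, x3])    # unrestricted model (6.13)

rss_r, rss_u = rss(y, X_r), rss(y, X_u)
k1, k2 = X_r.shape[1], X_u.shape[1]          # number of parameters in each model
df1, df2 = k2 - k1, n - k2                   # numerator and denominator df

F = ((rss_r - rss_u) / df1) / (rss_u / df2)  # test value from (6.14)
print(round(F, 2), df1, df2)
```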

When there is very little difference in fit between the two models, the difference given in the numerator will be very small and the F-value will be close to zero. However, if the fits differ extensively, the F-value will be large. Since the test statistic given by (6.14) has a known distribution (if the null hypothesis is true), we will be able to say when the difference is sufficiently large for the null hypothesis to be rejected.

Example 6.2

Consider the two specifications given by (6.12) and (6.13), and assume that we have a sample of 1000 observations. Assume further that we would like to test the joint hypothesis discussed above. Running the two specifications on our sample, we obtained the information given in Table 6.1.

Table 6.1 Summary results from the two regressions

Using the information in Table 6.1 we may calculate the test value for our test.

The calculated test value has to be compared with a critical value. In order to find a critical value we need to specify a significance level. We choose the standard level of 5 percent and find the following value in the table: Fc = 4.61.

Observe that the hypothesis that we are dealing with here is one-sided, since the restricted RSS can never be lower than the unrestricted RSS. Comparing the critical value with the test value, we see that the test value is much larger, which means that we can reject the null hypothesis. That is, the parameters involved in the test have a simultaneous effect on the dependent variable that is significantly different from zero.
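As a minimal sketch of the decision rule, assuming SciPy is available (the test value used below is hypothetical and only stands in for the value calculated from Table 6.1), the critical value can be looked up from the F-distribution for the chosen significance level and degrees of freedom:

```python
# Sketch: compare a calculated F test value with the critical value.
from scipy.stats import f

alpha = 0.05           # significance level
df1, df2 = 2, 996      # numerator and denominator degrees of freedom
F_value = 12.0         # hypothetical test value (stand-in for the calculation above)

F_critical = f.ppf(1 - alpha, df1, df2)   # upper-tail critical value
p_value = f.sf(F_value, df1, df2)         # upper-tail p-value

# One-sided test: reject H0 when the test value exceeds the critical value
print(F_value > F_critical, round(F_critical, 2), p_value)
```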

6.3.2. Testing the regression equation

This test is often referred to as the test of the overall significance of the regression, and by performing the test we ask whether the included variables have a simultaneous effect on the dependent variable. Alternatively, we ask whether the population coefficients (excluding the intercept) are simultaneously equal to zero, or whether at least one of them is different from zero.

In order to test this hypothesis, we compare the following model specifications against each other:

Model according to the null hypothesis: Y = B0 + U (6.15)

Model according to the alternative hypothesis: Y = B0 + B1X1 + B2 X2 + B3 X3 + U (6.16)

The test function that should be used for this test has the same structure as before, but with some important differences that make it sufficient to estimate just one regression, for the full model, instead of one regression for each specification. To see this, we can rewrite RSSR in the following way:

RSSR = Σ(Yi − b0)² = Σ(Yi − Ȳ)² = TSS (6.17)

since the OLS estimate of B0 under the null hypothesis (6.15) is simply the sample mean of Y, so that the restricted RSS coincides with the Total Sum of Squares (TSS) of the dependent variable.
Hence the test function can be expressed in sums of squares that can be found in the ANOVA table of the unrestricted model. The test function therefore becomes:

F = [(TSS − RSSU)/df1] / [RSSU/df2] = [ESS/(k − 1)] / [RSSU/(n − k)] (6.18)

where ESS = TSS − RSSU is the Explained Sum of Squares, k is the number of parameters in the unrestricted model, and n is the number of observations.
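As a minimal sketch, assuming NumPy and simulated data (purely illustrative), the following code estimates only the unrestricted model and forms the test value (6.18) from the ANOVA sums of squares:

```python
# Sketch: overall significance test from the ANOVA sums of squares.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * x1 + 0.5 * x2 + 0.3 * x3 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2, x3])
b = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ b

tss = np.sum((y - y.mean()) ** 2)      # total sum of squares = restricted RSS
rss = np.sum((y - y_hat) ** 2)         # residual sum of squares (unrestricted)
ess = tss - rss                        # explained sum of squares

k = X.shape[1]                         # number of parameters incl. intercept
F = (ess / (k - 1)) / (rss / (n - k))  # test value from (6.18)
print(round(F, 2), k - 1, n - k)
```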
Example 6.3

Assume that we have access to a sample of 1000 observations and that we would like to estimate the parameters in (6.16) and test the overall significance of the model. Running the regression using our sample, we obtained the following ANOVA table:

Table 6.2 ANOVA table

Using the information from Table 6.2 we can calculate the test value:

This is a very large test value. We can therefore conclude that the included variables explain a significant part of the variation of the dependent variable.

 