Title:
Basic econometrics
Author:
Gujarati, Damodar N.
ISBN:
9780070252141
9780071139632
9780072345759
9780071139649
Edition:
3rd ed.
Publication Info:
New York : McGraw-Hill, 1995.
Physical Description:
xxiii, 838 pages : illustrations ; 25 cm
General Note:
"International edition"--Title page verso.
Holdings:
| Library | Material Type | Barcode | Call Number | Status |
|---|---|---|---|---|
| Pamukkale Merkez Kütüphanesi | Book | 0104234 | HB139 G84 1995 | Unknown |
Summary
Gujarati's Basic Econometrics provides an elementary but comprehensive introduction to econometrics without resorting to matrix algebra, calculus, or statistics beyond the elementary level. Because of the way the book is organized, it may be used at a variety of levels of rigor. For example, if matrix algebra is used, theoretical exercises may be omitted. A CD of data sets is provided with the text.
Table of Contents
| Preface | p. xxi |
| Introduction | p. 1 |
| Part 1 Single-Equation Regression Models | |
| 1 The Nature of Regression Analysis | p. 15 |
| 1.1 Historical Origin of the Term "Regression" | p. 15 |
| 1.2 The Modern Interpretation of Regression | p. 16 |
| Examples | p. 16 |
| 1.3 Statistical vs. Deterministic Relationships | p. 19 |
| 1.4 Regression vs. Causation | p. 20 |
| 1.5 Regression vs. Correlation | p. 21 |
| 1.6 Terminology and Notation | p. 22 |
| 1.7 The Nature and Sources of Data for Econometric Analysis | p. 23 |
| Types of Data | p. 23 |
| The Sources of Data | p. 24 |
| The Accuracy of Data | p. 26 |
| 1.8 Summary and Conclusions | p. 27 |
| Exercises | p. 28 |
| Appendix 1A | p. 29 |
| 1A.1 Sources of Economic Data | p. 29 |
| 1A.2 Sources of Financial Data | p. 31 |
| 2 Two-Variable Regression Analysis: Some Basic Ideas | p. 32 |
| 2.1 A Hypothetical Example | p. 32 |
| 2.2 The Concept of Population Regression Function (PRF) | p. 36 |
| 2.3 The Meaning of the Term "Linear" | p. 36 |
| Linearity in the Variables | p. 37 |
| Linearity in the Parameters | p. 37 |
| 2.4 Stochastic Specification of PRF | p. 38 |
| 2.5 The Significance of the Stochastic Disturbance Term | p. 39 |
| 2.6 The Sample Regression Function (SRF) | p. 41 |
| 2.7 Summary and Conclusions | p. 45 |
| Exercises | p. 45 |
| 3 Two-Variable Regression Model: The Problem of Estimation | p. 52 |
| 3.1 The Method of Ordinary Least Squares | p. 52 |
| 3.2 The Classical Linear Regression Model: The Assumptions Underlying the Method of Least Squares | p. 59 |
| How Realistic Are These Assumptions? | p. 68 |
| 3.3 Precision or Standard Errors of Least-Squares Estimates | p. 69 |
| 3.4 Properties of Least-Squares Estimators: The Gauss-Markov Theorem | p. 72 |
| 3.5 The Coefficient of Determination r²: A Measure of "Goodness of Fit" | p. 74 |
| 3.6 A Numerical Example | p. 80 |
| 3.7 Illustrative Examples | p. 83 |
| Coffee Consumption in the United States, 1970-1980 | p. 83 |
| Keynesian Consumption Function for the United States, 1980-1991 | p. 84 |
| 3.8 Computer Output for the Coffee Demand Function | p. 85 |
| 3.9 A Note on Monte Carlo Experiments | p. 85 |
| 3.10 Summary and Conclusions | p. 86 |
| Exercises | p. 87 |
| Questions | p. 87 |
| Problems | p. 89 |
| Appendix 3A | p. 94 |
| 3A.1 Derivation of Least-Squares Estimates | p. 94 |
| 3A.2 Linearity and Unbiasedness Properties of Least-Squares Estimators | p. 94 |
| 3A.3 Variances and Standard Errors of Least-Squares Estimators | p. 95 |
| 3A.4 Covariance between β̂₁ and β̂₂ | p. 96 |
| 3A.5 The Least-Squares Estimator of σ² | p. 96 |
| 3A.6 Minimum-Variance Property of Least-Squares Estimators | p. 97 |
| 3A.7 SAS Output of the Coffee Demand Function (3.7.1) | p. 99 |
| 4 The Normality Assumption: Classical Normal Linear Regression Model (CNLRM) | p. 101 |
| 4.1 The Probability Distribution of Disturbances uᵢ | p. 101 |
| 4.2 The Normality Assumption | p. 102 |
| 4.3 Properties of OLS Estimators under the Normality Assumption | p. 104 |
| 4.4 The Method of Maximum Likelihood (ML) | p. 107 |
| 4.5 Probability Distributions Related to the Normal Distribution: The t, Chi-square (χ²), and F Distributions | p. 107 |
| 4.6 Summary and Conclusions | p. 109 |
| Appendix 4A | p. 110 |
| Maximum Likelihood Estimation of Two-Variable Regression Model | p. 110 |
| Maximum Likelihood Estimation of the Consumption-Income Example | p. 113 |
| Appendix 4A Exercises | p. 113 |
| 5 Two-Variable Regression: Interval Estimation and Hypothesis Testing | p. 115 |
| 5.1 Statistical Prerequisites | p. 115 |
| 5.2 Interval Estimation: Some Basic Ideas | p. 116 |
| 5.3 Confidence Intervals for Regression Coefficients β₁ and β₂ | p. 117 |
| Confidence Interval for β₂ | p. 117 |
| Confidence Interval for β₁ | p. 119 |
| Confidence Interval for β₁ and β₂ Simultaneously | p. 120 |
| 5.4 Confidence Interval for σ² | p. 120 |
| 5.5 Hypothesis Testing: General Comments | p. 121 |
| 5.6 Hypothesis Testing: The Confidence-Interval Approach | p. 122 |
| Two-Sided or Two-Tail Test | p. 122 |
| One-Sided or One-Tail Test | p. 124 |
| 5.7 Hypothesis Testing: The Test-of-Significance Approach | p. 124 |
| Testing the Significance of Regression Coefficients: The t-Test | p. 124 |
| Testing the Significance of σ²: The χ² Test | p. 128 |
| 5.8 Hypothesis Testing: Some Practical Aspects | p. 129 |
| The Meaning of "Accepting" or "Rejecting" a Hypothesis | p. 129 |
| The "Zero" Null Hypothesis and the "2-t" Rule of Thumb | p. 129 |
| Forming the Null and Alternative Hypotheses | p. 130 |
| Choosing α, the Level of Significance | p. 131 |
| The Exact Level of Significance: The p Value | p. 132 |
| Statistical Significance versus Practical Significance | p. 133 |
| The Choice between Confidence-Interval and Test-of-Significance Approaches to Hypothesis Testing | p. 134 |
| 5.9 Regression Analysis and Analysis of Variance | p. 134 |
| 5.10 Application of Regression Analysis: The Problem of Prediction | p. 137 |
| Mean Prediction | p. 137 |
| Individual Prediction | p. 138 |
| 5.11 Reporting the Results of Regression Analysis | p. 140 |
| 5.12 Evaluating the Results of Regression Analysis | p. 140 |
| Normality Test | p. 141 |
| Other Tests of Model Adequacy | p. 144 |
| 5.13 Summary and Conclusions | p. 144 |
| Exercises | p. 145 |
| Questions | p. 145 |
| Problems | p. 147 |
| Appendix 5A | p. 152 |
| 5A.1 Derivation of Equation (5.3.2) | p. 152 |
| 5A.2 Derivation of Equation (5.9.1) | p. 152 |
| 5A.3 Derivation of Equations (5.10.2) and (5.10.6) | p. 153 |
| Variance of Mean Prediction | p. 153 |
| Variance of Individual Prediction | p. 153 |
| 6 Extensions of the Two-Variable Linear Regression Model | p. 155 |
| 6.1 Regression through the Origin | p. 155 |
| r² for Regression-through-Origin Model | |
| An Illustrative Example: The Characteristic Line of Portfolio Theory | p. 159 |
| 6.2 Scaling and Units of Measurement | p. 161 |
| A Numerical Example: The Relationship between GPDI and GNP, United States, 1974-1983 | p. 163 |
| A Word about Interpretation | p. 164 |
| 6.3 Functional Forms of Regression Models | p. 165 |
| 6.4 How to Measure Elasticity: The Log-Linear Model | p. 165 |
| An Illustrative Example: The Coffee Demand Function Revisited | p. 167 |
| 6.5 Semilog Models: Log-Lin and Lin-Log Models | p. 169 |
| How to Measure the Growth Rate: The Log-Lin Model | p. 169 |
| The Lin-Log Model | p. 172 |
| 6.6 Reciprocal Models | p. 173 |
| An Illustrative Example: The Phillips Curve for the United Kingdom, 1950-1966 | p. 176 |
| 6.7 Summary of Functional Forms | p. 176 |
| 6.8 A Note on the Nature of the Stochastic Error Term: Additive versus Multiplicative Stochastic Error Term | p. 178 |
| 6.9 Summary and Conclusions | p. 179 |
| Exercises | p. 180 |
| Questions | p. 180 |
| Problems | p. 183 |
| Appendix 6A | p. 186 |
| 6A.1 Derivation of Least-Squares Estimators for Regression through the Origin | p. 186 |
| 6A.2 SAS Output of the Characteristic Line (6.1.12) | p. 189 |
| 6A.3 SAS Output of the United Kingdom Phillips Curve Regression (6.6.2) | p. 190 |
| 7 Multiple Regression Analysis: The Problem of Estimation | p. 191 |
| 7.1 The Three-Variable Model: Notation and Assumptions | p. 192 |
| 7.2 Interpretation of Multiple Regression Equation | p. 194 |
| 7.3 The Meaning of Partial Regression Coefficients | p. 195 |
| 7.4 OLS and ML Estimation of the Partial Regression Coefficients | p. 197 |
| OLS Estimators | p. 197 |
| Variances and Standard Errors of OLS Estimators | p. 198 |
| Properties of OLS Estimators | p. 199 |
| Maximum Likelihood Estimators | p. 201 |
| 7.5 The Multiple Coefficient of Determination R² and the Multiple Coefficient of Correlation R | p. 201 |
| 7.6 Example 7.1: The Expectations-Augmented Phillips Curve for the United States, 1970-1982 | p. 203 |
| 7.7 Simple Regression in the Context of Multiple Regression: Introduction to Specification Bias | p. 204 |
| 7.8 R² and the Adjusted R² | p. 207 |
| Comparing Two R² Values | p. 209 |
| Example 7.2: Coffee Demand Function Revisited | p. 210 |
| The "Game" of Maximizing R2 | p. 211 |
| 7.9 Partial Correlation Coefficients | p. 211 |
| Explanation of Simple and Partial Correlation Coefficients | p. 211 |
| Interpretation of Simple and Partial Correlation Coefficients | p. 213 |
| 7.10 Example 7.3: The Cobb-Douglas Production Function: More on Functional Form | p. 214 |
| 7.11 Polynomial Regression Models | p. 217 |
| Example 7.4: Estimating the Total Cost Function | p. 218 |
| Empirical Results | p. 220 |
| 7.12 Summary and Conclusions | p. 221 |
| Exercises | p. 221 |
| Questions | p. 221 |
| Problems | p. 224 |
| Appendix 7A | p. 231 |
| 7A.1 Derivation of OLS Estimators Given in Equations (7.4.3) and (7.4.5) | p. 231 |
| 7A.2 Equality between a₁ of (7.3.5) and β̂₂ of (7.4.7) | p. 232 |
| 7A.3 Derivation of Equation (7.4.19) | p. 232 |
| 7A.4 Maximum Likelihood Estimation of the Multiple Regression Model | p. 233 |
| 7A.5 The Proof that E(b₁₂) = β₂ + β₃b₃₂ (Equation 7.7.4) | p. 234 |
| 7A.6 SAS Output of the Expectations-Augmented Phillips Curve (7.6.2) | p. 236 |
| 7A.7 SAS Output of the Cobb-Douglas Production Function (7.10.4) | p. 237 |
| 8 Multiple Regression Analysis: The Problem of Inference | p. 238 |
| 8.1 The Normality Assumption Once Again | p. 238 |
| 8.2 Example 8.1: U.S. Personal Consumption and Personal Disposable Income Relation, 1956-1970 | p. 239 |
| 8.3 Hypothesis Testing in Multiple Regression: General Comments | p. 242 |
| 8.4 Hypothesis Testing about Individual Partial Regression Coefficients | p. 242 |
| 8.5 Testing the Overall Significance of the Sample Regression | p. 244 |
| The Analysis of Variance Approach to Testing the Overall Significance of an Observed Multiple Regression: The F Test | p. 245 |
| An Important Relationship between R² and F | p. 248 |
| The "Incremental," or "Marginal," Contribution of an Explanatory Variable | p. 250 |
| 8.6 Testing the Equality of Two Regression Coefficients | p. 254 |
| Example 8.2: The Cubic Cost Function Revisited | p. 255 |
| 8.7 Restricted Least Squares: Testing Linear Equality Restrictions | p. 256 |
| The t Test Approach | p. 256 |
| The F Test Approach: Restricted Least Squares | p. 257 |
| Example 8.3: The Cobb-Douglas Production Function for Taiwanese Agricultural Sector, 1958-1972 | p. 259 |
| General F Testing | p. 260 |
| 8.8 Comparing Two Regressions: Testing for Structural Stability of Regression Models | p. 262 |
| 8.9 Testing the Functional Form of Regression: Choosing between Linear and Log-Linear Regression Models | p. 265 |
| Example 8.5: The Demand for Roses | p. 266 |
| 8.10 Prediction with Multiple Regression | p. 267 |
| 8.11 The Troika of Hypothesis Tests: The Likelihood Ratio (LR), Wald (W), and Lagrange Multiplier (LM) Tests | p. 268 |
| 8.12 Summary and Conclusions | p. 269 |
| The Road Ahead | p. 269 |
| Exercises | p. 270 |
| Questions | p. 270 |
| Problems | p. 273 |
| Appendix 8A | p. 280 |
| Likelihood Ratio (LR) Test | p. 280 |
| 9 The Matrix Approach to Linear Regression Model | p. 282 |
| 9.1 The k-Variable Linear Regression Model | p. 282 |
| 9.2 Assumptions of the Classical Linear Regression Model in Matrix Notation | p. 284 |
| 9.3 OLS Estimation | p. 287 |
| An Illustration | p. 289 |
| Variance-Covariance Matrix of β̂ | p. 290 |
| Properties of OLS Vector β̂ | p. 291 |
| 9.4 The Coefficient of Determination R² in Matrix Notation | p. 292 |
| 9.5 The Correlation Matrix | p. 292 |
| 9.6 Hypothesis Testing about Individual Regression Coefficients in Matrix Notation | p. 293 |
| 9.7 Testing the Overall Significance of Regression: Analysis of Variance in Matrix Notation | p. 294 |
| 9.8 Testing Linear Restrictions: General F Testing Using Matrix Notation | p. 295 |
| 9.9 Prediction Using Multiple Regression: Matrix Formulation | p. 296 |
| Mean Prediction | p. 296 |
| Individual Prediction | p. 296 |
| Variance of Mean Prediction | p. 297 |
| Variance of Individual Prediction | p. 298 |
| 9.10 Summary of the Matrix Approach: An Illustrative Example | p. 298 |
| 9.11 Summary and Conclusions | p. 303 |
| Exercises | p. 304 |
| Appendix 9A | p. 309 |
| 9A.1 Derivation of k Normal or Simultaneous Equations | p. 309 |
| 9A.2 Matrix Derivation of Normal Equations | p. 310 |
| 9A.3 Variance-Covariance Matrix of β̂ | p. 310 |
| 9A.4 BLUE Property of OLS Estimators | p. 311 |
| Part 2 Relaxing the Assumptions of the Classical Model | |
| 10 Multicollinearity and Micronumerosity | p. 319 |
| 10.1 The Nature of Multicollinearity | p. 320 |
| 10.2 Estimation in the Presence of Perfect Multicollinearity | p. 323 |
| 10.3 Estimation in the Presence of "High" but "Imperfect" Multicollinearity | p. 325 |
| 10.4 Multicollinearity: Much Ado about Nothing? Theoretical Consequences of Multicollinearity | p. 325 |
| 10.5 Practical Consequences of Multicollinearity | p. 327 |
| Large Variances and Covariances of OLS Estimators | p. 328 |
| Wider Confidence Intervals | p. 329 |
| "Insignificant" t Ratios | p. 330 |
| A High R² but Few Significant t Ratios | p. 330 |
| Sensitivity of OLS Estimators and Their Standard Errors to Small Changes in Data | p. 331 |
| Consequences of Micronumerosity | p. 332 |
| 10.6 An Illustrative Example: Consumption Expenditure in Relation to Income and Wealth | p. 332 |
| 10.7 Detection of Multicollinearity | p. 335 |
| 10.8 Remedial Measures | p. 339 |
| 10.9 Is Multicollinearity Necessarily Bad? Maybe Not If the Objective Is Prediction Only | p. 344 |
| 10.10 Summary and Conclusions | p. 345 |
| Exercises | p. 346 |
| Questions | p. 346 |
| Problems | p. 351 |
| 11 Heteroscedasticity | p. 355 |
| 11.1 The Nature of Heteroscedasticity | p. 355 |
| 11.2 OLS Estimation in the Presence of Heteroscedasticity | p. 359 |
| 11.3 The Method of Generalized Least Squares (GLS) | p. 362 |
| Difference between OLS and GLS | p. 364 |
| 11.4 Consequences of Using OLS in the Presence of Heteroscedasticity | p. 365 |
| OLS Estimation Allowing for Heteroscedasticity | p. 365 |
| OLS Estimation Disregarding Heteroscedasticity | p. 366 |
| 11.5 Detection of Heteroscedasticity | p. 367 |
| Informal Methods | p. 368 |
| Formal Methods | p. 369 |
| 11.6 Remedial Measures | p. 381 |
| When σᵢ² Is Known: The Method of Weighted Least Squares | p. 381 |
| When σᵢ² Is Not Known | p. 382 |
| 11.7 A Concluding Example | p. 387 |
| 11.8 Summary and Conclusions | p. 389 |
| Exercises | p. 390 |
| Questions | p. 390 |
| Problems | p. 392 |
| Appendix 11A | p. 398 |
| 11A.1 Proof of Equation (11.2.2) | p. 398 |
| 11A.2 The Method of Weighted Least Squares | p. 399 |
| 12 Autocorrelation | p. 400 |
| 12.1 The Nature of the Problem | p. 400 |
| 12.2 OLS Estimation in the Presence of Autocorrelation | p. 406 |
| 12.3 The BLUE Estimator in the Presence of Autocorrelation | p. 409 |
| 12.4 Consequences of Using OLS in the Presence of Autocorrelation | p. 410 |
| OLS Estimation Allowing for Autocorrelation | p. 410 |
| OLS Estimation Disregarding Autocorrelation | p. 411 |
| 12.5 Detecting Autocorrelation | p. 415 |
| Graphical Method | p. 415 |
| The Runs Test | p. 419 |
| Durbin-Watson d Test | p. 420 |
| Additional Tests of Autocorrelation | p. 425 |
| 12.6 Remedial Measures | p. 426 |
| When the Structure of Autocorrelation Is Known | p. 427 |
| When ρ Is Not Known | p. 428 |
| 12.7 An Illustrative Example: The Relationship between Help-Wanted Index and the Unemployment Rate, United States: Comparison of the Methods | p. 433 |
| 12.8 Autoregressive Conditional Heteroscedasticity (ARCH) Model | p. 436 |
| What to Do If ARCH Is Present? | p. 438 |
| A Word on the Durbin-Watson d Statistic and the ARCH Effect | p. 438 |
| 12.9 Summary and Conclusions | p. 439 |
| Exercises | p. 440 |
| Questions | p. 440 |
| Problems | p. 446 |
| Appendix 12A | p. 449 |
| 12A.1 TSP Output of United States Wages (Y)-Productivity (X) Regression, 1960-1991 | p. 449 |
| 13 Econometric Modeling I: Traditional Econometric Methodology | p. 452 |
| 13.1 The Traditional View of Econometric Modeling: Average Economic Regression (AER) | p. 452 |
| 13.2 Types of Specification Errors | p. 455 |
| 13.3 Consequences of Specification Errors | p. 456 |
| Omitting a Relevant Variable (Underfitting a Model) | p. 456 |
| Inclusion of an Irrelevant Variable (Overfitting a Model) | p. 458 |
| 13.4 Tests of Specification Errors | p. 459 |
| Detecting the Presence of Unnecessary Variables | p. 460 |
| Tests for Omitted Variables and Incorrect Functional Form | p. 461 |
| 13.5 Errors of Measurement | p. 467 |
| Errors of Measurement in the Dependent Variable Y | p. 468 |
| Errors of Measurement in the Explanatory Variable X | p. 469 |
| An Example | p. 470 |
| Measurement Errors in the Dependent Variable Y Only | p. 471 |
| Errors of Measurement in X | p. 472 |
| 13.6 Summary and Conclusions | p. 472 |
| Exercises | p. 473 |
| Questions | p. 473 |
| Problems | p. 476 |
| Appendix 13A | p. 477 |
| 13A.1 The Consequences of Including an Irrelevant Variable: The Unbiasedness Property | p. 477 |
| 13A.2 Proof of (13.5.10) | p. 478 |
| 14 Econometric Modeling II: Alternative Econometric Methodologies | p. 480 |
| 14.1 Leamer's Approach to Model Selection | p. 481 |
| 14.2 Hendry's Approach to Model Selection | p. 485 |
| 14.3 Selected Diagnostic Tests: General Comments | p. 486 |
| 14.4 Tests of Nonnested Hypothesis | p. 487 |
| The Discrimination Approach | p. 487 |
| The Discerning Approach | p. 488 |
| 14.5 Summary and Conclusions | p. 494 |
| Exercises | p. 494 |
| Questions | p. 494 |
| Problems | p. 495 |
| Part 3 Topics in Econometrics | |
| 15 Regression on Dummy Variables | p. 499 |
| 15.1 The Nature of Dummy Variables | p. 499 |
| Example 15.1: Professor's Salary by Sex | p. 500 |
| 15.2 Regression on One Quantitative Variable and One Qualitative Variable with Two Classes, or Categories | p. 502 |
| Example 15.2: Are Inventories Sensitive to Interest Rates? | p. 505 |
| 15.3 Regression on One Quantitative Variable and One Qualitative Variable with More than Two Classes | p. 505 |
| 15.4 Regression on One Quantitative Variable and Two Qualitative Variables | p. 507 |
| 15.5 Example 15.3: The Economics of "Moonlighting" | p. 508 |
| 15.6 Testing for Structural Stability of Regression Models: Basic Ideas | p. 509 |
| Example 15.4: Savings and Income, United Kingdom, 1946-1963 | p. 509 |
| 15.7 Comparing Two Regressions: The Dummy Variable Approach | p. 512 |
| 15.8 Comparing Two Regressions: Further Illustration | p. 514 |
| Example 15.5: The Behavior of Unemployment and Unfilled Vacancies: Great Britain, 1958-1971 | p. 514 |
| 15.9 Interaction Effects | p. 516 |
| 15.10 The Use of Dummy Variables in Seasonal Analysis | p. 517 |
| Example 15.6: Profits-Sales Behavior in U.S. Manufacturing | p. 517 |
| 15.11 Piecewise Linear Regression | p. 519 |
| Example 15.7: Total Cost in Relation to Output | p. 521 |
| 15.12 The Use of Dummy Variables in Combining Time Series and Cross-Sectional Data | p. 522 |
| Pooled Regression: Pooling Time Series and Cross-Sectional Data | p. 522 |
| Example 15.8: Investment Functions for General Motors and Westinghouse Companies | p. 524 |
| 15.13 Some Technical Aspects of Dummy Variable Technique | p. 525 |
| The Interpretation of Dummy Variables in Semilogarithmic Regressions | p. 525 |
| Example 15.9: Semilogarithmic Regression with Dummy Variable | p. 525 |
| Another Method of Avoiding the Dummy Variable Trap | p. 526 |
| Dummy Variables and Heteroscedasticity | p. 527 |
| Dummy Variables and Autocorrelation | p. 527 |
| 15.14 Topics for Further Study | p. 528 |
| 15.15 Summary and Conclusions | p. 529 |
| Exercises | p. 530 |
| Questions | p. 530 |
| Problems | p. 535 |
| Appendix 15A | p. 538 |
| 15A.1 Data Matrix for Regression (15.8.2) | p. 538 |
| 15A.2 Data Matrix for Regression (15.10.2) | p. 539 |
| 16 Regression on Dummy Dependent Variable: The LPM, Logit, Probit, and Tobit Models | p. 540 |
| 16.1 Dummy Dependent Variable | p. 540 |
| 16.2 The Linear Probability Model (LPM) | p. 541 |
| 16.3 Problems in Estimation of LPM | p. 542 |
| Nonnormality of the Disturbances uᵢ | p. 542 |
| Heteroscedastic Variances of the Disturbances | p. 543 |
| Nonfulfillment of 0 ≤ E(Yᵢ ∣ X) ≤ 1 | p. 544 |
| Questionable Value of R² as a Measure of Goodness of Fit | p. 545 |
| 16.4 LPM: A Numerical Example | p. 546 |
| 16.5 Applications of LPM | p. 548 |
| Example 16.1: Cohen-Rea-Lerman study | p. 548 |
| Example 16.2: Predicting a Bond Rating | p. 551 |
| Example 16.3: Predicting Bond Defaults | p. 552 |
| 16.6 Alternatives to LPM | p. 552 |
| 16.7 The Logit Model | p. 554 |
| 16.8 Estimation of the Logit Model | p. 556 |
| 16.9 The Logit Model: A Numerical Example | p. 558 |
| 16.10 The Logit Model: Illustrative Examples | p. 561 |
| Example 16.4: "An Application of Logit Analysis to Prediction of Merger Targets" | p. 561 |
| Example 16.5: Predicting a Bond Rating | p. 562 |
| 16.11 The Probit Model | p. 563 |
| 16.12 The Probit Model: A Numerical Example | p. 567 |
| Logit versus Probit | p. 567 |
| Comparing Logit and Probit Estimates | p. 568 |
| The Marginal Effect of a Unit Change in the Value of a Regressor | p. 569 |
| 16.13 The Probit Model: Example 16.5 | p. 569 |
| 16.14 The Tobit Model | p. 570 |
| 16.15 Summary and Conclusions | p. 575 |
| Exercises | p. 576 |
| Questions | p. 576 |
| Problems | p. 578 |
| 17 Dynamic Econometric Model: Autoregressive and Distributed-Lag Models | p. 584 |
| 17.1 The Role of "Time," or "Lag," in Economics | p. 585 |
| 17.2 The Reasons for Lags | p. 589 |
| 17.3 Estimation of Distributed-Lag Models | p. 590 |
| Ad Hoc Estimation of Distributed-Lag Models | p. 590 |
| 17.4 The Koyck Approach to Distributed-Lag Models | p. 592 |
| The Median Lag | p. 595 |
| The Mean Lag | p. 595 |
| 17.5 Rationalization of the Koyck Model: The Adaptive Expectations Model | p. 596 |
| 17.6 Another Rationalization of the Koyck Model: The Stock Adjustment, or Partial Adjustment, Model | p. 599 |
| 17.7 Combination of Adaptive Expectations and Partial Adjustment Models | p. 601 |
| 17.8 Estimation of Autoregressive Models | p. 602 |
| 17.9 The Method of Instrumental Variables (IV) | p. 604 |
| 17.10 Detecting Autocorrelation in Autoregressive Models: Durbin h Test | p. 605 |
| 17.11 A Numerical Example: The Demand for Money in India | p. 607 |
| 17.12 Illustrative Examples | p. 609 |
| Example 17.7: The Fed and the Real Rate of Interest | p. 609 |
| Example 17.8: The Short- and Long-Run Aggregate Consumption Functions for the United States, 1946-1972 | p. 611 |
| 17.13 The Almon Approach to Distributed-Lag Models: The Almon or Polynomial Distributed Lag (PDL) | p. 612 |
| 17.14 Causality in Economics: The Granger Test | p. 620 |
| The Granger Test | p. 620 |
| Empirical Results | p. 622 |
| 17.15 Summary and Conclusions | p. 624 |
| Exercises | p. 624 |
| Questions | p. 624 |
| Problems | p. 630 |
| Part 4 Simultaneous-Equation Models | |
| 18 Simultaneous-Equation Models | p. 635 |
| 18.1 The Nature of Simultaneous-Equation Models | p. 635 |
| 18.2 Examples of Simultaneous-Equation Models | p. 636 |
| Example 18.1: Demand-and-Supply Model | p. 636 |
| Example 18.2: Keynesian Model of Income Determination | p. 638 |
| Example 18.3: Wage-Price Models | p. 639 |
| Example 18.4: The IS Model of Macroeconomics | p. 639 |
| Example 18.5: The LM Model | p. 640 |
| Example 18.6: Econometric Models | p. 641 |
| 18.3 The Simultaneous-Equation Bias: Inconsistency of OLS Estimators | p. 642 |
| 18.4 The Simultaneous-Equation Bias: A Numerical Example | p. 645 |
| 18.5 Summary and Conclusions | p. 647 |
| Exercises | p. 648 |
| Questions | p. 648 |
| Problems | p. 651 |
| 19 The Identification Problem | p. 653 |
| 19.1 Notations and Definitions | p. 653 |
| 19.2 The Identification Problem | p. 657 |
| Underidentification | p. 657 |
| Just, or Exact, Identification | p. 660 |
| Overidentification | p. 663 |
| 19.3 Rules for Identification | p. 664 |
| The Order Condition of Identifiability | p. 665 |
| The Rank Condition of Identifiability | p. 666 |
| 19.4 A Test of Simultaneity | p. 669 |
| Hausman Specification Test | p. 670 |
| Example 19.5: Pindyck-Rubinfeld Model of Public Spending | p. 671 |
| 19.5 Tests for Exogeneity | p. 672 |
| A Note on Causality and Exogeneity | p. 673 |
| 19.6 Summary and Conclusions | p. 673 |
| Exercises | p. 674 |
| 20 Simultaneous-Equation Methods | p. 678 |
| 20.1 Approaches to Estimation | p. 678 |
| 20.2 Recursive Models and Ordinary Least Squares | p. 680 |
| 20.3 Estimation of a Just Identified Equation: The Method of Indirect Least Squares (ILS) | p. 682 |
| An Illustrative Example | p. 683 |
| Properties of ILS Estimators | p. 686 |
| 20.4 Estimation of an Overidentified Equation: The Method of Two-Stage Least Squares (2SLS) | p. 686 |
| 20.5 2SLS: A Numerical Example | p. 690 |
| 20.6 Illustrative Examples | p. 693 |
| Example 20.1: Advertising, Concentration, and Price Margins | p. 693 |
| Example 20.2: Klein's Model I | p. 694 |
| Example 20.3: The Capital Asset Pricing Model Expressed as a Recursive System | p. 694 |
| Example 20.4: Revised Form of St. Louis Model | p. 697 |
| 20.7 Summary and Conclusions | p. 699 |
| Exercises | p. 700 |
| Questions | p. 700 |
| Problems | p. 703 |
| Appendix 20A | p. 704 |
| 20A.1 Bias in the Indirect Least-Squares Estimators | p. 704 |
| 20A.2 Estimation of Standard Errors of 2SLS Estimators | p. 705 |
| Part 5 Time Series Econometrics | |
| 21 Time Series Econometrics I: Stationarity, Unit Roots, and Cointegration | p. 709 |
| 21.1 A Look at Selected U.S. Economic Time Series | p. 710 |
| 21.2 Stationary Stochastic Process | p. 710 |
| 21.3 Test of Stationarity Based on Correlogram | p. 714 |
| 21.4 The Unit Root Test of Stationarity | p. 718 |
| Is the U.S. GDP Time Series Stationary? | p. 720 |
| Is the First-Differenced GDP Series Stationary? | p. 721 |
| 21.5 Trend-Stationary (TS) and Difference-Stationary (DS) Stochastic Process | p. 722 |
| 21.6 Spurious Regression | p. 724 |
| 21.7 Cointegration | p. 725 |
| Engle-Granger (EG) or Augmented Engle-Granger (AEG) Test | p. 726 |
| Cointegrating Regression Durbin-Watson (CRDW) Test | p. 727 |
| 21.8 Cointegration and Error Correction Mechanism (ECM) | p. 728 |
| 21.9 Summary and Conclusions | p. 729 |
| Exercises | p. 730 |
| Questions | p. 730 |
| Problems | p. 731 |
| Appendix 21A | p. 732 |
| 21A.1 A Random Walk Model | p. 732 |
| 22 Time Series Econometrics II: Forecasting with ARIMA and VAR Models | p. 734 |
| 22.1 Approaches to Economic Forecasting | p. 734 |
| 22.2 AR, MA, and ARIMA Modeling of Time Series Data | p. 736 |
| An Autoregressive (AR) Process | p. 736 |
| A Moving Average (MA) Process | p. 737 |
| An Autoregressive and Moving Average (ARMA) Process | p. 737 |
| An Autoregressive Integrated Moving Average (ARIMA) Process | p. 737 |
| 22.3 The Box-Jenkins (BJ) Methodology | p. 738 |
| 22.4 Identification | p. 739 |
| 22.5 Estimation of the ARIMA Model | p. 742 |
| 22.6 Diagnostic Checking | p. 743 |
| 22.7 Forecasting | p. 744 |
| 22.8 Further Aspects of the BJ Methodology | p. 745 |
| 22.9 Vector Autoregression (VAR) | p. 746 |
| Estimation of VAR | p. 746 |
| Forecasting with VAR | p. 747 |
| Some Problems with VAR Modeling | p. 747 |
| An Application of VAR: A VAR Model of the Texas Economy | p. 750 |
| 22.10 Summary and Conclusions | p. 752 |
| Exercises | p. 753 |
| Questions | p. 753 |
| Problems | p. 753 |
| Appendixes | |
| A A Review of Some Statistical Concepts | p. 755 |
| B Rudiments of Matrix Algebra | p. 791 |
| C A List of Statistical Computer Packages | p. 804 |
| D Statistical Tables | p. 807 |
| Table D.1 Areas under the Standardized Normal Distribution | p. 808 |
| Table D.2 Percentage Points of the t Distribution | p. 809 |
| Table D.3 Upper Percentage Points of the F Distribution | p. 810 |
| Table D.4 Upper Percentage Points of the χ² Distribution | p. 816 |
| Table D.5 Durbin-Watson d Statistic: Significant Points of dL and dU at 0.05 and 0.01 Levels of Significance | p. 818 |
| Table D.6 Critical Values of Runs in the Runs Test | p. 822 |
| Selected Bibliography | p. 824 |
| Indexes | |
| Name Index | p. 827 |
| Subject Index | p. 831 |
