This textbook, now in its second edition, is an introduction to econometrics from the Bayesian viewpoint. It begins with an explanation of the basic ideas of subjective probability and shows how subjective probabilities must obey the usual rules of probability to ensure coherency. It then turns to the definitions of the likelihood function, prior distributions, and posterior distributions. It explains how posterior distributions are the basis for inference and explores their basic properties. The Bernoulli distribution is used as a simple example. Various methods of specifying prior distributions are considered, with special emphasis on subject-matter considerations and exchangeability. The regression model is examined to show how analytical methods may fail in the derivation of marginal posterior distributions, which leads to an explanation of classical and Markov chain Monte Carlo (MCMC) methods of simulation. The latter is preceded by a brief introduction to Markov chains. The remainder of the book is concerned with applications of the theory to important models that are used in economics, political science, biostatistics, and other applied fields. New to the second edition is a chapter on semiparametric regression and new sections on the ordinal probit, item response, factor analysis, ARCH-GARCH, and stochastic volatility models. The new edition also emphasizes the R programming language, which has become the most widely used environment for Bayesian statistics.

the theorems of probability theory, we derived the fundamental result of Bayesian inference: the posterior distribution of a parameter is proportional to the likelihood function times the prior distribution.

P1: KAE 0521858717pre CUNY1077-Greenberg 0 521 87282 0 August 8, 2007 20:46

2.4 Further Reading and References

Section 2.1.2: Excellent discussions of subjective probability may be found in Howson and Urbach (1993) and Hacking (2001).

2.5 Exercises

2.1 Prove the
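The proportionality posterior ∝ likelihood × prior can also be verified numerically on a discrete grid. The following Python sketch (the grid, the flat prior, and the data of 7 heads in 10 tosses are illustrative choices, not from the text) does this for a Bernoulli parameter:

```python
import numpy as np

# Posterior ∝ likelihood × prior, illustrated on a discrete grid for a
# Bernoulli parameter theta, with illustrative data: 7 heads in 10 tosses.
theta = np.linspace(0.01, 0.99, 99)    # grid of candidate parameter values
prior = np.ones_like(theta)            # flat prior over the grid
likelihood = theta**7 * (1 - theta)**3 # Bernoulli likelihood for the data
posterior = likelihood * prior
posterior /= posterior.sum()           # normalize so the grid sums to one

print(theta[np.argmax(posterior)])     # posterior mode, near 0.7 here
```

With a flat prior the posterior mode coincides with the maximum of the likelihood, which is why it lands at 7/10.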

regression coefficients have the interpretation

βk = ∂E(yi | xi)/∂xik,

if xik is a continuous variable. We may therefore think of βk as the effect on the expected value of yi of a small change in the value of the covariate xik. If xik is a dummy variable, βk is the shift in the intercept associated with a change from xik = 0 to xik = 1. Prior distributions are placed on each of the βk, which should be based on the researcher’s knowledge of how E(yi |xi ) responds to a change in xik. The
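Both interpretations of βk can be checked by simulation. In this Python sketch (the data-generating values, sample size, and seed are invented for illustration), least squares recovers the marginal effect of a continuous covariate and the intercept shift of a dummy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)           # continuous covariate
d = rng.integers(0, 2, size=n)    # dummy covariate taking values 0 or 1
# Illustrative true coefficients: intercept 1.0, slope 2.0, intercept shift -0.5
y = 1.0 + 2.0 * x1 - 0.5 * d + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, d])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta_hat[1] estimates dE(y|x)/dx1; beta_hat[2] the shift from d = 0 to d = 1
print(beta_hat)
```

With 10,000 observations the estimates sit close to the values used to generate the data, illustrating the derivative and intercept-shift readings of βk side by side.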

output. (It would be good practice to program this algorithm yourself; otherwise the calculations can be done in BACC or one of the other packages discussed in Appendix B.)

7.3 Construct a Gibbs algorithm to analyze the Poisson model with unknown switch point. Given the specification in Equations (7.3) and (7.4), show that

π(θ1, θ2, k | y) ∝ θ1^(α10 − 1) e^(−β10 θ1) θ2^(α20
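One possible Gibbs algorithm for this model can be sketched in Python (the simulated data, the hyperparameter values standing in for α10, β10, α20, β20, and the chain length are all illustrative assumptions, not the text's solution): θ1 and θ2 have Gamma full conditionals, and k has a discrete full conditional over 1, …, n − 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Poisson counts with a switch at k = 40 (illustrative values)
n, k_true = 100, 40
y = np.concatenate([rng.poisson(2.0, k_true), rng.poisson(5.0, n - k_true)])

# Illustrative Gamma prior hyperparameters: theta_j ~ Gamma(a_j, b_j)
a1, b1, a2, b2 = 2.0, 1.0, 2.0, 1.0

draws_k = []
k = n // 2                                          # arbitrary starting value
for it in range(2000):
    s1, s2 = y[:k].sum(), y[k:].sum()
    theta1 = rng.gamma(a1 + s1, 1.0 / (b1 + k))     # theta1 | k, y
    theta2 = rng.gamma(a2 + s2, 1.0 / (b2 + n - k)) # theta2 | k, y
    # k | theta1, theta2, y: discrete distribution over 1..n-1, from the
    # Poisson likelihood theta1^(sum_{i<=k} y_i) e^{-k theta1} * (second regime)
    ks = np.arange(1, n)
    cs = np.cumsum(y)[:-1]                          # sum of y_1..y_k for each k
    logp = (cs * np.log(theta1) + (y.sum() - cs) * np.log(theta2)
            - ks * theta1 - (n - ks) * theta2)
    p = np.exp(logp - logp.max())                   # stabilize before normalizing
    k = rng.choice(ks, p=p / p.sum())
    if it >= 500:                                   # discard burn-in draws
        draws_k.append(k)

print(np.median(draws_k))                           # posterior median switch point
```

Sampling k from its exact discrete conditional keeps the algorithm a pure Gibbs sampler; no Metropolis step is needed.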

distribution for the covariance matrix Σ.) With these assumptions, the posterior distribution is

π(β, Σ | y) ∝ |Σ|^(−J/2) exp[ −(1/2) ∑_j (yj − Xj β)′ Σ^(−1) (yj − Xj β) ]
  × exp[ −(1/2) (β − β0)′ B0^(−1) (β − β0) ]
  × |Σ^(−1)|^((ν0 − S − 1)/2) exp[ −(1/2) tr(R0^(−1) Σ^(−1)) ].

It is then straightforward to determine the conditional distribution

β | Σ, y ∼ NK(β̄, B1),    (9.4)

where

B1 = [ ∑_j Xj′ Σ^(−1) Xj + B0^(−1) ]^(−1),
β̄ = B1 [ ∑_j Xj′ Σ^(−1) yj + B0^(−1) β0 ].

To derive the conditional distribution of Σ | y, β we use the properties
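One Gibbs step for β follows directly from Equation (9.4). In this Python fragment (the dimensions, the prior hyperparameters, and holding Σ fixed at the identity are illustrative assumptions for the sketch), B1 and β̄ are assembled from the formulas above and a draw is taken from NK(β̄, B1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative dimensions: J groups, S observations per group, K coefficients
J, S, K = 5, 20, 3
Xs = [rng.normal(size=(S, K)) for _ in range(J)]
beta_true = np.array([1.0, -0.5, 0.25])
Sigma = np.eye(S)                        # covariance held fixed for this step
ys = [X @ beta_true + rng.normal(size=S) for X in Xs]

# Prior: beta ~ N_K(beta0, B0), parameterized here by the precision B0^{-1}
beta0 = np.zeros(K)
B0_inv = np.eye(K) * 0.01                # diffuse prior precision

# B1 = (sum_j Xj' Sigma^{-1} Xj + B0^{-1})^{-1}
# beta_bar = B1 (sum_j Xj' Sigma^{-1} yj + B0^{-1} beta0)
Sig_inv = np.linalg.inv(Sigma)
B1 = np.linalg.inv(sum(X.T @ Sig_inv @ X for X in Xs) + B0_inv)
beta_bar = B1 @ (sum(X.T @ Sig_inv @ y for X, y in zip(Xs, ys)) + B0_inv @ beta0)

beta_draw = rng.multivariate_normal(beta_bar, B1)  # one Gibbs draw of beta
print(beta_bar)
```

In a full sampler this draw would alternate with a draw of Σ from its own conditional, which is where the inverted Wishart properties mentioned next come in.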

model, θ is a parameter and the value of y is the data. Under these assumptions, y is said to have the Bernoulli distribution, written as y ∼ Be(θ ). We are interested in learning about θ from an experiment in which the coin is tossed n times yielding the data y = (y1 , y2 , . . . , yn ), where yi indicates whether the ith toss resulted in a head or tail. From the
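For a conjugate Beta(a0, b0) prior on θ, the posterior after n tosses is available in closed form: Beta(a0 + ∑yi, b0 + n − ∑yi). A Python sketch with invented data and a uniform Beta(1, 1) prior:

```python
import numpy as np

rng = np.random.default_rng(3)

# n coin tosses from a Bernoulli(theta), theta = 0.6 chosen for illustration
n, theta_true = 50, 0.6
y = rng.binomial(1, theta_true, size=n)

# Conjugate Beta(a0, b0) prior: posterior is Beta(a0 + sum(y), b0 + n - sum(y))
a0, b0 = 1.0, 1.0                        # uniform prior on theta
a1, b1 = a0 + y.sum(), b0 + n - y.sum()
post_mean = a1 / (a1 + b1)               # posterior mean of theta

print(post_mean)
```

The posterior mean (a0 + ∑yi)/(a0 + b0 + n) is a weighted compromise between the prior mean and the sample proportion of heads, with the data dominating as n grows.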