Elementary Decision Theory (Dover Books on Mathematics)
This volume is a well-known, well-respected introduction to a lively area of statistics. Professors Chernoff and Moses bring years of professional expertise as classroom teachers to this straightforward approach to statistical problems. And happily for beginning students, they have bypassed involved computational reasoning that would only confuse the mathematical novice.
Developed from nine years of teaching statistics at Stanford, the book furnishes a simple and clear-cut method of exhibiting the fundamental aspects of a statistical problem. Beginners will find this book a motivating introduction to important mathematical notions such as set, function and convexity. Examples and exercises throughout introduce new topics and ideas.
The first seven chapters are recommended for beginning courses in the basic ideas of statistics and require only a knowledge of high school math. These sections include material on data processing, probability and random variables, utility and descriptive statistics, uncertainty due to ignorance of the state of nature, computing Bayes strategies and an introduction to classical statistics. The last three chapters review mathematical models and summarize terminology and methods of testing hypotheses. Tables and appendixes provide information on notation, shortcut computational formulas, axioms of probability, properties of expectations, likelihood ratio test, game theory, and utility functions.
Authoritative, yet elementary in its approach to statistics and statistical theory, this work is also concise, well-indexed and abundantly equipped with exercise material. Ideal for a beginning course, this modestly priced edition will be especially valuable to those interested in the principles of statistics and scientific method.
in Figure 3.2. For the coin problem the probability distribution is so simple that the cdf represents no gain in the way of conciseness. Note that, at α = 0, 1, and 2, the cdf jumps. The values of the cdf at α = 0, 1, and 2 are consequently marked by heavy dots.
Figure 3.2. Cdf F for X equal to the number of heads in the toss of two ideal coins.
Figure 3.3. Cdf F for X equal to the sum of the two faces showing in the roll of two ideal dice.
The cdf for the ideal dice example is given in Figure
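The two cdfs of Figures 3.2 and 3.3 can be tabulated directly. Here is a minimal Python sketch; the two distributions are those described in the text, while the helper `cdf` and the names `coins` and `dice` are ours:

```python
from fractions import Fraction

def cdf(dist, a):
    """P(X <= a) for a discrete distribution given as {value: probability}."""
    return sum(p for x, p in dist.items() if x <= a)

# Number of heads in the toss of two ideal coins (Figure 3.2).
coins = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# Sum of the two faces showing in the roll of two ideal dice (Figure 3.3).
dice = {}
for i in range(1, 7):
    for j in range(1, 7):
        dice[i + j] = dice.get(i + j, Fraction(0)) + Fraction(1, 36)

print(cdf(coins, 1))  # 3/4 -- the jump at a = 1 carries F from 1/4 to 3/4
print(cdf(dice, 7))   # 7/12
```

The jumps at α = 0, 1, and 2 in Figure 3.2 correspond exactly to the three point masses in `coins`.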
ball which told him what the state of nature was, he could take the proper action. How much should Mr. Nelson be willing to pay for the use of a crystal ball if he had no rain meter but knew that the a priori probability of rain was 0.4? Some further insight may be gained by the following graphical representation of the solution of the no-data problem, which we present for the case where there are two possible states of nature. If the a priori probabilities are given by , we may regard as the
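The value of the crystal ball is the drop in expected loss from getting perfect information before acting. A small sketch of that computation follows; note that the loss table below is hypothetical, since this excerpt does not reproduce the actual figures of Mr. Nelson's problem:

```python
# Value of perfect information in a two-state, two-action no-data problem.
# The loss table is HYPOTHETICAL -- the text's actual numbers for
# Mr. Nelson's rain problem are not reproduced in this excerpt.
prior = {"rain": 0.4, "no_rain": 0.6}        # a priori probability of rain = 0.4
losses = {                                    # losses[state][action]
    "rain":    {"take_coat": 1.0, "leave_coat": 5.0},
    "no_rain": {"take_coat": 3.0, "leave_coat": 0.0},
}
actions = ["take_coat", "leave_coat"]

# Without data: pick the one action minimizing expected loss under the prior.
def expected_loss(a):
    return sum(prior[s] * losses[s][a] for s in prior)

best_no_data = min(expected_loss(a) for a in actions)

# With a crystal ball: the best action is chosen after the state is revealed.
with_ball = sum(prior[s] * min(losses[s][a] for a in actions) for s in prior)

print(best_no_data - with_ball)  # the most one should pay for the crystal ball
```

With these hypothetical losses the no-data optimum costs 2.0 in expectation, perfect information costs 0.4, so the ball is worth at most 1.6.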
a smaller hump for θ > 31 than does s31.5. Looking at Figure 7.2, we see that the peaks of the two humps can be equalized for some strategy “close” to s31.25, yielding the minimax strategy. The minimax risk is close to 0.91. There are several points which were brought out by the above discussion. Since tends to be reasonably close to θ, and θ = 31.0 is the break-even point where it does not matter which action is taken, s31.0 is a reasonable strategy. Since the regrets r(θ, a1) are larger (for θ
hypotheses where there are three alternative hypotheses. Exercise 9.16. Referring to Exercise 9.15, evaluate and graph R(θ, s) for s which calls for (α1) if 0.45 ≤ ≤ 0.55, (α2) if > 0.55, and (α3) if < 0.45 where n = 100. In doing so, tabulate r(θ, α1), r(θ, α2), r(θ, α3), αs(θ, α1), αs(θ, α2), and αs(θ, α3) for each θ considered. Characteristically the two-tailed testing problem involves testing a hypothesis H1: θ2 ≤ θ ≤ θ1 versus the alternative H2: θ < θ2 or θ > θ1. The trilemma can be
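The three-action rule of Exercise 9.16 is easy to state in code. A minimal sketch, assuming (the symbol is lost in this excerpt) that the statistic compared with 0.45 and 0.55 is the sample proportion of successes in the n = 100 observations:

```python
# Three-action strategy s of Exercise 9.16, assuming the statistic being
# compared with the cutoffs is the sample proportion p_hat (the symbol is
# missing from this excerpt), with n = 100.
def decide(p_hat):
    """Return the action (a1, a2, or a3) the strategy s takes at p_hat."""
    if p_hat > 0.55:
        return "a2"   # evidence that theta is large
    if p_hat < 0.45:
        return "a3"   # evidence that theta is small
    return "a1"       # 0.45 <= p_hat <= 0.55: the middle hypothesis

print(decide(57 / 100))  # a2
print(decide(50 / 100))  # a1
```

Graphing R(θ, s) then amounts to weighting the regret of each action by the probability, under θ, that the sample proportion falls in that action's region.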
described above, the interval is an approximate 95% confidence interval. In Example 10.9, the approximate 95% confidence interval corresponding to the data turned out to be (0.351, 0.449). In this case, we may say “(0.351, 0.449) covers θ” with confidence 0.95. In the long run, 95% of those statements which are made with confidence 0.95 will be correct. Example 10.10. If X1, X2, …, Xn are a large sample of independent observations on a random variable with unknown mean μ and unknown
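Example 10.10 is cut off here before its own formula appears; the following sketch uses the standard large-sample interval x̄ ± 1.96·s/√n, which is the usual form for unknown mean and unknown variance and the kind of interval that Example 10.9's (0.351, 0.449) instantiates:

```python
from math import sqrt

def approx_ci_95(xs):
    """Approximate 95% confidence interval for the mean of a large sample,
    using x_bar +/- 1.96 * s / sqrt(n).  This is the standard large-sample
    form; the excerpt breaks off before Example 10.10 states its own."""
    n = len(xs)
    x_bar = sum(xs) / n
    s2 = sum((x - x_bar) ** 2 for x in xs) / (n - 1)   # sample variance
    half = 1.96 * sqrt(s2 / n)
    return (x_bar - half, x_bar + half)
```

In the long-run sense of the text, about 95% of intervals produced this way (over repeated large samples) will cover the true mean μ.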