
Introduction to Probability & Statistics

Material type: Text
Publication details: New Delhi : McGraw-Hill Education (India), 2007
Edition: 4th ed.
Description: xvii, 798 pages : illustrations
ISBN:
  • 9780070636941
DDC classification:
  • 519.5 MIL-I
Contents:
1. Introduction to probability and counting -- Interpreting probabilities -- Sample spaces and events -- Mutually exclusive events -- Permutations and combinations -- Counting permutations -- Counting combinations -- Permutations of indistinguishable objects
2. Some probability laws -- Axioms of probability -- The General Addition Rule -- Conditional probability -- Independence and the multiplication rule -- The Multiplication Rule -- Bayes' theorem
3. Discrete distributions -- Random variables -- Discrete probability densities -- Cumulative distribution -- Expectation and distribution parameters -- Variance and standard deviation -- Geometric distribution and the moment generating function -- Geometric distribution -- Moment generating function -- Binomial distribution -- Negative binomial distribution -- Hypergeometric distribution -- Poisson distribution -- Simulating a discrete distribution
4. Continuous distributions -- Continuous densities -- Cumulative distribution -- Uniform distribution -- Expectation and distribution parameters -- Gamma, exponential, and chi-squared distributions -- Gamma distribution -- Exponential distribution -- Chi-squared distribution -- Normal distribution -- Standard normal distribution -- Normal probability rule and Chebyshev's inequality -- Normal approximation to the binomial distribution -- Weibull distribution and reliability -- Reliability -- Reliability of series and parallel systems -- Transformation of variables -- Simulating a continuous distribution
5. Joint distributions -- Joint densities and independence -- Marginal distributions : discrete -- Joint and marginal distributions : continuous -- Independence -- Expectation and covariance -- Covariance -- Correlation -- Conditional densities and regression -- Curves of regression -- Transformation of variables
6. Descriptive statistics -- Random sampling -- Picturing the distribution -- Stem-and-leaf diagram -- Histograms and ogives -- Cumulative distribution plots (ogives) -- Sample statistics -- Location statistics -- Measures of variability -- Boxplots
7. Estimation -- Point estimation -- The method of moments and maximum likelihood -- Maximum likelihood estimators -- Functions of random variables -- Distribution of X̄ -- Interval estimation and the central limit theorem -- Confidence interval on the mean : variance known -- Central limit theorem
8. Inferences on the mean and variance of a distribution -- Interval estimation of variability -- The T distribution -- Confidence interval on the mean : variance estimated -- Hypothesis testing -- Significance testing -- Hypothesis and significance tests on the mean -- Hypothesis tests on the variance -- Alternative nonparametric methods -- Sign test for median -- Wilcoxon signed-rank test
9. Inferences on proportions -- Estimating proportions -- Confidence interval on p -- Sample size for estimating p -- Testing hypotheses on a proportion -- Comparing two proportions : estimation -- Comparing two proportions : hypothesis testing -- Pooled proportions
10. Comparing two means and two variances -- Point estimation : independent samples -- Comparing variances : the F distribution -- Comparing means : variances equal (pooled test) -- Pooled T test -- Comparing means : variances unequal -- Comparing means : paired data -- Paired T test -- Alternative nonparametric methods -- Wilcoxon rank-sum test -- Wilcoxon signed-rank test for paired observations
11. Simple linear regression and correlation -- Model and parameter estimation -- Description of the model -- Least-squares estimation -- Properties of least-squares estimators -- Summary of theoretical results -- Confidence interval estimation and hypothesis testing -- Inferences about slope -- Inferences about intercept -- Inferences about estimated mean -- Inferences about single predicted value -- Repeated measures and lack of fit -- Residual analysis -- Residual plots -- Checking for normality : stem-and-leaf plots and boxplots -- Correlation -- Interval estimation and hypothesis tests on ρ -- Coefficient of determination
12. Multiple linear regression models -- Least-squares procedures for model fitting -- Polynomial model of degree p -- Multiple linear regression model -- Matrix approach to least squares -- Normal equations -- Solving the normal equations -- Simple linear regression : matrix formulation -- Polynomial model : matrix formulation -- Properties of the least-squares estimators -- Interval estimation -- Confidence interval on coefficients -- Confidence interval on estimated mean -- Prediction interval on a single predicted response -- Testing hypotheses about model parameters -- Testing a single predictor variable -- Testing for significant regression -- Testing a subset of predictor variables -- Use of indicator or "dummy" variables -- Criteria for variable selection -- Forward selection method -- Backward elimination procedure -- Stepwise method -- Maximum R^2 method -- Mallows Ck statistic -- PRESS statistic -- Model transformation and concluding remarks
13. Analysis of variance -- One-way classification, fixed-effects model -- Comparing variances -- Pairwise comparisons -- Bonferroni T tests -- Duncan's multiple range test -- Tukey's test -- Testing contrasts -- Randomized complete block design -- Effectiveness of blocking -- Paired comparisons -- Latin squares -- Random-effects model -- One-way classification -- Design models in matrix form -- Alternative nonparametric methods -- Kruskal-Wallis test -- Friedman test
14. Factorial experiments -- Two-factor analysis of variance -- Extension to three factors -- Random and mixed model factorial experiments -- Random-effects model -- Mixed-effects model -- 2^k factorial experiments -- Computational techniques : Yates method -- 2^k factorial experiments in an incomplete block design -- Fractional factorial experiments
15. Categorical data -- Multinomial distribution -- Chi-squared goodness-of-fit tests -- Testing for independence -- r × c test for independence -- Comparing proportions -- r × c test for homogeneity -- Comparing proportions with paired data : McNemar's test
16. Statistical quality control -- Properties of control charts -- Monitoring means -- Distribution of run lengths -- Shewhart control charts for measurements -- Mean -- Range -- Shewhart control charts for attributes -- P chart (proportion defective) -- C charts (average number of defects) -- Tolerance limits -- Two-sided tolerance limits -- Assumed normal distribution -- One-sided tolerance limits -- Nonparametric tolerance interval -- Acceptance sampling -- Two-stage acceptance sampling -- Extensions in quality control -- Modifications of control charts -- Parameter design procedures -- Statistics tables.
Summary: This well-respected text is designed for the first course in probability and statistics taken by students majoring in Engineering and the Computing Sciences. The prerequisite is one year of calculus. The text offers a balanced presentation of applications and theory. The authors take care to develop the theoretical foundations for the statistical methods presented at a level that is accessible to students with only a calculus background. They explore the practical implications of the formal results for problem solving, so that students gain an understanding of the logic behind the techniques as well as practice in using them. The examples, exercises, and applications were chosen specifically for students in engineering and computer science and include opportunities for real data analysis.
No physical items for this record

