Field, A: Adventure in Statistics
(Language: English)
Unfortunately already sold out
Free shipping
Book
Fr. 64.90
incl. VAT
- Credit card, PayPal, purchase on invoice
- 30-day right of withdrawal
Product details
Product information on “Field, A: Adventure in Statistics”
Blurb for “Field, A: Adventure in Statistics”
An Adventure in Statistics has been shortlisted for a book award. Once again, bestselling author and award-winning teacher Andy Field hasn't just broken the traditional textbook mould with his new novel/textbook; he has forged the only statistics book on the market with a terrifying probability bridge, zombies and a talking cat!

Andy Field's unique approach gently introduces students across the social sciences to the importance and relevance of statistics in a stunningly illustrated format and style. By weaving in a compelling narrative, he takes students on an exciting journey through introductory-level statistics, overcoming potential anxiety around the subject and providing a vibrant alternative to the dullness of many typical offerings. The medium, the message and the rock-solid statistics coverage combine to raise the level of attainment of even the most maths-phobic student. The book assumes no previous knowledge, nor does it require the use of data analysis software. It covers the material you would expect for an introductory-level statistics module that his previous books (Discovering Statistics Using IBM SPSS Statistics and Discovering Statistics Using R) only touch on, but with a contemporary twist, laying down strong foundations for understanding classical and Bayesian approaches to data analysis. In doing so, it provides an unrivalled launchpad to further study, research and inquisitiveness about the real world, equipping students with the skills to succeed in their chosen degree and which they can go on to apply in the workplace.

Our Facebook page for lovers of Andy Field's books and statistics-phobes alike is a place for readers to share their experiences of Andy's texts and where we post news, free stuff, photos, videos, competitions and more.
Table of contents for “Field, A: Adventure in Statistics”
Prologue: The Dying Stars
1 Why You Need Science: The Beginning and The End
1.1. Will you love me now?
1.2. How science works
1.2.1. The research process
1.2.2. Science as a life skill
1.3. Research methods
1.3.1. Correlational research methods
1.3.2. Experimental research methods
1.3.3. Practice, order and randomization
1.4. Why we need science
2 Reporting Research, Variables and Measurement: Breaking the Law
2.1. Writing up research
2.2. Maths and statistical notation
2.3. Variables and measurement
2.3.1. The conspiracy unfolds
2.3.2. Qualitative and quantitative data
2.3.3. Levels of measurement
2.3.4. Measurement error
2.3.5. Validity and reliability
3 Summarizing Data: She Loves Me Not?
3.1. Frequency distributions
3.1.1. Tabulated frequency distributions
3.1.2. Grouped frequency distributions
3.1.3. Graphical frequency distributions
3.1.4. Idealized distributions
3.1.5. Histograms for nominal and ordinal data
3.2. Throwing Shapes
4 Fitting Models (Central Tendency): Somewhere In The Middle
4.1. Statistical Models
4.1.1. From the dead
4.1.2. Why do we need statistical models?
4.1.3. Sample size
4.1.4. The one and only statistical model
4.2. Central Tendency
4.2.1. The mode
4.2.2. The median
4.2.3. The mean
4.3. The 'fit' of the mean: variance
4.3.1. The fit of the mean
4.3.2. Estimating the fit of the mean from a sample
4.3.3. Outliers and variance
4.4. Dispersion
4.4.1. The standard deviation as an indication of dispersion
4.4.2. The range and interquartile range
5 Presenting Data: Aggressive Perfector
5.1. Types of graphs
5.2. Another perfect day
5.3. The art of presenting data
5.3.1. What makes a good graph?
5.3.2. Bar graphs
5.3.3. Line graphs
5.3.4. Boxplots (box-whisker diagrams)
5.3.5. Graphing relationships: the scatterplot
5.3.6. Pie charts
6 Z-Scores: The wolf is loose
6.1. Interpreting raw scores
6.2. Standardizing a score
6.3. Using z-scores to compare distributions
6.4. Using z-scores to compare scores
6.5. Z-scores for samples
7 Probability: The Bridge of Death
7.1. Probability
7.1.1. Classical probability
7.1.2. Empirical probability
7.2. Probability and frequency distributions
7.2.1. The discs of death
7.2.2. Probability density functions
7.2.3. Probability and the normal distribution
7.2.4. The probability of a score greater than x
7.2.5. The probability of a score less than x: The tunnels of death
7.2.6. The probability of a score between two values: The catapults of death
7.3. Conditional probability: Deathscotch
8 Inferential Statistics: Going Beyond the Data
8.1. Estimating parameters
8.2. How well does a sample represent the population?
8.2.1. Sampling distributions
8.2.2. The standard error
8.2.3. The central limit theorem
8.3. Confidence Intervals
8.3.1. Calculating confidence intervals
8.3.2. Calculating other confidence intervals
8.3.3. Confidence intervals in small samples
8.4. Inferential statistics
9 Robust Estimation: Man Without Faith or Trust
9.1. Sources of bias
9.1.1. Extreme scores and non-normal distributions
9.1.2. The mixed normal distribution
9.2. A great mistake
9.3. Reducing bias
9.3.1. Transforming data
9.3.2. Trimming data
9.3.3. M-estimators
9.3.4. Winsorizing
9.3.5. The bootstrap
9.4. A final point about extreme scores
10 Hypothesis Testing: In Reality All is Void
10.1. Null hypothesis significance testing
10.1.1. Types of hypothesis
10.1.2. Fisher's p-value
10.1.3. The principles of NHST
10.1.4. Test statistics
10.1.5. One- and two-tailed tests
10.1.6. Type I and Type II errors
10.1.7. Inflated error rates
10.1.8. Statistical power
10.1.9. Confidence intervals and statistical significance
10.1.10. Sample size and statistical significance
11 Modern Approaches to Theory Testing: A Careworn Heart
11.1. Problems with NHST
11.1.1. What can you conclude from a 'significance' test?
11.1.2. All-or-nothing thinking
11.1.3. NHST is influenced by the intentions of the scientist
11.2. Effect sizes
11.2.1. Cohen's d
11.2.2. Pearson's correlation coefficient, r
11.2.3. The odds ratio
11.3. Meta-analysis
11.4. Bayesian approaches
11.4.1. Asking a different question
11.4.2. Bayes' theorem revisited
11.4.3. Comparing hypotheses
11.4.4. Benefits of Bayesian approaches
12 Assumptions: Starblind
12.1. Fitting models: bringing it all together
12.2. Assumptions
12.2.1. Additivity and linearity
12.2.2. Independent errors
12.2.3. Homoscedasticity/homogeneity of variance
12.2.4. Normally distributed something or other
12.2.5. External variables
12.2.6. Variable types
12.2.7. Multicollinearity
12.2.8. Non-zero variance
12.3. Turning ever towards the sun
13 Relationships: A Stranger's Grave
13.1. Finding relationships in categorical data
13.1.1. Pearson's chi-square test
13.1.2. Assumptions
13.1.3. Fisher's exact test
13.1.4. Yates's correction
13.1.5. The likelihood ratio (G-test)
13.1.6. Standardized residuals
13.1.7. Calculating an effect size
13.1.8. Using a computer
13.1.9. Bayes factors for contingency tables
13.1.10. Summary
13.2. What evil lay dormant
13.3. Modelling relationships
13.3.1. Covariance
13.3.2. Pearson's correlation coefficient
13.3.3. The significance of the correlation coefficient
13.3.4. Confidence intervals for r
13.3.5. Using a computer
13.3.6. Robust estimation of the correlation
13.3.7. Bayesian approaches to relationships between two variables
13.3.8. Correlation and causation
13.3.9. Calculating the effect size
13.4. Silent sorrow in empty boats
14 The General Linear Model: Red Fire Coming Out From His Gills
14.1. The linear model with one predictor
14.1.1. Estimating parameters
14.1.2. Interpreting regression coefficients
14.1.3. Standardized regression coefficients
14.1.4. The standard error of b
14.1.5. Confidence intervals for b
14.1.6. Test statistic for b
14.1.7. Assessing the goodness of fit
14.1.8. Fitting a linear model using a computer
14.1.9. When this fails
14.2. Bias in the linear model
14.3. A general procedure for fitting linear models
14.4. Models with several predictors
14.4.1. The expanded linear model
14.4.2. Methods for entering predictors
14.4.3. Estimating parameters
14.4.4. Using a computer to build more complex models
14.5. Robust regression
14.5.1. Bayes factors for linear models
15 Comparing Two Means: Rock or Bust
15.1. Testing differences between means: The rationale
15.2. Means and the linear model
15.2.1. Estimating the model parameters
15.2.2. How the model works
15.2.3. Testing the model parameters
15.2.4. The independent t-test on a computer
15.2.5. Assumptions of the model
15.3. Everything you believe is wrong
15.4. The paired-samples t-test
15.4.1. The paired-samples t-test on a computer
15.5. Alternative approaches
15.5.1. Effect sizes
15.5.2. Robust tests of two means
15.5.3. Bayes factors for comparing two means
16 Comparing Several Means: Faith in Others
16.1. General procedure for comparing means
16.2. Comparing several means with the linear model
16.2.1. Dummy coding
16.2.2. The F-ratio as a test of means
16.2.3. The total sum of squares (SSt)
16.2.4. The model sum of squares (SSm)
16.2.5. The residual sum of squares (SSr)
16.2.6. Partitioning variance
16.2.7. Mean squares
16.2.8. The F-ratio
16.2.9. Comparing several means using a computer
16.3. Contrast coding
16.3.1. Generating contrasts
16.3.2. Devising weights
16.3.3. Contrasts and the linear model
16.3.4. Post hoc procedures
16.3.5. Contrasts and post hoc tests using a computer
16.4. Storm of memories
16.5. Repeated-measures designs
16.5.1. The total sum of squares, SSt
16.5.2. The within-participant variance, SSw
16.5.3. The model sum of squares, SSm
16.5.4. The residual sum of squares, SSr
16.5.5. Mean squares and the F-ratio
16.5.6. Repeated-measures designs using a computer
16.6. Alternative approaches
16.6.1. Effect sizes
16.6.2. Robust tests of several means
16.6.3. Bayesian analysis of several means
16.7. The invisible man
17 Factorial Designs
17.1. Factorial designs
17.2. General procedure and assumptions
17.3. Analysing factorial designs
17.3.1. Factorial designs and the linear model
17.3.2. The fit of the model
17.3.3. Factorial designs on a computer
17.4. From the pinnacle to the pit
17.5. Alternative approaches
17.5.1. Calculating effect sizes
17.5.2. Robust analysis of factorial designs
17.5.3. Bayes factors for factorial designs
17.6. Interpreting interaction effects
Epilogue: The Genial Night: Si Momentum Requiris, Circumspice
About the author: Andy Field
Bibliographic details
- Author: Andy Field
- 746 pages, dimensions: 18.9 × 24.6 cm, paperback, English
- Edited by: Andy Field
- Publisher: Sage Publications Ltd.
- ISBN-10: 1446210456
- ISBN-13: 9781446210451
- Publication date: 25 May 2016
- Language: English