Geostatistics explained : an introductory guide for earth scientists, Steve McKillup, Melinda Darby Dyar

Label
Geostatistics explained : an introductory guide for earth scientists
Title
Geostatistics explained
Title remainder
an introductory guide for earth scientists
Statement of responsibility
Steve McKillup, Melinda Darby Dyar
Creator
McKillup, Steve
Contributor
Dyar, M. Darby
Subject
Geology
Language
eng
Cataloging source
DLC
Illustrations
illustrations
Index
index present
Literary form
non fiction
Label
Geostatistics explained : an introductory guide for earth scientists, Steve McKillup, Melinda Darby Dyar
Publication
Note
Includes index
Contents
  • 1. Introduction -- 1.1. Why do earth scientists need to understand experimental design and statistics? -- 1.2. What is this book designed to do? -- 2. "Doing science": hypotheses, experiments and disproof -- 2.1. Introduction -- 2.2. Basic scientific method -- 2.3. Making a decision about a hypothesis -- 2.4. Why can't a hypothesis or theory ever be proven? -- 2.5. "Negative" outcomes -- 2.6. Null and alternate hypotheses -- 2.7. Conclusion -- 2.8. Questions -- 3. Collecting and displaying data -- 3.1. Introduction -- 3.2. Variables, sampling units and types of data -- 3.3. Displaying data -- 3.4. Displaying ordinal or nominal scale data -- 3.5. Bivariate data -- 3.6. Data expressed as proportions of a total -- 3.7. Display of geographic direction or orientation -- 3.8. Multivariate data -- 3.9. Conclusion -- 4. Introductory concepts of experimental design -- 4.1. Introduction -- 4.2. Sampling: mensurative experiments -- 4.3. Manipulative experiments -- 4.4. Sometimes you can only do an unreplicated experiment -- 4.5. Realism -- 4.6. A bit of common sense -- 4.7. Designing a "good" experiment -- 4.8. Conclusion -- 4.9. Questions -- 5. Doing science responsibly and ethically -- 5.1. Introduction -- 5.2. Dealing fairly with other people's work -- 5.3. Doing the sampling or the experiment -- 5.4. Evaluating and reporting results -- 5.5. Quality control in science -- 5.6. Questions -- 6. Probability helps you make a decision about your results -- 6.1. Introduction -- 6.2. Statistical tests and significance levels -- 6.3. What has this got to do with making a decision or statistical testing? -- 6.4. Making the wrong decision -- 6.5. Other probability levels -- 6.6. How are probability values reported? -- 6.7. All statistical tests do the same basic thing -- 6.8. A very simple example: the chi-square test for goodness of fit -- 6.9. What if you get a statistic with a probability of exactly 0.05? -- 6.10. Conclusion -- 6.11. Questions -- 7. Working from samples: data, populations and statistics -- 7.1. Using a sample to infer the characteristics of a population -- 7.2. Statistical tests -- 7.3. The normal distribution -- 7.4. Samples and populations -- 7.5. Your sample mean may not be an accurate estimate of the population mean -- 7.6. What do you do when you only have data from one sample? -- 7.7. Why are the statistics that describe the normal distribution so important? -- 7.8. Distributions that are not normal -- 7.9. Other distributions -- 7.10. Other statistics that describe a distribution -- 7.11. Conclusion -- 7.12. Questions -- 8. Normal distributions: tests for comparing the means of one and two samples -- 8.1. Introduction -- 8.2. The 95% confidence interval and 95% confidence limits -- 8.3. Using the Z statistic to compare a sample mean and population mean when population statistics are known -- 8.4. Comparing a sample mean to an expected value when population statistics are not known -- 8.5. Comparing the means of two related samples -- 8.6. Comparing the means of two independent samples -- 8.7. Are your data appropriate for a t test? -- 8.8. Distinguishing between data that should be analyzed by a paired-sample test and a test for two independent samples -- 8.9. Conclusion -- 8.10. Questions -- 9. Type 1 and Type 2 error, power and sample size -- 9.1. Introduction -- 9.2. Type 1 error -- 9.3. Type 2 error -- 9.4. The power of a test -- 9.5. What sample size do you need to ensure the risk of Type 2 error is not too high? -- 9.6. 
Type 1 error, Type 2 error and the concept of risk -- 9.7. Conclusion -- 9.8. Questions -- 10. Single-factor analysis of variance -- 10.1. Introduction -- 10.2. Single-factor analysis of variance -- 10.3. An arithmetic/pictorial example -- 10.4. Unequal sample sizes (unbalanced designs) -- 10.5. An ANOVA does not tell you which particular treatments appear to be from different populations -- 10.6. Fixed or random effects -- 10.7. Questions -- 11. Multiple comparisons after ANOVA -- 11.1. Introduction -- 11.2. Multiple comparison tests after a Model I ANOVA -- 11.3. An a posteriori Tukey comparison following a significant result for a single-factor Model I ANOVA -- 11.4. Other a posteriori multiple comparison tests -- 11.5. Planned comparisons -- 11.6. Questions -- 12. Two-factor analysis of variance -- 12.1. Introduction -- 12.2. What does a two-factor ANOVA do? -- 12.3. How does a two-factor ANOVA analyze these data? -- 12.4. How does a two-factor ANOVA separate out the effects of each factor and interaction? -- 12.5. An example of a two-factor analysis of variance -- 12.6. Some essential cautions and important complications -- 12.7. Unbalanced designs -- 12.8. More complex designs -- 12.9. Questions -- 13. Important assumptions of analysis of variance, transformations and a test for equality of variances -- 13.1. Introduction -- 13.2. Homogeneity of variances -- 13.3. Normally distributed data -- 13.4. Independence -- 13.5. Transformations -- 13.6. Are transformations legitimate? -- 13.7. Tests for heteroscedasticity -- 13.8. Questions -- 14. Two-factor analysis of variance without replication, and nested analysis of variance -- 14.1. Introduction -- 14.2. Two-factor ANOVA without replication -- 14.3. A posteriori comparison of means after a two-factor ANOVA without replication -- 14.4. Randomized blocks -- 14.5. Nested ANOVA as a special case of a single-factor ANOVA -- 14.6. A pictorial explanation of a nested ANOVA -- 14.7. A final comment on ANOVA: this book is only an introduction -- 14.8. Questions -- 15. Relationships between variables: linear correlation and linear regression -- 15.1. Introduction -- 15.2. Correlation contrasted with regression -- 15.3. Linear correlation -- 15.4. Calculation of the Pearson r statistic -- 15.5. Is the value of r statistically significant? -- 15.6. Assumptions of linear correlation -- 15.7. Conclusion -- 15.8. Questions -- 16. Linear regression -- 16.1. Introduction -- 16.2. Linear regression -- 16.3. Calculation of the slope of the regression line -- 16.4. Calculation of the intercept with the Y axis -- 16.5. Testing the significance of the slope and the intercept of the regression line -- 16.6. An example: school cancellations and snow -- 16.7. Predicting a value of Y from a value of X -- 16.8. Predicting a value of X from a value of Y -- 16.9. The danger of extrapolating beyond the range of data available -- 16.10. Assumptions of linear regression analysis -- 16.11. Multiple linear regression -- 16.12. Further topics in regression -- 16.13. Questions -- 17. Non-parametric statistics -- 17.1. Introduction -- 17.2. The danger of assuming normality when a population is grossly non-normal -- 17.3. The value of making a preliminary inspection of the data -- 18. Non-parametric tests for nominal scale data -- 18.1. Introduction -- 18.2. Comparing observed and expected frequencies: the chi-square test for goodness of fit -- 18.3. Comparing proportions among two or more independent samples -- 18.4. Bias when there is one degree of freedom -- 18.5. 
Three-dimensional contingency tables -- 18.6. Inappropriate use of tests for goodness of fit and heterogeneity -- 18.7. Recommended tests for categorical data -- 18.8. Comparing proportions among two or more related samples of nominal scale data -- 18.9. Questions -- 19. Non-parametric tests for ratio, interval or ordinal scale data -- 19.1. Introduction -- 19.2. A non-parametric comparison between one sample and an expected distribution -- 19.3. Non-parametric comparisons between two independent samples -- 19.4. Non-parametric comparisons among more than two independent samples --
  • 19.5. Non-parametric comparisons of two related samples -- 19.6. Non-parametric comparisons among three or more related samples -- 19.7. Analyzing ratio, interval or ordinal data that show gross differences in variance among treatments and cannot be satisfactorily transformed -- 19.8. Non-parametric correlation analysis -- 19.9. Other non-parametric tests -- 19.10. Questions -- 20. Introductory concepts of multivariate analysis -- 20.1. Introduction -- 20.2. Simplifying and summarizing multivariate data -- 20.3. An R-mode analysis: principal components analysis -- 20.4. How does a PCA combine two or more variables into one? -- 20.5. What happens if the variables are not highly correlated? -- 20.6. PCA for more than two variables -- 20.7. The contribution of each variable to the principal components -- 20.8. An example of the practical use of principal components analysis -- 20.9. How many principal components should you plot? -- 20.10. How much variation must a PCA explain before it is useful? -- 20.11. Summary and some cautions and restrictions on use of PCA -- 20.12. Q-mode analyses: multidimensional scaling -- 20.13. How is a univariate measure of dissimilarity among sampling units extracted from multivariate data? -- 20.14. An example -- 20.15. Stress -- 20.16. Summary and cautions on the use of multidimensional scaling -- 20.17. Q-mode analyses: cluster analysis -- 20.18. Which multivariate analysis should you use? -- 20.19. Questions -- 21. Introductory concepts of sequence analysis -- 21.1. Introduction -- 21.2. Sequences of ratio, interval or ordinal scale data -- 21.3. Preliminary inspection by graphing -- 21.4. Detection of within-sequence similarity and dissimilarity -- 21.5. Cross-correlation -- 21.6. Regression analysis -- 21.7. Simple linear regression -- 21.8. More complex regression -- 21.9. Simple autoregression -- 21.10. More complex series with a cyclic component -- 21.11. Statistical packages and time series analysis -- 21.12. Some very important limitations and cautions -- 21.13. Sequences of nominal scale data -- 21.14. Records of the repeated occurrence of an event -- 21.15. Conclusion -- 21.16. Questions -- 22. Introductory concepts of spatial analysis -- 22.1. Introduction -- 22.2. Testing whether a spatial distribution occurs at random -- 22.3. Data for the direction of objects -- 22.4. Prediction and interpolation in two dimensions -- 22.5. Conclusion -- 22.6. Questions -- 23. Choosing a test -- 23.1. Introduction -- Appendices -- Appendix A. Critical values of chi-square, t and F -- Appendix B. Answers to questions
Control code
ocn460059697
Dimensions
23 cm
Extent
xvi, 396 p.
ISBN
9780521763226
ISBN type
hardback
LCCN
2010002838
Other physical details
ill.
System control number
(OCoLC)460059697

Library Locations

    • Manawatū Library
      Tennent Drive, Palmerston North, 4472, NZ
      -40.385340 175.617349