Ebooks

THE THEORY OF SAMPLE SURVEYS AND STATISTICAL DECISIONS: 2ND FULLY REVISED AND ENLARGED EDITION

K.S.Kushwaha, Rajesh Kumar
EISBN: 9789395319560 | Binding: Ebook | Pages: 0 | Language: English
Imprint: NIPA | DOI: 10.59317/9789391383442

295.00 USD 265.50 USD


INDIVIDUAL RATES ONLY. ACCESS VALID FOR 30 DAYS FROM THE DATE OF ACTIVATION FOR SINGLE USER ONLY.

In this second edition of the book, two new chapters of great academic importance have been added: Chapter 14 on Multiple and Partial Correlation-Regression, and Chapter 15 on Non-Parametric Methods. These chapters provide fundamental definitions and numerical examples for multiple and partial correlation coefficients, as well as Spearman's rank correlation, partial correlation, and multiple regression, using the appropriate statistical formulae. Chapter 15 also covers the methods most appropriate for analyzing data collected in the social sciences, which are often based on answers recalled from respondents' memories and may suffer from memory lapses or intentional errors. The examples provided in both chapters will be valuable assets for readers and users seeking a fundamental understanding of the subject matter.

0 Start Pages

In this second edition of the book, two new chapters of significant academic importance have been incorporated: (14) Multiple & Partial Correlation-Regression and (15) Non-Parametric Methods. Numerical examples on multiple and partial correlation-regression coefficients, based on their fundamental definitions, have been provided which are not available anywhere else. These examples will prove to be very significant assets to readers and users in understanding the subject matter from first principles. The author feels that the addition of these two new chapters makes the book more useful and beneficial, with wider coverage of academic matters. Even so, any suggestions received from its readers and users will be highly appreciated and acknowledged by the author.

 
1 Preliminaries on Sample Survey Theory

1.1 Introduction The use of sampling in making decisions about an aggregate (or population) is possibly as old as civilization itself. Sampling is first broadly classified into two categories, known as subjective and objective. Any type of sampling which depends upon the personal judgement or discretion of the sampler himself is called subjective sampling, while a sampling method governed by a sampling rule, independent of the sampler's own judgement, is known as objective sampling. The main difficulty with subjective sampling is that the sampler is ignorant of the degree of representativeness of his sample and of the accuracy of the final estimates of the population values obtained.

1 - 22 (22 Pages)
USD34.99
 
2 Methods of Simple Random Sampling

2.1 Simple Random Sampling The simplest and most common method of sampling is simple random sampling. In this procedure, the sample is drawn unit by unit, with equal probability of selection for each unit at each draw. It is sometimes referred to as unrestricted random sampling. 2.1.1 Definition “A sampling procedure which gives an equal chance to all possible samples of size n that can be drawn from a population of size N is called simple random sampling.”
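As an illustrative aside (not part of the book), the selection rule just defined can be sketched with Python's standard library; the population of N = 10 units and the sample size n = 4 below are hypothetical:

```python
import random

def srswor(population, n, seed=None):
    """Simple random sampling without replacement: every possible
    subset of size n has the same chance of being the sample."""
    rng = random.Random(seed)
    return rng.sample(population, n)  # draws n distinct units

# Hypothetical population of N = 10 labelled units; draw n = 4.
units = list(range(1, 11))
sample = srswor(units, 4, seed=1)
print(sample)  # four distinct units from 1..10
```

Each of the C(10, 4) possible samples is equally likely, which is exactly the definition quoted above.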

23 - 52 (30 Pages)
USD34.99
 
3 Stratified Random Sampling

3.1 Introduction Of all the methods of sampling, the procedure most commonly used in surveys is stratified random sampling. When the population units are heterogeneous in nature, a sample selected under the SRSWOR scheme may not be representative, and the estimate worked out from it may not be a reliable estimate of the population parameter. The sampling variance of the estimator obtained under that scheme may turn out to be very high, leading to a less precise estimate of the parameter. For such populations, to overcome these drawbacks, we use another sampling scheme known as stratified random sampling, which is defined as follows.
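A minimal sketch (not from the book) of the estimator this scheme leads to, the weighted combination of stratum sample means y_st = sum_h (N_h / N) * y_h, with two hypothetical strata and a hypothetical allocation:

```python
import random
from statistics import mean

def stratified_sample_mean(strata, n_h, seed=None):
    """Draw an SRSWOR sample of size n_h[h] within each stratum and
    combine the stratum sample means with weights N_h / N."""
    rng = random.Random(seed)
    N = sum(len(s) for s in strata)
    estimate = 0.0
    for stratum, n in zip(strata, n_h):
        sample = rng.sample(stratum, n)
        estimate += (len(stratum) / N) * mean(sample)
    return estimate

# Two hypothetical strata of clearly different character
strata = [[10, 12, 11, 13, 9, 10], [40, 42, 41, 39, 43, 45]]
est = stratified_sample_mean(strata, [3, 3], seed=2)
```

Because each stratum is internally homogeneous, the combined estimate varies far less from sample to sample than an unstratified SRSWOR estimate would.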

53 - 74 (22 Pages)
USD34.99
 
4 Ratio, Product and Regression Methods of Estimation

4.1 Introduction In many surveys, information on an auxiliary (supplementary, ancillary or a priori) variable x, which is highly (positively or negatively) correlated with the variable y under study, is readily available and can be used to improve the sampling design. Stratified sampling and probability proportional to size (p.p.s.) schemes are two examples of improved sampling designs in which data on an auxiliary variable are used. In both schemes, however, the information on the auxiliary variable for the individual sampling units must be available before the sampling design is drawn up. When data on the auxiliary variable for individual sampling units are not available, and only the aggregate value over all units is known, the two schemes cannot be used. In such a situation, the aggregate data on the auxiliary variable can still be used at the stage of estimating the parameters under consideration, provided the values of the auxiliary variable for the sampled units can easily be obtained at the time of recording the values of the study variable.
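The classical ratio estimator built from such aggregate auxiliary data is Y_R = (y_bar / x_bar) * X_bar, where X_bar is the known population mean of x. A small sketch with hypothetical data (not taken from the book):

```python
from statistics import mean

def ratio_estimate(y_sample, x_sample, X_bar):
    """Ratio estimator of the population mean of y: scale the known
    population mean X_bar of the auxiliary variable by the sample
    ratio y_bar / x_bar."""
    return mean(y_sample) / mean(x_sample) * X_bar

# Hypothetical sample in which y is roughly proportional to x
y = [12.0, 20.0, 16.0, 24.0]
x = [3.0, 5.0, 4.0, 6.0]
est = ratio_estimate(y, x, X_bar=4.5)
print(est)  # 18.0: sample ratio 4.0 applied to the known X_bar
```

The estimator gains precision over the plain sample mean precisely when y and x are strongly (positively) correlated, as the chapter assumes.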

75 - 110 (36 Pages)
USD34.99
 
5 Cluster Sampling

5.1 Introduction In random sampling it is assumed a priori that the population is divisible into a finite number of distinct and identifiable units, known as sampling units. The smallest unit into which the population can be divided is called an elementary unit (element) of the population. A group of such units is known as a cluster. When a cluster is treated as the sampling unit, the sampling procedure adopted is known as cluster sampling. If the entire area containing the population under study is divided into smaller area segments, and each element of the population belongs to one and only one such segment, the sampling is sometimes known as area sampling. Generally, identification and location of an individual element requires considerable time.
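A sketch (illustrative only, not from the book) of the procedure: select whole clusters at random and enumerate every element inside them; with equal cluster sizes, as assumed here, the mean of the enumerated elements estimates the population mean:

```python
import random
from statistics import mean

def cluster_sample_mean(clusters, m, seed=None):
    """Cluster sampling: choose m whole clusters by SRSWOR and
    enumerate all elements of the selected clusters."""
    rng = random.Random(seed)
    chosen = rng.sample(clusters, m)
    elements = [y for cl in chosen for y in cl]  # complete enumeration
    return mean(elements)

# Four hypothetical clusters of equal size
clusters = [[4, 6], [5, 7], [8, 10], [9, 11]]
est = cluster_sample_mean(clusters, 2, seed=3)
```

Only the selected clusters need to be located, which is the operational saving the chapter describes.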

111 - 126 (16 Pages)
USD34.99
 
6 Systematic Random Sampling

6.1 Introduction So far we have considered methods of sampling in which the successive units (elements or clusters) were selected with the help of random numbers. We shall now consider a new method of sample selection in which only the first unit of the sample is selected with the help of a random number, the rest being selected automatically according to a predetermined pattern. The method is known as systematic sampling. It is operationally more convenient than simple random sampling and at the same time ensures an equal probability of inclusion in the sample for each unit. Systematic sampling may also be termed systematic random sampling, because the first unit of the sample is chosen randomly. The pattern usually followed in selecting a systematic sample is a simple one involving regular spacing of units.
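The "random start, then regular spacing" rule can be sketched as follows (an illustrative aside; the population of N = 20 units and interval k = 5 are hypothetical, with N taken to be a multiple of k):

```python
import random

def systematic_sample(population, k, seed=None):
    """Linear systematic sampling: pick a random start among the
    first k units, then take every k-th unit thereafter."""
    rng = random.Random(seed)
    r = rng.randrange(k)        # random start, position 0..k-1
    return population[r::k]     # predetermined pattern: spacing k

units = list(range(1, 21))      # N = 20, k = 5 gives n = 4
sample = systematic_sample(units, 5, seed=4)
print(sample)                   # four units spaced exactly 5 apart
```

Only one random number is needed; every unit still has inclusion probability 1/k, as the text states.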

127 - 142 (16 Pages)
USD34.99
 
7 Multistage Sampling

7.1 Introduction In Chapter 5, cluster sampling was discussed, in which clusters were treated as sampling units and all the elements in the selected clusters were enumerated completely. It was noted there that cluster sampling is economical under certain circumstances, but the method restricts the spread of the sample over the population, which generally results in an increased sampling variance of the estimator under consideration. It is therefore logical to expect that the efficiency of the estimator will increase if the elements are distributed over a larger number of clusters. Hence, instead of observing all the units in the selected clusters, one may observe only a few elements of each. This entails sampling within the selected clusters as well, so the process of selection involves sampling at two stages: first for selecting the clusters, where the sampling units are the clusters, and second for selecting the units (elements) within the selected clusters, where the sampling units are the elements.
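The two stages described above can be sketched directly (illustrative only; the clusters and subsample sizes are hypothetical):

```python
import random

def two_stage_sample(clusters, m, n_within, seed=None):
    """Two-stage sampling: stage 1 selects m clusters (first-stage
    units); stage 2 draws an SRSWOR subsample of n_within elements
    within each selected cluster, instead of full enumeration."""
    rng = random.Random(seed)
    first_stage = rng.sample(clusters, m)                 # stage 1
    return [rng.sample(cl, n_within) for cl in first_stage]  # stage 2

# Three hypothetical clusters of four elements each
clusters = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
sample = two_stage_sample(clusters, 2, 2, seed=5)
```

For a fixed total of observed elements, this spreads the sample over more clusters than full enumeration would allow, which is the efficiency gain the text anticipates.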

143 - 160 (18 Pages)
USD34.99
 
8 Statistical Decision 1 (Point and Interval Estimation Theory)

8.1 Introduction We have already discussed in detail how different sampling schemes can be employed to draw representative samples suited to the many kinds of populations whose sampling units differ in nature. From a practical viewpoint, however, it is often more important to be able to make decisions about certain unknown characteristics of the population on the basis of the information available in representative samples drawn from the parent population. Such decisions are known as “statistical decisions”. For example, we may wish to decide, on the basis of sample data, whether (i) one educational system is better than another, (ii) a given six-faced die is unbiased (balanced), or (iii) a new insecticide is really effective in controlling the attack of insects. Such problems are dealt with in the theory of statistical decisions, which uses the principles of sampling theory.
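As a small illustration of the interval-estimation side of such decisions (a sketch, not the book's worked example), a large-sample confidence interval for a population mean takes the familiar form x_bar ± z * s / sqrt(n); the data below are hypothetical:

```python
from math import sqrt
from statistics import mean, stdev

def normal_ci(sample, z=1.96):
    """Approximate 95% confidence interval for the population mean:
    x_bar +/- z * s / sqrt(n), using the large-sample normal theory."""
    n = len(sample)
    m, s = mean(sample), stdev(sample)
    half_width = z * s / sqrt(n)
    return m - half_width, m + half_width

data = [9.8, 10.2, 10.0, 9.9, 10.1, 10.0]  # hypothetical measurements
lo, hi = normal_ci(data)
```

A point estimate answers "what is the value?"; the interval quantifies how far the sample-based decision could reasonably be from the truth.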

161 - 182 (22 Pages)
USD34.99
 
9 Test of Hypothesis and its Significance (Preliminaries)

9.1 Introduction In the preceding chapter, we studied the estimation part of statistical decision (inference), in which conclusions about the parameters of a population are inferred on the basis of the information available in sample observations drawn from the parent population under study. In the present chapter we discuss the problems related to tests of statistical hypotheses and their significance concerning the population parameters under study. To understand the subject matter of this chapter clearly and easily, one must be familiar with the statistical terminology used. Hence, we discuss the various notable terms, with suitable examples, in a simple way for better understanding.

183 - 192 (10 Pages)
USD34.99
 
10 Normal Distribution and Test Based on it (Large Sample Test or Normal Test or Z Test)

10.1 Normal Distribution The normal distribution is the most widely used distribution in statistics. Almost all biological data, including agricultural data such as crop yield, plant height, seed weight, seed size, protein percentage in pulses, and oil content in oilseeds, are assumed to be normally distributed, although in practice this assumption is rarely verified. In distribution theory, for large values of n, the number of trials (or the sample size), almost all distributions, e.g. the binomial, Poisson, negative binomial, chi-square, Student's t and Snedecor's F distributions, are very closely approximated by the normal distribution. It is a continuous distribution with its mean and variance as its two parameters.
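A sketch of the large-sample Z test the chapter title refers to (illustrative only; the figures are hypothetical): the statistic is Z = (x_bar - mu0) / (sigma / sqrt(n)), referred to the standard normal distribution.

```python
from math import sqrt, erf

def z_test_mean(sample_mean, mu0, sigma, n):
    """Large-sample Z test of H0: mu = mu0 with known (or assumed)
    sigma; returns the Z statistic and the two-sided p-value."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    # Standard normal CDF via the error function
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    p = 2.0 * (1.0 - phi(abs(z)))
    return z, p

# Hypothetical data: n = 64, x_bar = 52, testing mu0 = 50, sigma = 8
z, p = z_test_mean(sample_mean=52.0, mu0=50.0, sigma=8.0, n=64)
print(round(z, 2))  # Z = 2.0
```

At the 5% level, |Z| = 2.0 exceeds 1.96, so H0 would be rejected, just barely.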

193 - 214 (22 Pages)
USD34.99
 
11 Exact Sampling Distribution and Related Small Sample Tests (F, t)

11.1 Introduction In the preceding Chapter 10, we discussed in detail the test procedures for which it is assumed that the sample size n is large (i.e. n ≥ 30). Those tests are also known as approximate tests or Z tests. When the sample size n is small, i.e. n < 30, the approximate tests fail to deal with the problem, and we apply the theory of exact samples for small-sample tests. Exact sample tests can, however, also be applied to large samples, though the converse does not hold. In all exact sample tests, the basic assumption is that the parent population is normally distributed. In this part we discuss in detail the exact sample tests based on the exact sampling distributions, namely Snedecor's F, Student's t and the chi-square (χ²) distributions. These distributions are introduced only briefly, because the main emphasis is on the test procedures based on them.
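As an illustrative sketch of the small-sample case (not the book's example; data hypothetical), the one-sample Student's t statistic is t = (x_bar - mu0) / (s / sqrt(n)) on n - 1 degrees of freedom, with |t| compared against the tabulated critical value:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """Student's t statistic for H0: mu = mu0 on a small sample;
    compare |t| with the table value at n - 1 degrees of freedom."""
    n = len(sample)
    t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
    return t, n - 1

# Hypothetical small sample, n = 8, testing mu0 = 49
t, df = one_sample_t([48, 52, 51, 49, 50, 53, 47, 50], mu0=49)
```

The sample standard deviation s replaces the unknown sigma of the Z test, which is exactly why the reference distribution changes from normal to t.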

215 - 232 (18 Pages)
USD34.99
 
12 Chi-square Distribution and its Application

12.1 Chi-Square Variate The square of a standard normal variate is known as a chi-square variate, or chi-square statistic, with one degree of freedom. Thus if X is a normal variate with mean μ and variance σ², then the variate Z = (X − μ)/σ is a standard normal variate with mean zero and variance unity (one), and χ² = Z² is a chi-square variate (statistic) with one degree of freedom. In general, if x_i (i = 1, 2, ..., n) are n independent normal variates with means μ_i and variances σ_i², then χ² = Σ [(x_i − μ_i)/σ_i]² is a chi-square variate (statistic) with n degrees of freedom and follows the chi-square distribution; symbolically, χ² ~ χ²(n). 12.2 Chi-Square Distribution Let χ² be a chi-square variate with n degrees of freedom. Then its probability density function is written as
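The defining sum can be computed directly (an illustrative sketch; the observations, means and standard deviations below are hypothetical):

```python
def chi_square_statistic(xs, mus, sigmas):
    """Chi-square variate per the definition above: the sum of
    squared standardised deviates, (x_i - mu_i)/sigma_i, giving a
    statistic with n degrees of freedom."""
    return sum(((x - m) / s) ** 2 for x, m, s in zip(xs, mus, sigmas))

# Three hypothetical observations with their assumed means and sd's
chi2 = chi_square_statistic([12.0, 9.0, 11.0],
                            [10.0, 10.0, 10.0],
                            [2.0, 1.0, 2.0])
print(chi2)  # 1.0 + 1.0 + 0.25 = 2.25, on 3 degrees of freedom
```

Each standardised deviate is a standard normal variate, so squaring and summing yields the chi-square variate of the definition.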

233 - 250 (18 Pages)
USD34.99
 
13 Miscellaneous Tests of Significance

13.1 Introduction In the previous chapters, we considered different tests of significance based on the normal distribution, the F distribution, Student's t distribution and the chi-square (χ²) distribution, which are frequently applied by users in different disciplines of the applied and social sciences. In this chapter some further tests of significance are considered which were not covered in those chapters. These tests, also based on t, F, Fisher's z transformation and some other test statistics, are discussed one by one in the following sections. 13.2 t Test for Testing the Significance of an Observed Sample Correlation Coefficient r_xy Let r be the observed sample correlation coefficient based on a small sample of size n drawn from a bivariate normal population with population correlation coefficient ρ. The standard error of r is given as
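The standard t test of H0: ρ = 0 based on an observed r uses t = r·sqrt(n − 2)/sqrt(1 − r²) on n − 2 degrees of freedom. A sketch with hypothetical figures (not the book's worked example):

```python
from math import sqrt

def t_for_r(r, n):
    """t statistic for testing H0: rho = 0 from an observed sample
    correlation coefficient r; degrees of freedom are n - 2."""
    t = r * sqrt(n - 2) / sqrt(1 - r * r)
    return t, n - 2

# Hypothetical: r = 0.6 observed in a sample of n = 27 pairs
t, df = t_for_r(0.6, 27)
print(round(t, 2), df)  # 3.75 on 25 d.f.
```

Since 3.75 comfortably exceeds the usual 5% table value at 25 d.f., such an r would be judged significant.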

251 - 278 (28 Pages)
USD34.99
 
14 Rank, Multiple, Partial and Intraclass Correlation-Regression

14.1 Introduction The theory of simple correlation (r_xy), the regression coefficients (b_yx and b_xy) and their tests of significance were discussed in Chapter 13. The tests of significance of the observed partial correlation coefficient r_12.34...(k+2) and the multiple correlation coefficient R_1.23 were also discussed in Sections 13.4 and 13.5 respectively. In this chapter, the procedures for calculating Spearman's rank correlation coefficient, the partial correlation and regression coefficients, and the multiple correlation coefficient, using their fundamental definitions as well as the appropriate statistical formulae in terms of the pairwise simple correlation coefficients r_ij (i ≠ j = 1, 2, 3) in trivariate populations, are presented, so that students and other users become familiar with crystal-clear basic concepts of the subject matter, with the help of empirical studies based on original data on (X1, X2, X3). Here X1 is treated as the study variate and (X2, X3) as covariates.
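The standard trivariate formulae the chapter works with can be sketched as follows (illustrative only; the pairwise correlations below are hypothetical, not the book's data):

```python
from math import sqrt

def partial_r12_3(r12, r13, r23):
    """First-order partial correlation r_12.3 from the pairwise
    simple correlations: (r12 - r13*r23) / sqrt((1-r13^2)(1-r23^2))."""
    return (r12 - r13 * r23) / sqrt((1 - r13 ** 2) * (1 - r23 ** 2))

def multiple_R1_23(r12, r13, r23):
    """Multiple correlation R_1.23 of X1 on (X2, X3):
    sqrt((r12^2 + r13^2 - 2*r12*r13*r23) / (1 - r23^2))."""
    num = r12 ** 2 + r13 ** 2 - 2 * r12 * r13 * r23
    return sqrt(num / (1 - r23 ** 2))

# Hypothetical pairwise correlations among (X1, X2, X3)
r12, r13, r23 = 0.6, 0.5, 0.4
print(round(partial_r12_3(r12, r13, r23), 4))
print(round(multiple_R1_23(r12, r13, r23), 4))
```

Note how partialling out X3 lowers the apparent association between X1 and X2 whenever X3 is correlated with both.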

279 - 302 (24 Pages)
USD34.99
 
15 Non-Parametric Methods

15.1 Introduction Data collected in the social sciences and in economic studies are usually based on answers provided by respondents. These answers are often based on recollection of past memories and frequently suffer from memory lapses, intentional errors, etc. As such, they provide only crude information which is never exact. Analysing such data on the pattern of other biological data will not yield much information, because in many cases the underlying assumptions will not be fulfilled. For such data, non-parametric (N.P.) methods are the most appropriate. 15.2 Parametric Tests Tests which deal with the parameters of populations are known as parametric tests; e.g. the Z, t and F tests are parametric tests.
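One of the rank-based methods the chapter covers, Spearman's rank correlation, can be sketched in the simple no-ties textbook form rho = 1 − 6·Σd²/(n(n² − 1)); the two score lists below are hypothetical:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation, rho = 1 - 6*sum(d^2)/(n(n^2-1)),
    assuming no tied ranks (the simple textbook case)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical scores given by two examiners to five candidates
scores_a = [86, 71, 77, 68, 91]
scores_b = [88, 65, 80, 70, 95]
print(spearman_rho(scores_a, scores_b))  # 0.9
```

Because only the ranks enter the formula, crude or memory-based responses of the kind described above can still be analysed meaningfully.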

303 - 322 (20 Pages)
USD34.99
 
16 End Pages

Agarwal, B.L. (1999). Basic Statistics, 2nd edition, Wiley Eastern Ltd., New Delhi.
Mood, A.M., Graybill, F.A. and Boes, D.C. Introduction to the Theory of Statistics, McGraw-Hill.
Bowley, A.L. (1926). Measurement of precision attained in sampling. Bull. Inter. Statist. Inst. 22: 1-62.
Cochran, W.G. (1946). Relative accuracy of systematic and stratified random sampling for a certain class of population. Ann. Math. Statist. 17: 164-177.
Cochran, W.G. (1977). Sampling Techniques, 3rd edition, Wiley Eastern Ltd., New Delhi.
Draper, N.R. and Smith, H. (1966). Applied Regression Analysis. Wiley Series in Probability and Mathematical Statistics, 2nd edition, 615-16.
Fisher, R.A. and Yates, F. (1963). Statistical Tables for Biological, Agricultural and Medical Research.
Ganguli, M. (1941). A note on nested sampling. Sankhya, 5: 449-452.
Gupta, S.C. and Kapoor, V.K. (1983). Fundamentals of Mathematical Statistics, Sultan Chand and Sons, New Delhi.

 