Ebooks

BASIC BIOSTATISTICS

Biswanath Patra, Bharat Bhushan, Hitesh Purohit, Parth Gaur
EISBN: 9789358879087 | Binding: Ebook | Pages: 0 | Language: English
Imprint: NIPA | DOI: 10.59317/9789358879087

250.00 USD 225.00 USD


INDIVIDUAL RATES ONLY. ACCESS VALID FOR 30 DAYS FROM THE DATE OF ACTIVATION FOR SINGLE USER ONLY.

This Basic Biostatistics book was written to assist students and researchers in the biological sciences, with the primary purpose of helping them learn about and apply appropriate experimental designs and statistical methods. The chapters cover all basic topics at undergraduate and postgraduate levels in Indian and foreign institutions. Statistical methods applied to the biological sciences are known as Biostatistics or Biometrics, and they have their origins in agricultural and biomedical research.

The special feature that distinguishes biometrics within statistics is the fact that biological measurements are variable, not only because of measurement error but also because of natural variability from genetic and environmental sources. These sources of variability must be considered when drawing conclusions about biological material. Accounting for them has led to the development of experimental designs that incorporate blocking, covariates and repeated measures. Appropriate techniques for analysing data from these and other designs are discussed thoroughly in the book.

This Basic Biostatistics book has several attractive features: it is simple, easy to understand, and compiled and arranged in plain English. All the statistical theory is illustrated with examples for a better understanding of the statistical principles, and a number of solved numerical problems are discussed throughout.

0 Start Pages

The main aim in writing the book “Basic Biostatistics” is to assist students and faculty of the biological sciences in teaching and research. Besides, this book provides the basic concepts needed to apply appropriate experimental designs and statistical tools for recording and analysing data and drawing better inferences. The chapters cover the basic concepts required for teaching at undergraduate as well as postgraduate levels, keeping in view Indian and foreign universities/institutions. Statistical methods applied to the biological sciences are known as Biostatistics or Biometrics, which originated from agricultural and biomedical research. The peculiar characteristic that distinguishes biometrics within statistics is the fact that biological measurements are variable, not only because of measurement error but also because of natural variability from genetic and environmental sources. These sources of variability must be taken into account when drawing conclusions about any biological material. Accounting for them has led to the development of experimental designs that incorporate blocking, covariates and repeated measures. Appropriate techniques for analysing data from these and other designs are discussed in this book. The syllabus of BVSc & AH and BSc (Ag.) has been covered in 15 chapters as per the Veterinary Council of India (VCI). After this, two chapters have been written for PG and PhD students of the Animal Genetics and Breeding discipline, covering Descriptive Biostatistics, Inferential Biostatistics and Linear Models in animal breeding. The last chapter contains question banks for UG and PG students, which will help students practise and prepare for various university examinations and ICAR examinations such as JRF, SRF, ARS and CSIR.
All the Veterinary Universities/Colleges and Agriculture Colleges in India, and Animal Science Colleges across the globe, teach Basic Biostatistics to Animal Genetics and Breeding students at UG, PG and PhD levels, with the aim of learning and developing new methodology for livestock improvement so that productivity can be enhanced to the optimum level.

 
1 Introduction

Statistics is a branch of mathematics; mathematics comprises several areas, such as algebra, trigonometry, calculus, geometry, statistics and probability. Biostatistics is a mathematical science that analyses numerical facts or biometric data from biological events following certain rules: design of experiments, analysis of data, presentation of data with tables and figures, hypothesis testing, making probability predictions and drawing conclusions. Alternatively, it is the art of analysing and presenting data following the rules and international norms accepted by international journals for pharmacology experiments, animal farm data, human and animal medicine data, epidemiological data, non-parametric data, and so on. What is statistics? The number of introductory or elementary texts on the subject of statistics indicates how important the subject has become for everyone in the biological sciences. However, the fact that there are many texts might also suggest that we have yet to discover a foolproof method of presenting what is required. The problem confronted in biological statistics is as follows: when a set of numerical observations is made in biology, the values are scattered. Questions then arise as to whether these values differ because of factors (e.g. treatments) or are part of a ‘background’, i.e. natural variation. You need to evaluate what the numbers actually mean, and to represent them in a way that readily communicates their meaning to others.

1 - 12 (12 Pages)
USD34.99
 
2 Classification and Tabulation of Data

2.1 What is a frequency distribution? A frequency distribution shows the frequencies of occurrence of the observations in a data set. Often the distribution of the observed data is called an empirical frequency distribution, in contrast to the theoretical probability distribution determined from a mathematical model. The class interval is a term used in statistics when we are given a continuous series. Class means a group of numbers in which items are placed such as 0-10, 10-20, 20-30, etc. Class interval refers to the numerical width of any class in a particular distribution. 2.2 Relative frequency distributions Although creating a frequency distribution is a useful way of describing a set of observations, it is difficult to compare two or more frequency distributions if the total number of observations in each distribution is different. A way of overcoming this difficulty is to calculate the proportion or percentage of observations in each class or category. These are called relative frequencies and each is obtained by dividing the frequency for that category by the total number of observations (column 3 of Table 2.1). The sum of the relative frequencies of all the categories is unity (or 100%) apart from rounding errors.
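As a small sketch (using a hypothetical data set, not one from the book), the absolute and relative frequencies described above can be computed in Python:

```python
from collections import Counter

# Hypothetical small data set, for illustration only
data = [3, 5, 5, 7, 3, 5, 9, 7, 5, 3]

freq = Counter(data)                       # absolute frequency of each value
n = len(data)
rel = {x: f / n for x, f in freq.items()}  # relative frequencies (proportions)

print(freq[5])            # 4
print(rel[5])             # 0.4
print(sum(rel.values()))  # ~1.0, apart from rounding errors
```

Dividing each frequency by the total converts counts into proportions, which makes distributions with different sample sizes directly comparable, as the text notes.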

13 - 28 (16 Pages)
USD34.99
 
3 Measures of Central Tendency

The central tendency measure is defined as the number used to represent the center or middle of a set of data values. The three commonly used measures of central tendency are the mean, median, and mode. Measures of location (averages) The term average refers to any one of several measures of the central tendency of a data set. • The mean has the disadvantage that its value is influenced by outliers. An outlier is an observation whose value is highly inconsistent with the main body of the data. An outlier with an excessively large value will tend to increase the mean unduly, whilst a particularly small value will decrease it. • The mean is an appropriate measure of central tendency if the distribution of the data is symmetrical. The mean will be ‘pulled’ to the right (increased in value) if the distribution is skewed to the right, and ‘pulled’ to the left (decreased in value) if the distribution is skewed to the left.
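The effect of an outlier on the mean versus the median can be sketched with Python's standard `statistics` module (the numbers below are made up for illustration):

```python
import statistics

# Hypothetical symmetric data set
data = [4, 5, 5, 6, 7]
print(statistics.mean(data))    # 5.4
print(statistics.median(data))  # 5
print(statistics.mode(data))    # 5

# One large outlier pulls the mean sharply upward, but barely moves the median
with_outlier = data + [60]
print(statistics.mean(with_outlier))    # 14.5
print(statistics.median(with_outlier))  # 5.5
```

This illustrates the bullet above: the mean is influenced unduly by an excessively large value, while the median remains close to the main body of the data.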

29 - 38 (10 Pages)
USD34.99
 
4 Calculation of Mean, Median and Mode from Grouped Data

39 - 48 (10 Pages)
USD34.99
 
5 Measures of Dispersion

What is dispersion? In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation and interquartile range. For instance, when the variance of a data set is large, the data are widely scattered; when the variance is small, the data are clustered. Dispersion is contrasted with location or central tendency, and together they are the most used properties of distributions. Measures of dispersion (a) Range The range is defined as the difference between the largest and smallest observations: Range(X) = Max(X) − Min(X). Example For the data set {2, 5, 8, 10, 3}, the range is 10 − 2 = 8.
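Using the chapter's own example data set {2, 5, 8, 10, 3}, the range and two other common dispersion measures can be computed as a quick sketch:

```python
import statistics

# The chapter's example data set
data = [2, 5, 8, 10, 3]

rng = max(data) - min(data)       # range: 10 - 2
var = statistics.variance(data)   # sample variance (divisor n - 1)
sd = statistics.stdev(data)       # sample standard deviation

print(rng)  # 8
print(var)  # 11.3
```

The range uses only the two extreme observations, whereas the variance and standard deviation use every observation, which is why they are usually preferred as summary measures of spread.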

49 - 56 (8 Pages)
USD34.99
 
6 Moments, Skewness and Kurtosis

Moments In statistics, moments are measures of the shape and variability of a data set; they are used to describe its location and dispersion. Several types of moments can be calculated, each providing different information about the data set. The term ‘moment’ comes from mechanics, where it refers to the measure of a force with respect to its tendency to produce rotation; the strength of this tendency depends on the amount of force and its distance from the point about which it acts. If the total moment of all forces about the origin is divided by the total force, the result is analogous to the arithmetic mean of a frequency distribution. Hence, the arithmetic mean is called the first moment about the origin. • The arithmetic mean of the various powers of the deviations from the mean in any distribution is called a moment of the distribution.
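The definition in the last bullet, the mean of powers of deviations from the mean, can be sketched directly; the standard moment-based measures of skewness and kurtosis then follow (the data below are hypothetical):

```python
# r-th central moment of a data set (population form, divisor n)
def central_moment(data, r):
    n = len(data)
    m = sum(data) / n
    return sum((x - m) ** r for x in data) / n

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical, for illustration
m2 = central_moment(data, 2)      # second central moment = variance
m3 = central_moment(data, 3)
m4 = central_moment(data, 4)

skewness = m3 / m2 ** 1.5         # 0 for a symmetric distribution
kurtosis = m4 / m2 ** 2           # 3 for a normal distribution

print(m2)        # 4.0
print(skewness)  # 0.65625 (positive: skewed to the right)
```

The first central moment is always zero, the second is the variance, and the standardized third and fourth moments give the skewness and kurtosis discussed in this chapter.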

57 - 62 (6 Pages)
USD34.99
 
7 Correlation and Regression

Correlation coefficient We measure the degree of association between two variables by calculating Pearson’s product moment correlation coefficient, usually just called the correlation coefficient. It was developed by Karl Pearson from a related idea introduced by Francis Galton in the 1880s, and for which the mathematical formula was derived and published by Auguste Bravais in 1844. In 1896, Karl Pearson published his first rigorous treatment of correlation and regression in the Philosophical Transactions of the Royal Society of London. The linear correlation coefficient (sometimes called Pearson's Correlation Coefficient), commonly denoted r, is a measure of the strength of the linear relationship between two variables. • We say that we have perfect correlation if all the points lie on the line; in this case, the value of the correlation coefficient takes one of its extreme values, either +1 or −1 (Figures 7a and 7b). • We have positive correlation if the sign of the correlation coefficient is positive; then there is a direct relationship between the two variables, so that as one variable increases in value, the other variable increases (Figure 7a) or there is a tendency for it to do so. • We have negative correlation if the sign of the correlation coefficient is negative; then there is an inverse relationship between the two variables, so that as one variable increases in value, the other variable decreases (Figure 7b) or there is a tendency for it to do so. • We have no linear association (i.e. the variables are uncorrelated) if the correlation coefficient is zero; then there is a random scatter of points with no indication of a linear relation between the variables (Figure 7c). Note that a non-linear relationship between the variables can also give a correlation coefficient of zero.
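The extreme cases in the bullets above can be sketched by computing the correlation coefficient from its definitional formula (the data sets here are artificial, constructed to lie exactly on a line):

```python
import math

# Pearson's product moment correlation coefficient
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))  # 1.0  (perfect positive correlation)
print(pearson_r(x, [10, 8, 6, 4, 2]))  # -1.0 (perfect negative correlation)
```

Because every point lies exactly on a straight line in both examples, the coefficient takes its extreme values +1 and −1, matching the description of perfect correlation above.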

63 - 78 (16 Pages)
USD34.99
 
8 Probability

A. Definitions of probability 1. A probability relies on having an understanding of the theoretical model defining the set of all possible outcomes of a trial; we evaluate the probability solely on the basis of this model, without recourse to performing the experiment at all. It is often called an a priori probability. So, for example, we know that there are two equally likely outcomes when an unbiased coin is tossed: either a head or a tail. This is the model from which we can deduce that the probability of a defined event, obtaining a head, say, is 1/2 = 0.5. 2. The next approach to defining a probability, and the one commonly used in statistical inference, is to regard a probability as the proportion of times a particular outcome (the event) will occur in a very large number of ‘trials’ or ‘experiments’ performed under similar conditions. The result of any one trial should be independent of the result of any other trial, so whether or not the event occurs in any one trial should not affect whether or not the event occurs in any other trial.
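The second (frequentist) definition can be sketched by simulation: the relative frequency of heads in a large number of simulated fair-coin tosses approaches the a priori probability 0.5:

```python
import random

# Estimate P(head) as the relative frequency over many simulated tosses
random.seed(42)   # fixed seed so the run is reproducible
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))

print(heads / n)  # close to the a priori value 0.5
```

Each simulated toss is independent of the others, matching the requirement in the definition that the outcome of one trial should not affect any other.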

79 - 84 (6 Pages)
USD34.99
 
9 Probability Distributions: Binomial, Poisson, Chi-Square and Normal

A probability distribution is a statistical function that describes all the possible values and likelihoods that a random variable can take within a given range. This range will be restricted between the minimum and maximum possible values. The probability distribution shows how the set of all possible mutually exclusive events are distributed, and can be presented as an equation, a chart or a table. A variable which can take different values with given probabilities is called a random variable. There are numerous probability distributions, which may be distinguished by whether the random variable is discrete, taking only a finite or countable set of possible values, or continuous, taking an infinite set of possible values in a range. A discrete random variable with only two possible values is called a binary variable, e.g. pregnant or not pregnant, diseased or healthy. Probability distributions for discrete random variables The probability distribution for a discrete random variable y is the table, graph or formula that assigns the probability P(y) for each possible value of the variable y.
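As a sketch of a discrete probability distribution, the binomial probability mass function named in the chapter title can be computed from its standard formula (the coin-toss numbers below are illustrative):

```python
from math import comb

# Binomial probability P(y = k) for n independent trials
# with success probability p
def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# P(exactly 2 heads in 4 tosses of a fair coin)
print(binom_pmf(2, 4, 0.5))                         # 0.375
# The probabilities over all mutually exclusive outcomes sum to 1
print(sum(binom_pmf(k, 4, 0.5) for k in range(5)))  # 1.0
```

The table of P(y) for k = 0, …, 4 is exactly the "table, graph or formula" form of a discrete distribution described above.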

85 - 100 (16 Pages)
USD34.99
 
10 Hypothesis Testing

We can categorize statistical theory into two general parts. 1. Firstly, there is descriptive statistics which uses the appropriate tools, typically tables, diagrams and/or numerical measures, to describe a data set and provide a summary of its distribution. 2. In addition, there is inferential statistics which is concerned with drawing conclusions about a population using information obtained from a representative sample selected from it. (a) One aspect of statistical inference is the estimation of a population parameter by the appropriate sample statistic (e.g. the population mean by the sample mean). (b) The second aspect of inferential statistics is hypothesis testing. In this case, we examine a hypothesis, framed in terms of the parameters in one or more populations. Estimation is concerned with description whereas hypothesis testing is ultimately concerned with decision. Basic concepts of hypothesis testing Hypothesis testing is a process that is concerned with making conclusions about the population using the information obtained from its sample.

101 - 108 (8 Pages)
USD34.99
 
11 Statistical Tests

109 - 130 (22 Pages)
USD34.99
 
12 F Test

The F-test for the equality of two variances Rationale The two-sample t-test and the analysis of variance make the assumption of homoscedasticity, i.e. of equal variances in groups of data. The F-test, often called the variance ratio test, may be used to investigate the homoscedasticity of two data sets. We have to make a decision whether or not the population variances are likely to be different. This means that we need a cut-off for the variance ratio; if the variance ratio exceeds this cut-off value, we will conclude that the variances are unequal. We determine this cut-off value formally, under the null hypothesis that the two population variances are equal, by referring the ratio to the table of the F-distribution. The degrees of freedom are (n1 – 1) in the numerator (the larger variance) and (n2 – 1) in the denominator (the smaller variance), where n1 and n2 represent the two sample sizes. For the required significance level in a two-tailed test, therefore, we must halve this tail area. So, for a two-tailed test at the 5% level of significance, we have to relate the test statistic to P = 0.025. For convenience, we give the upper percentage points corresponding to P = 0.025 and P = 0.005 (relating to two tailed P-values of 0.05 and 0.01, respectively) in separate tables.
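The variance ratio and its degrees of freedom can be sketched in Python with two hypothetical samples (the data are made up; the final comparison against the tabulated F value is left as described in the text):

```python
import statistics

# Hypothetical measurements from two groups
group1 = [10, 12, 11, 13, 14]
group2 = [10, 15, 9, 16, 8, 14]

s1 = statistics.variance(group1)   # 2.5
s2 = statistics.variance(group2)   # 11.6
F = max(s1, s2) / min(s1, s2)      # larger variance in the numerator

df_num = (len(group2) if s2 >= s1 else len(group1)) - 1
df_den = (len(group1) if s2 >= s1 else len(group2)) - 1

print(F, df_num, df_den)  # 4.64 with (5, 4) degrees of freedom
# For a two-tailed 5% test, compare F with the tabulated
# upper percentage point at P = 0.025
```

Placing the larger variance in the numerator is what makes the single upper tail of the F table sufficient, which is why the tail area is halved for a two-tailed test.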

131 - 140 (10 Pages)
USD34.99
 
13 Design of Experiment

Design of Experiments was first elaborated by R. A. Fisher (1935). • Design of experiment: the proper allotment of treatments to the available experimental units. • Treatment: the factor whose effect is to be studied. Example: application of drugs, feeds, conditions, methods, etc. • Experimental units: the subjects or individuals upon which an experiment is conducted. Example: laboratory animals such as mice, rats and monkeys; livestock such as cattle, buffalo, sheep, goats and poultry; plants; a plot of land, etc. Necessity of designing an experiment 1. To get maximum information from the available resources. 2. To know the effect of a treatment on the experimental units and to make comparisons between treatments. 3. To know whether the difference between the effects of two treatments is significant or not. 4. A well-designed experiment has a well-defined method of statistical analysis of data. 5. To minimize error in the whole experimental procedure. 6. An ill-designed experiment is not likely to give proper estimates of parameters and comparisons between treatments with the desired precision, even if the method of data analysis is very good. 7. The design of the experiment is essential for all the important treatment effects to be compared independently.

141 - 158 (18 Pages)
USD34.99
 
14 Sampling and Livestock Census

Important Definitions • Sampling method or sampling technique: the process of studying the population by gathering information from samples and analysing those data. • Population: the whole group we are interested in. • Census: a collection of data from the whole population. • Sample: a collection of data from part of the population. • Parameter vs statistic: a parameter is a statistical measure pertaining to the population, whereas a statistic is a measure pertaining to the sample. • Estimate: the value of a population parameter obtained from the sample. • The livestock census is carried out every 5 years (the 20th Livestock Census was conducted in 2019).

159 - 170 (12 Pages)
USD34.99
 
15 Bioassays

Biological assay or bioassay • A bioassay is an analytical method to determine the concentration or potency of a substance by its effect on living animals or plants (in vivo), or on living cells or tissues (in vitro). • Bioassays are thus a type of experiment whose object is to compare the efficacy of two or more substances or preparations, such as drugs, by using the responses they produce in suitable living organisms. • A bioassay involves a stimulus applied to a subject and the response of the subject to that stimulus. • When a stimulus is applied to a subject, there may be a change in some characteristic of the subject; such changes are known as responses. • The response may be quantitative, as in the case of weight, or qualitative, as in the case of mortality. The magnitude of the response depends upon the dose. • Normally, two preparations having a common effect are taken for assaying. One preparation is of known strength and is called the standard preparation; the other is of unknown strength and is called the test preparation.

171 - 176 (6 Pages)
USD34.99
 
16 Least Square Method and Linear Models

What is the Least Squares Method? The Least Squares Method is used to derive a generalized linear equation between two variables, one of which is independent and the other dependent on the former. The value of the independent variable is represented as the x-coordinate and that of the dependent variable as the y-coordinate in a two-dimensional Cartesian coordinate system. Initially, the known values are marked on a plot; the plot obtained at this point is called a scatter plot. Then we try to represent all the marked points by a straight line, or linear equation, whose equation is obtained with the help of the least squares method. This is done to get the value of the dependent variable for an independent variable whose value was initially unknown, which helps us fill in missing points in a data table or forecast the data. The method is discussed in detail as follows. The least-squares method was first published by Adrien-Marie Legendre (1805), though it is usually also co-credited to Carl Friedrich Gauss (1809), who contributed significant theoretical advances to the method. Least Squares Method Definition The least-squares method can be defined as a statistical method that is used to find the equation of the line of best fit for the given data. It is so called because it aims at reducing the sum of squared deviations (the residual part) as much as possible. The line obtained from such a method is called a regression line.
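The line of best fit described above can be sketched with the standard slope and intercept formulas that result from minimising the sum of squared residuals (the x, y values below are hypothetical):

```python
# Least squares fit of a straight line y = a + b*x
def least_squares_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx      # slope
    a = my - b * mx    # intercept passes through (mean x, mean y)
    return a, b

x = [1, 2, 3, 4, 5]
y = [2.1, 4.2, 5.9, 8.1, 9.8]   # roughly linear, hypothetical data
a, b = least_squares_fit(x, y)
print(round(b, 3), round(a, 3))  # slope 1.93, intercept 0.23
```

Given the fitted a and b, predicting y for a new x (interpolation or forecasting, as the text describes) is simply `a + b * x_new`.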

177 - 188 (12 Pages)
USD34.99
 
17 Understanding BLUEs, BLUPs and Breeding Values in Linear Mixed Models

• In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term “best linear unbiased predictor” (or “prediction”) seems not to have been used until 1962 (Henderson, 1975, 1985). • Best linear unbiased predictions (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) of fixed effects; the distinction arises because it is conventional to talk not about estimating fixed effects but rather about predicting random effects, although the two notions are otherwise analogous. BLUE and BLUP are both acronyms used in statistics. • BLUE stands for Best Linear Unbiased Estimate and refers to the solutions (or estimates) associated with the fixed effects of a model. • BLUP stands for Best Linear Unbiased Prediction and refers to the solutions (identified as predictions) associated with the random effects of a model. • Both BLUEs and BLUPs are linear functions of the data and have the lowest variance among linear unbiased estimators or predictors. • For BLUEs, the expected value of the estimate equals the true value of the fixed effect. • BLUP is the standard selection method in animal breeding, where the breeding values of sires are estimated from progeny performance in order to select superior genotypes and breed superior families.
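As an illustrative sketch, BLUEs and BLUPs can be obtained by solving Henderson's mixed model equations for a toy sire model. Everything here is an assumption for illustration: the six records, the two herd effects (fixed), the three sire effects (random), the variance ratio k = 2, and the simplification that relationships among sires are ignored (A = I):

```python
import numpy as np

# Toy sire model y = Xb + Zu + e: 6 hypothetical records,
# each of 3 sires with one daughter record in each of 2 herds
y = np.array([10.0, 12.0, 11.0, 14.0, 13.0, 9.0])
X = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)           # herd incidence
Z = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)  # sire incidence
k = 2.0   # assumed ratio of residual to sire variance

# Henderson's mixed model equations (relationship matrix taken as identity)
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + k * np.eye(3)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

blue = sol[:2]   # BLUEs of the herd effects
blup = sol[2:]   # BLUPs of the sire effects (predicted merit)
print(blue)      # herd means: 11 and 12
print(blup)      # sires ranked: 0.25, 0.5, -0.75
```

Because the design is balanced, the sire BLUPs sum to zero; sires are then ranked on these predictions, which is the selection use of BLUP described in the last bullet.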

189 - 196 (8 Pages)
USD34.99
 
18 Animal Genetics and Breeding Question Bank for B.V.Sc. & A.H

Subject: Biostatistics and Computer Application (UNIT-I) Q1. Write true (T) or false (F): 1. The mean and mode are always equal in the case of a normal distribution. 2. A pie diagram is a circular diagram. 3. Skewness is the measure of asymmetry of a distribution. 4. Mean sums of squares are the variances. 5. The coefficient of variation may be utilized for comparing two data sets with different units. 6. Goodness of fit is tested by the t-test. 7. Variance is a measure of central tendency. 8. CRD can be used where the experimental material is heterogeneous. 9. A frequency polygon is made by joining the middle points of the tops of the rectangles of a histogram.

197 - 238 (42 Pages)
USD34.99
 
19 End Pages

Zar, Jerrold H. Biostatistical Analysis. ISBN 9780131008465, 2009.
Kaps, Miroslav and Lamberson, William R. Biostatistics for Animal Science. CABI Publishing, 2004.
Das, M.N. and Giri, N.C. Design and Analysis of Experiments.
Gupta, S.C. and Kapoor, V.K. Fundamentals of Applied Statistics.
Rosner, Bernard. Fundamentals of Biostatistics. 1995.
Gupta, S.C. Fundamentals of Statistics.
Henderson, C.R. Best Linear Unbiased Prediction Using Relationship Matrices Derived from Selected Base Populations. Journal of Dairy Science 68(2), 1985.

 