
The sustainable management of fisheries resources is essential for ensuring the long-term viability of both marine ecosystems and the human communities that depend on them. The global demand for seafood continues to rise, placing increased pressure on fish populations and necessitating robust, accurate methods for data acquisition and analysis. This book, "Data Acquisition and Analysis in Fisheries," aims to provide a comprehensive guide to the state-of-the-art techniques and methodologies used in fisheries research today. The importance of precise and reliable data in fisheries cannot be overstated. Effective management strategies hinge on our ability to gather accurate data on fish populations, their habitats, and the environmental factors that influence them. This edited volume brings together contributions from leading experts in the field, offering a diverse range of perspectives and expertise on the various aspects of data acquisition and analysis. From traditional methods such as net sampling and acoustic surveys to advanced techniques including satellite remote sensing, genetic analysis, and artificial intelligence, this book covers a broad spectrum of approaches. Each chapter delves into the specific methodologies, tools, and technologies employed, providing detailed explanations and case studies that highlight their application in real world scenarios. The book also addresses the challenges and limitations associated with different data acquisition techniques, offering insights into how these can be mitigated or overcome. In addition to the technical aspects, this volume emphasizes the importance of data quality and management. Proper data stewardship ensures that collected data is accurate, consistent, and usable for long-term research and policy-making. Chapters on data validation, storage, and sharing protocols provide essential guidelines for maintaining the integrity of fisheries data. 
The collaborative nature of fisheries research is a recurring theme throughout the book. Successful data acquisition and analysis often require multidisciplinary approaches and international cooperation. We are at a critical juncture in fisheries science, where the integration of traditional knowledge with cutting-edge technology can lead to significant advancements in our ability to monitor and manage fish populations sustainably. By providing a thorough grounding in the principles and practices of data acquisition and analysis, this book aims to equip researchers, managers, and students with the knowledge and tools they need to contribute to this vital field.
Introduction The terms 'analytics' and 'data analytics' are used interchangeably. Analytics means 'analysis of data', or 'what can be learnt from an analysis of data'. In an industry like fishing, researchers have been slow to adopt the tools used to portray accurate representations of data. For example, a dashboard with ten or fifteen years of historical data displayed on a digital map can help researchers decide where fishing is likely to be good. Such a map can locate the points fished during specific seasons, and lets researchers filter the data on parameters such as area, year and season, or on any other parameter that yields valuable information. Similarly, this type of dashboard can inform decision-making while fishing: for instance, one can find the monetary value associated with each trip during a specific season, or compare the values of two different trips to gauge their cost-benefit. Data analytics supports better-informed decisions in the fishing industry when one of the goals is to increase income while ensuring sustainability; it helps in deciding whether to reduce overhead costs now or to increase expenditure to avoid overhead costs in future. The fisheries sector undergoes continuous change with the introduction of newer technologies evolved through research and development institutions. Statistics per se deals with the generation of data, data management, data analysis and the generation of information from data. There are varied disciplines of fisheries, viz., Aquaculture, Fisheries Resource Management, Fish Genetics, Fish Biotechnology, Aquatic Health, Nutrition, Environment, Fish Physiology, and Post-harvest Technology, in which research on practical aspects that affect production and sustainability is undertaken.
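The dashboard-style filtering described above can be sketched in a few lines of Python. This is a hypothetical illustration: the record fields (year, season, area, value) and all figures are invented for the example.

```python
# Hypothetical illustration of dashboard-style filtering of historical
# trip records; the field names (year, season, area, value) and the
# figures are invented, not taken from any real dataset.

def filter_trips(records, **criteria):
    """Return only the records matching every field=value criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

def total_value(records):
    """Sum the monetary value associated with a set of trips."""
    return sum(r["value"] for r in records)

trips = [
    {"year": 2021, "season": "monsoon", "area": "A", "value": 1200},
    {"year": 2021, "season": "winter",  "area": "A", "value": 900},
    {"year": 2022, "season": "monsoon", "area": "B", "value": 1500},
]

monsoon_trips = filter_trips(trips, season="monsoon")
```

Comparing `total_value` across two such filtered subsets is exactly the trip-to-trip cost-benefit comparison mentioned above.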
Introduction Indian fisheries and aquaculture is an important sector of food production, providing nutritional security, livelihood support and gainful employment to more than 14 million people, and contributing to agricultural exports. With diverse resources ranging from deep seas to lakes in the mountains, and more than 10% of global biodiversity in terms of fish and shellfish species, the country has shown continuous and sustained increments in fish production since independence. The sector constitutes about 6.3 percent of global fish production and contributes 0.91 percent of the GDP and 5.23 percent of the agricultural GDP. Of the present total fish production of 13.34 million metric tonnes, nearly 74 percent comes from the inland sector and the rest from the marine sector. Paradigm shifts in terms of increasing contributions from the inland sector, and further from aquaculture, have been significant over the years. With high growth rates, the different facets, viz., marine fisheries, coastal aquaculture, inland fisheries, freshwater aquaculture and coldwater fisheries, are contributing to the food basket, health, economy, exports, employment and tourism of the country. More than 50 different types of fish and shellfish products are exported to 75 countries around the world. Fish and fishery products have emerged as the largest group in agricultural exports from India, with 13.77 lakh tonnes in quantity and ₹45,106.89 crore in value; this accounts for around 10% of total exports and nearly 20% of agricultural exports. The Pradhan Mantri Matsya Sampada Yojana (PMMSY) is an initiative launched by the Government of India to establish a comprehensive framework and reduce infrastructural gaps in the fisheries sector. The scheme was announced by the Finance Minister, Smt. 
Nirmala Sitharaman, in her speech in the Parliament of India while presenting the Union Budget for 2019–20 on 5 July 2019. The government intends to make India first in fish production and processing by implementing the Blue Revolution. The scheme is in line with the government's aim to double farmers' income by 2022–23. The policy envisages integrating all fishermen with agricultural farmers and extending to fishermen all the facilities available through various farmer welfare schemes. A new dedicated Department of Fisheries was constituted in a newly carved-out Ministry of Fisheries, Animal Husbandry and Dairying to implement this and other policy initiatives of the government.
Need of Analytics in Fisheries Management New technologies are introduced for industrial vessels every day: fuel and gear sensors, cameras, electronic catch reports, etc. There is also a global push to track small-scale fishing vessels. In the coming years, fisheries monitoring centres will have to adjust from tracking a few hundred vessels to tens of thousands, meaning a huge quantity of data. To make sense of all this information, it must be properly analysed. The future of fisheries depends on big-data analytics, which may help in many ways. Fishing trips could be tailored to meet quotas in the shortest time, reducing fuel consumption and crew costs. Real-time catch data could help administrations close a fishing zone once its quota is reached. Fisheries administrations can monitor catch effort over an entire region and decide which zones to close, improving the management of stocks and fishing licences, and fighting the challenge of illegal and unreported fishing, which contributes to over-exploitation of fish stocks. The use of disruptive technologies in fisheries and aquaculture, such as blockchains, sensors and automatic identification systems (AIS), has the potential to change the processes, profitability and sustainability of the sector. Globally used popular data science software includes MS-EXCEL, SPSS (IBM), STATA, MINITAB, SAS, R and PYTHON.
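The quota-based zone closure idea above can be sketched as a small stream-processing loop. This is a minimal illustration, not any real monitoring system; the zone names and quota figures are invented.

```python
# A minimal sketch (not a real monitoring system) of how streamed
# electronic catch reports could support closing a zone once its quota
# is reached. Zone names and quota figures are invented.

def process_reports(reports, quotas):
    """Accumulate catch per zone; return (totals, zones to close)."""
    totals, to_close = {}, set()
    for zone, tonnes in reports:
        totals[zone] = totals.get(zone, 0.0) + tonnes
        # Flag the zone as soon as its cumulative catch meets the quota.
        if totals[zone] >= quotas.get(zone, float("inf")):
            to_close.add(zone)
    return totals, to_close

reports = [("Zone-1", 60.0), ("Zone-2", 10.0), ("Zone-1", 50.0)]
totals, to_close = process_reports(reports, quotas={"Zone-1": 100.0})
```

In a real monitoring centre the report stream would arrive from vessels continuously, but the accumulate-and-compare logic is the same.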
Experimental Designs Experimental designs are widely used to compare more than two groups, commonly called treatments. It is very common in agriculture to conduct experiments, mostly in fields, to test various varieties, different levels of inputs, etc. When one is interested in crop modelling, it is necessary to identify the values of several constants, such as input efficiency, growth periods and growth rates. Instead of guessing these values, it is more appropriate to conduct experiments and derive them empirically, which can improve the modelling effort. The principles of experimentation were given by Sir R.A. Fisher in the 1920s and are still relevant and found in all scientific research, namely (i) Randomization, (ii) Replication and (iii) Local Control. The following are some commonly used terms in experimental design. Factor: a variable set or considered by the experimenter in an experiment; examples are nitrogen and variety. Level: one of the values of a particular factor considered in the experiment; examples are the rates of nitrogen applied and the different varieties considered. Treatment: the levels of one or more factors set or considered by the experimenter; note that in many experiments only one factor may be considered.
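The principles of randomization and replication can be illustrated with a completely randomized design layout. The sketch below allocates r replicates of each treatment to plots in random order; the treatment labels (nitrogen rates) are invented for the example.

```python
import random

# A sketch of Fisher's randomization and replication principles: a
# completely randomized design allocating r replicates of each
# treatment to plots in random order. Treatment labels are illustrative.

def crd_layout(treatments, replicates, seed=None):
    """Return a randomized plot order containing each treatment
    `replicates` times."""
    plots = [t for t in treatments for _ in range(replicates)]
    random.Random(seed).shuffle(plots)   # randomization step
    return plots

# Three nitrogen levels, four replicates each -> 12 plots.
layout = crd_layout(["N0", "N40", "N80"], replicates=4, seed=1)
```

Local control (blocking) would additionally constrain the shuffle so that each block contains every treatment once, as in a randomized complete block design.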
Introduction The importance of basic statistics in everyday life cannot be stressed enough. Inclusion of basic statistics in the high-school curriculum is uncommon, but it proves highly beneficial for many careers later on. This field may be the most widely used of all, and most people do not even know they are using it. Many of today's brightest business minds use it to create charts and graphs that stress a particular aspect of the business they are dealing with. Market research agencies, newsrooms, marketing divisions of large corporations, NGOs, accounting firms, etc. are full of people who specialize in basic statistics. Statistics is concerned with describing, interpreting and analyzing data and is therefore an essential element in any improvement process. Statistics is often categorized into descriptive and inferential statistics. It uses analytical methods, which provide the mathematics to model and predict variation, and graphical methods, which help make numbers visible for communication purposes.
Descriptive Statistics
• Descriptive statistics summarize or describe the characteristics of a data set.
• They help make sense of large numbers of individual responses and communicate the essence of those responses to others.
• They help in exploring and drawing conclusions from the data in order to make rational decisions.
• They include calculating quantities such as the average of the data, its spread and the shape it produces.
• They are displayed as tables, charts, percentages and frequency distributions, and reported as measures of central tendency.
• They help determine whether the sample is normally distributed (bell curve), as most statistical tests require the sample to have a normal distribution.
• They help determine whether the sample can be compared to the larger population.
Difference between Descriptive and Inferential Statistics Nowadays, statistics plays a major role in the field of research; it helps in the collection, analysis and presentation of data in a measurable form. 
It is often hard to identify whether a piece of research relies on descriptive or inferential statistics, as people usually lack knowledge of these two branches. As the name suggests, descriptive statistics describe the population. Inferential statistics, on the other hand, are used to make generalisations about the population based on samples.
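The descriptive measures listed above (centre, spread, range) can be computed with Python's standard library. The sample below (fish weights in grams) is invented for illustration.

```python
import statistics

# Descriptive summary of a small illustrative sample (fish weights in
# grams, invented for the example): measures of centre and spread as
# discussed in the text.

def describe(data):
    """Return basic descriptive statistics for a numeric sample."""
    return {
        "n": len(data),
        "mean": statistics.fmean(data),
        "median": statistics.median(data),
        "sd": statistics.stdev(data),          # sample standard deviation
        "range": max(data) - min(data),
    }

weights = [180, 195, 200, 205, 220]
summary = describe(weights)
```

Inferential statistics would go a step further, using such sample summaries to draw conclusions about the whole population of fish.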
There has been a surge of data in every field of research due to advancements in information technology. Data collection and storage have become much easier and cheaper in recent years. This voluminous data needs to be systematically stored for quick retrieval and processing, and for finding useful patterns. Data mining techniques help in deriving useful patterns from data. These techniques, also called machine learning techniques, are broadly categorized into classification, clustering and association rule mining methods. Classification is a supervised learning method, which needs a labeled dataset for model building and prediction. Clustering and association rule mining do not require a labeled dataset and hence are referred to as unsupervised learning methods. Clustering aims at grouping objects based on a set of measured variables, where similar objects are assigned to one group. For example, if we have collected information on 50 fish-pond parameters (50 variables) and we wish to find groups in terms of pond management (from well managed to poorly managed), with cluster analysis we can group the ponds based on the pond variables. In the process of clustering, each pond is assigned to one of the groups formed. The question is how many natural clusters are present in the data, as in many cases we would not get perfect clusters of disjoint sets; better accuracy in cluster formation is often achieved only by trial and error. A good clustering criterion is to ensure minimum within-cluster variation and maximum between-cluster variation. Cluster analysis has several applications in fisheries research. It can be used to group fish stocks from different regions, categorise ponds based on their technical efficiency, identify farm clusters with high disease risk, group farmers based on their preferences, and support agro-zoning based on climatic variables and culture practices. 
Cluster analysis is also applied in fish bioinformatics to group similar sequences.
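The pond-grouping example above can be sketched with a minimal k-means implementation: each "pond" is a row of measured variables, and similar ponds end up in the same cluster. The data and the choice of k below are invented; in practice one would try several values of k, as the text notes.

```python
import numpy as np

# A minimal k-means sketch of the pond-grouping example: rows are
# "ponds" described by measured variables; the data and k are invented.

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means clustering; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid...
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):           # leave empty clusters unchanged
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated groups of "ponds" measured on two variables:
ponds = np.array([[1.0, 1.2], [0.9, 1.0], [8.0, 7.5], [8.2, 7.9]])
labels, _ = kmeans(ponds, k=2)
```

The within-cluster/between-cluster criterion mentioned above is exactly what the assignment and update steps jointly minimize.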
Product Development and Marketing Several efforts are being made on the research front to develop novel products from fish that have enhanced shelf life, retain freshness and nutrients, and are safe to consume. During product development, which proceeds in many stages, studies are conducted prior to commercialisation to ensure the acceptance of a product. In business, understanding consumer behaviour plays an important role in success. To stay in business for a long time, it is essential to know what the consumer prefers and why. Consumers make buying decisions based on a number of factors. The purpose of studying buying behaviour and consumer preference is to produce and market products that better meet the needs of consumers. The emerging fast-food culture among the young and affluent has brought focus on processed food and its demand in the domestic food market in India. Domestically, spending on food and food products constitutes the largest portion of the Indian consumer's spending, with more than a 31% share of wallet. Evaluating consumer preferences before introducing a new product helps the marketer refine the product for better reach. Conjoint Analysis Conjoint analysis is a popular technique in marketing research used to study the features a product should possess to achieve wide consumer reach. Conjoint analysis was initially conceptualised by Luce and Tukey (1964) and further developed for marketing research by Green and Rao (1971). It employs a decompositional method that disaggregates the structure of consumer preferences into utility values for the different attributes of a product or service. The relative importance of each attribute of a product can also be estimated using this method.
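The decompositional idea behind conjoint analysis can be sketched as a regression of overall preference ratings on dummy-coded attribute levels, recovering part-worth utilities. The attributes (packaging, price level) and the ratings below are invented for illustration, not data from any study.

```python
import numpy as np

# A hedged sketch of the decompositional idea behind conjoint analysis:
# regress overall preference ratings on dummy-coded attribute levels to
# recover part-worth utilities. Attributes and ratings are invented.

# Product profiles: (packaging, price level, overall rating)
profiles = [("pouch", "low", 9), ("pouch", "high", 6),
            ("can",   "low", 7), ("can",   "high", 3)]

# Dummy coding with "can" and "high" as baseline levels:
X = np.array([[1.0, p == "pouch", q == "low"] for p, q, _ in profiles])
y = np.array([float(r) for _, _, r in profiles])

partworths, *_ = np.linalg.lstsq(X, y, rcond=None)
# partworths[1]: utility of pouch packaging over can
# partworths[2]: utility of low price over high price
```

The larger part-worth identifies the attribute level contributing most to preference, which is how relative attribute importance is read off in practice.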
Introduction This chapter is intended to throw light on some statistical and econometric methods used for analysing time series data, with specific emphasis on basic concepts of time series analysis, co-integration techniques, and forecasting techniques such as the ARIMA model. Time series analysis is often helpful in revealing the broad patterns in longitudinal data, which in turn aids important economic and policy decisions. In the fisheries sector too, economic variables such as market demand and supply, prices, etc. are of prime importance and often come in longitudinal form. A number of alternative methodologies are available in the time series literature for determining the relevant parameters of longitudinal data, and the ensuing sections delve deeper into this realm. Some Key Concepts Stationary Stochastic Process A random or stochastic process is a collection of random variables ordered in time. A stochastic process is said to be stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the distance or gap between the two periods, not on the actual time at which the covariance is computed. In the time series literature, such a stochastic process is known as weakly stationary, second-order stationary or covariance stationary.
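Weak stationarity can be illustrated numerically: for a simulated stationary AR(1) series, the sample mean, variance and lag-1 autocovariance computed over different stretches of time should agree, since they depend only on the lag and not on the time origin. The series below is simulated, not fisheries data.

```python
import numpy as np

# Numerical illustration of weak (covariance) stationarity using a
# simulated AR(1) process y_t = phi * y_{t-1} + e_t (invented data).
# Moments computed over different time stretches should roughly agree.

rng = np.random.default_rng(42)
n, phi = 20000, 0.5
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

def autocov(x, k):
    """Sample autocovariance of x at lag k (k=0 gives the variance)."""
    xc = x - x.mean()
    return (xc[:-k] * xc[k:]).mean() if k else (xc * xc).mean()

first_half, second_half = y[: n // 2], y[n // 2 :]
# mean, variance and lag-1 autocovariance of the two halves are close.
```

A trending or random-walk series would fail this check: its sample moments drift with the time origin, which is why stationarity testing precedes co-integration and ARIMA modelling.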
Introduction Resources must be managed effectively to achieve maximum sustainable agricultural production from limited natural resources. Many of the resources that agriculture depends on (such as water and land) are finite and need to be well managed for long-term sustainability. In most cases, this means avoiding adverse effects on society and the environment. With the development of the agricultural sector, the utilization patterns of natural resources in agriculture are an important concern for policy makers in the present climate-change scenario. Like other production activities, agricultural activities utilize factor resources to produce several marketable products or services. The way natural resources are utilized in agriculture, and the consequences for society and the environment, play a vital role in the economic viability and ecological sustainability of agriculture. There are several conventional methodologies for estimating the level of profit and the efficiency of a production system at the enterprise level. However, conventional methods consider only marketable factors, leaving aside the non-marketable component of the production system. The non-marketable component of a production system comprises its various social and environmental dimensions. Nammalwar (1997) explained the concept of environmentally sound and sustainable enterprises, which includes the physical resources, ecological resources, human use value and quality-of-life values, with the two-way impact of agriculture and environment. In recent years, the non-marketable dimensions of agricultural activities have been given importance because of their implications for the environment and society. This chapter provides a methodology that incorporates the non-marketable component of an agricultural enterprise besides its marketable component. The Resource Cost Ratio (RCR) approach is a modified methodology developed by Debnath et al. 
(2010) from the Domestic Resource Cost Ratio (DRCR), which Morris (1990) applied at the macro-economic level to determine comparative advantage between countries.
Introduction The term 'fish marketing' is composed of two broad terms, 'fish' and 'marketing'. Before discussing the methodological issues of fish marketing data analysis, we must understand its broader meaning, as both words are frequently used with a precise meaning. The Google dictionary explains fish as 'a limbless cold-blooded vertebrate animal with gills and fins, living wholly in water'. But 'fish' as a component of fish marketing, or of a social science study, means a product, and a variety of products, with different time, space and form utilities for consumers. The word 'marketing' is not merely the buying and selling of fish; rather, it connotes a series of activities involved in moving goods or services from the point of production to the point of consumption. So 'fish marketing' is defined as the physical and institutional set-up for performing all activities involved in the flow of fish, fish products and services from the point of initial fish production until they are in the hands of the ultimate customers. It includes assembling, handling, storage, transport, processing, wholesaling, retailing and export of fish and fishery products, as well as accompanying supporting services such as market information, establishment of grades and standards, commodity trade, financing, price risk management, and the institutions involved in performing these functions. An efficient and well-planned distribution network is desired, so that fish and fishery products move from the place of landing or production to the place of consumption at the cheapest cost, with quality intact, ensuring maximum price realization for the farmers or fish producers. It is a real challenge for the public agencies involved to meet the requirements of all three major players, viz., fishermen or fish farmers, consumers and traders. Fish marketing includes all activities involved in the creation of time, place, form and possession utility. 
Philip Kotler explained marketing as the science and art of exploring, creating and delivering value to satisfy the needs of a target market at a profit. Fish marketing is different from the marketing of other goods and services, and differs somewhat even from the marketing of various agricultural goods and services.
Setting The basic feature common to most animal breeding experiments is the unbalancedness of the data with respect to factorial classes, which makes the non-orthogonal nature of the effects involved ubiquitous. Henderson, C.R. (1953), through a series of seminal works, dealt with these types of data and came out with three distinct methods for estimating the variance components arising from such non-orthogonal animal experiment data. Before discussing those methods, let us dwell a little on the challenges in estimating variance components from animal breeding data. There are two major reasons: first, several methods of estimation are available (most of which reduce to the analysis of variance method for balanced data), but none has yet been clearly established as superior to the others; second, all the methods involve relatively cumbersome algebra, so discussion of unbalanced data can easily deteriorate into a welter of symbols, a situation which discourages applied researchers. A safe starting point for understanding the nuances of dealing with unbalanced data is the seminal paper by Henderson (1953). The methods described there were for many years regarded as the gold standard for estimating variance components from unbalanced data. Of the three, Method 1 is simply an analogue of the analysis of variance method used with balanced data; Method 2 corrects a deficiency of Method 1 that arises with mixed models; and Method 3 is quite different, involving the method of fitting constants so often used in fixed-effects models. Let us see these methods in a bit more detail in the ensuing passages.
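The flavour of Method 1 can be conveyed with a sketch for the simplest case, the unbalanced one-way random model y_ij = mu + a_i + e_ij (e.g. progeny records grouped by sire). The estimator equates observed mean squares to their expectations, using an "effective group size" n0 to handle unequal group sizes; the demonstration data are invented.

```python
import numpy as np

# A sketch of Henderson's Method 1 (the ANOVA analogue) for the
# unbalanced one-way random model y_ij = mu + a_i + e_ij, e.g. progeny
# records grouped by sire. Demonstration data are invented.

def henderson_method1(groups):
    """groups: list of 1-D arrays of (possibly unequal) length.
    Returns (sigma_a^2, sigma_e^2) ANOVA-type estimates."""
    ni = np.array([len(g) for g in groups], dtype=float)
    N, a = ni.sum(), len(groups)
    means = np.array([g.mean() for g in groups])
    grand = sum(g.sum() for g in groups) / N
    # Between-group and within-group mean squares:
    msb = np.sum(ni * (means - grand) ** 2) / (a - 1)
    mse = sum(((g - m) ** 2).sum() for g, m in zip(groups, means)) / (N - a)
    # Effective group size; reduces to the common n for balanced data.
    n0 = (N - np.sum(ni ** 2) / N) / (a - 1)
    return (msb - mse) / n0, mse
```

For balanced data this reduces exactly to the classical ANOVA estimators, which is the sense in which Method 1 is "simply an analogue" of the balanced-data method.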
Introduction Regression analysis is one of the most widely used techniques for studying relationships involving multiple variables, analysing data by expressing a relationship between a variable of interest (the response) and a set of related predictor variables. Regression models include linear and non-linear approaches assuming appropriate functional forms. A good account of regression analysis and related topics can be found in Draper and Smith (1998), Montgomery et al. (2001), and Chatterjee and Hadi (2006). In this write-up, regression model fitting, techniques for detecting multicollinearity among the so-called 'independent variables', and outlier detection are discussed. Linear regression with qualitative regressor variables is also discussed, along with variable selection procedures, goodness-of-fit measures for model adequacy, and validation. In addition, non-linear regressions, viz. logistic (both binary and multinomial), for qualitative response variables are covered. Multiple Linear Regression Modeling Let the response variable (the variable to be forecasted) be denoted by Y and the set of predictor variables by X1, X2, …, Xp, where p denotes the number of predictor variables. The true relationship between Y and (X1, X2, …, Xp) can be approximated by the multiple linear regression model Y = β0 + β1X1 + β2X2 + … + βpXp + ε, where β0, β1, …, βp are the regression coefficients and ε is a random error term.
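Fitting the multiple linear regression model amounts to least-squares estimation of the coefficients. The sketch below uses invented data constructed so that the true coefficients are exactly recoverable.

```python
import numpy as np

# Least-squares fitting of a multiple linear regression model with two
# predictors; the data are invented so the true coefficients are exact.

def fit_mlr(X, y):
    """Return [b0, b1, ..., bp] minimizing ||y - b0 - X @ b||^2."""
    Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]   # exact linear relation, no noise
beta = fit_mlr(X, y)
```

With noisy data the same call returns the ordinary least-squares estimates, and the diagnostics discussed later (multicollinearity, outliers, goodness of fit) assess how trustworthy those estimates are.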
Introduction A fundamental issue when analysing trade policy reform in the global seafood market is the extent to which domestic agricultural commodity markets in developing countries respond to changes in international prices. Price transmission from world to domestic markets is central to understanding the extent of the integration of economic agents into the market process. Studies on the transmission of price signals are founded on concepts related to competitive pricing behaviour. In spatial terms, the classical paradigm of the Law of One Price, as well as the predictions on market integration provided by the standard spatial price determination models (Enke, 1951; Samuelson, 1952; Takayama and Judge, 1972), postulates that price transmission is complete, with equilibrium prices of a commodity sold on competitive foreign and domestic markets differing only by transfer costs when converted to a common currency. These models predict that changes in supply and demand conditions in one market will affect trade, and therefore prices, in other markets as equilibrium is restored through spatial arbitrage. The absence of market integration, or of complete pass-through of price changes from one market to another, has important implications for economic welfare. Incomplete price transmission, arising either from trade and other policies or from transaction costs such as poor transport and communication infrastructure, reduces the price information available to economic agents and consequently may lead to decisions that contribute to inefficient outcomes. Fisheries and seafood trade policy reform, especially, is a priority issue in the next WTO negotiations, as trade liberalization is viewed as encouraging allocative efficiency and long-run growth.
Introduction Indian agriculture, including fisheries, is highly monsoon dependent, and the effect of climate change is immense on its large tracts of rainfed farms. The effects of climate change could be countered through better adaptation techniques, using all the information available with the various agencies concerned with agriculture, together with advanced analytical tools. Presently, there is a gap in the country's capability to capture, process and analyze the huge volume of information generated in Indian agriculture. This task can be accomplished using a Big Data platform, which works in cluster mode, provides the capability to handle huge volumes of data, captures streaming or real-time data, and can also accommodate unstructured data. As per FAO, a 70 percent increase in food production over the next 30 years is required to meet the increasing global food demand. It may not be possible to achieve that target due to a variety of problems, such as the reduction in land available for cultivation, climate change, and various other factors. Improvements from crop genetics, higher fertilizer dosages, irrigation facilities, etc. may not suffice in future. We need to create a big data platform covering important parameters of crop production, viz., soil, soil temperature, water requirements, relative humidity, pests and diseases, and micro- and macro-nutrients, in both temporal and spatial dimensions. At present, available data is not properly utilized for valid analytics by the farming community due to certain issues: the data created is often incompatible, and data available from different sources is also incompatible, which makes decision making a big challenge as well. Due to this incompatibility of agricultural data from different sources, farmers do not have access to the vast pool of available information for making effective, efficient and timely decisions. 
Big Data in Agriculture (including fisheries) Big data, an emerging data-driven approach, helps in precision agriculture based on the data collected. It helps farmers make efficient decisions on sowing, planting, fertilizing and various other farm operations, including harvesting crops at the appropriate time. Of late, sensors are placed in fields to collect data on relevant parameters, viz., soil temperature and moisture, relative humidity, wind speed, weather, etc., which influence crop growth. Satellite images are helpful in monitoring crop growth during the crop period. Agro-produce includes not only the yields obtained from crops but also produce from horticulture, livestock, fisheries and other systems. Agriculture depends on various parameters such as climate, seed, soil, cultivation practices, agronomic practices, irrigation requirements, fertilizers, pesticides, weeds, harvesting, post-harvest techniques, etc. Data collected on all these parameters would help in the estimation of yield and other relevant quantities. Governments, universities, research departments, agro-businesses and agri-input companies generate, maintain and use voluminous data related to various farm management practices and other related issues, say insurance, marketing, supply chain, packaging, distribution, etc. This voluminous and varied data poses several challenges for effective decision making. Information on various fish-related parameters can likewise be collected from stakeholders, and a big data platform can be developed for effective decision making.
Introduction
The influence of various socio-economic factors on the willingness of decision makers to adopt new technologies has been investigated by a number of studies (Roe, 1983; Shakya and Flinn, 1985; Thomas et al., 1990). In most studies of adoption behaviour, the dependent variable is constrained to lie between 0 and 1 and the models used are exponential functions (Kebede et al., 1990). The decision to adopt a new technology can be very effectively captured using binary choice models, which are appropriate when the choice between two alternatives depends on the characteristics of the decision problem. Application of a linear probability model to this type of problem, however, suffers from a number of deficiencies (Capps and Kramer, 1985), particularly that the estimated probabilities may in some cases be greater than one or less than zero as a result of neglecting significant interaction effects (Mingche, 1977). These deficiencies can be circumvented through the use of a monotonic transformation (a probit or logit specification), which guarantees that predictions lie within the unit interval (Capps and Kramer, 1985). Univariate logit and probit models and their modified forms have been used extensively to study the adoption behaviour of farmers and consumers (Schmidt and Strauss, 1975; Garcia et al., 1983; Shakya and Flinn, 1985; Harper et al., 1990). According to Hanushek and Jackson (1977), the choice between logit and probit models is largely a matter of convenience. However, Maddala (1983) and Shakya and Flinn (1985) recommend probit models for functional forms with limited dependent variables that are continuous between 0 and 1, and logit models for discrete dependent variables.
Model Specification
The univariate logit model is generally used to study the adoption behaviour of farmers and consumers. Under this specification, the probability that individual i adopts the technology is P_i = exp(x_i'b) / (1 + exp(x_i'b)), where x_i is the vector of explanatory variables and b the vector of coefficients, so that P_i always lies between 0 and 1. The model is estimated by the maximum likelihood method.
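To make the estimation step concrete, the sketch below fits a univariate logit model by maximum likelihood using Newton-Raphson iterations, on hypothetical adoption data (farm size as the single explanatory variable; both the data and variable names are illustrative, not from any cited study). It also checks the unit-interval property discussed above. In applied work one would normally use a statistical package (e.g. statsmodels' Logit in Python) rather than hand-coded iterations.

```python
import math

def fit_logit(x, y, iters=25):
    """Fit P(y=1) = 1/(1+exp(-(b0 + b1*x))) by Newton-Raphson maximum likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0          # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0  # entries of the information matrix
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
            w = p * (1.0 - p)
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step: solve H d = g
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def predict(b0, b1, xi):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))

# Hypothetical adoption data: x = farm size (ha), y = 1 if the technology was adopted.
x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
y = [0,   0,   0,   1,   0,   1,   1,   1]
b0, b1 = fit_logit(x, y)
probs = [predict(b0, b1, xi) for xi in x]
assert all(0.0 < p < 1.0 for p in probs)  # predictions stay in the unit interval
```

Unlike the linear probability model, every fitted probability here necessarily lies strictly between 0 and 1, which is precisely the deficiency the logit transformation removes.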
Biometrical Genetics - Historical Background
It is a science built on the principles of Mendelian genetics and on the way genetic factors influence the statistical properties of traits in families and populations. The roots of biometrical genetics go back to Gregor Mendel, who first demonstrated the existence of hereditary factors, and to Francis Galton, who first applied statistical methods to the study of biological inheritance. Galton introduced the concepts of regression and correlation for studying continuous variation in humans, but he was not aware of the principles of genetics discovered by Mendel. Galton, Karl Pearson and Walter Weldon together founded the journal Biometrika in 1901, with the aim of promoting statistical studies of biological phenomena. Differences of opinion persisted between the Biometricians and the Mendelians in the early 20th century. In 1918 Ronald A. Fisher published a landmark paper, 'The correlation between relatives on the supposition of Mendelian inheritance', in which he showed that the two seemingly contradictory approaches could be reconciled in terms of a simple biometrical genetic model. Fisher's paper laid the foundation for the discipline of biometrical genetics.
Foundation
a) Mendelian Inheritance
Mendel proposed three principles based on his work, now known as Mendel's laws of inheritance:
• Law of uniformity: if two homozygous parents carrying different alleles are crossed, all offspring have the same genotype.
• Law of segregation: an individual receives, with equal probability, one of the two genes from the genotype of the mother, and similarly one from that of the father.
• Law of independent assortment: the segregation of the genes for one trait is independent of the segregation of the genes for other traits in the formation of reproductive cells.
In association studies, the law of segregation is the basis of the transmission disequilibrium test (TDT).
The possible violation of the law of independent assortment forms the basis of the gene-mapping technique known as linkage analysis. These principles are also used to check the quality of parent-offspring genetic data. Although Mendelian principles were developed for dichotomous traits, they can be applied to the inheritance of traits influenced by multiple genes and are fundamental to genetic analysis.
Parametric and Non-parametric Tests
Parametric and non-parametric tests are used for comparing differences between groups (e.g. the difference in catch-per-unit-effort (CPUE) between two types of codends) and for testing the relationship between two or more variables (e.g. gender differences in awareness levels of fish farming, rated high, medium or low). Given the differences between types of data, non-parametric statistics are used when:
• the data are non-normal;
• the sample size is small;
• the variables are categorical; or
• the scales have few (fewer than 5) response categories.
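As one concrete non-parametric example, the sketch below computes the Mann-Whitney U statistic by hand for CPUE samples from two hypothetical codend designs (the figures are illustrative, not survey data). In practice a library routine such as scipy.stats.mannwhitneyu would be used, which also reports a p-value.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic via mid-ranks (average rank for tied values)."""
    combined = sorted(sample_a + sample_b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        i = j
    r_a = sum(ranks[v] for v in sample_a)       # rank sum of sample A
    n_a, n_b = len(sample_a), len(sample_b)
    u_a = r_a - n_a * (n_a + 1) / 2.0
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)

# Hypothetical CPUE (kg per haul) for square-mesh vs diamond-mesh codends.
cpue_square = [12.4, 15.1, 13.8, 16.0, 14.2]
cpue_diamond = [10.9, 11.6, 13.0, 12.1, 11.2]
u = mann_whitney_u(cpue_square, cpue_diamond)
print(u)  # small U suggests the two CPUE distributions differ
```

Because the test uses ranks rather than the raw values, it makes no normality assumption, which is exactly why it suits the small, possibly skewed samples described above.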
Introduction
Resources are commonly used unscientifically. For example, broadcasting feed at an arbitrary spot in a pond results in losses, whereas with technology we can know when, where and how to apply it, leading to efficient management of resources. Continuous monitoring is very difficult for humans, but with technology the status of a system can be checked even from home, with readings from installed sensors communicated through a mobile device, and the data are also easily maintained. When technology is used, resources are utilized efficiently: demand can be estimated and met, so wastage is reduced. Field management is a highly skilled activity that needs years of experience and on-site exposure, and hence skilled labour; in such cases technology aids management with better methodologies. When resources are applied at the right time and in the right amounts, growth proceeds normally and produce quality increases. After a product is produced, it passes through a series of channels such as transport and markets, and technology should back up and monitor each stage; many applications can also be used to improve marketing. The whole world is changing rapidly, so one should stay updated to improve one's livelihood, and wide knowledge of technology helps one lead a better life with easier management practices. Thus data-driven technology interventions are needed for efficient management of resources; monitoring of operations anytime and anywhere; reduction of wastage; reduced dependence on trained and skilled manpower; enhanced product quality; better integration and marketability; and access to up-to-date knowledge sources for continual improvement.
Key Facts
• Global fish production peaked at about 171 million tonnes (MT) in 2016, with aquaculture representing 47 per cent of the total, or 53 per cent if non-food uses are excluded (global capture fisheries production was 90.9 MT in 2016).
• World fish production is projected to increase by 15% over the next 10 years, reaching around 200 MT per year.
• The total first-sale value of fisheries and aquaculture production in 2016 was estimated at USD 362 billion, of which USD 232 billion was from aquaculture production.
• Fish (including shellfish) provides essential nutrition for 3 billion people, and at least 50% of the animal protein and minerals consumed by 400 million people in the poorest countries.
• 10-12% of the world's population, i.e. over 870 million people, depend on fisheries and aquaculture.
• Aquaculture is the world's fastest-growing food production system, growing at 7% annually.
• Women account for 19% of all people directly engaged in the fisheries and aquaculture sector, and over 50% when the post-harvest sector is included.
Climate Change: A Global Externality
Climate change (CC) is the result of the externality associated with greenhouse-gas emissions: it entails costs that are not paid by those who create the emissions.
Features that Distinguish It from Other Externalities
• Climate change is global in its causes and consequences;
• its impacts are long-term and persistent;
• uncertainties and risks in its economic impact are pervasive; and
• there is a serious risk of major, irreversible change with non-marginal economic effects.
Definition of Climate Change
Climate change refers to a statistically significant variation in either the mean state of the climate or in its variability, persisting for an extended period (typically decades or longer).
Introduction
Impact assessment (IA) is simply defined as the process of identifying the future consequences of a current or proposed action. The 'impact' is the difference between what would happen with the action and what would happen without it (International Association for Impact Assessment). IA is a structured process for considering the implications, for people and their environment, of proposed actions while there is still an opportunity to modify (or even, if appropriate, abandon) the proposals. It is applied at all levels of decision-making, from policies to specific projects. IA aims to:
• provide information for decision-making that analyses the biophysical, social, economic and institutional consequences of proposed actions;
• promote transparency and participation of the public in decision-making;
• identify procedures and methods for follow-up (monitoring and mitigation of adverse consequences) in policy, planning and project cycles; and
• contribute to environmentally sound and sustainable development.
Introduction
Economists consider a producer an income or profit maximiser. In our context, the objectives may include enhancing the contribution of fisheries to GDP; increasing fish production; providing nutritional security to those who need it most, not just those who can afford to pay for it; improving the livelihood conditions of stakeholders; augmenting exports to net more foreign exchange; and achieving growth with social equity, or inclusive growth. Realisation of any such goal depends on the development and enforcement of the necessary policy strategies, for which a reliable database is indispensable. Analysis of trends in past data can help to project future trends, which in turn indicate the feasibility of the envisioned goal. Here, an outline of some of the methodological issues in fisheries data collection is provided.
The Resource Endowments
Fisheries is a sunrise sector in India in terms of food and nutritional security, employment and income generation. India is the third-largest fish producer, and second in aquaculture production, globally. The sector contributes about 1% of national GDP and 5% of agricultural GDP. Exports of marine products accounted for about 5% of total Indian exports in 2018-19. Estimated fish production during 2017-18 was 12.59 million tonnes (mt), a 10.14% increase over the 11.43 mt of 2016-17; inland fisheries grew at 14.05% while marine fisheries rose at 1.73% in 2017-18. The fisheries sub-sector provides livelihood support and gainful employment to 14 million people, and many more may be employed now, as the available data is yet to be updated.
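The year-on-year growth figure quoted above can be checked directly from the two production totals; the short sketch below does so in Python (the small discrepancy with the reported 10.14% arises because the official figure is computed from unrounded totals).

```python
def pct_growth(previous, current):
    """Percentage change from one period to the next."""
    return (current - previous) / previous * 100.0

# Fish production totals from the text (million tonnes).
prod_2016_17 = 11.43
prod_2017_18 = 12.59

growth = pct_growth(prod_2016_17, prod_2017_18)
print(round(growth, 2))  # close to the 10.14% reported in the text
```

Such simple consistency checks on published aggregates are one small part of the data validation that a reliable fisheries database requires.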
Introduction
Precise knowledge of the components of the general structure of a research article is indispensable for writing one. The general structure of a research article consists of the Title, Abstract, Keywords, Introduction, Methods, Results and Discussion, Conclusion, Acknowledgements, References and Supporting Materials. There are no specific guidelines as to which of these should be written first.
Process of Writing - Building the Article
To start writing a research article, it is not advisable to consider all the tables and data available to the writer; the easiest way to start is to choose the tables and data that can be used wisely in the paper. The writer should be aware of the types of chart or table that are accepted, and of the number of charts and tables that should be included (a research paper generally contains 7 to 9 figures and charts in each category). It is also not recommended to rely on conventional ways of presenting data (such as bar charts and pie diagrams), and the figures or illustrations should not be overly simple. Figures and tables should be self-communicating: the reader of the paper is well educated and may have mastery of the particular subject, so the language and illustrations should be appealing to them, and the figures should prompt the reader to think and draw inferences from the data provided by the research. Two- and three-dimensional data presentations create more interest in the reader than one-dimensional presentations. Next to consider are the Methods, which explain how the data were collected; unlike in a thesis, the methods need not be elaborated at length in a paper. Results are the description of the data and the broad observations made. Results should not contain inferences that are already known, and should supplement the tables and charts, not merely describe them.
The Discussion involves comparison of the results obtained with previous results; the contribution to previous research findings, as well as new findings and solutions, can also be discussed there. The Conclusion is a single paragraph that summarizes the final outcome; generally it should not contain citations.
