Nimon, K. (2012). Statistical assumptions of substantive analyses across the general linear model: A mini-review. Frontiers in Psychology, 3, 322.
Whether inferential statistics can be used validly depends on the sampling technique used as well as the characteristics of the population data. This dependency assumes that the sample and population meet certain criteria. These criteria are called statistical assumptions, and if violations of these assumptions are not addressed, the data may not be interpreted correctly. In particular, Type I or Type II error rates may be incorrectly increased or decreased. Nimon’s (2012) article focused on assumptions associated with substantive statistical analyses across the
Using unreliable data may cause effects to be underestimated, which increases the probability of Type II errors. If there is correlated error, unreliable data may instead be overestimated, which would increase the risk of Type I errors. To satisfy the assumption of error-free data, researchers may conduct and report analyses using latent variables rather than observed variables. This technique, called structural equation modeling (SEM), forms latent variables from item scores, and the latent variables become the unit of analysis. It is important to note that SEM is a large-sample technique. A researcher can also delete a few items to raise the reliability of an observed score, but this should be reported along with the accompanying estimates of reliability with and without the deleted items. Some researchers consider level of measurement to be an assumption while others do not. Nimon (2012) does state that because measurement levels play such a pivotal role in statistical analysis decision trees, lowering the measurement level of data is not recommended unless certain characteristics are met, as doing so may discard important information and produce misleading or erroneous results. Yet another assumption is that inferential statistics in psychological and educational research presume population data that are normally distributed. This depends on the analysis conducted, such as univariate, multivariate,
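The passage above mentions comparing an observed score's reliability with and without deleted items. Nimon (2012) does not supply data or code for this, but the idea can be sketched with Cronbach's alpha, a common internal-consistency estimate, using hypothetical item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: k lists (one per questionnaire item), each holding the same
    n respondent scores. Illustrative only; real analyses would use a
    statistics package.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores for four respondents on a three-item scale
items = [[2, 4, 6, 8], [1, 3, 5, 7], [2, 3, 5, 8]]
alpha_all = cronbach_alpha(items)
alpha_without_item3 = cronbach_alpha(items[:2])  # reliability after deleting item 3
```

Reporting both `alpha_all` and `alpha_without_item3`, as the passage recommends, lets readers judge whether deleting the item genuinely improved reliability.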
Inferential statistics help us make predictions and inferences about a specific population from the observations in a sample. “With inferential statistics, you are trying to reach conclusions that extend beyond the immediate data alone” (Trochim, 2006). The goal is to review the sample data in order to infer what the larger group may think. This is done by judging the chance that an observed difference between groups is a dependable one, rather than one that could have happened by coincidence. To help address the issue of generalization, tests of significance are used. For example, a chi-square test or t-test provides the probability that the sample results do or do not represent the respective population. In other words, tests of significance tell us how likely it is that the analysis results could have occurred by chance if no relationship existed between the variables in the population being studied.
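To make the t-test idea concrete, here is a minimal sketch of a pooled two-sample t statistic on hypothetical group scores. The p-value uses a normal approximation (via the standard library's `NormalDist`), which is only adequate for large samples; a real analysis would use the t distribution:

```python
from statistics import NormalDist, mean, stdev

def two_sample_t(a, b):
    """Pooled two-sample t statistic with an approximate two-tailed p-value.

    The p-value is computed from the normal distribution as an
    illustration; proper software would use the t distribution.
    """
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # two-tailed
    return t, p

# Hypothetical scores for two groups
t, p = two_sample_t([5, 6, 7, 8, 9], [1, 2, 3, 4, 5])
```

A small `p` here is exactly the quantity the passage describes: the probability of seeing a difference this large by chance if no true difference exists in the population.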
Week Seven Homework Exercise
Answer the following questions, covering material from Ch. 13 of Methods in Behavioral Research.
Define inferential statistics and how researchers use inferential statistics to draw conclusions from sample data. According to Cozby (2009), inferential statistics are used to determine whether we can in fact make statements that the results reflect what would happen if we were to conduct the experiment again and again with multiple samples.
Define probability and discuss how it relates to the concept of statistical significance. Probability is the likelihood that an outcome of an experiment or an event will occur (Cozby, 2009). Statistical significance and probability are closely related. A researcher is studying the
The goal of inferential statistics is typically to reject the null hypothesis and conclude that a significant relationship exists; the null hypothesis therefore always presumes no relationship.
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?
In order to know whether the evidence of research studies is accurate, one must have a fundamental understanding of statistical analysis to determine whether the descriptions and findings within manuscripts and articles are presented correctly and explicitly (Sullivan, 2012). Proper use of statistics begins with an understanding of both descriptive and inferential statistics. Correct organization and description of the characteristics of the population sample being studied lead the researcher to identify a hypothesis and formulate inferences about those characteristics. It is with inferential statistics that researchers conduct appropriate tests of significance and determine whether to reject or fail to reject the identified null
Researchers tend to draw conclusions and/or generalizations from the sample of participants in order to generalize to others who are not in the specific study (Trochim & Donnelly, 2008). Therefore, it is essential that researchers learn
Missing values were replaced by the average of all the other items in that scale for that individual respondent. Analyses were conducted to determine whether the mean scores for individual items were significantly different (less than or greater than) from zero, the neutral response value, signifying statistically significant disagreement or agreement with an item, respectively. Next, a factor analysis was performed to determine whether similar items clustered together into subscales. Descriptive statistics (means and standard deviations) and one-sample, two-tailed t-tests were calculated for each resulting subscale (Butler, 2010).
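The two preprocessing steps described above, person-mean imputation and a one-sample two-tailed t-test against the neutral value of zero, can be sketched as follows. This is not Butler's (2010) actual code, and the data are hypothetical; the p-value again uses a normal approximation for illustration:

```python
from statistics import NormalDist, mean, stdev

def impute_person_mean(responses):
    """Replace each None with that respondent's mean over answered items."""
    out = []
    for row in responses:
        fill = mean([v for v in row if v is not None])
        out.append([fill if v is None else v for v in row])
    return out

def one_sample_t(scores, mu=0.0):
    """One-sample t statistic against a neutral value mu, with an
    approximate two-tailed p-value (normal approximation, illustrative)."""
    n = len(scores)
    t = (mean(scores) - mu) / (stdev(scores) / n ** 0.5)
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Hypothetical scale responses on a -3..+3 scale, with one missing value
responses = impute_person_mean([[1, None, 3], [2, 1, 2], [1, 2, 1]])
subscale_means = [mean(row) for row in responses]
t, p = one_sample_t(subscale_means)  # test against the neutral value 0
```

A significant positive `t` would signal statistically significant agreement with the subscale, mirroring the interpretation in the passage.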
In his 2013 book, Naked Statistics, Charles Wheelan explains a field that is commonly seen, commonly applied, and commonly misinterpreted: statistics. Though statistical data is ubiquitous in daily life, valid statistical conclusions are not. Wheelan reveals that when data analysis is flawed or incomplete, faulty conclusions abound. Wheelan’s work uncovers statistics’ unscrupulous potential, but also makes a key distinction between deliberate misuse and careless misreading. However, his analysis is less successful in distinguishing common sense from poor judgement, a gap that enables the very statistical issues he describes to perpetuate themselves.
In population-based studies, instead of looking at a small group of individuals to make an assumption about the entire population, we take numbers that represent the population and determine
The reliability of an instrument contributes to its usability for empirical research (Whiston, 2009). Further, reliability refers to the replicability and stability of a measurement, that is, whether it will yield the same assessment of the same individuals when repeated (Frankfort-Nachmias & Nachmias, 2008). When determining the reliability of an assessment, a reliability coefficient of at least .80 indicates a trustworthy level of reliability (Trochim, 2006).
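One simple way to obtain the kind of reliability coefficient described above is a test-retest correlation: administer the instrument twice and correlate the two sets of scores. The sketch below uses a hand-rolled Pearson correlation on hypothetical scores; the .80 cutoff is the threshold cited from Trochim (2006):

```python
def pearson_r(x, y):
    """Pearson correlation between two administrations of an instrument
    (a basic test-retest reliability estimate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical first and second administrations for five examinees
time1 = [10, 12, 15, 18, 20]
time2 = [11, 12, 14, 19, 19]
r = pearson_r(time1, time2)
reliable = r >= 0.80  # Trochim's (2006) cutoff for trustworthy reliability
```

Stable scores across administrations drive `r` toward 1; noisier retests pull it below the .80 threshold.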
In Fantuzzo et al. (1991), there appears to be a lack of a baseline on which to rely for the facts, due to the exclusion of what one would consider the social norms. Fantuzzo et al. should have had a baseline on which to rely, giving their study more standing.
This video introduces new vocabulary terms. First of all, statistics is the study of variability. Secondly, statistics is broken down into two parts: descriptive and inferential. Descriptive statistics is when you collect data and talk about it. Inferential statistics is when you take a sample and use it to make an inference about the population. Thirdly, the video talks about the difference between population and sample. A sample is a smaller part of the population. When you take a sample and come up with an average, that average is called a statistic. When you find the average of a population, that average is called a parameter. Lastly, gathering information about the whole population is called a census. In conclusion, a statistic based on a sample helps make an inference about the parameter of the whole population.
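The statistic-versus-parameter distinction described above can be shown in a few lines. Here a simulated population (hypothetical, not from the video) has a true mean, the parameter; a random sample yields a sample mean, the statistic, which estimates it:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Simulated population of 100,000 scores; its mean is the parameter
population = [random.gauss(100, 15) for _ in range(100_000)]
parameter = sum(population) / len(population)

# A random sample of 500; its mean is the statistic
sample = random.sample(population, 500)
statistic = sum(sample) / len(sample)
```

The statistic lands close to the parameter but rarely equals it exactly; that gap is sampling error, which inferential statistics is designed to account for.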
a. Conclusions drawn must be based on a sample that represents the entire group.
This part of the case study will explore the application of inferential statistics to the Zagat Survey sample data. In Part II, claims were made that attempted to extrapolate the sample data to the population, but they were statistically invalid. So here in Part III, I will properly project the sample data onto the population and compare the results to the previous methods.
Taken together, the LISREL 9.2 package makes possible the estimation of a wide range of statistical models used in educational research, such as exploratory and confirmatory factor analysis (EFA and CFA) with continuous and ordinal variables, multiple-group analysis, multilevel linear and nonlinear models, latent growth curve models, and generalized linear models (GLMs) applied to complex survey data and simple random sample data (Byrne, 2012; Sörbom, 2001).