Research Methods for the Behavioral Sciences (MindTap Course List)
6th Edition
ISBN: 9781337613316
Author: Frederick J Gravetter, Lori-Ann B. Forzano
Publisher: Cengage Learning

Textbook Question
Chapter 15, Problem 1E

In addition to the key words, you should also be able to define each of the following terms:

frequency distribution

histogram

polygon

bar graph

degrees of freedom, df

line graph

scatter plot

Pearson correlation

Spearman correlation

regression equation

slope constant

Y-intercept

multiple-regression equation

null hypothesis

effect size

Cohen’s d

percentage of variance accounted for, r² or η²

ratio scale

interval scale

ordinal scale

nominal scale

chi-square test for independence

independent-measures t-test

repeated-measures t-test

single-factor analysis of variance or one-way ANOVA

two-factor analysis of variance or two-way ANOVA

split-half reliability

Spearman-Brown formula

Kuder-Richardson formula 20, or K-R 20

Cronbach’s alpha

Cohen’s kappa

Expert Solution & Answer
To determine

To define:

The terms given in the question

Explanation of Solution

Frequency distribution:

It is a table that lists each score value (X) together with its frequency, that is, the number of times that value occurs in the data set. It summarizes the complete data set in a compact, easily readable form and serves as the basis for further statistical work.
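As a quick sketch, a frequency distribution can be built from raw scores in a few lines of Python (the function name here is illustrative, not from the text):

```python
from collections import Counter

def frequency_distribution(scores):
    """Return (value, frequency) pairs sorted by score value."""
    return sorted(Counter(scores).items())

# Six raw scores collapse into a three-row frequency table.
table = frequency_distribution([2, 3, 3, 5, 2, 3])
print(table)  # [(2, 2), (3, 3), (5, 1)]
```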

Histogram:

It is a chart generally used for continuous data. The data are grouped into class intervals, and the number of data points falling in each interval is its frequency. The class intervals are placed on the horizontal axis and the frequencies on the vertical axis; each interval is represented by a bar whose height equals its frequency, with adjacent bars touching.

Polygon:

Once the histogram is formed, connecting the midpoints of the tops of the bars with straight lines produces a frequency polygon. It is a very useful tool for depicting the shape of a distribution: a roughly symmetric, inverted-U shape suggests that the data are approximately normally distributed.

Bar graph:

This chart is specifically used for categorical data. Suppose, for example, a class contains 57 boys and 43 girls. Each category is placed on the X axis and its frequency on the Y axis, and a bar is drawn for each category. Unlike a histogram's bars, these bars are separated by gaps to show that the categories are discrete.

Degree of freedom:

This term is used in statistics for the number of values in a calculation that are free to vary once certain constraints are imposed. For example, when computing the sample variance of n scores, the deviations from the mean must sum to zero, so only n − 1 of them are free to vary; the sample variance therefore has df = n − 1.

Line graph:

This chart is used to represent data measured over time (or another continuous variable). A single line is displayed for each variable under study, with time on the X axis and the variable on the Y axis. Stock prices, for example, are commonly studied with line charts.

Scatter plot:

This chart is used when two variables, X and Y, are measured for each individual. Each pair of values is plotted as a point, with the X value on the horizontal axis and the Y value on the vertical axis. Conventionally X is the independent variable and Y the dependent variable. The pattern of the points reveals the form and direction of the relationship between the two variables.

Pearson correlation:

It is a statistical measure of the magnitude and direction of the linear relationship between two numerical variables. Its value ranges from −1 to +1, with 0 indicating no linear relationship between the variables under study.
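The standard computing formula (sum of products of deviations over the square root of the product of the sums of squares) can be sketched in plain Python; the function name is ours:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: SP / sqrt(SSx * SSy)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return sp / math.sqrt(ssx * ssy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear -> 1.0
```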

Spearman correlation:

It is a statistical measure of the magnitude and direction of the monotonic relationship between two ranked (ordinal) variables. It is computed by converting the scores to ranks and applying the Pearson formula to the ranks.

Regression equation:

An equation that describes how the dependent variable is predicted from the independent variable. For simple linear regression it has the form Y = bX + a, where the two constants b and a are called the regression parameters.

Slope constant:

The regression parameter b that multiplies the independent variable in the regression equation is called the slope constant. It tells us how much the predicted dependent variable changes for a one-unit change in the independent variable.

Y-intercept:

This is the constant regression parameter, denoted a. It is the predicted value of the dependent variable when the independent variable has a value equal to zero.
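The least-squares slope b and intercept a follow directly from these definitions; a minimal sketch (the function name is ours):

```python
def regression_line(x, y):
    """Least-squares slope b and intercept a for Y = bX + a."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx  # the line always passes through (mean X, mean Y)
    return b, a

b, a = regression_line([1, 2, 3], [3, 5, 7])
print(b, a)  # 2.0 1.0, i.e. Y = 2X + 1
```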

Multiple regression equation:

When two or more independent variables are used to predict one dependent variable, the equation relating all the independent variables to that dependent variable is called a multiple-regression equation.

Null hypothesis:

In hypothesis testing, the researcher's aim is to test a claim about the research variables. The process begins with an assumption that there is no effect or no relationship in the population, which is typically the opposite of the researcher's claim. This starting assumption is called the null hypothesis.

Effect size:

It is a measure that quantifies the magnitude of the difference between two groups (or the strength of a relationship) in a standardized way. It is an effective statistical tool because, unlike a test statistic, it is not influenced by sample size.

Cohen's d:

This is a specific measure of effect size used when the difference between two means is of interest. It expresses the mean difference in standard-deviation units, making Cohen's d the standard measure of effect size for comparing two means.
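Using the pooled standard deviation of the two samples, the computation can be sketched as follows (the function name is illustrative):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((v - m1) ** 2 for v in group1)
    ss2 = sum((v - m2) ** 2 for v in group2)
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5: means differ by half an SD
```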

Percentage of variance accounted for:

In correlation and regression analysis, the squared correlation r², called the coefficient of determination, gives the proportion of the variance in one variable that is explained by its relationship with the other. An r² of 0.40, for example, means the regression model explains 40% of the variation in the response variable; the higher the value, the better the model accounts for the data. In ANOVA, the analogous measure is η².

Ratio scale:

In this scale the variables are quantitative and there is a true zero point, which makes it possible not only to compare values but also to express them as ratios and proportions (e.g., weight, reaction time).

Interval scale:

In this scale the variables are quantitative, and equal intervals represent equal differences, so values can be compared and subtracted. But there is no true zero point, so the values cannot be expressed as meaningful ratios (e.g., temperature in degrees Celsius).

Ordinal scale:

In this scale the observations are non-quantitative but comparable, so it is possible to arrange them in a specific order or rank them; the distances between ranks, however, are not necessarily equal (e.g., finishing places in a race).

Nominal scale:

In this scale the observations are non-quantitative. They are labeled or numbered only to create different categories, which can neither be ordered nor compared numerically; they simply identify different groups (e.g., gender, college major).

Chi-square test of independence:

The main purpose of this test is to determine whether there is an association between two variables. Its purpose resembles that of a correlation, but for this test both variables must be categorical. For example, for the research question "Is students' maturity level related to their gender?", gender is one categorical variable (male, female) and maturity level (e.g., low, medium, high) is the other.
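The chi-square statistic compares each observed cell frequency with the frequency expected under independence (row total times column total over the grand total). A minimal sketch, with made-up counts:

```python
def chi_square_independence(observed):
    """Chi-square statistic for a table of observed frequencies (list of rows)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Gender (rows) x maturity level (columns), hypothetical counts.
print(chi_square_independence([[20, 10], [10, 20]]))  # ~6.67
```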

Independent measures T test:

This test is applied when there are two separate, independent samples. Its aim is to compare the means of the two groups and decide whether the population means differ.
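Using the pooled-variance formula (df = n1 + n2 − 2), the t statistic can be sketched as follows; the function name is ours:

```python
import math

def independent_t(group1, group2):
    """Independent-measures t statistic using the pooled variance."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((v - m1) ** 2 for v in group1)
    ss2 = sum((v - m2) ** 2 for v in group2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)  # df = n1 + n2 - 2
    se = math.sqrt(pooled_var / n1 + pooled_var / n2)
    return (m1 - m2) / se

print(independent_t([2, 4, 6], [1, 3, 5]))
```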

Repeated measures T test:

In a two-sample situation, the same group of sampling units is sometimes measured twice. For example, if a set of people is measured before a training program and again after it to assess the training's effect, the same sample is measured twice. In this case a repeated-measures t-test, which analyzes the difference scores, is used.
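Because the analysis runs on the difference scores, the computation is just a one-sample t on the differences (df = n − 1); a minimal sketch with hypothetical data:

```python
import math

def repeated_t(pre, post):
    """Repeated-measures t computed from the difference scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # df = n - 1
    return mean_d / math.sqrt(var_d / n)

# The same three people measured before and after training.
print(repeated_t([10, 12, 14], [12, 15, 19]))
```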

One way ANOVA:

This test compares the mean values of two or more groups defined by a single factor. The independent variable (the factor) is categorical and the dependent variable is continuous. For example, if we compare the performance of students from several colleges, the independent variable is "college" (categorical) and the dependent variable is "performance score" (continuous).
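The F ratio behind one-way ANOVA (between-groups mean square over within-groups mean square) can be sketched in plain Python; names and data are illustrative:

```python
def one_way_anova_f(*groups):
    """F ratio = MS between / MS within for k independent groups."""
    scores = [v for g in groups for v in g]
    big_n, k = len(scores), len(groups)
    grand_mean = sum(scores) / big_n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (big_n - k))

# Three "colleges" with slightly different mean scores.
print(one_way_anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5]))  # 3.0
```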

Two way ANOVA:

When groups are compared on the basis of two different factors simultaneously, the two-way ANOVA test is used. The two independent variables (factors) are categorical and the dependent variable is continuous. For example, if salary is the dependent variable, the factors could be age group (e.g., under 30 versus 30 and over) and experience level (e.g., junior versus senior). Because there are two categorical factors and one continuous outcome, this calls for a two-way ANOVA, which also tests whether the two factors interact.

Split half reliability:

In this method the items of a single test are split into two equal halves (for example, odd-numbered versus even-numbered items), yielding two sets of scores for each person. If the measurement is reliable, the scores from the two halves should be strongly correlated. This way of assessing the internal consistency of a test is called split-half reliability.

Spearman Brown formula:

It is a formula from psychometrics used to predict the reliability of a test after its length is changed; in particular, it corrects a split-half correlation (based on half-length tests) to estimate the reliability of the full-length test. It reflects the fact that the relation between test length and reliability is not linear.
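The correction itself is a one-liner; with factor = 2 it converts a split-half correlation into a full-length reliability estimate (function name is ours):

```python
def spearman_brown(r, factor=2.0):
    """Predicted reliability when test length is multiplied by `factor`.

    With factor=2 this is the classic correction of a split-half
    correlation up to the reliability of the full-length test.
    """
    return factor * r / (1 + (factor - 1) * r)

print(spearman_brown(0.6))  # 0.75: doubling length helps, but non-linearly
```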

Kuder-Richardson formula 20:

This formula applies only to tests whose items are dichotomous (scored right/wrong or yes/no). It measures internal consistency, checking that the items of the test measure the same attribute in a consistent way.

Cronbach's alpha:

Again, this method is designed to tell us whether the measurements are internally consistent. It is a coefficient of consistency, or reliability, that generalizes K-R 20 to items with any scoring format. High agreement among the items implies good consistency; a low alpha means the scores cannot be relied upon.
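One common computing form of alpha uses the item variances and the variance of the total scores; a minimal sketch (names are ours, data hypothetical):

```python
def _sample_variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals).

    `items` holds one list of per-person scores for each test item.
    """
    k = len(items)
    totals = [sum(person) for person in zip(*items)]  # total score per person
    return k / (k - 1) * (1 - sum(map(_sample_variance, items))
                          / _sample_variance(totals))

# Two items that agree perfectly give alpha = 1.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```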

Cohen's kappa:

It is a tool that helps the researcher measure inter-rater agreement for qualitative (categorical) data. The measure is considered stronger than simple percent agreement because it corrects for the likelihood that the raters agree purely by chance. It is conceptually simple and deals with exactly two raters. The problem that Cohen's kappa addresses is the issue of inter-rater reliability.
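The chance correction is kappa = (po − pe) / (1 − pe), where po is the observed agreement and pe the agreement expected by chance from each rater's category proportions. A minimal sketch with made-up ratings:

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters: (po - pe) / (1 - pe)."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    p_chance = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

r1 = ["yes", "yes", "no", "no"]
r2 = ["yes", "no", "no", "no"]
print(cohens_kappa(r1, r2))  # 0.5: 75% raw agreement, 50% expected by chance
```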
