
ISS Exam 3


  1. Know that the research design which allows researchers to talk about causality is the experimental design.

 

Know the definition of and examples of the following:

  1. Null Hypothesis: states that there is no effect or no difference between groups or conditions in a study.

  2. Experimental Hypothesis: often referred to as the alternative hypothesis, is a statement that predicts a specific effect or relationship between variables in a study. It suggests that there is a difference or an effect due to the independent variable on the dependent variable. 

  3. A Variable: any characteristic, condition, or quantity that can take on different values across people, settings, or time (e.g., age, test score, amount of sunlight).

  4. Control variables: Control variables are factors that researchers keep constant or monitor in an experiment to ensure that any observed effects on the dependent variable are due to the independent variable alone.

  5. Dependent variables: A dependent variable is the factor that you measure in an experiment. It's called 'dependent' because its value depends on changes made to the independent variable. For example, in a study examining how different amounts of water affect plant growth, the growth of the plants (measured in height or number of leaves) would be the dependent variable. 

  6. Independent variables: An independent variable is a factor that is manipulated or changed in an experiment to observe its effect on a dependent variable. It's the variable that you think will influence the outcome. For example, if you're studying how different amounts of sunlight affect plant growth, the amount of sunlight would be the independent variable. 

  7. Demographics: Common demographic variables include age, gender, race, ethnicity, income level, education, marital status, and geographic location. 

  8. Control groups: The control group does not receive the experimental treatment, allowing researchers to see what happens in the absence of that treatment. This helps to isolate the effect of the independent variable on the dependent variable.

  9. Experimental group: the group in a study that receives the treatment or intervention being tested. This is where researchers apply the independent variable to observe its effects on the dependent variable.

  10. Independent groups / samples designs (Between subjects designs)

    1. A between-subjects design is an experimental setup where different participants are assigned to different groups or conditions, and each group experiences only one level of the independent variable. This means that each participant is only exposed to one condition, allowing researchers to compare the effects across different groups. 

  11. Repeated measures /  groups / samples designs (Within subjects designs)

    1. A within-subjects design is an experimental approach where the same participants are exposed to all levels of the independent variable. This means that each participant experiences every condition in the study, allowing researchers to compare their performance across different conditions directly. 

  12. Multivariate Designs

    1. A multivariate design is an experimental approach that involves the simultaneous examination of multiple dependent variables. This type of design allows researchers to understand how different independent variables may affect several outcomes at once, rather than focusing on just one dependent variable.

    2. Example: Let's consider an example of a multivariate design in a study examining the effects of a new educational program on students. In this study, researchers might measure multiple dependent variables such as students' academic performance (test scores), motivation levels (self-reported surveys), and social skills (peer evaluations). 

  13. Multi-method Analysis

    1. A multi-method analysis is an approach in research that combines different methods or techniques to collect and analyze data. This can include qualitative methods (like interviews or focus groups) and quantitative methods (like surveys or experiments) within the same study. 

  14. A double-blind design: an experimental setup where neither the participants nor the researchers know which participants are in the experimental group and which are in the control group.

  15. A Matched-Groups design: pairs participants based on certain characteristics relevant to the study, then assigns each member of the pair to a different condition. This design aims to control for individual differences that could affect the outcome.

  16. Counterbalancing: a technique used in experimental research to control for order effects, particularly in repeated measures designs. It involves varying the order in which participants experience different conditions to ensure that no single condition is consistently favored or disadvantaged due to its position in the sequence (helps increase internal validity). 
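As a sketch of full counterbalancing, every possible ordering of three hypothetical conditions (A, B, C) can be generated so each condition appears equally often in each serial position:

```python
from itertools import permutations

# Three hypothetical conditions in a repeated-measures study
conditions = ["A", "B", "C"]

# Full counterbalancing: use every possible ordering, so each
# condition appears equally often in each serial position.
orders = list(permutations(conditions))

for order in orders:
    print(" -> ".join(order))
```

With 3 conditions there are 3! = 6 orders; participants would be spread evenly across them.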

  17. Fatigue from repeated testing: a specific type of fatigue effect that occurs when participants are required to complete the same or similar tasks multiple times in a study. As they go through these tasks repeatedly, they may become tired, bored, or disengaged, which can lead to a decline in their performance on subsequent tests. 

  18. Order Effects: refer to the potential influence that the sequence in which participants experience different conditions can have on the results of an experiment. This can include various types of effects, such as practice effects, fatigue effects, carryover effects, and sensitization effects. 

  19. Practice Effects: specifically occur when participants improve their performance on a task simply because they have repeated it, rather than due to the effects of the independent variable being tested. 

  20. Placebos: substances or treatments that have no therapeutic effect but are used in research to control for the psychological effects of receiving treatment. In clinical trials, a placebo is often given to a control group to compare against the experimental group receiving the actual treatment. 

  21. Random Assignment: a technique used in experimental research to assign participants to different groups or conditions in a way that is entirely random, giving each participant an equal chance of ending up in any group.
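A minimal sketch of random assignment, using a hypothetical participant pool (the seed is fixed only so the example is reproducible):

```python
import random

# Hypothetical participant pool
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.seed(42)  # fixed seed only for reproducibility

shuffled = participants[:]
random.shuffle(shuffled)                      # put the pool in random order
control = shuffled[: len(shuffled) // 2]      # first half -> control group
experimental = shuffled[len(shuffled) // 2:]  # second half -> experimental group
```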

  22. Population: A population refers to the entire group of individuals or instances that a researcher is interested in studying. 

  23. Sample: a subset of the population that is selected for the actual study. 

  24. Random Sample (and how would you develop one?)

    1. Simple Random Sampling: Every member of the population has an equal chance of being selected. Think of it like drawing names from a hat.
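The "names from a hat" idea can be sketched with Python's `random.sample`, using a hypothetical population of 100 students:

```python
import random

# Hypothetical population of 100 students
population = [f"student_{i}" for i in range(1, 101)]

random.seed(0)  # seed only so the example is reproducible
sample = random.sample(population, 10)  # every member equally likely to be drawn
```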

  25. Stratified Sampling

    1. The population is divided into subgroups (strata) that share similar characteristics, and random samples are taken from each stratum. For example, if you want to sample college students, you might stratify by year (freshman, sophomore, etc.).
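A sketch of stratified sampling with proportional allocation, using a hypothetical student body stratified by year:

```python
import random

random.seed(7)  # seed only for reproducibility

# Hypothetical student body, tagged by year (the strata)
students = (
    [("freshman", i) for i in range(40)]
    + [("sophomore", i) for i in range(30)]
    + [("junior", i) for i in range(20)]
    + [("senior", i) for i in range(10)]
)

# Group students into strata by year
strata = {}
for year, sid in students:
    strata.setdefault(year, []).append((year, sid))

# Proportional allocation: randomly sample 10% from each stratum
sample = []
for year, members in strata.items():
    sample.extend(random.sample(members, len(members) // 10))
```

Because each stratum contributes in proportion to its size, the sample preserves the year composition of the population.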

  26. Snowball Sampling

    1. Existing study subjects recruit future subjects from among their acquaintances. This is often used for hard-to-reach populations. (Non-probability sampling.) 

  27. Purposive Sampling: also known as judgmental sampling, is a non-probability sampling technique where researchers select participants based on specific characteristics or criteria that are relevant to the research question. (can be used in qualitative research) 

  28. Saturation: refers to the point in qualitative research when no new information or themes are emerging from the data being collected. 

  29. Quantitative Measures and Designs: involve the collection and analysis of numerical data to understand patterns, relationships, or effects. This approach often uses structured methods such as surveys, experiments, or statistical analyses to test hypotheses and draw conclusions. 

  30. Qualitative Measures and Designs: focus on understanding the meaning and experiences of individuals through non-numerical data. This approach often employs methods such as interviews, focus groups, or observations to gather rich, descriptive information. 

  31. Correlational designs: examine the relationships between two or more variables without manipulating them. Researchers look for patterns or associations to determine whether changes in one variable are related to changes in another. 

  32. Ethnographic Studies: a qualitative research method that involves the in-depth exploration of a particular culture, community, or social group. Researchers immerse themselves in the environment of the participants, often through participant observation, to gain a deep understanding of their behaviors, beliefs, and social interactions. 

  33. Exploratory Studies: conducted to investigate a research question or topic that is not well understood. These studies aim to gather preliminary information and insights, often using qualitative methods, to help define problems, generate hypotheses, or inform future research. 

  34. Grounded Theory: qualitative research methodology that aims to develop a theory based on data collected from participants. Researchers gather data through interviews, observations, or other means, and then analyze it systematically to identify patterns and concepts. The goal is to generate a theory that explains the phenomenon being studied, grounded in the actual experiences of the participants.

  35. Phenomenological Studies: focus on understanding the lived experiences of individuals regarding a specific phenomenon. Researchers aim to capture the essence of these experiences by conducting in-depth interviews and analyzing the data to identify common themes and meanings. The goal is to gain insights into how individuals perceive and make sense of their experiences.

  36. “Immersion” into a setting: refers to the process by which researchers deeply engage with a particular environment, community, or social group to gain a comprehensive understanding of the context and the experiences of the individuals within it. This often involves spending extended periods of time in the setting, observing behaviors, participating in activities, and interacting with participants to gather rich, qualitative data.

  37. Focus Groups: qualitative research method that involves gathering a small group of people (typically 6-12) to discuss a specific topic or set of topics guided by a moderator. 

  38. Interviews

    1. Structured Interviews: These interviews follow a predetermined set of questions that are asked in a specific order. This format is often used in quantitative research to ensure consistency across all interviews, making it easier to compare responses. For example, in a job interview, a structured format might involve asking all candidates the same questions about their experience and skills. This helps reduce bias and allows for easier data analysis.

    2. Unstructured Interviews: In contrast, unstructured interviews are more flexible and conversational. While they may start with a few guiding questions, the interviewer can adapt the conversation based on the responses of the interviewee. This format is often used in qualitative research to explore deeper insights and understand the interviewee's perspective. For instance, in a qualitative study about people's experiences with a health condition, the interviewer might ask open-ended questions and follow up based on the interviewee's answers.

  39. Single-case designs: research methods that focus on the detailed examination of a single individual, group, or situation over time. This approach allows researchers to observe changes and effects in a specific case, often using repeated measures to assess the impact of an intervention or treatment. 

  40. Surveys: used to gather information about opinions, attitudes, perceptions, and behaviors as well as to test hypotheses.

  41. Quasi-experimental designs: research methods that aim to evaluate the effects of an intervention or treatment without random assignment to groups. In these designs, researchers compare groups that are already formed or that have been exposed to different conditions, which can introduce potential confounding variables.

  42. How many independent variables are in a 4 X 2 X 3 design?

    1. The total number of independent variables in a factorial design is determined by how many factors (or dimensions) you have. In this instance you have three independent variables: one with 4 levels, one with 2 levels, and one with 3 levels.

  43. How many levels or conditions or ways of manipulating various independent variables are in a 4 X 2 X 3 design?

    1. The total number of conditions is calculated by multiplying the number of levels of each independent variable together: 4 × 2 × 3 = 24 conditions. 
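The 24 conditions can be enumerated with `itertools.product`, using hypothetical levels for the three independent variables:

```python
from itertools import product

# Hypothetical levels for a 4 x 2 x 3 factorial design
iv_dose = ["0mg", "10mg", "20mg", "30mg"]  # IV 1: 4 levels
iv_time = ["morning", "evening"]           # IV 2: 2 levels
iv_task = ["easy", "medium", "hard"]       # IV 3: 3 levels

# Every combination of one level from each IV is one condition
conditions = list(product(iv_dose, iv_time, iv_task))
print(len(conditions))  # 4 * 2 * 3 = 24
```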

  44. Which are threats to Internal Validity and / or External Validity and what does each mean?

    1. Internal Validity: This refers to the extent to which a study can establish a cause-and-effect relationship between variables. High internal validity means that the changes in the dependent variable are directly caused by the manipulation of the independent variable, rather than by other factors. Threats: confounding variables, selection bias, maturation, history, and instrumentation. 

    2. External Validity: This is about the generalizability of the study's findings to other settings, populations, or times. High external validity means that the results of the study can be applied beyond the specific conditions of the experiment. Threats: sample characteristics, setting, time, interaction effects (how the treatment interacts with participant characteristics). 

  45. Attrition: refers to the loss of participants from a study over time, which can occur for various reasons, such as participants dropping out, losing interest, or being unable to continue for personal or logistical reasons.

  46. History: Events occurring outside the study that may impact participants' responses can threaten internal validity.

  47. Instrumentation: Changes in measurement tools or procedures can affect the consistency of data collection.

  48. Maturation: Changes in participants over time (e.g., aging, learning) can influence outcomes, especially in longitudinal studies.

  49. Selection Bias: If participants are not randomly assigned to groups, pre-existing differences may affect the results.

  50. Testing

    1. Testing Situation Bias: This bias arises from the conditions under which a test is administered. Factors like the environment, time of day, or even the presence of an observer can influence how participants perform. 

    2. Test Bias: This refers to a situation where a test unfairly advantages or disadvantages certain groups of people. For example, a standardized test that uses culturally specific language may disadvantage students from different backgrounds.

  51. Descriptive Statistics: These summarize or describe the main features of a dataset. They don’t draw conclusions beyond the data. Examples are mean, median, and mode.

  52. Inferential Statistics: These allow researchers to make predictions or generalizations about a population based on a sample. Examples: t-tests, ANOVA, chi-square tests, confidence intervals, p-values. 

  53. How do you calculate or determine what is the mean, median, and / or mode? 

    1. Mean: Add all the values together to get the total sum. Divide the total sum by the number of values in the dataset.

    2. Median: First, arrange the values in ascending order. If there is an odd number of values, the median is the middle value. If there is an even number of values, the median is the average of the two middle values.

    3. Mode: Identify the value that appears most frequently in the dataset.
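The three steps above, worked through on a small made-up dataset with Python's `statistics` module:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 7]

mean = sum(data) / len(data)      # 41 / 7, roughly 5.86
median = statistics.median(data)  # sorted: 3 4 5 6 7 8 8 -> middle value 6
mode = statistics.mode(data)      # 8 occurs twice, more than any other value

# With an even number of values the median averages the two middle ones
even_median = statistics.median([1, 2, 3, 4])  # (2 + 3) / 2 = 2.5
```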

  54. In regard to the mean, median and mode, which relates to “frequency,” or a “value,” or an “average”?

    1. Mean = average; Median = middle value; Mode = frequency (most common value). 

  55. Central Tendency Measures: Mean, Median, Mode

  56. Outliers: An outlier is a data point that is significantly higher or lower than most of the other values in a dataset.

  57. Bimodal Distributions: A bimodal distribution has two peaks or modes, meaning there are two values or ranges that occur most frequently.

  58. Normal Distributions: A normal distribution is a bell-shaped curve where the data is symmetrical, the mean = median = mode, and most values cluster around the center, with fewer at the extremes.


  59. Standard Normal Distributions: A standard normal distribution is a special type of normal distribution where the mean is 0 and the standard deviation is 1.

  60. Skewed Distributions: A skewed distribution is one where the data is not symmetrical. It stretches more on one side.


  1. Variability / Spread: This refers to how much the data points in a dataset differ from each other and from the mean. Common measures of variability include the range, variance, and standard deviation. 

  2. Alpha Value: The alpha value (often set at 0.05) is the threshold for statistical significance in hypothesis testing. It represents the probability of making a Type I error, which is rejecting the null hypothesis when it is actually true. 

  3. Effect Size: This is a measure of the strength of the relationship between two variables or the magnitude of the difference between groups. It helps to understand the practical significance of research findings, beyond just statistical significance. 

  4. Power: Statistical power is the probability that a test will correctly reject a false null hypothesis (i.e., detect an effect when there is one). It is influenced by sample size, effect size, and alpha level. 

  5. Confidence Interval: a range of values, derived from a dataset, that is likely to contain the true population parameter (like a mean or proportion) with a specified level of confidence, usually expressed as a percentage (e.g., 95% or 99%). It provides an estimate of uncertainty around a sample statistic. 

  6. Correlation Coefficients (e.g., -0.73) and what the number / sign means for these

    1. The strength of a correlation is determined by how close the coefficient is to 1 or -1. The direction of the correlation is determined by if it is positive (same direction) or negative (opposite direction) 
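As a sketch, the Pearson correlation coefficient can be computed directly from its definition; the data here are hypothetical, chosen so that the relationship is strongly negative:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient computed from its definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: as TV hours rise, exam scores fall
hours_tv = [1, 2, 3, 4, 5]
exam_score = [90, 85, 75, 70, 60]

r = pearson_r(hours_tv, exam_score)  # close to -1: strong, negative
```

The sign tells you the direction (negative here: the variables move in opposite directions); the distance from 0 tells you the strength.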

  7. Sampling Error: This refers to the difference between the sample statistic (like a sample mean) and the actual population parameter it estimates. Sampling error occurs because a sample is only a subset of the population, and it can lead to inaccuracies in estimates. 

  8. Standard Error: This is a measure of the variability of a sample statistic (like the mean) from sample to sample. It is calculated as the standard deviation of the sample divided by the square root of the sample size. A smaller standard error indicates that the sample mean is a more accurate estimate of the population mean. 
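The standard error formula above, applied to a small hypothetical sample, along with a rough 95% confidence interval (using the normal approximation z = 1.96; a small sample like this would strictly call for a t value):

```python
import math
import statistics

# Hypothetical sample of scores
scores = [12, 15, 14, 10, 13, 18, 16, 14]

sd = statistics.stdev(scores)     # sample standard deviation
se = sd / math.sqrt(len(scores))  # standard error of the mean

# Rough 95% confidence interval for the mean (normal approximation)
m = statistics.mean(scores)
ci = (m - 1.96 * se, m + 1.96 * se)
```

Note that the standard error shrinks as the sample size grows, which is why larger samples give tighter confidence intervals.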

  9. Sample Size: This refers to the number of observations or data points collected in a study. Larger sample sizes generally lead to more reliable estimates and smaller sampling errors. 

What are these used for?

  1. Independent Samples T-Test: This test is used to compare the means of two independent groups to determine if there is a statistically significant difference between them. For example, it could be used to compare test scores between two different classes. 
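The test-scores example can be sketched by computing the pooled-variance t statistic by hand on hypothetical data from two classes (in practice you would use a statistics package and compare t against a critical value or p-value):

```python
import math
import statistics

# Hypothetical test scores from two independent classes
class_a = [78, 85, 90, 72, 88]
class_b = [70, 75, 80, 68, 77]

m1, m2 = statistics.mean(class_a), statistics.mean(class_b)
v1, v2 = statistics.variance(class_a), statistics.variance(class_b)
n1, n2 = len(class_a), len(class_b)

# Pooled variance, then the equal-variance t statistic
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```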

  2. Repeated Measures Test: This test is used when the same participants are measured multiple times under different conditions. It helps to determine if there are significant differences in the means across those conditions.

  3. One-Way ANOVA: This test is used to compare the means of three or more independent groups to see if at least one group mean is different from the others. For instance, it could be used to compare the effectiveness of three different teaching methods. 

  4. Two-Way ANOVA: This test extends the one-way ANOVA by examining the effect of two independent variables on a dependent variable, as well as any interaction between the two. For example, it could analyze how both teaching method and student gender affect test scores.

  5. Chi Square: This test is used to examine the association between categorical variables. It helps to determine if the distribution of sample categorical data matches an expected distribution. For example, it could be used to see if there is a relationship between gender and preference for a particular product.
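The gender-by-product-preference example can be sketched by computing the chi-square statistic by hand on a hypothetical 2 × 2 table of counts:

```python
# Hypothetical 2 x 2 table: gender vs. product preference
observed = [
    [30, 20],  # male:   prefers A, prefers B
    [20, 30],  # female: prefers A, prefers B
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected
```

Here every expected count is 25, so the statistic works out to exactly 4.0; a larger value means a bigger gap between the observed and expected distributions.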


knowt logo

ISS Exam 3

  1. Know that the research design which allows researchers to talk about causality = experimental designs

 

Know the definition of and examples of the following:

  1. Null Hypothesis: states that there is no effect or no difference between groups or conditions in a study.

  2. Experimental Hypothesis: often referred to as the alternative hypothesis, is a statement that predicts a specific effect or relationship between variables in a study. It suggests that there is a difference or an effect due to the independent variable on the dependent variable. 

  3. A Variable

  4. Control variables: Control variables are factors that researchers keep constant or monitor in an experiment to ensure that any observed effects on the dependent variable are due to the independent variable alone.

  5. Dependent variables: A dependent variable is the factor that you measure in an experiment. It's called 'dependent' because its value depends on changes made to the independent variable. For example, in a study examining how different amounts of water affect plant growth, the growth of the plants (measured in height or number of leaves) would be the dependent variable. 

  6. Independent variables: An independent variable is a factor that is manipulated or changed in an experiment to observe its effect on a dependent variable. It's the variable that you think will influence the outcome. For example, if you're studying how different amounts of sunlight affect plant growth, the amount of sunlight would be the independent variable. 

  7. Demographics: Common demographic variables include age, gender, race, ethnicity, income level, education, marital status, and geographic location. 

  8. Control groups: The control group does not receive the experimental treatment, allowing researchers to see what happens in the absence of that treatment. This helps to isolate the effect of the independent variable on the dependent variable.

  9. Experimental group: the groups in a study that receive the treatment or intervention being tested. This is where researchers apply the independent variable to observe its effects on the dependent variable

  10. Independent groups / samples designs (Between subjects designs)

    1. A between-subjects design is an experimental setup where different participants are assigned to different groups or conditions, and each group experiences only one level of the independent variable. This means that each participant is only exposed to one condition, allowing researchers to compare the effects across different groups. 

  11. Repeated measures /  groups / samples designs (Within subjects designs)

    1. A within-subjects design is an experimental approach where the same participants are exposed to all levels of the independent variable. This means that each participant experiences every condition in the study, allowing researchers to compare their performance across different conditions directly. 

  12. Multivariate Designs

    1. A multivariate design is an experimental approach that involves the simultaneous examination of multiple dependent variables. This type of design allows researchers to understand how different independent variables may affect several outcomes at once, rather than focusing on just one dependent variable

    2. Example: Let's consider an example of a multivariate design in a study examining the effects of a new educational program on students. In this study, researchers might measure multiple dependent variables such as students' academic performance (test scores), motivation levels (self-reported surveys), and social skills (peer evaluations). 

  13. Multi-method Analysis

    1. A multi-method analysis is an approach in research that combines different methods or techniques to collect and analyze data. This can include qualitative methods (like interviews or focus groups) and quantitative methods (like surveys or experiments) within the same study. 

  14. A double-blind design: an experimental setup where neither the participants nor the researchers know which participants are in the experimental group and which are in the control group.

  15. A  Matched-Groups design: a matched-groups design involves pairing participants based on certain characteristics relevant to the study, and then assigning each member of the pair to different conditions. This design aims to control for individual differences that could affect the outcome.

  16. Counterbalancing: a technique used in experimental research to control for order effects, particularly in repeated measures designs. It involves varying the order in which participants experience different conditions to ensure that no single condition is consistently favored or disadvantaged due to its position in the sequence (helps increase internal validity). 

  17. Fatigue from repeated testing: a specific type of fatigue effect that occurs when participants are required to complete the same or similar tasks multiple times in a study. As they go through these tasks repeatedly, they may become tired, bored, or disengaged, which can lead to a decline in their performance on subsequent tests. 

  18. Order Effects: refer to the potential influence that the sequence in which participants experience different conditions can have on the results of an experiment. This can include various types of effects, such as practice effects, fatigue effects, carryover effects, and sensitization effects. 

  19. Practice Effects: specifically occur when participants improve their performance on a task simply because they have repeated it, rather than due to the effects of the independent variable being tested. 

  20. Placebos: substances or treatments that have no therapeutic effect but are used in research to control for the psychological effects of receiving treatment. In clinical trials, a placebo is often given to a control group to compare against the experimental group receiving the actual treatment. 

  21. Random Assignment: a technique used in experimental research to assign participants to different groups or conditions in a way that is entirely random

  22. Population: A population refers to the entire group of individuals or instances that a researcher is interested in studying. 

  23. Sample: a subset of the population that is selected for the actual study. 

  24. Random Sample (and how would you develop one?)

    1. Simple Random Sampling: Every member of the population has an equal chance of being selected. Think of it like drawing names from a hat.

  25. Stratified Sampling

    1. The population is divided into subgroups (strata) that share similar characteristics, and random samples are taken from each stratum. For example, if you want to sample college students, you might stratify by year (freshman, sophomore, etc.).

  26. Snowball Sampling

    1. Existing study subjects recruit future subjects from among their acquaintances. This is often used in hard-to-reach populations. (Non probability sampling) 

  27. Purposive Sampling: also known as judgmental sampling, is a non-probability sampling technique where researchers select participants based on specific characteristics or criteria that are relevant to the research question. (can be used in qualitative research) 

  28. Saturation: refers to the point in qualitative research when no new information or themes are emerging from the data being collected. 

  29. Quantitative Measures and Designs: involve the collection and analysis of numerical data to understand patterns, relationships, or effects. This approach often uses structured methods such as surveys, experiments, or statistical analyses to test hypotheses and draw conclusions. 

  30. Qualitative Measures and Designs: focus on understanding the meaning and experiences of individuals through non-numerical data. This approach often employs methods such as interviews, focus groups, or observations to gather rich, descriptive information. 

  31. Correlational designs: examine the relationships between two or more variables without manipulating them. Researchers look for patterns or associations to determine whether changes in one variable are related to changes in another. 

  32. Ethnographic Studies: a qualitative research method that involves the in-depth exploration of a particular culture, community, or social group. Researchers immerse themselves in the environment of the participants, often through participant observation, to gain a deep understanding of their behaviors, beliefs, and social interactions. 

  33. Exploratory Studies: conducted to investigate a research question or topic that is not well understood. These studies aim to gather preliminary information and insights, often using qualitative methods, to help define problems, generate hypotheses, or inform future research. 

  34. Grounded Theory: qualitative research methodology that aims to develop a theory based on data collected from participants. Researchers gather data through interviews, observations, or other means, and then analyze it systematically to identify patterns and concepts. The goal is to generate a theory that explains the phenomenon being studied, grounded in the actual experiences of the participants

  35. Phenomenological Studies: focus on understanding the lived experiences of individuals regarding a specific phenomenon. Researchers aim to capture the essence of these experiences by conducting in-depth interviews and analyzing the data to identify common themes and meanings. The goal is to gain insights into how individuals perceive and make sense of their experiences.

  36. “Immersion” into a setting: refers to the process by which researchers deeply engage with a particular environment, community, or social group to gain a comprehensive understanding of the context and the experiences of the individuals within it. This often involves spending extended periods of time in the setting, observing behaviors, participating in activities, and interacting with participants to gather rich, qualitative data.

  37. Focus Groups: qualitative research method that involves gathering a small group of people (typically 6-12) to discuss a specific topic or set of topics guided by a moderator. 

  38. Interviews

    1. Structured Interviews: These interviews follow a predetermined set of questions that are asked in a specific order. This format is often used in quantitative research to ensure consistency across all interviews, making it easier to compare responses. For example, in a job interview, a structured format might involve asking all candidates the same questions about their experience and skills. This helps reduce bias and allows for easier data analysis.

    2. Unstructured Interviews: In contrast, unstructured interviews are more flexible and conversational. While they may start with a few guiding questions, the interviewer can adapt the conversation based on the responses of the interviewee. This format is often used in qualitative research to explore deeper insights and understand the interviewee's perspective. For instance, in a qualitative study about people's experiences with a health condition, the interviewer might ask open-ended questions and follow up based on the interviewee's answers.

  39. Single-case designs: research methods that focus on the detailed examination of a single individual, group, or situation over time. This approach allows researchers to observe changes and effects in a specific case, often using repeated measures to assess the impact of an intervention or treatment. 

  40. Surveys: used to gather information about opinions, attitudes, perceptions, and behaviors as well as to test hypotheses.

  41. Quasi-experimental designs: research methods that aim to evaluate the effects of an intervention or treatment without random assignment to groups. In these designs, researchers compare groups that are already formed or that have been exposed to different conditions, which can introduce potential confounding variables.

  42. How many independent variables are in a 4 X 2 X 3 design?

    1. The total number of independent variables in a factorial design is determined by how many factors (or dimensions) you have. In this instance you have three independent variables: one with 4 levels, one with 2 levels, and one with 3 levels.

  43. How many levels or conditions or ways of manipulating various independent variables are in a 4 X 2 X 3 design?

    1. The total number of conditions is calculated by multiplying the number of levels of each independent variable together: 4 X 2 X 3 = 24 conditions.
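The counting above can be sketched in a few lines of Python; the `levels` list below simply mirrors the 4 X 2 X 3 design:

```python
# Counting factors and conditions in a 4 x 2 x 3 factorial design.
import math
from itertools import product

levels = [4, 2, 3]  # one entry per independent variable

num_independent_variables = len(levels)  # 3 factors
num_conditions = math.prod(levels)       # 4 * 2 * 3 = 24 conditions

# Enumerating every combination of levels confirms the count:
all_conditions = list(product(*(range(n) for n in levels)))
print(num_independent_variables, num_conditions, len(all_conditions))  # 3 24 24
```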

  44. Which are threats to Internal Validity and / or External Validity and what does each mean?

    1. Internal Validity: This refers to the extent to which a study can establish a cause-and-effect relationship between variables. High internal validity means that the changes in the dependent variable are directly caused by the manipulation of the independent variable, rather than by other factors. Threats: confounding variables, selection bias, maturation, history, and instrumentation. 

    2. External Validity: This is about the generalizability of the study's findings to other settings, populations, or times. High external validity means that the results of the study can be applied beyond the specific conditions of the experiment. Threats: sample characteristics, setting, time, interaction events (how treatment interacts with participant characteristics) 

  45. Attrition: refers to the loss of participants from a study over time, which can occur for various reasons, such as participants dropping out, losing interest, or being unable to continue for personal or logistical reasons.

  46. History: Events occurring outside the study that may impact participants' responses can threaten internal validity.

  47. Instrumentation: Changes in measurement tools or procedures can affect the consistency of data collection.

  48. Maturation: Changes in participants over time (e.g., aging, learning) can influence outcomes, especially in longitudinal studies.

  49. Selection Bias: If participants are not randomly assigned to groups, pre-existing differences may affect the results.

  50. Testing

    1. Testing Situation Bias: This bias arises from the conditions under which a test is administered. Factors like the environment, time of day, or even the presence of an observer can influence how participants perform. 

    2. Test Bias: This refers to a situation where a test unfairly advantages or disadvantages certain groups of people. For example, a standardized test that uses culturally specific language may disadvantage students from different backgrounds.

  51. Descriptive Statistics: These summarize or describe the main features of a dataset. They don’t draw conclusions beyond the data. Examples are mean, median, and mode.

  52. Inferential Statistics: These allow researchers to make predictions or generalizations about a population based on a sample. (T-tests, ANOVA, Chi-square tests, Confidence intervals, p-values) 

  53. How do you calculate or determine what is the mean, median, and / or mode? 

    1. Mean: Add all the values together to get the total sum. Divide the total sum by the number of values in the dataset.

    2. Median: First, arrange the values in ascending order. If there is an odd number of values, the median is the middle value. If there is an even number of values, the median is the average of the two middle values.

    3. Mode: Identify the value that appears most frequently in the dataset.
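The three procedures above can be checked with Python's standard `statistics` module; the dataset here is made up for illustration:

```python
# Mean, median, and mode of a small hypothetical dataset.
import statistics

data = [2, 4, 4, 6, 9]

mean = sum(data) / len(data)      # sum the values, divide by the count -> 5.0
median = statistics.median(data)  # middle value after sorting -> 4
mode = statistics.mode(data)      # most frequent value -> 4
print(mean, median, mode)  # 5.0 4 4
```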

  54. In regard to the mean, median and mode, which relates to “frequency,” or a “value,” or an “average”?

    1. Mean = average, Median = middle value, Mode = most frequent value 

  55. Central Tendency Measures: Mean, Median, Mode

  56. Outliers: An outlier is a data point that is significantly higher or lower than most of the other values in a dataset.

  57. Bimodal Distributions: A bimodal distribution has two peaks, or modes, meaning there are two values or ranges that occur most frequently.

  58. Normal Distributions: A normal distribution is a bell-shaped curve where the data is symmetrical, the mean = median = mode, and most values cluster around the center, with fewer at the extremes.

  59. Standard Normal Distributions: A standard normal distribution is a special type of normal distribution where the mean is 0 and the standard deviation is 1.
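Any normal variable can be mapped onto the standard normal scale with a z-score, z = (x - mean) / sd. A minimal sketch (the score, mean, and standard deviation below are hypothetical):

```python
# Converting a raw score to a z-score (standard normal scale).

def z_score(x, mean, sd):
    """Distance of x from the mean, measured in standard-deviation units."""
    return (x - mean) / sd

# A score of 115 on a test with mean 100 and standard deviation 15:
z = z_score(115, 100, 15)
print(z)  # 1.0
```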

  60. Skewed Distributions: A skewed distribution is one where the data is not symmetrical; it has a longer tail on one side (a right tail is positively skewed, a left tail is negatively skewed).

  1. Variability / Spread: This refers to how much the data points in a dataset differ from each other and from the mean. Common measures of variability include the range, variance, and standard deviation. 

  2. Alpha Value: The alpha value (often set at 0.05) is the threshold for statistical significance in hypothesis testing. It represents the probability of making a Type I error, which is rejecting the null hypothesis when it is actually true. 

  3. Effect Size: This is a measure of the strength of the relationship between two variables or the magnitude of the difference between groups. It helps to understand the practical significance of research findings, beyond just statistical significance. 

  4. Power: Statistical power is the probability that a test will correctly reject a false null hypothesis (i.e., detect an effect when there is one). It is influenced by sample size, effect size, and alpha level. 

  5. Confidence Interval: a range of values, derived from a dataset, that is likely to contain the true population parameter (like a mean or proportion) with a specified level of confidence, usually expressed as a percentage (e.g., 95% or 99%). It provides an estimate of uncertainty around a sample statistic. 
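A 95% confidence interval for a mean is commonly computed as mean ± 1.96 × standard error. A minimal sketch with a made-up dataset:

```python
# z-based 95% confidence interval for a sample mean (hypothetical data).
import math
import statistics

data = [10, 12, 9, 11, 13, 10, 12, 11]
n = len(data)
mean = statistics.mean(data)   # 11.0
sd = statistics.stdev(data)    # sample standard deviation
se = sd / math.sqrt(n)         # standard error of the mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(round(lower, 2), round(upper, 2))
```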

  6. Correlation Coefficients (e.g., -0.73) and what the number / sign means for these

    1. The strength of a correlation is determined by how close the coefficient is to 1 or -1 (so -0.73 is a fairly strong correlation). The sign gives the direction: positive means the variables move in the same direction, negative means they move in opposite directions. 
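As a concrete (hypothetical) example, the Pearson coefficient can be computed directly from its definition; the data below are chosen so r comes out strongly negative:

```python
# Pearson correlation coefficient computed from its definition.
import math

x = [1, 2, 3, 4, 5]
y = [10, 8, 9, 5, 3]  # y tends to fall as x rises -> negative r

mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
print(round(r, 2))  # -0.92: strong, negative
```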

  7. Sampling Error: This refers to the difference between the sample statistic (like a sample mean) and the actual population parameter it estimates. Sampling error occurs because a sample is only a subset of the population, and it can lead to inaccuracies in estimates. 

  8. Standard Error: This is a measure of the variability of a sample statistic (like the mean) from sample to sample. It is calculated as the standard deviation of the sample divided by the square root of the sample size. A smaller standard error indicates that the sample mean is a more accurate estimate of the population mean. 

  9. Sample Size: This refers to the number of observations or data points collected in a study. Larger sample sizes generally lead to more reliable estimates and smaller sampling errors. 
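The formula SE = sd / sqrt(n) ties the last two definitions together: for a fixed standard deviation, a larger sample shrinks the standard error. A quick sketch (the sd value is made up):

```python
# Standard error shrinks as sample size grows: SE = sd / sqrt(n).
import math

sd = 10.0  # hypothetical sample standard deviation
for n in (25, 100, 400):
    se = sd / math.sqrt(n)
    print(n, se)  # SE halves each time n quadruples: 2.0, 1.0, 0.5
```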

What are these used for?

  1. Independent Samples T-Test: This test is used to compare the means of two independent groups to determine if there is a statistically significant difference between them. For example, it could be used to compare test scores between two different classes. 
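The equal-variance form of this test can be sketched by hand with the standard library; the two classes' scores below are hypothetical:

```python
# Independent-samples t statistic (pooled, equal-variance form) by hand.
import math
import statistics

class_a = [80, 85, 78, 90, 87]
class_b = [75, 70, 72, 68, 74]

na, nb = len(class_a), len(class_b)
ma, mb = statistics.mean(class_a), statistics.mean(class_b)
va, vb = statistics.variance(class_a), statistics.variance(class_b)

# Pool the two sample variances, then standardize the mean difference:
sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
print(round(t, 2))  # 4.77: a large difference relative to its standard error
```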

  2. Repeated Measures Test: This test is used when the same participants are measured multiple times under different conditions. It helps to determine if there are significant differences in the means across those conditions.

  3. One-Way ANOVA: This test is used to compare the means of three or more independent groups to see if at least one group mean is different from the others. For instance, it could be used to compare the effectiveness of three different teaching methods. 

  4. Two-Way ANOVA: This test extends the one-way ANOVA by examining the effect of two independent variables on a dependent variable, as well as any interaction between the two. For example, it could analyze how both teaching method and student gender affect test scores.

  5. Chi Square: This test is used to examine the association between categorical variables. It helps to determine if the distribution of sample categorical data matches an expected distribution. For example, it could be used to see if there is a relationship between gender and preference for a particular product.
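As a concrete (hypothetical) example, the chi-square statistic for a 2 x 2 table of counts is the sum of (observed - expected)^2 / expected over all cells:

```python
# Chi-square statistic for a 2 x 2 contingency table (hypothetical counts:
# two groups crossed with preference for product A vs product B).
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (o - expected) ** 2 / expected
print(round(chi2, 2))  # 16.67 -- a large value suggests an association
```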