
ISS Exam 2

Note that there is one item that includes a message from me to answer the question as False – do that

  1. Know what a correlation coefficient means (e.g., direction and strength) (e.g., -.72 means what?). Also, know what a positive, negative and no correlation looks like when graphed.

The strength of a correlation is determined by how close the coefficient is to 1 or -1 (values near 0 indicate little or no relationship). The direction is given by the sign: positive means the variables move in the same direction, negative means they move in opposite directions. For example, r = -.72 indicates a fairly strong negative correlation. When graphed, a positive correlation slopes upward, a negative correlation slopes downward, and no correlation appears as a shapeless scatter of points.
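
To make the idea concrete, here is a small Python sketch (the data are invented for illustration) that computes Pearson's r for a clearly positive and a clearly negative relationship:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

hours_studied = [1, 2, 3, 4, 5]
exam_score    = [55, 65, 70, 80, 90]  # rises with hours -> positive r
absences      = [9, 7, 6, 4, 2]       # falls with hours -> negative r

print(round(pearson_r(hours_studied, exam_score), 2))  # near +1: strong positive
print(round(pearson_r(hours_studied, absences), 2))    # near -1: strong negative
```

The same arithmetic underlies a value like -.72: the sign tells you the direction, and the distance from 0 tells you the strength.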

  2. Be able to identify the difference between a “true score” and “measurement error.” This relates to sampling error, whose typical size is quantified by the standard error.

True Score: This refers to the actual value or the real ability of a person or the true level of a variable being measured. For example, if you're measuring someone's intelligence, the true score would represent their actual intelligence level, free from any influences that might distort the measurement.

Measurement Error: This is the difference between the observed score (what you actually measure) and the true score. It can arise from various factors, such as the measurement tool itself, the environment, or even the person being measured.


  3. Know that the prerequisites for a successful observation of behavior are to have defined what you are looking for and to have properly selected a target behavior

  4. Know that participants often behave differently when they feel they are being observed

  5. Know that long surveys lead to dropout and boredom, which negatively affect the reliability of the survey

  6. Determine which type of survey is the cheapest to administer (which delivery method – mail? Electronic?)

Electronic surveys are typically the cheapest to administer because they eliminate printing and mailing expenses.

Be able to define and identify examples of all of the items below:

  7. Independent variables

An independent variable is a factor that is manipulated or changed in an experiment to observe its effect on a dependent variable. It's the variable that you think will influence the outcome. For example, if you're studying how different amounts of sunlight affect plant growth, the amount of sunlight would be the independent variable. 

  8. Dependent variables

A dependent variable is the factor that you measure in an experiment. It's called 'dependent' because its value depends on changes made to the independent variable. For example, in a study examining how different amounts of water affect plant growth, the growth of the plants (measured in height or number of leaves) would be the dependent variable. 

  9. Nominal, Ordinal, Ratio and Interval variables (also, which has a “true 0”? which is a forced choice / yes no question?, which involves ranking? Which involves categories?)

Nominal: allows us to classify individuals into different groups based on a given characteristic. For example, numbers might be assigned to different hair colors; the number is used only for identification, not for ranking. There is no absolute zero.

Ordinal: permits the assignment of a higher number to an individual who has a greater degree of a measured characteristic (ranking). For example, finishing places in a race (1st, 2nd, 3rd) are ordinal.

Interval: measured in an order with equal distances between points on the scale, but with no true zero. For example, a temperature scale where the difference between degrees is consistent, but zero doesn't mean 'no temperature'.

Ratio: Has all the properties of interval measurement, but it also has a true zero point. For example, a measurement where zero means the absence of the quantity being measured, like weight or height. 


  10. Surveys: used to gather information about opinions, attitudes, perceptions, and behaviors as well as to test hypotheses.

  11. Naturalistic Observations: features very direct data collection using visual observation, field notes, and recordings in natural settings. Jane Goodall’s classic studies of chimpanzees are a good example of this type of research.

  12. Participant Observations: refers to the immersion of the researcher into the phenomenon under study. For example, scientists interested in homelessness have lived as homeless persons for a period of time to gain authentic insight into the experience. It can be either disguised, where the researcher blends in with the setting, or undisguised, where the researcher is obviously an outsider looking into the situation.

  13. Functional Magnetic Resonance Imaging (fMRI): a neuroimaging technique that measures and maps brain activity by detecting changes in blood flow.

  14. Primary Data: data collected by the researchers themselves.

  15. Primary Data Collection: researchers collect the data needed to answer their research question.

  16. Secondary Data: data collected by another researcher for another purpose that is then used in new research.

  17. Secondary Data Collection: existing sources of data are used to answer our research question.

  18. Different kinds of biases (e.g., social desirability, confirmation bias, testing situation bias, test bias, content bias, researcher bias)

Social Desirability Bias: This occurs when respondents provide answers they believe are more socially acceptable rather than their true feelings or behaviors. For instance, in a survey about health habits, people might underreport smoking or overeating because they think those behaviors are frowned upon.

Confirmation Bias: This is the tendency to search for, interpret, and remember information in a way that confirms one’s preexisting beliefs or hypotheses. For example, if a researcher believes that a certain teaching method is effective, they might focus on data that supports this view while ignoring data that contradicts it.

Testing Situation Bias: This bias arises from the conditions under which a test is administered. Factors like the environment, time of day, or even the presence of an observer can influence how participants perform. For example, a noisy or uncomfortable testing room may depress scores regardless of ability.

Test Bias: This refers to a situation where a test unfairly advantages or disadvantages certain groups of people. For example, a standardized test that uses culturally specific language may disadvantage students from different backgrounds.

Content Bias: This occurs when the content of a test or survey does not accurately reflect the construct it is intended to measure. For instance, if a math test includes questions that require knowledge of a specific cultural context, it may not fairly assess the math skills of all students.

Researcher Bias: This happens when a researcher’s expectations or preferences influence the outcome of a study. For example, if a researcher has a strong belief in a particular theory, they might unintentionally design their study or interpret results in a way that supports that theory.



  19. Different types of sampling methods, including Non-probability and probability sampling AND the sub-types under each – also generally know the benefits and issues of each type

Probability sampling: a method in which every individual in the population has a known, nonzero chance of being selected (equal only in simple random sampling). This approach allows researchers to generalize their findings to the larger population.

  1. Simple Random Sampling: Every member of the population has an equal chance of being selected. Think of it like drawing names from a hat.

  2. Stratified Sampling: The population is divided into subgroups (strata) that share similar characteristics, and random samples are taken from each stratum. For example, if you want to sample college students, you might stratify by year (freshman, sophomore, etc.).

  3. Cluster Sampling: The population is divided into clusters (often geographically), and entire clusters are randomly selected. This is useful when the population is spread out over a large area.

Non-probability sampling: does not give every individual a known chance of being selected. This method is often easier and more cost-effective but can lead to bias.

  1. Convenience Sampling: Samples are taken from a group that is easy to reach. For example, surveying people in a mall.

  2. Judgmental or Purposive Sampling: The researcher uses their judgment to select participants who are deemed to be most informative.

  3. Snowball Sampling: Existing study subjects recruit future subjects from among their acquaintances. This is often used in hard-to-reach populations. 
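
The contrast between simple random and stratified sampling can be sketched with Python's standard library (the population of students and the 10% sampling fraction are hypothetical):

```python
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

# Hypothetical population of 100 students, each tagged with a class year.
population = [
    {"id": i, "year": random.choice(["freshman", "sophomore", "junior", "senior"])}
    for i in range(100)
]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=10)

# Stratified sampling: divide the population into strata by year,
# then randomly sample within each stratum.
strata = {}
for person in population:
    strata.setdefault(person["year"], []).append(person)

stratified_sample = []
for year, members in strata.items():
    k = max(1, round(len(members) * 0.10))  # take roughly 10% of each stratum
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample))                            # 10
print(sorted({p["year"] for p in stratified_sample}))  # every year represented
```

Note the trade-off the code makes visible: simple random sampling may by chance miss a small stratum entirely, whereas stratified sampling guarantees each subgroup appears in the sample.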

20) Internal Validity, External Validity, and Ecological Validity

Internal Validity: This refers to the extent to which a study can establish a cause-and-effect relationship between variables. High internal validity means that the changes in the dependent variable are directly caused by the manipulation of the independent variable, rather than by other factors. For example, if you conduct an experiment to test a new teaching method, high internal validity would mean that any observed improvements in student performance are due to that method and not other variables like prior knowledge or motivation.

External Validity: This is about the generalizability of the study's findings to other settings, populations, or times. High external validity means that the results of the study can be applied beyond the specific conditions of the experiment. For instance, if a study on a new drug is conducted on a specific age group, external validity would consider whether the results can be applied to other age groups or different populations.

Ecological Validity: This is a subset of external validity that focuses specifically on how well the findings of a study can be generalized to real-world settings. It considers whether the study's conditions and tasks reflect real-life situations. For example, if a psychological study is conducted in a lab setting, it may have lower ecological validity compared to a study conducted in a natural environment where participants behave more like they would in everyday life.

21) Reactivity: the phenomenon where individuals alter their behavior or responses when they know they are being observed or measured.  

22) A cross-sectional survey vs. a longitudinal survey

Cross-Sectional Survey: This type of survey collects data at a single point in time from a sample or population. It provides a snapshot of a particular characteristic or phenomenon. For example, if you wanted to know the prevalence of a certain health condition among adults in a city, you would survey a group of adults at one specific time. This method is often used to identify patterns or correlations but does not allow for conclusions about cause-and-effect relationships over time.

Longitudinal Survey: In contrast, a longitudinal survey collects data from the same subjects repeatedly over an extended period. This allows researchers to observe changes and trends over time. For instance, if you were studying the development of a particular skill in children, you might assess the same group of children at multiple points in their development. This method is valuable for understanding how variables change and can help establish causal relationships.

23) Structured vs. Unstructured Interviews

Structured Interviews: These interviews follow a predetermined set of questions that are asked in a specific order. This format is often used in quantitative research to ensure consistency across all interviews, making it easier to compare responses. For example, in a job interview, a structured format might involve asking all candidates the same questions about their experience and skills. This helps reduce bias and allows for easier data analysis.

Unstructured Interviews: In contrast, unstructured interviews are more flexible and conversational. While they may start with a few guiding questions, the interviewer can adapt the conversation based on the responses of the interviewee. This format is often used in qualitative research to explore deeper insights and understand the interviewee's perspective. For instance, in a qualitative study about people's experiences with a health condition, the interviewer might ask open-ended questions and follow up based on the interviewee's answers.

24) Focus Groups: qualitative research method that involves gathering a small group of people (typically 6-12) to discuss a specific topic or set of topics guided by a moderator. 

25) Demographic Variables: characteristics of a population that are often used in research to describe and identify respondents (age, gender, race & ethnicity, education level)

26) Likert-type items on surveys: a common format used in surveys to measure attitudes, opinions, or perceptions. They typically present a statement, and respondents indicate their level of agreement or disagreement on a scale. For example, a statement might be "I enjoy studying research methods," and the response options could range from "1 - Strongly Disagree" to "5 - Strongly Agree." 
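
Likert items are usually combined into a scale score, with negatively worded items reverse-coded first. A minimal sketch (the item wordings and responses below are invented):

```python
# Hypothetical responses on a 5-point scale
# (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = {
    "I enjoy studying research methods": 4,
    "I look forward to methods class": 5,
    "Research methods bores me": 2,  # negatively worded -> must be reverse-scored
}
reverse_items = {"Research methods bores me"}

def scale_score(items, reversed_keys, scale_max=5):
    """Average the item scores after reverse-coding negatively worded items."""
    total = 0
    for text, value in items.items():
        if text in reversed_keys:
            value = (scale_max + 1) - value  # a 2 on a 5-point scale becomes a 4
        total += value
    return total / len(items)

print(scale_score(responses, reverse_items))  # (4 + 5 + 4) / 3
```

Reverse-coding keeps all items pointing in the same direction, so a higher average consistently means a more favorable attitude.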

27) Semantic-differential survey items: a type of rating scale used to measure respondents' attitudes or perceptions about a particular concept, object, or event.

28) Reliability: This refers to the consistency and stability of a measure. In other words, if you were to administer the same survey or test multiple times under the same conditions, reliability indicates that you would get similar results each time.

29) Validity: This refers to the extent to which a test or survey measures what it is intended to measure. Validity ensures that the conclusions drawn from the data are accurate and meaningful. 

30) Types of Reliability and Validity (construct validity, test-retest reliability, criterion validity, content validity)

Construct validity: This evaluates whether the test truly measures the theoretical construct it claims to measure.

Test-retest reliability: This measures the stability of a test over time by comparing scores from the same individuals at two different points in time.

Criterion-related validity: This examines how well one measure predicts an outcome based on another measure (e.g., how well a test predicts future performance).

Content validity: This assesses whether the test covers the entire content area it is supposed to measure.

31) Constructs generally – what is a construct?

A construct is an abstract concept or variable that is not directly observable but is used in research to represent a specific phenomenon or idea. For example, self-esteem is a construct.