PSYC100 Lecture Review: The Scientific Method and Research Design
Pre-Class Retrieval Practice
Students compare and contrast the topic pairs below before class, from memory and without notes, and then check their answers.
Key topics for this practice include:
Inattentional blindness (failing to see something obvious because attention is focused elsewhere) vs. Change blindness (failing to notice that something in a scene has changed)
Structuralism (identifying the basic components that make up the mind) vs. Functionalism/Gestalt (focusing on the purpose mental processes serve, or on perceiving experience as organized wholes rather than isolated parts)
Course Objectives for Today
Discuss the scientific method (a way to investigate and understand things).
Start identifying the important things to think about for all research studies.
Psychology as a Hub Science
Psychology is considered a hub science: a central, interconnected discipline whose findings connect to and inform many other subjects and fields.
The Scientific Method
The scientific method is a repeating cycle that involves developing a theory, making a hypothesis, doing research, and interpreting data. Its goal is to strengthen, change, or even discard existing theories.
Theory: An overall explanation for observations, like why something happens.
Hypothesis: A specific, testable prediction derived from a theory.
Research: An experiment or study designed to test the hypothesis and collect information (data).
Data Outcomes:
Support the theory: If the data matches the prediction, the theory becomes stronger.
Don't support the theory: If the data don't match the prediction, the current theory may be discarded or revised and then tested again.
Revision: Based on what the data shows, the theory can be made more complete or specific.
This process keeps going: revised theories lead to new predictions and more research.
Key Considerations for All Types of Research Designs
Researchers need to carefully think about several important things when they design a study:
Ethical Considerations: Has the study's impact on people involved and on society been fully thought through and handled responsibly?
Sampling Strategy: How were the people or subjects chosen from the larger group the researcher is interested in?
Operational Definitions and Measurement: How exactly have the concepts (variables) in the study been defined so they can be measured?
Reliability and Accuracy of Measurements: Are the measurements consistent (always getting similar results) and correct (reflecting the true situation)?
Construct Validity: Do the measurements truly capture the real-world idea or concept they are supposed to measure?
Internal and External Validity: How sure are we that the study's results are truly because of what the researcher changed (internal validity)? And can these results be applied to other situations or people (external validity)?
Sampling Strategy
Population vs. Sample:
Population: The entire group of people or things the researcher wants to learn about (e.g., all students currently at UD).
Sample: A smaller part of that population that actually takes part in the study.
Types of Sampling:
Random Sampling: Every person in the population has an equal chance of being picked for the sample. This usually makes the sample a good representation of the whole group.
Convenience Sampling: The sample includes people who are easy for the researcher to find and access. This is often simpler and cheaper but might not give a sample that truly represents the whole population.
Sampling Examples:
Example 1: A researcher studies how parents in Delaware interact with their 4-year-old children by asking 50 volunteers from university childcare centers. This is convenience sampling because the participants were easy to reach and volunteered themselves.
Example 2: A researcher studies the habits of UD students by asking 50 students at the Morris Library. This is also convenience sampling because the researcher used an easily accessible group of students.
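A minimal sketch of the difference between the two strategies, assuming a made-up population of student ID numbers and a sample size of 50 (this illustration in Python is not from the lecture):

    import random

    # Hypothetical population: ID numbers standing in for every student at UD.
    population = list(range(1, 20001))

    # Random sampling: every member has an equal chance of being chosen,
    # so the sample tends to represent the whole population.
    random_sample = random.sample(population, k=50)

    # Convenience sampling: take whoever is easiest to reach; here simply
    # the first 50 IDs, standing in for students who happen to be nearby.
    convenience_sample = population[:50]

In the random version every ID is equally likely to be drawn; the convenience version simply grabs whoever is closest at hand, which is why it may not represent the full population.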
Operational Definitions and Measurement
Purpose: To clearly state how abstract ideas (variables) will be measured in a study, turning them into concrete, observable terms.
Process:
Identify the variable(s) (the things you're interested in) in your research question.
Provide an operational definition for each variable, explaining the exact steps or tools you will use to measure it.
Example: Research Question: "Does regular exercise make college students happier than those who don't exercise regularly?"
Operational Definition of Happiness: Researchers measure happiness by counting how many times participants smile during the hour after they exercise.
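As an illustrative sketch only (the function name and the timestamp input are hypothetical, not part of the lecture), an operational definition can be thought of as a concrete measurement rule applied to observations:

    def operational_happiness(smile_times_minutes):
        # Operationalized "happiness": the number of smiles observed
        # during the 60 minutes after a participant exercises.
        return sum(1 for t in smile_times_minutes if 0 <= t <= 60)

    # A participant who smiled 5, 12, 30, and 75 minutes after exercising:
    print(operational_happiness([5, 12, 30, 75]))  # 3 smiles fall inside the window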
Reliability and Accuracy of Measurements
Reliability: How consistent a measurement is. A reliable measure gives similar results each time it's used under the same conditions.
Accuracy: How close a measurement is to the true value. An accurate measure correctly reflects what it's trying to measure.
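A minimal simulation, assuming an invented true value of 100 and two imaginary instruments (a sketch for illustration, not lecture material), showing that the two properties are independent:

    import random
    import statistics

    true_value = 100.0  # the real quantity being measured (assumed)

    # Reliable but inaccurate: readings cluster tightly, but around the wrong value.
    reliable_not_accurate = [90.0 + random.gauss(0, 0.5) for _ in range(10)]

    # Accurate but unreliable: readings center on the true value, but vary widely.
    accurate_not_reliable = [true_value + random.gauss(0, 8.0) for _ in range(10)]

    for name, readings in [("reliable, not accurate", reliable_not_accurate),
                           ("accurate, not reliable", accurate_not_reliable)]:
        print(name,
              "mean =", round(statistics.mean(readings), 1),     # closeness to 100.0 reflects accuracy
              "spread =", round(statistics.stdev(readings), 1))  # small spread reflects reliability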
Validity of Research
Construct Validity:
Definition: How well the variables in a study genuinely measure the specific ideas or concepts they are supposed to measure.
Importance: It's vital to make sure the research is actually studying what it claims to. If a study lacks construct validity, its conclusions might not mean anything.
Illustrative Example (Historical Test): Imagine an old general knowledge test from the World War I era that claims to measure overall intelligence. If its questions depend on knowledge specific to that time and culture (e.g., "The main factory of the Ford automobile is in [multiple choices]", "Jess Willard is a [multiple choices]", "The Union Commander at Mobile Bay was [multiple choices]"), it would not measure universal intelligence well; it would unfairly favor people who happen to know facts from that era. This shows that how you define and measure a variable directly affects whether it truly reflects the concept you are trying to study.
Internal Validity:
This refers to how sure we are that the changes we see in an experiment are actually caused by what we changed (the independent variable), and not by other outside factors. It helps us confirm cause-and-effect within the study.
External Validity:
This refers to how well the findings of a study can be applied or generalized to other real-world situations, different groups of people, and other times. It tells us if the results are useful beyond the specific study itself.