• Intuition
Relying on our gut and emotions to make decisions; can be wrong because it favors emotion over logic.
• Authority
Accepting new ideas because an authority figure states they're true; authority figures should be questioned to evaluate whether they can be trusted.
• Rationalism
Using logic and reasoning to acquire new knowledge; conclusion may not be valid if premises are wrong or error in logic.
• Empiricism
Acquiring knowledge through observation and experience; we are limited in what we can experience and prior experience and senses can deceive us.
• Scientific Method
Process of systematically collecting and evaluating evidence to test ideas and answer questions; most likely to produce valid knowledge, but not always feasible and cannot answer all questions (only empirical ones).
• Pseudoscience
Activities and beliefs claimed to be scientific by their proponents, but lack features of science.
• Falsifiable
There is an observation that would, if it were made, count as evidence against the claim.
• Goals of science
Describe, predict, and explain.
• Basic research
Conducted to get a greater understanding of human behavior, not addressing a particular problem.
• Applied research
Has a more immediate goal and looks for solutions to problems.
• Availability heuristic
Judging how likely something is based on how easily examples come to mind (e.g., riding out a hurricane).
• Representativeness heuristic
Judging something based on how much it seems like a typical example (e.g., “healthy” food).
• Better-than-average effect
Believing we are above average compared to other people on most traits or abilities.
• Overconfidence phenomenon
Being too sure that our judgments or predictions are correct.
• Hindsight bias
Thinking we “knew it all along” after something has already happened.
• Confirmation bias
Paying attention to information that supports our beliefs and ignoring what doesn’t.
• Focusing effect
Giving too much importance to one factor while ignoring others.
• What you see is all there is phenomenon
Making decisions based only on the information we have, without considering what we might be missing.
• Skepticism
Pausing to consider other possibilities and looking for real evidence before believing something, especially when it matters.
Chapter 2
• Steps in the research process
Formulate hypothesis → Design experiment → Collect the data → Analyze the data and draw conclusions → Report findings.
• What makes a source scholarly?
Written by experts, based on research or evidence, and usually reviewed by other scholars before publication.
• Types of scholarly sources
Journal articles (empirical papers, review articles, theoretical articles, meta-analyses).
• Double-blind peer review
Neither the authors nor the reviewers know each other's identities during the review process.
• PsycINFO
A psychology research database that helps you find scholarly articles and studies.
• How to generate research questions
By observing behavior, reading past research, and thinking about real-world problems.
• Criteria for evaluating research questions
Should be clear, specific, testable, and based on existing knowledge.
• Theory
A broad explanation that organizes and predicts many related findings.
• Hypothesis
A specific, testable prediction about the relationship between variables.
• Differences between a theory and hypothesis
A theory is a general explanation; a hypothesis is a specific prediction.
• Three characteristics of a good hypothesis
Testable, specific, and falsifiable (can be proven wrong).
• Variable: Quantitative and Categorical/Qualitative
A measurable factor in a study; quantitative is measured with numbers (age, height), qualitative/categorical is grouped into categories (gender, color).
• Independent variable
The variable the researcher changes or controls.
• Dependent variable
The variable that is measured to see the effect.
• Extraneous variable
Any outside factor that could affect the results.
• Confounds
Extraneous variables that change along with the independent variable and affect the results.
• Operational definition
A clear, specific explanation of how a variable is measured or manipulated.
• Population
The entire group the researcher wants to study.
• Sample
A smaller group taken from the population.
• Simple random sampling
Every member of the population has an equal chance of being selected.
• Convenience sampling
Selecting participants who are easy to reach.
• Internal validity
How sure we are that the independent variable caused the results.
• External validity
How well the results apply to real life or other groups.
• Laboratory study
Research done in a controlled, artificial setting.
• Field study
Research done in a natural, real-world setting.
• Field experiment
An experiment conducted in a real-world setting.
• Relationship between internal and external validity
Experiments usually have high internal validity but lower external validity; non-experimental studies (e.g., field studies) tend to have higher external validity but lower internal validity.
• Descriptive statistics
Numbers that summarize or describe data (mean, median, mode).
• Correlation coefficient
A number that shows the strength and direction of a relationship between two variables.
• Inferential statistics
Methods used to draw conclusions about a population based on a sample.
• Statistically significant
A result that is unlikely to have happened by chance.
• Type I and II Errors
Type I: saying there is an effect when there isn’t one (false positive); Type II: saying there is no effect when there actually is one (false negative).
Chapter 3
• Morality
Personal beliefs about what is right and wrong.
• Ethics
Rules or standards that guide right and wrong behavior, especially in research.
• Framework for thinking about research ethics
A set of principles used to decide if research is ethical, including weighing risks against benefits, acting responsibly and with integrity, seeking justice, and respecting people’s rights and dignity.
• Autonomy
A person’s right to make their own decisions.
• Informed consent
Giving participants full information about the study so they can choose whether to take part.
• Confidentiality
Keeping participants’ personal information private.
• Anonymity
Not collecting or linking participants’ identities to their data.
• Framework for thinking about research ethics using the Milgram study
Showed the need to protect participants from harm, ensure informed consent, and properly debrief them after deception.
• Nuremberg code
A set of research ethics principles created after World War II, emphasizing voluntary consent.
• Declaration of Helsinki
Ethical guidelines for research with human participants, created by the medical community.
• Belmont Report
Established three main ethical principles: respect for persons, beneficence, and justice.
• Federal Policy for the Protection of Human Subjects
U.S. rules (Common Rule) that protect people who participate in research.
• Institutional Review Board (IRB) and the three classification levels
A committee that reviews research to make sure it is ethical and safe; proposals are classified into three review levels: exempt, expedited, or full board review.
• APA Ethics Code
Guidelines created by the American Psychological Association for ethical behavior in research and practice.
• Deception
Misleading participants about some part of the study.
• Debriefing
Explaining the true purpose of the study to participants after it ends.
• Tuskegee Syphilis Study
An unethical study where Black men with syphilis were not treated or fully informed, even when treatment became available.
Chapter 4
• Psychological constructs
Ideas or traits in psychology that cannot be directly observed, like intelligence or stress.
• Conceptual definition
A general explanation of what a construct means.
• Operational definition
A specific explanation of how a construct will be measured.
• Self-report measures
Participants report on their own thoughts, feelings, or behaviors (e.g., questionnaires, interviews).
• Behavioral measures
Observing and recording a person’s actions.
• Physiological measures
Recording body responses, like heart rate or brain activity.
• Nominal level of measurement
Categories with no order (e.g., eye color, gender).
• Ordinal level of measurement
Categories that can be ranked, but differences between them are not equal (e.g., 1st, 2nd, 3rd place).
• Interval level of measurement
Number scales with equal intervals but no true zero (e.g., temperature in Fahrenheit).
• Ratio level of measurement
Number scales with equal intervals and a true zero (e.g., weight, height, time).
• Reliability
The consistency of a measurement.
• Test-retest reliability
How consistent scores are when the same test is given to the same people at two different times.
• Internal consistency reliability
How well different items on the same test measure the same construct.
• Interrater reliability
How much different observers agree in their measurements.
• Validity
How well a test measures what it is supposed to measure.
• Face validity
How much a test appears to measure what it should, at first glance.
• Content validity
How well a test covers all parts of the construct.
• Criterion validity
How well a test relates to other important outcomes.
• Concurrent validity
When the test and the related outcome are measured at the same time.
• Predictive validity
When the test predicts a future outcome.
• Convergent validity
When a test is strongly related to other measures of the same construct.
• Discriminant validity
When a test is not related to measures of different constructs.
• Socially desirable responding
Answering in a way that makes you look good to others.
• Demand characteristics
Clues in a study that influence how participants think they should behave.
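Interrater reliability above can be quantified in several ways; below is a minimal sketch using simple percent agreement with hypothetical ratings (in practice, Cohen's kappa, which corrects for chance agreement, is the more common choice).

```python
# Interrater reliability as percent agreement: the fraction of cases
# where two observers recorded the same rating. (Hypothetical data.)
rater_a = ["aggressive", "calm", "calm", "aggressive", "calm"]
rater_b = ["aggressive", "calm", "aggressive", "aggressive", "calm"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.8 -> the raters agreed on 4 of 5 cases
```

Higher agreement suggests the behavioral coding scheme (the operational definition) is being applied consistently across observers.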