
Chapter 1: 


People watching: why people do what they do (why we do what we do) 

  • Curiosity about human behavior 


Research is used in lots of different fields: 


Ask yourself: what's the evidence??

  • Use research to come up with clinically supported treatments: so we know that treatments work and can help 


Three approaches we learn about the world: 

Intuition: “gut feeling” to draw conclusions about the world 

  • Main problem: biased (everyone is, but it's still an issue)

Authority/Prestige: when someone we respect/trust tells us something, we perceive it as true 

  • Could be biased; we don't know it's true 

Faith: gaining knowledge and experience through spiritual intuition, sometimes used as a basis for inquiry 

  • Sometimes accurate, sometimes not 


Scientific approach: skeptical: train your mind to be skeptical

  • Based on logic

  • Grew out of the tradition of empiricism


Empiricism: accurate information can only be obtained through direct and verifiable observation 

  • If you can’t observe it, then it isn't true 

  • Both a process and a worldview 


Characteristics of Science: together they make for a more accurate way to observe human behavior

  • Skeptical: looking for evidence before accepting it 

  • Empirical: based in direct observation, needs to be testable or falsifiable 

  • Collaborative: when research scientists find something, they share their findings and allow other scientists to collaborate on it 

  • Adversarial: theories compete, multiple theories of the same thing, all fighting to be accurate 

    • Example: obesity: the calories-in-vs-calories-out theory competes with the hormonal theory (the calories don't equal out; hormonal effects make people obese over time) 

  • Humble: scientists are humble: when theories are refuted, scientists must shift gears and try another way 

  • Peer-reviewed: before a research finding is shared and published, it is evaluated by a group of your peers.

    • To make sure high quality research is out in the world


Pseudoscience: the use of seemingly scientific terms and claims that have no real evidence based in empirical observation 

Where do we often see it/hear it? 

  • TikTok, YouTube 


Signs an assertion is not scientific: 

  1. Not testable/ falsifiable: if not testable, they can't be refuted 

  2. Vague, biased or extreme language: “always” or “for everyone”  

  3. Based on anecdote / testimony: worked for one person, works for everyone  

  4. Claims made by “experts” who have: 

    • Vague credentials 

    • Claims outside the scope of their expertise 

    • Conflicts of interest: someone making a claim about a product they can make money from 

5. Ignore conflicting evidence: only talking about evidence that supports what they have to say, not the evidence that negates it 

6. Cannot be independently verified: one off finding, no one else can replicate  


Be skeptical of everything you hear/ read!!


Being a skilled consumer of research: 

  1. Trust depends on type and quality of the design 

  2. When conclusions are not supported by the design 

  1. Type of design: different types of designs allow you to draw different types of conclusions 

  • Ex: people try to draw causal conclusions from correlational evidence 

  1. Quality of design: 

  • e.g., a really small sample size, or weak measures used to quantify the data 

C. Questions to ask when reading and evaluating a study: 

  1. What was the primary goal of the study? 

  • The goal of the study determines the research design 

  1. What was the research method, and did the method used allow the researcher to meet their goal? 

  • If a researcher is trying to find causation, they need to do a true experiment 

  1. What was measured and how was it being measured? 

  • Identify key dependent variables 

  1. To what/ whom can we generalize the findings?

  • Can't generalize the findings beyond the people who were studied 

  1. What were the results? Do the results support or refute the goal of the study? 

  2. Have other researchers found similar results? 

  3. What are the limitations of the study?

  • Discussion part of the study → talks about limitations 

  1. Did the study have any ethical issues? If so, how were they addressed by the researcher? 

D. Be aware of the possibility for bias: people may see what they want to see 

  1. Even scientists are biased

  2. Research process is geared toward the hypothesis 

  3. Censorship by journal editors  

  4. The socio-political environment influences what research questions are asked and what research findings are published. 

  • How big of a sample to generalize? 

  • Conclusions only applied to men 

  • Failure to rule out alternative explanation 

  • Inaccurately inferring causation 


The logic of the scientific method: 

  1. Inductive approach 

  1. Bottom-up: start with data, then we move up to theory

  • Data driven approach that starts with data and leads to theory building 

  1. Data → theory 

B. Deductive approach 

  1. Top-down 

  • Start with an existing theory → work your way down to collecting data 

  1. Theory → data 


The goals of science correspond to different research techniques 

Description: you're trying to describe a phenomenon 

Where, when, and how much it occurs 

  1. Purpose 

  2. Directly observable vs. latent variables 

Directly observable: Reaction time, frequency of sound 

Latent variables: hunger, depression: you know that they are there, but they are hypothetical/unobservable 

Ex: Nagata et al.: studied social media use in adolescents during the COVID pandemic 

  • Asked over 5,000 adolescents to report and describe the amount of non-schoolwork screen time they used during the day. 

  • Broke down the data by race and ethnicity 

  1. Research methods used for description 

  • Naturalistic observation: go into a situation (“the wild”) and observe people 

  • Surveys: Questionnaires/ interview can be done with pen and paper or online  

  • Case studies: an in-depth look at an individual, family, or company; examining all factors and influences on that one case. 

Correlation and prediction: 

  1. Purpose #1: Determine if two variables are related, are they correlated?  

  2. Purpose #2: If variables are related, we can use our knowledge of one variable to make predictions about another variable 


Research methods used for correlation and prediction: 

  1. Correlational study: measures two or more variables, explore the relationship between the variables to see if they are related  

  • Only dependent variables 

  1. Regression analysis: create a prediction equation 

 When two variables are correlated, we can use a statistical equation to predict where someone would score on the other variable

  • Guessing weight just by knowing height; using SAT scores to predict what your college GPA would be 

  1. Quasi-experiments (ex: judging defendants based on attractiveness): relationships between participants' characteristics and their behavior 

You don't have as much control as you do with a true experiment 

  • You are using a characteristic of your participants to determine the groups instead of a treatment 

  • Determine groups by characteristic of the participants: coming preassigned into groups 

Ex: gender, age, socio-economic status
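The prediction-equation idea above (e.g., guessing weight from height) can be sketched in a few lines of Python; the height and weight numbers below are made up for illustration:

```python
# Hypothetical height (inches) and weight (lbs) data, invented for illustration
heights = [60, 63, 66, 69, 72, 75]
weights = [115, 130, 145, 155, 170, 185]

n = len(heights)
mean_h = sum(heights) / n
mean_w = sum(weights) / n

# Least-squares prediction equation: weight = slope * height + intercept
slope = (sum((h - mean_h) * (w - mean_w) for h, w in zip(heights, weights))
         / sum((h - mean_h) ** 2 for h in heights))
intercept = mean_w - slope * mean_h

def predict_weight(height):
    """Plug a height into the prediction equation to estimate weight."""
    return slope * height + intercept

print(round(predict_weight(68), 1))  # about 152.3 for this made-up data
```

Because the two variables are correlated, the equation lets us guess one from the other better than chance; the same logic underlies predicting college GPA from SAT scores.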


Correlation doesn't imply causation!!!


Causation: 

  1. Purpose 

  2. Experiment is the only research method that can establish causation 

  3. Criteria for establishing causation 

  • Covariation: as one variable changes, so does the other one  

    • The treatment was playing violent video games; the behavior studied was aggression: violent video game play covaries with aggressive acts 

  • Temporal precedence: the change in one variable precedes the change in the other; you need this to see which variable is the cause and which is the effect 

    • Playing the violent video games preceded the aggressive behavior, meaning the aggressive behavior has to be the effect 

  • Ruling out alternative explanations: have to control for all variables that might also impact the behavior, and manipulate only the one that we think is the true cause 

    • Other variables for video games to rule out that influence aggression: gender, amount of parental supervision   

Explanation: involves explaining why something occurs 

  1. Explaining why something occurs 

Ex: children of alcoholics, some become alcoholics while others don’t: what's the difference?  

  1. Very difficult to truly explain human behavior, thoughts, and emotions

Behavior has multiple causes 

No human behavior has just one cause!! 

What might be a cause in one person, would not be a cause in another person!!  




Types of research: Basic vs. Applied:  

  1. Basic Research 

  1. Science for science’s sake 

  • Ask and answer questions about behavior 

  • Understanding theoretical issues 

Ex: Marian & Neisser (2000) tested the recall of personal memories in bilingual people (who spoke Russian and English) 

  • They found that people remembered more about their lives in Russian when they spoke Russian and were interviewed in Russian (same for English) 

  • Demonstrates language-dependent retrieval: how language cues memory recall 

  1. More examples of basic research in the textbook 


B. Applied research 

  1. Science for finding solutions to practical problems 

EX: Wells & Perrine (2001) studied the effects of bringing pets to work

They found that people who brought pets to work were less stressed 

  • They found a solution to lower the stress levels at work 

  1. Program evaluation: a specific type of applied research that evaluates the effectiveness of programs: any and all programs 

1998: Grossman and Tierney evaluated the effectiveness of the Big Brothers/Big Sisters program among 10-16 year olds: it helped disadvantaged kids by giving them a friend or someone older to rely on 

  • The younger children who participated were less likely to skip school, do drugs, act violently, etc. than children not in the Big Brothers/Big Sisters program 

  • Their evaluation revealed the program was effective 

C. Comparing Basic and Applied Research : 

  • Neither is more important than the other 

  • They have a reciprocal relationship, with one often serving as a springboard for the other. 

Both are necessary and required: they can be intertwined with one another 



Chapter 2:  

Research Questions, Hypotheses & Predictions: 


  1. Choosing a research question 

  1. Specific enough to be testable 

Ex: What causes depression? (not testable and not specific) 

Ex: Is depression related to a diet high in ultra-processed foods? (specific and testable) 

B. A hypothesis: a tentative statement (answer) concerning the nature of the relationship between two or more variables. 

Ex: There's a relationship between consumption of highly processed foods and depression 

  • Hypothesis → study design 

C. A prediction: grows out of the hypothesis, but is testable

  • A testable statement of what you expect from the research

  • Operational definition:  

Ex: there will be a direct correlation between scores on the Beck Depression Inventory and scores on a short questionnaire of highly processed food consumption 

  • The role of evidence 

    • When to keep the hypothesis 

    • When to reject the hypothesis or do more research 

    • A hypothesis is never PROVEN!! We have evidence that supports, suggests, or demonstrates! 



More on Hypotheses and Predictions: 

  1. Terminology when testing hypothesis 

Null Hypothesis: 

  1. Nothing is going on 

  2. The one we actually test 

Research Hypothesis: 

  1. aka the alternative hypothesis 

  2. Something IS going on 

  3. The one we actually believe 


We test the null hypothesis to decide what to do with the research hypothesis: 

If the null is rejected, then the research hypothesis is accepted 

If the null is retained then the research hypothesis is rejected/ refuted 
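The decision rule above can be written out explicitly; the alpha = .05 cutoff used here is the conventional significance threshold, an assumption not stated in these notes:

```python
# Null-hypothesis decision logic (sketch); alpha = .05 is assumed by convention
def decide(p_value, alpha=0.05):
    """Reject the null when p < alpha; otherwise retain it."""
    if p_value < alpha:
        return "reject null: research hypothesis accepted"
    return "retain null: research hypothesis rejected/refuted"

print(decide(0.03))  # reject null: research hypothesis accepted
print(decide(0.20))  # retain null: research hypothesis rejected/refuted
```

Note that we never "prove" the research hypothesis; we only reject or retain the null.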


Variables → determine the research design → determine your hypothesis 

^And how we operationalize the variables also shapes the research design


B. Types of Hypothesis/ Predictions 

  1. Causal Hypotheses: “Taking drug A causes symptoms of  depression to abate”

  • Only possible during a true experiment 

  • Anything that implies causation 

  1. Descriptive Hypotheses: “Southerners are likely to espouse conservative political views” 

  • Surveys, natural observation, case studies  

  • Descriptive research methods 

  1. Correlational Hypothesis: “involvement in school-related activities is related to a freshman's degree of adjustment in their first semester” 

  • aka hypotheses of association 

  • Correlational research and quasi-experiments 

  • Correlational: dependent variables only!!! (we measure them) 


C. Characteristics of a good prediction: 

What makes a prediction a good one? 

Testable 

  1. Uses operational definitions 

  2. Specific, not vague: they need to be testable 

  • Looking at the relationship between gender and empathy

    • Can't say: “there's a relationship between gender and empathy”

    • Can say: “women are more likely to score higher on an empathy test than men”   

Has a sound rationale 

Based in research or theory 


Relevant (not Trivial) 

Applied research is relevant 

Not all basic research is relevant but a lot of it is 

  • Helps describe the world


Practical and ethical to test 

  1. Do you have the resources? 

  2. Are there any ethical concerns? 



Sources of Research Ideas

Common sense 

  1. Are cultural truisms really true?

Cultural truisms: expressions we assume are true: “absence makes the heart grow fonder”   

Ex: Zajonc (1968) “familiarity breeds contempt” 

  • The more you know someone, the less interested you are in them (he tested this truism and found the opposite) 

^ The more familiar you are with people, the more you like them!

Familiarity breeds content!!


Practical Problems: when you have practical problems you need to solve

  1. Applied research 

Ex 1: placement of brake lights on cars 

Ex 2: vending Machines in schools 


Observing the World  

  1. Both personal and public events/ situations provoke curiosity 

Ex: controversy over the effects of music lyrics on behavior of kids 

  1. Especially concerned about rap music. 

  2. Fried (1999) “Bad Man's Blunder” lyrics study (rap vs. country) 

Got two groups of participants: had ALL of them read the same song lyrics, “Bad Man's Blunder” 

The lyrics have a violent theme 

Half of the participants were told it was a rap song 

Half of the participants were told it was a country song 

Asked the participants to rate their reaction to the song

  • People who thought it was a rap song rated the song more negatively than people who thought it was a country song 


Theories 

Theory: a systematic, coherent, and logical set of ideas about a particular topic or phenomenon that serves to organize and explain data and generate new knowledge. 

  • Good theories explain a large body of data and facilitate the creation of new hypotheses. 

Ex: Buss (1989) used socio-biological theory to explain gender differences in sexual behavior (parental investment theory): if the goal is to have offspring 

Women invest A LOT in having a child: the woman's strategy is to look for a good provider 

Men: their entire investment can be a couple of minutes: their strategy is to have as many sexual partners and children as they can 

  1. Are there differences in the characteristics each gender uses to “select mates” across cultures? 

  • Men: looking for signs of “reproductive capacity”: youth and attractiveness  

  • Women: “resource acquisition”: resources and status  

  1. Other explanations? Eagly & Wood (1999) 

  • Different socio-cultural factors explain why men and women choose the mates they do 

Past Research: science is a cumulative endeavor  

  1. Very convenient sources for research ideas 

  2. Look in the discussion section of any published study!!

  3. Evaluate the methodology of a study, and try to improve it or even refute it 

  • Choose to study a different sample (ex: instead of a college-student sample, maybe study older adults)

  • Operationally define the variables in different ways 


Library Research 

Why do Literature Searches? 

  1. Learn about your topic 

  2. Make sure your topic is original

  3. Give credit to people for previous work 

  4. Insights on research design 

About Professional Journals 

  1. Conduct research, then submit to a journal for publication 

  2. The editor receives it and decides if it will be reviewed 

  3. Most papers are rejected 

  4. If accepted, then it is assigned to at least 3 experts to review 

  5. Few are accepted pending revision, and even fewer are accepted as is.

  6. Can take up to 2 years! 



Types of Journal Articles 

  1. Empirical Research Articles:

  1. Scholarly articles that: report on a research study that answer a specific question 

Ex: Seifi et al. (2024): looked at the correlation between hyperinsulinemia (chronically high insulin levels) and mental health

  • A lifestyle and diet that keep our insulin levels chronically high are associated with increased levels of anxiety and depression. 

What we are doing in class!!


B. Literature Reviews: 

Summarize and evaluate empirical articles on a topic. 

Scholarly articles that: 

  1. Are often identified in the article title (ex: “a review of the literature” or “a systematic review”) 

  2. Read lit reviews of topics that are new to you 

  3. Found in Psychological Bulletin, Psychological Review & Annual Review of Psychology


C. Meta-Analyses: 

Scholarly articles that:  

  1. Collectively analyze data from several similar empirical studies using statistical procedures 

  2. Allow researchers to draw conclusions based on objective statistical procedures 

  • Conclusions are drawn from statistics, not subjective judgment 


D. Theoretical Articles 

Scholarly Articles that: summarize all information (empirical information) on a topic to understand behavior  

Ex: Bandura (1977): “Self-efficacy: Toward a unifying theory of behavioral change”

  • Proposes a comprehensive theory that our beliefs about our own self-efficacy predict our outcomes when it comes to behavioral change 

  1. Beliefs about self-efficacy predict behavioral outcomes 

  2. People with high perceived self-efficacy are more likely to succeed. 


Closer Look at Anatomy of an Empirical Research Article

Abstract: 

  1. A short, concise summary of the research, sort of like a sneak preview of the article 

  2. Information included: 

Starts with a general statement about the problem and hypothesis, then gives a brief description of the study 


Introduction: a logical introduction of the current study’s research question as it relates to previous research 

  • Information included: starts by discussing the problem under investigation 

  • Identify the variables and explain how they are defined 

  • Go on to build an argument for the predictions: literature review 

 

Method: A description of what you did in sufficient detail to allow replication 

Allows people to: 

  • Evaluate the efficacy of your design 

  • to allow the reader to replicate your study if they wanted to 

Participants or subjects: “participants” refers to humans, “subjects” to non-human animals; describe the sample size, age, gender, and other key demographics of the sample 

Apparatus/ materials: you describe any equipment, devices, questionnaires, surveys, videos used in the study 

Procedure: describe what happens to your subjects from start to finish in detail.  

Results: 

  1. A summary of the outcome(s) of the study 

  2. Information included: 

Describe the scores being analyzed and what those scores mean, tell the reader what high and low scores mean. 

Report the descriptive statistics: mean, median, and mode 

Go into the statistics → used to test your hypothesis 

Use tables and figures to show your findings in an interesting and ethical way. 

Discussion: 

  1. The concluding part of the paper, in which the researcher gets to speculate about the findings 

  2. Information included: 

Restating the results, relating them back to the argument made in the introduction 

Draw conclusions 

Ok to speculate: in the discussion you can speculate about why you found what you found 

Offer ideas for future research for the topic 



Chapter 3: 


Milgram’s experiment: Studied Obedience 

  1. Design of the study: 

  1. Participants: men from all walks of life  

  2. Task: “teach” another person (a confederate) sets of words; if the learner didn't learn them, the participant had to shock them 

  3. Findings: many people obeyed the “scientist”

B. Importance of the study and real-world implications: 

  1. Would it be approved today? 

  2. Do the benefits outweigh the harm? 

  3. Do the findings apply to real-world situations? 


Historical Context of Current ethical standards: 

  1. Nuremberg Code:

  1. Came out of the Nuremberg trials after WW2

  2. 10 rules of human rights and ethics for studies 

  3. Had no enforcement 

B. Declaration of Helsinki: 

  1. Created in 1964 to have “teeth”: had enforcement mechanism  

  2. Published research must follow the Helsinki guidelines

C. Belmont Report 

  1. Public demand for action to protect people from mistreatment in research 

  2. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979) 

  3. Belmont Report 

The three Basic ethical Principles of the Belmont Report: 

  1. Beneficence (and non-maleficence) 

  2. Respect for Persons (aka Autonomy) 

  3. Justice 


Federal Regulations Regarding Research Ethics: 


Institutional Review Boards and Animal Use Committees:

  1. Institutional Review Boards (IRBs) 

  1. What they do 

  2. Why they were created 

  3. Who are the members? 


B. Animal care and use committees (ACUC’s) 

  1. What they do 

  2. Scope of oversight 

C. Publishing or presenting research 

D. Federal regulation also requires specific educational training 

  1. Required at most colleges and universities 

Ex: CITI (collaborative IRB training initiative) 


Determining Type of IRB Review Required 

  1. Two factors that define “research” 

  1. Is the project systematic?

  2. Is the project seeking new, generalizable knowledge? 

  3. Handout 3b  IRB Decision Tree 

B. Question 1: Is the project even “Research?” 

  1. If the project ISN'T “research”.... 

  2. If the project IS “research”, then go to next question 

C. Question 2: does the study involve minimal risk or greater than minimal risk? 

  1. If the study involves greater than minimal risk… 

  2. If the study involves minimal risk, then there are three 3 possible choices for type of IRB review: 

  1. Exempt Review: 

  2. Expedited Review

  3. Limited Review 

Professional standards for Research ethics: 

APA ethical principles of psychologists and code of conduct (In eCampus) 


  1. The APA ethics Code: 

  1. A set of principles and standards that guide the behavior of psychologists in all their roles 

  2. The Five Ethical Principles: 

  1. Principle A: Beneficence/Non-Maleficence 

  2. Principle B: Fidelity and Responsibility 

  3. Principle C: Integrity 

  4. Principle D: Justice 

  5. Principle E: Respect for people’s rights and dignity 


Principle A: Beneficence and Non-maleficence: 

Definition: Researchers should maximize the benefits of research and the well-being of participants while minimizing harm or distress 

Risk-Benefit Analysis: 

  1. Potential benefits of Research: 

  a. Education 

  b. Skills 

  c. Treatment 

  d. Money/gifts 

  e. Satisfaction 

2. Potential risks: 

  1. Physical Injury 

  2. Psychological Injury 

  3. Social Injury 

Minimal Risk: Research procedures are similar to activities that the participants engage in every day. 

Dealing with Risk: 

  1. Be honest with participants 

  2. Make sure there is help available if needed after the study 

  3. Collect data anonymously 

  4. Or ensure confidentiality 

What Do You Think? Do handout 3a, Part 1 


Principle B:  Fidelity and Responsibility: 

Definition: Psychologists should establish relationships of trust with people they work with, and act in accordance with the professional and scientific responsibilities of the people they serve. 

  • Make sure to hold up your end of the implicit contract with participants. 

Principle C: Integrity: 

Definition: Psychologists should seek to promote accuracy, honesty and truthfulness in the science, teaching and practice of psychology. 

  1. Don't cheat, steal, lie, engage in fraud, or misrepresent facts. 

  2. Share your findings 

  3. Don't fabricate or falsify data 

  1. Why people might falsify data 

  2. When to be suspicious of fraud 

Ex: Stephen Breuning 

D. Researchers should never plagiarize. 

  1. Definition: misrepresenting someone else’s work as yours 

  2. Make sure you cite the original authors! 

  3. Plagiarism also includes using AI and passing the work off as your own!!


Principle D: Justice: 

Definition: Psychologists should ensure fairness and equity for everyone in accessing the benefits of the contributions of psychology (research benefits, access to clinical care). 

  1. Ensure fairness in receiving the benefits and bearing the potential risks 

  2. Involves  equity in participant selection for research 

  1. Use scientific reasoning to select participants 

Ex: excluding women from medical research 

C. Justice also involves protecting the vulnerable/powerless 

  1. Before ethical regulations, marginalized groups were sometimes taken advantage of. 

Ex: Tuskegee Syphilis Study (1932-1972). 


Principle E: Respect for People’s Rights and Dignity (AKA Autonomy) 

Definition: Researchers should respect that participants are independent, self-directed agents who are able to make their own informed decisions about whether or not to participate in research 

  1. Safeguards for vulnerable populations 

  2. Requires informed consent 

  3. Impacted by coercion 

  4. Impacted by the use of deception 


More on Informed Consent Forms: 

  1.  Social contract b/t researcher and participant 

  2. Information included in a consent form: 

  1. Purpose of the research 

  2. Procedures 

  3. Risks and benefits 

  4. Compensation (if applicable) 

  5. Assurance of confidentiality 

  6. Statement of willingness 

  7. Contact information for questions 

C. ALWAYS get consent when there is anything beyond minimal risk.  

D. Special cases: 

  1. Minors (assent) 

  2. People with cognitive impairments 

  3. People who may feel coerced 

E. Research situations when you can dispense with informed consent: 

  1. Anonymous surveys with minimal risk 

  2. Archival research 

  3. Naturalistic observation where privacy is NOT expected 


More on Deception:  

  1. Withholding your research hypothesis is not deception! 

  2. Types of Deception: 

  1. Passive deception 

  2. Active deception: 

  1. Misleading people about the nature of the research 

  2. Staging events 

  3. Giving participants misinformation about themselves 

C. Deception May be necessary: 

  1. Certain research questions could not be explored without deception 

  2. Knowing influences behavior 

D. Considerations: 

  1. Is the knowledge obtained worth it? 

  2. Is there another way without deception? 

E. Debriefing = the researcher's opportunity to tell participants about the study, to clear up misconceptions & to deal with any potential negative effects of participating. 

  1. Allows participants to recover 

  2. Promotes positive feelings about the research 

  3. Can tell participants how the research benefits society.

  4. Does debriefing even work? 



Chapter 4: 

Variables: 

Introductory concepts: 

  1. Variable = any event, situation, behavior or individual characteristic that can change or can take on more than one value. 

  2. Levels: different values of a variable 

  • Gender (2 levels) 

  • Test score (infinite levels) 


Using operational definitions: 

  1. Latent (hypothetical) constructs:  

Ex: depression, anxiety, PTSD, 

  • What we need to do to test latent variables scientifically

  1. Operational definition (OD): defining a latent variable w/ the procedure or methods used to measure or manipulate it. 

  • If you can't operationally define a variable, you can't study it scientifically 

  1. OD’s allow scientists to communicate: 

Ex: conceptual definition of romantic love 

Short Love Scale (SLS-12): Sternberg says you need all of these things to have love: his definition of romantic love

  • Passion 

  • Intimacy 

  • Commitment 

Operational Definition: scores on the short love scale 

With this definition, scientists are able to talk, communicate and debate the constructs. Need a common language. 

No single operational definition is perfect 

Importance of Establishing Construct Validity 

  1. Must establish the accuracy/ validity of an operational definition  

  2. Validity refers to accuracy in measurement 

  3. Scientists are constantly evaluating validity 

  4. Construct validity tells us how accurate our operational definition is in quantifying our latent variable 


Defining Relationships between Variables  

  1. Defining relationships between variables: 

  1. Purpose of most behavioral research 

Ex: height and weight; exercise and relative physical health, TV watched & fear of crime 


2. Linear Relationships (straight-line relationship)

  1. Positive linear relationship (direct)  

  1. Variables change in the same direction 

Ex: as study time ^, the exam grade ^ 

B. Negative linear relationship (inverse)   

  1. Variables change in opposite directions 

Ex: as stress ^, white blood cell count  v


3. Curvilinear Relationships: 

  1. Relationship between variables goes from negative to positive or from positive to negative 

Ex: N→ P: U-Shaped function: ex: age and # of car accidents 

Ex: P → N: Upside-down U-Shaped: Yerkes-Dodson Law: performance tends to increase with stress; but with too much stress performance tends to drop 


4. No Relationship: 

  1. No predictable relationship between the variables 

  2. No discernible pattern on a graph 
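The inverted-U (Yerkes-Dodson) pattern from the curvilinear section above can be sketched with a toy function; the quadratic and its numbers are invented for illustration, not an empirical formula:

```python
# Toy inverted-U performance curve (Yerkes-Dodson-style); the quadratic is
# made up for illustration only
def performance(stress):
    """Performance rises with stress up to a moderate peak, then declines."""
    return -(stress - 5) ** 2 + 25   # peak performance at stress = 5

low, moderate, high = performance(1), performance(5), performance(9)
print(low, moderate, high)  # 9 25 9: moderate stress gives the best performance
```

Plotting this function gives the upside-down U: too little or too much stress both hurt performance.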




5. Reduction of Uncertainty: 

  1. Ultimate Goal of Science: to determine universal and predictable rules about how the world works

  1. Easy for physical sciences (gravity) 

  2. Not so easy for behavioral science: so many different factors for behavior, they affect everyone differently 

B. What is Uncertainty? 

  1. Uncertainty= “randomness in events”: (aka: random variability, random error, or error variance) 

  2. Behavioral research seeks to reduce uncertainty by identifying predictable relationships b/t variables 

Example from text: Instagram Users: 

  1. You ask 200 adults in your town whether or not they use Instagram

  2. Your best guess before you collect any data 

  3. Your best guess assumes that Instagram use is random (ex: 50/50 chance of yes/no): known as random variability 

  4. Based on random variability, you'd be accurate in guessing any one person's answer 50% of the time 

  5. Lets say your next study, you add age to the mix and collect data from 100 adults aged 18-49 and 100 adults 50+ 

  • How accurate would you be at guessing preference? 

  • By adding the variable of age, we have reduced uncertainty 

  • Some uncertainty remains 
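The Instagram example can be worked through numerically; the group usage rates below are assumed for illustration (the notes don't give actual rates):

```python
# Assumed (hypothetical) Instagram usage rates by age group
p_young = 0.70   # proportion of the 100 adults aged 18-49 who use Instagram
p_old = 0.30     # proportion of the 100 adults aged 50+ who use Instagram

# Without any information, use looks random: guessing is right 50% of the time
baseline_accuracy = 0.50

# Knowing age, guess the majority answer within each group
accuracy_young = max(p_young, 1 - p_young)   # guess "yes" for younger adults
accuracy_old = max(p_old, 1 - p_old)         # guess "no" for older adults
overall = (accuracy_young + accuracy_old) / 2  # the two groups are equal-sized

print(overall > baseline_accuracy)  # True: adding age reduced uncertainty
```

Under these assumed rates, accuracy rises from 50% to 70%, but not to 100%: some uncertainty always remains.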


Non-Experimental vs. Experimental research: 

  1. Non-experimental Methods: 

  1. Definition= study of the relationship b/t variables by observing and measuring the variables of interest. 

  2. Ways to observe and Measure: 

  1. Ask people to describe their behavior (survey) 

  2. Observe people’s behavior yourself 

  3. Take physiological measures (HR, BP, EEG, MRI) 

  4. Look at public records/ archives (SAT and GPA study) 

Ex: The relationship between exercise and anxiety: 

Operationally define both variables 

Collect data: see if the variables are related 

Find an inverse relationship between the variables: as one goes up, the other goes down: they co-vary 

 

C. Establish Covariation: the two variables change together in a predictable way   

  1. If you find a relationship between two variables, then you know that they co-vary 

  2. As one changes, the other changes in a predictable way. 

D. Limitations of the Non-Experimental Method: 

  1. Cannot determine the direction of cause and effect (can’t establish temporal precedence). 

  • We don’t know which variable is the cause and which is the effect (what came first) 

  2. The Third Variable Problem: 

  1. Confounding variables (extraneous variables) 

Ex: Relationship between ice cream sales and shark attacks (third variable: warmer weather) 
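The ice cream/shark example can be simulated to show how a confound manufactures a relationship. All the rates below are invented: temperature drives both ice cream sales and shark attacks, and neither causes the other.

```python
import random

random.seed(7)

# Made-up simulation: "hot" is the third variable driving BOTH outcomes.
days = []
for _ in range(2000):
    hot = random.random() < 0.5
    ice_cream = random.random() < (0.8 if hot else 0.2)  # hot -> more sales
    attack = random.random() < (0.10 if hot else 0.01)   # hot -> more swimmers
    days.append((hot, ice_cream, attack))

def attack_rate(sample):
    return sum(a for _, _, a in sample) / len(sample)

high = [d for d in days if d[1]]       # days with ice cream sales
low  = [d for d in days if not d[1]]   # days without
print(f"attack rate, ice-cream days:    {attack_rate(high):.3f}")
print(f"attack rate, no-ice-cream days: {attack_rate(low):.3f}")

# Hold the confound (temperature) constant: the "relationship" vanishes.
hot_high = [d for d in days if d[0] and d[1]]
hot_low  = [d for d in days if d[0] and not d[1]]
print(f"hot days only -- with sales: {attack_rate(hot_high):.3f}, "
      f"without: {attack_rate(hot_low):.3f}")
```

Ice cream and shark attacks co-vary overall, but once temperature is held constant the difference disappears: the third variable was doing all the work.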



2. Experimental Methods

  1. Definition= the study of the relationship between variables by manipulating one variable and measuring its effects on another variable, while holding all other variables constant. Can establish causation!!

Example: studying the relationship between exercise and anxiety: 

  1. Manipulate exercise to see impact on anxiety: 

  2. Create two groups: exercise vs. no exercise: 

Allows us to establish covariation

Allows us to determine temporal precedence (the manipulation comes before the measured effect)

Allows us to control for EV’s in 2 ways: 

  1. Experimental control: holding all variables constant (except for the variable you’re manipulating)  

  2. Randomization (random assignment into groups) 

Chance becomes the great equalizer  
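"Chance as the great equalizer" can be shown in a few lines. The participants and their baseline anxiety scores are made up; the point is that random assignment tends to balance pre-existing differences across groups without measuring them.

```python
import random
import statistics

random.seed(3)

# Made-up participants, each with a pre-existing trait (baseline anxiety)
# that we are NOT controlling directly.
participants = [{"id": i, "baseline": random.gauss(10, 3)} for i in range(200)]

# Random assignment: shuffle, then split into the two conditions.
random.shuffle(participants)
exercise_group = participants[:100]
control_group  = participants[100:]

mean_ex = statistics.mean(p["baseline"] for p in exercise_group)
mean_ct = statistics.mean(p["baseline"] for p in control_group)
print(f"mean baseline anxiety -- exercise: {mean_ex:.2f}, control: {mean_ct:.2f}")
# Chance equalizes the groups: the means come out roughly equal.
```

Because assignment is random, any extraneous variable (measured or not) is spread roughly evenly across conditions, which is why randomization helps rule out alternative explanations.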


Three criteria to establish causation: true experiment 

  1. Covariation 

  2. Temporal precedence

  3. Ruling out all extraneous variables 


  1. Identifying variables: 

  1. Experimental methods: two variables of interest: 

  1. Independent variable: the presumed cause, the one we manipulate  

  2. Dependent variable: presumed effect, the one you measure  

B. Non-experimental methods: 

  1. Descriptive research: at least 1 Dependent variable 

  2. Correlational research: at least 2 dependent variables: looking at relationship  

  3. Quasi-experiments:

  1. Quasi Independent Variable: instead of manipulating, use a characteristic of a participant: 

  • Ex: gender, race, age, socio economic  

  1. Dependent variables: the one we measure  


Flaws/ Limitations in Experimental Research: the more ways you study a phenomenon, the more you understand it

Artificiality: makes the research situation unnatural, which can change the responses people make. 

  • Would participants act the same way in a lab as they would in real life? Probably not. 


Try a field experiment to prevent this!!

  • Still manipulate an independent variable, but it’s in a natural setting 

  • Don't have the same amount of control though 

Some variables cannot be studied experimentally, for either practical or ethical reasons: 

  • Cannot do a true experiment on the effects of child abuse on cognitive development 

    • Why? Cause that's unethical silly!

    • Alternative: use a quasi-experiment: participants are pre-assigned to groups based on personal experiences 

Sometimes research questions don't need experiments: 

  • Some science just wants to describe phenomena: just use descriptive research! 

Piaget’s cognitive development theory: developed his entire theory by observing children 

Freud’s theory of personality development: developed his theory through case studies of his clients 


4. Criteria for Establishing Causation:  

  1. Covariation: 

  2. Temporal precedence: 

  3. Ruling out Alternative explanations: 


Choosing a research method

  1. Experimental control and artificiality 

  1. Cost and benefit of strict experimental control 

  1. Limits the questions you can ask

  2. Changes the responses that people would make 

B. Alternative: Field Experiment: 

  1. Manipulate an IV in a natural setting 

  2. Less control than a lab experiment 

Ex: Latané compared helping behavior in men and women 


2. Ethical and practical limitations of experiments 

  1. Some variables can’t be studied experimentally 

  2. Non-experimental alternative: Quasi-Experiments (ex post facto designs) 

Ex: child abuse and reading comprehension; alcoholism and depression 

  • Participant variables are important to study! 

3. Some questions are not answerable by experiments: 

  1. Sometimes science simply seeks to describe phenomena 

  2. Non-experimental alternatives meet this goal:

  1. Piaget and cognitive development (observation) 

  2. Freud & personality development (case study) 

  3. Bem and Sex roles in society (survey) 

C. Sometimes science seeks to make predictions about behavior 

D. Non-experimental alternatives meet this goal: 

Ex: family Hx and breast cancer 

Ex: SAT and college GPA 

5. Using multiple methods 

  1. No research method is perfect 

  2. All the methods have their advantages and disadvantages. 

  3. Ask a question using multiple methods to increase confidence in your conclusions


Evaluating the accuracy of research: Validity: 

  1.  Validity: the extent to which, given everything that is known, a conclusion that we draw from a research study is reasonably accurate. 

  2. Types of validity: 

  1. Research methods can be described and evaluated in terms of various types of validity. 

  2. The 4 that are most relevant to this chapter:  

1. Construct 

2. External 

3. Internal 

4. Conclusion  

3. Construct validity: Defining hypothetical constructs: 

  1. Refers to how accurately the operational definitions you choose represent the hypothetical constructs that you’re trying to study 

  2. There are multiple possible operational definitions for every latent variable 

  3. We’ll discuss specific ways to establish construct validity for operational definitions more in the next chapter. They include: 

  1. Face validity 

  2. Content validity 

  3. Criterion-related validity 


4. External validity: Generalizing results 

  1. Refers to the extent to which a study’s findings can be generalized to other populations and/or settings 

  2. Generalizing from sample to population 

  3. Discrepancy in participants’ behavior/ responses

Ex: Diffusion of responsibility (Kitty Genovese) 


5. Internal validity: Establishing causation: 

  1. Refers to the degree to which our study design allows us to determine cause and effect 

  2. Only an issue with true experiments 

  3. Trade-off: more experimental control tends to increase internal validity but decrease external validity. 

  1. Which is more important? 

  2. Depends on your research goals



6.  Conclusion Validity: Making accurate Decisions 

  1. Refers to degree to which our conclusions are credible or believable based on our research design 

  2. Based partly on accuracy of our statistical inferences 

  3. When drawing statistical inferences from data we can make correct decisions or errors: 

1. Correct decisions: 

  1. Say there is a relationship and there really is one! 

  2. Say there is no relationship and there really isn't one! 

2. Errors: 

  1. Type 1 Error: Say there's a relationship and there isn't :(

  2. Type 2 Error: Say there's no relationship and there is

  3. Implications:

    • Type 1 Error can lead to false conclusions and misguided actions based on incorrect assumptions.

    • Type 2 Error may result in missed opportunities and the failure to recognize significant patterns or connections.
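The Type 1 error idea can be demonstrated with a simulation. The setup is invented for illustration: many studies are run where the null is actually true (both groups are identical), and a conventional two-tailed .05 cutoff (an assumption; the notes don't give a level) is used to decide "there's a relationship."

```python
import random
import statistics

random.seed(0)

ALPHA_Z = 1.96  # two-tailed cutoff for alpha = .05 (large-sample approximation)

def study_rejects_null(n=30):
    """One simulated study where the null is TRUE (both groups identical)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > ALPHA_Z  # claiming a relationship that isn't there

# Run many studies: every rejection here is a Type 1 error.
trials = 2000
type1_rate = sum(study_rejects_null() for _ in range(trials)) / trials
print(f"Type 1 error rate: ~{type1_rate:.1%}")
```

The observed rate lands near the chosen alpha level: even with sound statistics, a fixed fraction of "significant" findings under a true null are false alarms, which is why conclusion validity also depends on replication and design.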
