variable
something that changes and can be measured
How many levels or values should a variable have?
at least 2
construct
theoretical event, outcome, person, etc. that we care about
Ex. hypertension, stress
operational definition
set of procedures used when you measure or manipulate variables
make the construct real, quantifiable, and/or measurable
Ex. measuring hypertension as blood pressure in mmHg
What two things does an operational definition include?
what is observed
what is measured
What are two reasons why we operationalize our variables?
researchers want to focus on concrete terms for abstract concepts
helps researchers communicate their ideas
observable vs latent variables
observable - physical variables that have obvious/direct connection to construct
latent - psychological variables that are NOT directly observable, estimated from behavior
Examples of observable variables
height or weight
Examples of latent variables
happiness, mood, intelligence
reliability
consistency or stability of a measure
yields the same results EVERY time
validity
accuracy of a measure
“measures” what it’s supposed to
reliability theory formula
observed score = true score + random error (measurement error)
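Written out in standard classical test theory notation (a sketch; X, T, E and the variance-ratio definition of reliability are the conventional symbols, not terms from this deck):

```latex
% observed score = true score + random (measurement) error
X = T + E
% reliability = the share of observed-score variance that is true-score variance
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```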
observed score
actual test score obtained from the measure you are using
true score
someone’s real or true value for the variable being measured; not directly measured
measurement error
difference between true and observed score
correlation
number that tells you how strongly two variables are related to one another
How many values must you have to assess reliability?
two scores
test-retest reliability
measure the same individuals at two different time points (type of reliability)
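A sketch of how test-retest reliability is computed in practice (hypothetical scores; numpy assumed): it is simply the correlation between the same people's scores at the two time points.

```python
import numpy as np

# hypothetical scores for 6 participants measured at two time points
time1 = np.array([12, 15, 9, 20, 14, 18])
time2 = np.array([13, 14, 10, 19, 15, 17])

# Pearson correlation between the two occasions = test-retest reliability
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")  # close to 1 -> consistent/stable measure
```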
internal consistency
use responses from individuals at only one time point to examine their within-occasion consistency (type of reliability)
What are two statistical techniques used for internal consistency?
split-half reliability and Cronbach’s alpha
split-half reliability
correlation of the total score on one half of the questionnaire with the total score on the other half of the questionnaire
Cronbach’s alpha
correlation of each item on the questionnaire with every other item (type of reliability)
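A minimal sketch of both internal-consistency statistics on a made-up person-by-item matrix (numpy assumed; the Spearman-Brown adjustment and the variance form of alpha are the standard textbook formulas, not anything specific to this deck):

```python
import numpy as np

# hypothetical questionnaire data: 5 respondents x 4 items
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
], dtype=float)

# split-half: correlate totals of the odd items with totals of the even items,
# then adjust to full test length with the Spearman-Brown formula
half1 = items[:, ::2].sum(axis=1)   # items 1 and 3
half2 = items[:, 1::2].sum(axis=1)  # items 2 and 4
r_half = np.corrcoef(half1, half2)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha: compares summed item variances to total-score variance
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)

print(f"split-half (Spearman-Brown) = {split_half:.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")
```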
interrater reliability
extent to which at least two observers agree in their observations
What are two types of validity?
construct and criterion
construct validity
degree to which the measure reflects the construct it's supposed to (what we typically think of)
criterion validity
the degree to which a measure correlates with a similar measure concurrently or in the future (what we use to compare)
What are the 2 types of construct validity?
convergent validity - examines whether a measure is related to another measure that assesses the same construct
discriminant validity - examines whether a measure is NOT related to another measure that assesses a different construct
Example of convergent validity
a wearable sensor's HR readings agreeing with another measure of HR
Example of discriminant validity
wearable sensor for HR versus movement data
What are the 2 types of criterion validity?
predictive validity - examining if scores on a measure predict behavior or outcome it is intended to predict
concurrent validity - extent of agreement between two similar measures taken at the same time
Example of criterion predictive validity
using SAT scores to predict academic performance in college
Example of criterion concurrent validity
HR sensor relating to self-reported stress
interval scale
numeric scale where the distance between two points matters
no true 0 point, ratios are impossible
Ex. Celsius
ratio scale
distance between two points matters
has TRUE 0 point
Ex. height, weight
Measurement scales that include continuous variables
interval and ratio scale
Measurement scales that include categorical variables
nominal and ordinal scale
nominal scale
groups with no rank order
Ex. gender, ethnicity
ordinal scale
groups with rank order
Ex. SES
Pros of questionnaires
access to personal day-to-day information
Cons of questionnaires
lack of clear units, biased or inaccurate
confidentiality
participant identity known only to the research team
IV
independent variable - what’s being changed
DV
dependent variable - what’s the outcome
What are the 3 major tenets of the Belmont report?
autonomy - do ppl have free will to join or leave study?
justice - is there equal opportunity for human subjects?
beneficence - does research hurt anyone? do the harms outweigh the benefits?
What are two risks in behavioral research?
loss of privacy and confidentiality
confidentiality
participant identity known only to the research team
privacy
participant information is disclosed only to the research team and not to others
Who protects human research participants’ rights?
Institutional Review Board (IRB)
What happened as a result of the Nuremberg trials?
in response to Nazi human experimentation, the Nuremberg Code was created - principles for the ethical conduct of ANY human experimentation
What was the result of the Helsinki Declaration?
provided further guidelines for medical research with humans
What was the result of the Belmont report?
focused on behavioral research, not just medical, and provided ethical principles for both medical and behavioral research
observational research
attempts to assess responses as they unfold in real life
no manipulation
What’s the difference between quantitative vs. qualitative?
quantitative - focuses on variables that are numerical and can be counted (Ex. surveys, experiments)
qualitative - focuses on non-numerical data (Ex. text, video)
cross sectional study
data collected at one specific time point from different groups of people
longitudinal study
data gathered for the same subjects repeatedly
What are limitations of observational research? (3)
unable to show cause and effect definitively
no control of participants or intervention
many confounding variables
What are the 3 major types of observational research?
naturalistic - make observations in a natural, real-world environment
systematic - structured observation used to test specific hypotheses and count/measure specific behaviors
indirect (survey, questionnaire)
Is naturalistic observation research quantitative or qualitative?
it can be both! primarily qualitative description of observations, quantitative frequencies of behavior occurrence
What two concepts did Mehl et al. 2007 illustrate?
natural observation can get at important phenomena that the lab cannot
ambulatory tech can provide access to rich, real-world behavior
covert/concealed participant observation
subject unaware you’re observing
you immerse yourself in activity/event/behavior
covert/concealed non-participant observation
subject unaware you’re observing
you observe participants from a distance without being involved
overt/non-concealed participant observation
subjects aware you are observing them
you immerse yourself in activity
overt/non-concealed non-participant observation
subjects aware you are observing them
you observe participants from a distance without being involved
Which can be operationalized: manipulated or measured variables?
both
What are 3 criteria necessary for demonstrating causal relationships?
x and y are related
x comes before y
confounding variables are ruled out
What are pros of self-report?
affordable
easy to deploy
access to rich psychological experience
What are the two major forms of systematic assessment?
self-report
task paradigms
What are the cons for self-report?
bias
honesty
inaccurate recall
When would you want to do a systematic observation?
testing specific hypotheses
want to count/measure something
When would you want to do a naturalistic observation?
ethical situations
gain qualitative results
new or unknown topic
biases with observational research
if participants know someone is observing them → they change their behavior
never able to control confounding variables
biases with survey research
restricted to choices
observer bias
What is the difference between self report and task paradigm?
self report - ppl share their behaviors pertaining to whatever is being asked
task paradigm - ppl engage in behaviors under more controlled conditions where stimuli and requirements are standardized
What does reliability look like on a frequency distribution?
LESS variability; most scores fall within a narrow range
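A small simulation of that idea (made-up numbers; numpy assumed), using observed = true + error from above: the more reliable measure has smaller random error, so its observed scores pile up in a narrow range around the true score.

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = 100   # one person's true value
n = 1000           # repeated measurements

# reliable measure: small random error; unreliable measure: large random error
reliable = true_score + rng.normal(0, 2, n)
unreliable = true_score + rng.normal(0, 10, n)

# less spread = most of the frequency distribution sits in a narrow range
print(f"reliable SD   = {reliable.std():.1f}")    # ~2
print(f"unreliable SD = {unreliable.std():.1f}")  # ~10
```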
What is research integrity?
scientists ensure that data, research, and interpretation are accurate and transparent
What are good practices for research integrity?
keep data, analyses, and interpretation accurate and transparent
What are bad practices of research integrity? (5)
data manipulation
hypothesizing after results are known
fraud
plagiarism
“p-hacking”
When is best to use self-report?
want feelings, opinions, beliefs
something cheap and effective
What are the pros of task paradigms?
cognitive testing
decision-making
stimulus rating
What’s the difference between probability and non-probability sampling?
probability - all persons have equal probability of being chosen for sample
non-probability - all persons DO NOT have equal opportunity to be chosen
What is the least biased probability sampling method?
stratified random
stratified random (2)
list of two different groups, separately sample from each
equal number from each group
simple random
randomly select from list, more random than systematic
systematic random
picking every nth person (e.g., every 5th) from the list
cluster sampling
population divided into clusters, random sample of clusters is picked
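A rough sketch of the four probability sampling methods above (hypothetical population of 100 IDs split into two groups; Python's random module assumed):

```python
import random

population = list(range(1, 101))                        # IDs 1..100
groups = {"A": population[:50], "B": population[50:]}   # two known strata

# simple random: every person has an equal chance of selection
simple = random.sample(population, 10)

# systematic random: random start, then every kth person (k = N / n)
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# stratified random: sample separately (and equally) from each group
stratified = random.sample(groups["A"], 5) + random.sample(groups["B"], 5)

# cluster: divide the population into clusters, then randomly pick whole clusters
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = sum(random.sample(clusters, 2), [])

print(simple, systematic, stratified, cluster_sample, sep="\n")
```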