quasi experiment
designs in which random assignment cannot be used
true experiment
designs in which the researcher manipulates the IV(s) and randomly assigns participants to conditions
what are the different methods for measuring variables?
self report, observational, physiological
self report
directly asks participants how they think or feel
observational
just observing, not manipulating any variables
physiological
records biological data (e.g., heart rate, hormone levels) from a living organism
random error
unpredictable, chance variation in measurement (e.g., slight differences each time a measuring tape is read); it cannot really be avoided, but it tends to average out over repeated measurements
systematic error
a consistent flaw in the instrument or procedure, e.g., a faulty measuring scale that always reads high; your measurements of the same thing will vary in predictable ways
how do you minimize error?
standardize the experimental situation
reduce observer bias
avoid measurement bias
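The random vs. systematic distinction can be shown with a small simulation (a hypothetical sketch; the true value and error sizes are made up):

```python
import random

random.seed(0)  # reproducible
true_value = 100.0

# Random error: unpredictable noise around the true value.
# It tends to average out over many repeated measurements.
random_error = [true_value + random.gauss(0, 2) for _ in range(1000)]

# Systematic error: a consistent offset (e.g., a scale that always
# reads 5 units high). Averaging does NOT remove it.
systematic_error = [true_value + 5 + random.gauss(0, 2) for _ in range(1000)]

print(sum(random_error) / 1000)      # close to the true value, 100
print(sum(systematic_error) / 1000)  # stays biased, close to 105
```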
what is reliability?
the stability or consistency of a measure
why is reliability important?
an unreliable measure gives inconsistent results, so its scores cannot be trusted or meaningfully interpreted
what is validity?
the degree to which a tool measures what it claims to measure
why is validity important?
it ensures that research, assessments, and data actually capture what they claim to capture
what does self report mean?
for a participant to report their own thinking/views etc.
why do we use a self-report measure so often?
convenient and cost effective
what biases can affect self-report data?
social desirability:
giving an answer that makes them look good
demand characteristics: a cue that makes participants potentially aware of what the researcher expects
retrospective bias: when participants view or interpret past events in an inaccurate way
why is it true that a measure cannot be valid if it is not reliable?
reliability is a prerequisite for validity: if a measure gives inconsistent (unreliable) results, those results cannot consistently reflect the construct, so they cannot be valid. note the reverse does not hold — a measure can be reliable yet still invalid.
why is it true that reliability is a characteristic of a measure regardless of the measure, but validity depends on what the measure is used for?
reliability refers only to the consistency of the scores a measure produces, so it belongs to the measure itself; validity depends on the purpose — the same measure can be valid for one use but not for another
what is face validity?
whether a test or measurement appears to measure what it claims to measure, based on a superficial or subjective judgment — e.g., a depression survey asking about mood seems appropriate because mood relates to depression, even without deeper analysis.
why is face validity important?
it builds trust in the measure — participants and reviewers are more likely to take it seriously if it looks appropriate on its surface (though face validity alone does not guarantee the measure actually works)
which validity are you using when asking:
how well did the researchers measure sensitivity to tastes in this study?
construct validity: ensures that the measure truly reflects the underlying characteristic or construct being studied.
for his research methods class, felipe plans to watch how students treat other children in their classrooms who have ADHD. he will evaluate how +/- the children are treated by their classmates. this is an example of what type of measurement?
observational measurement
categorical variable
represents distinct categories or groups: labels and names rather than numerical values: gender, marital status, color
quantitative variable
can be measured and expressed numerically
ordinal
ranking; meaningful values but unequal intervals between units; educational level
interval
equal intervals between units but no true zero; temperature (0 degrees does not mean an absence of temperature)
ratio
equal intervals and a true zero; age — zero indicates a true starting point, and you can compare ratios (a 10-year-old is twice as old as a 5-year-old)
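A quick sketch of the four scales with hypothetical data — the key point is which operations are meaningful at each level:

```python
# Categorical (nominal): labels only — no order, no arithmetic.
marital_status = ["single", "married", "divorced"]

# Ordinal: ordered ranks, but the intervals between them are not equal.
education = {"high school": 1, "bachelor's": 2, "master's": 3}

# Interval: equal intervals, no true zero — ratios are NOT meaningful
# (64 degrees F is not "twice as hot" as 32 degrees F).
temps_f = [32.0, 64.0]

# Ratio: equal intervals AND a true zero — ratios ARE meaningful.
ages = [5, 10]
print(ages[1] / ages[0])  # 2.0 — a 10-year-old is twice as old as a 5-year-old
```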
you decide to investigate how exercise affects academic performance. you are interested in people who already exercise regularly, so you head to the gym to recruit participants. this describes the ____ sampling technique.
simple random
stratified
systematic
cluster
convenience
purposive
snowball
quota
purposive sampling: you deliberately target people with a specific characteristic (regular exercisers), so you recruit at the gym
a school principal wants to collect data on bullying in the school. she stands at the door to the school and selects the first 10 students from each grade level to come in the door. this describes the ____ technique.
simple random
stratified
systematic
cluster
convenience
purposive
snowball
quota
quota sampling: she fills a fixed quota of 10 students from each grade level
qualitative research
non-numerical data, e.g., interviews, open-ended responses, and observations
quantitative research
data consists of numbers and are analyzed using statistical techniques
what is the purpose of quantitaive research ?
examines associations between variables, predict outcomes, and make comparisons
what is the purpose of qualitative research?
obtaining an in-depth account of participants' perspectives on their own experiences
what is the difference between qualitative and quantitative research?
qualitative research seeks rich, in-depth accounts and focuses on the person and their experience; quantitative research collects specific numerical data and is less concerned with individual experience
top-down approach
the researcher uses a theory-first approach, testing preconceptions and previously established theories against the collected data.
bottom-up approach
the researcher uses information from participants' direct experiences to develop a theory
how are top down & bottom up approaches different?
top-down makes a theory-based prediction first, then confirms or rejects it; bottom-up gathers information first, then develops a theory.
what are the various types of interviews?
structured
unstructured
semi-structured
critical incident
structured interview
researcher prepares specific questions and asks them in a fixed order
unstructured interview
researcher anticipates topics but does not plan specific questions or their order; it is more of a conversation
semi-structured
a combination of structured and unstructured: some questions are planned, but the interview remains flexible
critical incident technique
purposefully directs the interviewee to focus on a key event or specific behavior
what is interviewer bias?
bias introduced when the interviewer's tone, question wording, body language, etc., influence participants' responses
how can interviewer bias affect interviews?
interviewer bias can nudge participants toward particular answers, producing distorted or inconsistent data and undermining the validity of the findings.
what are the characteristics of a good interviewer?
be fully knowledgeable about the topic, practice asking questions clearly, and be attentive and focused.
when would you use a structured interview?
when you already have a clear understanding of your topic
when would you use a semi-structured interview?
to collect qualitative, open-ended data, to explore participant thoughts, feelings and beliefs about a particular topic; and to delve deeply into personal and sometimes sensitive issues
when would you want to use an unstructured interview?
unstructured interviews are very flexible, allowing the researcher to develop a rapport with the participant. helps the interviewer to get quality insights into the participants' beliefs, thoughts, perceptions, and experience
what is grounded theory?
an approach in which the researcher begins without explicit theories or hypotheses to test; theory is instead built ("grounded") in the data collected
how do researchers use grounded theory in qualitative data analysis?
they use info from participants to generate categories and build a theory
what is conversation analysis?
a technique that involves examining the natural patterns of dialogue, focusing on features such as turn taking, gaze direction, and how speakers sequence speech
what is content analysis?
technique that involves the systematic analysis of communication whereby researchers organize responses in order to summarize the substance of the communication
what are the types of coding used in content analysis?
continuous recording, interval recording, duration recording, and frequency-count recording.
continuous recording
documenting all behaviors continuously throughout the observation period
interval recording
recording whether the behavior occurs within specified time intervals (e.g., before noon vs. after noon)
duration recording
recording how long a behavior lasts — the time elapsed from when it starts to when it stops
frequency count recording
number of time something happens throughout the day
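The frequency-count and duration methods can be sketched from a hypothetical event log (the timestamps, in seconds, and behavior codes are made up):

```python
from collections import Counter

# Hypothetical session log: (timestamp_in_seconds, behavior_code).
events = [(10, "on_task"), (55, "off_task"), (120, "on_task"),
          (300, "off_task"), (420, "on_task")]

# Frequency-count recording: how many times each behavior occurred.
freq = Counter(code for _, code in events)
print(freq["off_task"])  # 2

# Duration recording: time elapsed between a start and a stop event —
# here, from the first to the last logged event.
duration = events[-1][0] - events[0][0]
print(duration)  # 410
```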
what are the different approaches to operationalization?
self-report, observational, and physiological
operationalization: self-report
participants report their own thoughts, feelings, and experiences
operationalization: observational
defining and measuring variables through direct observation of behaviors or phenomena in natural settings.
operationalization: physiological
involves measuring biological indicators, such as heart rate, blood pressure, or hormone levels, to assess psychological or behavioral constructs.
what are three aspects to consider regarding construct validity?
choosing question formats
writing well-worded questions
encouraging accurate responses
example of an open-ended question
“what do you think of this class?”
example of a forced-choice question
yes or no: 1. i really like to be the center of attention 2. it makes me uncomfortable to be the center of attention
example of a likert scale 1-5 question
I would recommend this brand to others (strongly disagree to strongly agree) - 1 thru 5
example of a semantic differential format question
how hard is this class? difficult 1 2 3 4 5 easy — the respondent chooses a point between two opposing adjectives
what are the problems in writing “well worded questions”?
questions can be leading, double-barreled, or negatively worded.
what does leading question mean?
“do you agree that…?” — phrasing that prompts the desired answer
what does double-barreled question mean?
“do you enjoy ___ and ___” asks about more than one topic but only allows for a single answer
wht does negatively worded question mean?
“people who do not drive with a suspended license should never be punished.” disagreement would be a “good” or socially desirable answer.
what does acquiescence mean?
the tendency of respondents to agree with statements or questions, often regardless of their actual opinion; yea-saying
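One standard defense against acquiescence is to include reverse-worded items and reverse-score them before averaging. A sketch with hypothetical item names and 1-5 responses:

```python
# Hypothetical 1-5 Likert responses; items marked "_reversed" are
# negatively worded, so agreeing means LESS of the construct.
responses = {"item1": 4, "item2_reversed": 2, "item3": 5}

def score(item, raw, scale_max=5):
    # Reverse-score negatively worded items: on a 1-5 scale,
    # a raw 2 becomes 6 - 2 = 4.
    return (scale_max + 1 - raw) if item.endswith("_reversed") else raw

scored = [score(k, v) for k, v in responses.items()]
print(round(sum(scored) / len(scored), 2))  # 4.33
```

A respondent who simply agrees with everything will score high on the regular items but low once the reversed items are rescored, which flags the yea-saying pattern.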
what does fence sitting mean?
a response pattern where individuals avoid taking a clear stance on an issue, often selecting neutral or middle options in surveys.
how does one combat fence sitting?
take away the neutral option
what is socially desirable responding?
responding in a way that is socially desirable or “makes them look good”
how does one combat socially desirable responding?
allow anonymity, neutral wording
what are some other reasons for why people give inaccurate responses?
memories of events may be inaccurate (retrospective bias), and people may self-report more than they can actually know (e.g., explaining why they prefer a product)
survey items that can be completed with any response chosen by the participant are called _____ questions, whereas survey items that must be completed with one of the response options provided by the researchers are called ____ questions.
open-ended; forced-choice
what is the observation method?
viewing an occurrence for a scientific purpose
when do we use observational research?
when we want high external validity — that is, when we want to see behavior as it naturally occurs
laboratory observation
data collection in the controlled setting of a lab; tends to have poor ecological validity.
naturalistic observation
technique in which researchers observe events as they occur in a natural setting; outside the lab
participant observation
an observational data collection technique in which the observer participates with those being observed; researcher gains a close familiarity with the group under observation
ecological validity
degree to which the research situation re-creates the psychological experiences that participants would have in real life.
external validity
extent to which study findings are applicable or generalize outside the data collection setting to other persons in other places at other times
internal validity
the extent to which the observed results represent the truth in the population we are studying
how do you develop a coding system and protocol for collecting observational data?
clearly define your research questions and objectives
then identify relevant behaviors to observe, categorize them into distinct codes
establish clear definitions for each code
create a standardized procedure for recording observations, including details like sampling methods and recording format, ensuring consistency and reliability in your data collection process
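The steps above amount to an explicit coding scheme plus a standardized record format. A minimal sketch (the codes and definitions are hypothetical):

```python
# Each code gets an explicit written definition so every observer
# applies it the same way.
coding_scheme = {
    "ON":   "child is looking at the teacher or task materials",
    "OFF":  "child looks away from the task for more than 3 seconds",
    "TALK": "child talks to a peer about non-task topics",
}

# Standardized record format: (observer_id, interval_number, code).
observations = [
    ("obs1", 1, "ON"),
    ("obs1", 2, "OFF"),
    ("obs2", 1, "ON"),
]

# Consistency check: every logged code must exist in the scheme.
valid = all(code in coding_scheme for _, _, code in observations)
print(valid)  # True
```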
what are the appropriate procedures and training for recording observations?
train observers thoroughly on the coding system, have them practice coding sample recordings until they apply the codes consistently, check interrater reliability (agreement between independent observers) before and during data collection, and follow the standardized recording procedure so all observers collect data the same way.
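A common check on observer training is interrater agreement: two observers independently code the same intervals and you compute the percentage on which they agree (hypothetical data below; more formal indices such as Cohen's kappa also correct for chance agreement):

```python
# Two observers independently code the same ten intervals.
obs1 = ["ON", "ON", "OFF", "ON", "TALK", "OFF", "ON", "ON", "OFF", "ON"]
obs2 = ["ON", "OFF", "OFF", "ON", "TALK", "OFF", "ON", "ON", "ON", "ON"]

# Percent agreement: the share of intervals coded identically.
agreements = sum(a == b for a, b in zip(obs1, obs2))
percent_agreement = agreements / len(obs1) * 100
print(percent_agreement)  # 80.0
```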
what are the proper stats to use for observational research?
descriptive and inferential statistics
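A sketch of the distinction with made-up observation counts — descriptive statistics summarize the sample, while inferential statistics (here a hand-computed Welch-style independent-samples t statistic) ask whether the pattern likely holds in the population:

```python
import statistics as st

# Hypothetical data: times a target behavior occurred per session
# in two observed groups.
group_a = [4, 6, 5, 7, 6]
group_b = [2, 3, 4, 3, 2]

# Descriptive statistics: summarize what was observed.
mean_a, mean_b = st.mean(group_a), st.mean(group_b)
sd_a, sd_b = st.stdev(group_a), st.stdev(group_b)
print(mean_a, mean_b)  # 5.6 2.8

# Inferential statistics: go beyond the sample. This t statistic would
# be compared against a t distribution to test whether the group
# difference is likely to exist in the population.
se = (sd_a**2 / len(group_a) + sd_b**2 / len(group_b)) ** 0.5
t = (mean_a - mean_b) / se
print(round(t, 2))  # 4.43
```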