Questionable research practices
Decisions researchers make when planning a study, or during and after data collection, that can lead to erroneous or misleading results; not considered unethical because there is no intent to deceive
Data dredging
Researchers collect data on a large number of variables, examine all possible relationships among them, and then focus on the relationships that are statistically significant; conducting many statistical tests increases the probability of a Type I error (concluding a relationship is real when it actually occurred by chance)
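The Type I error inflation from running many tests can be shown with a short sketch (my own illustration, not from the notes): if each of k independent tests is run at alpha = .05, the chance of at least one false positive is 1 - (1 - .05)^k.

```python
# Sketch: familywise Type I error rate when k independent tests
# are each run at alpha = .05 (illustration, not from the notes).
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(k, round(familywise_error(0.05, k), 3))
```

With 20 tests, the chance of at least one spurious "significant" result is roughly 64 percent, which is why data dredging is so misleading.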
Data snooping
periodically checking results during data collection to see if they are statistically significant and stopping data collection when statistically significant results are found
Data trimming
Selectively discarding data that do not support a study’s hypotheses; researchers may discard data when they suspect a problem, such as participants not following instructions, or when treating values as outliers
Data torturing
Improper exploitation of statistical tests; repeatedly analyzing the same data in different ways until something statistically significant emerges
Methodological tuning
Tweaking a study’s methodology until it produces statistically significant results
Accentuating the positive
Giving more weight to results that support one’s hypothesis than to results that do not support it; focusing on one’s statistically significant findings while ignoring nonsignificant findings
HARKing
Hypothesizing after results are known, presenting post hoc hypotheses in a research report as if they were, in fact, a priori hypotheses
Data forgery
Reporting the results of experiments that were never conducted
Duplicate publication
Publishing the same work in different journals; makes it appear there is more information available on the topic than there really is
Okay to both publish the work and present it at a conference
Okay to publish a technical article and then rewrite it for a nontechnical outlet to communicate with teachers and psychologists
Okay when editors ask for the work to be rewritten for a journal or edited book
Piecemeal publication
Taking the data from a single study and breaking it into pieces to increase the number of resulting publications when all the analyses test the same hypothesis; it is okay to publish two sets of data that address separate hypotheses, to reanalyze an old data set to test a new hypothesis, or to publish periodic updates on the findings of a long-term study
Plagiarism
the act of taking someone else’s work or ideas and passing them off as one’s own
Text recycling
When sections of the same text appear in more than one of an author’s own publications
Field research
Conducted in natural settings
Field experiments
Attempt to achieve a balance between control and naturalism in research by studying people’s natural behavioral responses to manipulated independent variables in natural settings
Example - willingness to help someone who collapsed on a New York subway
Natural experiment
Allows researchers to study an IV that could not be manipulated for ethical reasons; these are correlational studies because the researcher doesn’t manipulate the IV, cannot randomly assign research participants to conditions, and has little control over extraneous variables
Example - compared children born at 26 weeks’ gestation with children carried to full term on cognitive tests
Quasi-experiment
The researcher attempts to achieve naturalism by manipulating an IV in a natural setting, using existing groups of people as the experimental and control groups; allows manipulation of the IV but permits only a limited form of random assignment (existing groups, rather than individuals, are assigned to conditions)
Nonequivalent control group design
The researcher studies two (or more) groups of people; members of one group receive the experimental condition and the other serves as the control group. The groups are not equivalent because members weren’t randomly assigned, so pretesting is done to ensure the experimental and control groups are similar on the DV before the IV is introduced
Example - a reading intervention for sixth graders: without a pretest we don’t know where the groups started. If one group was already much better at reading, we couldn’t tell whether higher posttest scores in the helped group reflect the intervention or that head start; with a pretest we can check that the groups scored similarly beforehand
Nested analysis of variance design
A statistical method that separates the variance in the DV due to the effect of the IV from the variance due to group membership (such as attending a particular school)
Biased selection
Groups are not equivalent on individual characteristics, so those characteristics are confounded with the IV
Focal local controls
Ensures that the control and treatment groups are as similar as possible
Example - get another group of children from a related school
Problems with field research
Manipulating the IV
Naturally occurring manipulation (uncontrolled intensity, duration, location, time)
Example - COVID-19 where the before and after is a naturally occurring manipulation
Operational definitions
Usually behavioral and assessed during observation
Extraneous variables
Can’t control these variables
Have to consider the other potential factors and how they could impact a research study
Less control over research participants
Unrepresentative samples
Convenience samples
A lack of random assignment
Individual differences not balanced across groups
Accosting
Selecting a person as the target of an intervention. Example - an actual pizza place: customers were told delivery would take a set amount of time, the pizza arrived early or late, and the delivery person attributed the timing either to being a fast driver or to the amount of traffic
Experiment - IVs: timing of the delivery (early or late) and the stated reason (driver’s ability or traffic); DV: tip amount. Tips were largest when the pizza was early and the timing was attributed to the driver’s ability, and lowest when it was late and attributed to the driver’s ability
Single-case research design
Intensive study of a single person, group, organization, or culture. Most common are in behavior therapy, behavior modification, and applied behavior analysis
Case study research
An in-depth, usually long-term, usually unconstrained examination of a single case for either descriptive or hypothesis-testing purposes. Cases are selected based on validity (heterogeneity among cases and access or opportunity to collect data) and units of analysis
Units of analysis
Level(s) of aggregation at which to collect data, specific to case studies but this can be applied more generally too.
Example
Medical student stress
Levels include class, student, or both
Follow a class’s stress throughout their time in school
Or look at an individual student’s stress levels, or those of a few students
For data collection in a single-case research, research must…
Plan carefully (formulate plan before data collection for DVs, operational definitions, and sources of information), search for disconfirming evidence, and maintain a chain of evidence
Single-case experiment
Like a case study, except the experimenter exerts more control over the research situation: obtains a baseline, manipulates the IV, controls for extraneous variables, and assesses the DV continuously
A-B Design
Also called the baseline design. Includes assessment of a behavior at baseline and then introduction of the independent variable. Used when removal of the treatment could cause harm.
A-B-C-B design
Assesses the behavior over a baseline period, introduces the treatment, introduces a comparison condition, and then reinstates the original treatment.
Example - baseline of blood alcohol tests twice a week for three weeks; contingent reinforcement - a reward for a zero BAC; noncontingent reinforcement (comparison) - a reward regardless of BAC; then back to contingent reinforcement
A-B-A design
Also called the reversal design. Assesses behavior over a baseline period, introduces the treatment, and removes the treatment.
Example
A child’s degree of distress 15 minutes after chemotherapy served as the baseline
Treatment - the child played video games after receiving chemotherapy
Distress (e.g., chemotherapy side effects) was measured across 13 chemotherapy sessions
Evidence of an effect in single-case experiments
Magnitude, immediacy, and continuation during long-term follow-up; examine graphs of the data.
Why is a stable baseline important for single-case experimental research?
Helps us draw valid conclusions. If the baseline is unstable, it is less obvious whether the treatment had an effect. Researchers sometimes wait for baselines to stabilize before introducing the intervention.
Trend (in baseline data)
Baseline data that increase or decrease over time can make it difficult to evaluate the effectiveness of the treatment. A baseline slope that continues into the treatment period is a problem because it is unclear whether the treatment had an effect or whether the preexisting trend simply continued and the treatment actually had no impact.
Variability (of baseline data)
Baseline data with high variability can make it difficult to evaluate the effectiveness of the treatment
Visual data analysis
The researcher plots a graph of the results and examines it to determine whether the IV had an effect on the DV, looking at magnitude, immediacy, continuation at follow-up, and return of the behavior to near-baseline levels when the treatment is withdrawn
Qualitative research
A research method that uses interviews, participant observation, and/or document analysis to find meaning in words and texts. Takes a social constructivist perspective and takes researchers’ biases into account when interpreting and disseminating findings. Its purpose and goals center less on prediction and more on explanation, description, and exploration.
Thick description (qualitative research characteristic)
Make note of and analyze rich details about scenes. Piece together these details to create a holistic understanding
Example - an anthropologist’s field notes, which try to capture as much as they possibly can
Observations for individuals, multiple participants, how they interact, other things going on in the setting
Bricolage (qualitative research characteristic)
Gathering multiple perspectives and using multiple forms of data to create a meaningful story. All the data, whatever form they take, will be qualitative rather than numerical
Naturalistic (qualitative research characteristic)
Examine naturally occurring events in everyday settings
Narrative approach
Examines a single individual or a few individuals whose stories are used to illuminate larger social issues; one or more people tell you their stories.
Phenomenological approach
Highlights several individuals’ lived experiences and what they do and don’t have in common
Example
Two college students inhabit the same role, but their day-to-day realities vary
Student A - navigate interactions with a roommate
Student B - balancing school with raising a family
Grounded theory
Strives to generate a new theory of a social process that is grounded or stems from the data
Theoretical saturation
Locating and interviewing participants until new data cease to spark original theoretical ideas; reaching saturation is more important in qualitative research than sample size
Ethnography
Immersive study of a group, community, and/or social world. Field notes may be a more common data collection tool. Think anthropology. May also have gatekeepers.
Gatekeeper
Grants or denies permission to enter or conduct research in a specific setting
Example
Get permission from tribal leaders to work with individual tribe members
Autoethnography
Connect the analysis of one’s own identity, culture, feelings, and values to larger social issues
Non-probability sample
The probability of any person being chosen for participation is unknown
Maximum variation sampling
Selecting individuals and/or sites that are purposely different from each other; have a more diverse sample
Snowball sampling
People who agree to participate nominate others who they think might also be willing to participate
Highly-structured interview
Every participant is asked the same questions in the same order
Semi-structured interview
Have a list of questions that they will try to cover, might ask in different orders based on the conversation flow, ask follow up questions
Low-structured interview
Try to cover general themes but the interviewer might not have a concrete list of questions, let the interview go wherever the participant leads it
Types of interview questions
Introductory - describe an occasion when…
Probing - give an example of…
Specifying - walk through step-by-step
Direct - was the experience positive or negative to you?
Interpretation - am I understanding you correctly that…
Researcher characteristics (qualitative)
Consider whether and how researcher characteristics may influence the participants
It is advisable for the researcher’s basic characteristics, such as race, age, and sexual orientation, to match the respondent’s
Transcription
Creating a verbatim record of the conversation; typically takes about 3 hours for every 1 hour of interview.
Memoing
Part of the transcription process (and something that can be lost when AI does the transcription work): writing down ideas as you work through the transcript, noticing patterns and themes across people
Data analysis (qualitative research)
Qualitative data are examined early in the collection process, and new data are gathered based on those examinations to help flesh out potential themes and patterns. What would be questionably ethical data snooping in quantitative research is a normal part of the analysis process in qualitative research.
Coding
Reducing the data into meaningful segments and assigning names for the segments
Open coding
Coding is unrestricted, and the codes are phrases explicitly mentioned by participants. As you learn more, you might go back and recode earlier participants
Axial coding
Done after you have a good idea of what codes you are using: open codes are combined into broader themes or subthemes, and comparisons are made between them
Themes
Broad units of information that consist of several codes aggregated to form a common idea
Representation (qualitative research)
The researcher is conscious of the biases, values, and experiences they bring to qualitative research; calls for self-disclosure on the part of the researcher, such as a positionality statement
Encoding
How a piece is written: word choices, whether specific jargon is used, and the extent to which methods are addressed. Qualitative reports often use wording like “Procedures” rather than “methods” or “Findings” rather than “results”
Program monitoring
Also called process evaluation. Continuing assessment of how well the program is being implemented while it is being carried out.
Formative evaluation
Used to monitor the process or development of a program. Example - evaluation during the program, could revise materials and alter procedures for recruitment after evaluation
Summative evaluation
Used to assess the overall effectiveness of the program. Example - completed at end of the program to answer the question of whether the program had the expected impact on health-related behaviors
Target population
The particular group of people, such as adolescents or substance abusers, the intervention is intended to reach
Program implementation failure sources
Lack of specific criteria and procedures for program implementation
Insufficiently trained staff members
Inadequate supervision of staff provides opportunity for treatments to drift away from intended course
Programs must be tailored to clients’ needs to some degree, but not so tailored that the treatment changes from its intended form
A novel treatment program implemented by staff who do not believe in its effectiveness, who are used to doing things differently, or who feel threatened by the new procedure can lead to resistance or even sabotage
Client resistance in program implementation
Clients may be suspicious of the goals of the program or uncertain about its effects
Inaccessibility - lack of transportation, restricted operating hours, difficulty locating the service site, or a site seen as dangerous
Threats to client dignity - demeaning procedures, intrusive questions, rude treatment
Failure to consider a client’s culture - values, lifestyle, treatment needs, and language difficulties
Unusable services - printed materials with a reading level too high for clients, written in a language in which clients are not fluent, or printed too small for visually impaired clients
Reduce resistance by ensuring all viewpoints are considered, such as by using focus groups, and by explaining any unchangeable aspects of the program that concerned focus group members
Unintended effects (program implementation)
Side effects of the program
Unintended effects can exacerbate the problem a program was intended to alleviate
Example - a program to reduce problem behaviors such as adolescent substance use was effective in the short term, but in the long run those in the program group showed more substance use than those in the control group
Some unintended effects can also be positive but are less likely to be reported because they are not considered problems
Criteria for evaluating impact of the program
Degree of change
Change for each of the goals, means and effect sizes
Importance of change
Goal attainment can be defined either in terms of meeting some preset criterion of improvement or relative to the level at which an outcome is found in a criterion population
Example - 0 panic attacks over a 2 week period
Number of goals achieved
Durability of the outcomes
Cost of the program
cost-efficiency analysis
Acceptability of the program
For clients, staff, etc
The ideal strategy for evaluation research is _____ but ______ are the most commonly used evaluation research strategy
true experiment, quasi-experiment
Threats to internal validity in evaluation research
Composition of control or comparison group
Treatment diffusion
Staff compensates control group with some benefits of the treatment group
People who are aware they are in the control group might feel a rivalry with the treatment group
Resentful demoralization
Local history events
Treatment diffusion
Members of the control group learn about the treatment from members of the treatment group and try to apply the treatment to themselves
Example - control-group students learned about Students Against Drunk Driving (SADD) and made their own SADD group
Resentful demoralization
Members of the control group learn they are being deprived of a program that could benefit them and so reduce any efforts they might have been making to solve their problem themselves
Pre-experimental design
When a pretest-posttest design doesn’t include a control group, it is called a pre-experimental design. Sometimes a no-treatment control group isn’t possible. These designs should be avoided when they can be - studies have found that they overestimate treatment effects compared with true experiments and quasi-experiments
Meta-analysis
Results of a set of studies that test the same hypotheses are statistically combined
Can be used to find the average effect for a treatment
Can also estimate effects of possible moderators and what conditions in the program are more or less effective
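As a sketch of how the “average effect” in a meta-analysis is computed, here is a minimal fixed-effect combination using inverse-variance weighting, a standard approach; the three studies, their effect sizes, and variances are hypothetical, not from the notes.

```python
# Sketch (hypothetical data): fixed-effect meta-analytic average using
# inverse-variance weights - more precise studies count for more.
def weighted_mean_effect(effects, variances):
    """Inverse-variance weighted mean of a set of effect sizes."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: effect sizes (d) and their sampling variances.
d = [0.30, 0.50, 0.20]
v = [0.04, 0.02, 0.08]
print(round(weighted_mean_effect(d, v), 3))
```

Moderator effects are then often examined by computing such averages separately for subgroups of studies (e.g., programs with trained vs. untrained staff).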
Interpretation of null results (evaluation programs)
Null results may stem from program failure (a true null), such as implementation failure, or from poor research, such as sampling error or unreliable measures. A null result may also occur between two treatment groups; if they produce the same outcome, we can consider using the less expensive or less time-consuming treatment.
Cost-benefit analysis
Compare the dollar cost of operating a program to the benefits that occur when objectives are achieved. Assumption that all outcomes can be expressed in monetary terms
Target population
The group of people we want our research to apply to
Study population
People who meet our operational definition of the target population
Research sample
The people from the study population from whom we collect our data
Probability sampling
Every member of the study population has a known probability of being selected for the research sample
Sampling frame
List of all the people in the study population, such as a roster of all the students attending a particular college or university
Simple random sampling
Researcher uses a table of random numbers to select the participants, process continues until the desired number of participants is acquired
Stratified random sampling
Sampling frame is arranged in terms of the variables used to create the sample, such as with a quota matrix
Quota matrix
Each person in the sampling frame is categorized by gender, ethnicity, and class and is assigned to the appropriate cell; researchers then sample randomly from each cell in proportion to its representation in the population
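A minimal sketch of the quota-matrix idea - sampling randomly from each cell in proportion to its share of the population. The frame here is hypothetical (60 seniors and 40 juniors, stratified on one variable for simplicity), and the function names are my own.

```python
import random

# Sketch (hypothetical frame): proportional stratified random sampling.
# Each person is assigned to a stratum cell; we then sample from each
# cell in proportion to its representation in the population.
def stratified_sample(frame, stratum_of, n, seed=0):
    rng = random.Random(seed)
    strata = {}
    for person in frame:
        strata.setdefault(stratum_of(person), []).append(person)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(frame))  # proportional allocation
        sample.extend(rng.sample(members, k))
    return sample

frame = [("senior", i) for i in range(60)] + [("junior", i) for i in range(40)]
sample = stratified_sample(frame, lambda p: p[0], n=10)
print(len(sample))  # 6 seniors + 4 juniors = 10
```

A full quota matrix would cross several variables (gender x ethnicity x class), but each cell is handled the same way.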
Systematic sampling
Start with a sampling frame and select every nth name, where n is the sampling interval (the size of the frame divided by the desired sample size)
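A quick sketch of systematic sampling, assuming the interval n is the frame size divided by the desired sample size and the starting point is picked at random within the first interval; the name list is hypothetical.

```python
import random

# Sketch: systematic sampling - random start, then every nth name,
# where n is the sampling interval (frame size / desired sample size).
def systematic_sample(frame, sample_size, seed=0):
    n = len(frame) // sample_size               # sampling interval
    start = random.Random(seed).randrange(n)    # random start within first interval
    return frame[start::n][:sample_size]

names = [f"person_{i}" for i in range(100)]
print(len(systematic_sample(names, 10)))  # 10
```

One caution worth knowing: if the frame has a repeating pattern that matches the interval, systematic sampling can produce a biased sample.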
Cluster sampling
Identify groups or clusters of people who meet the definition of the study population; take a random sample of the clusters and use all members of the sampled clusters as research participants
Nonprobability sampling
The probability of any person’s being chosen is unknown
Convenience sample
Consists of people from whom the researcher finds it easy to collect data
Quota samples
Convenience samples are stratified using a quota matrix
Purposive sampling
Researchers use their judgment to select the membership of the sample based on the research goals, frequently used in case study research
Snowball sampling
People who are initially recruited for a study by convenience or purposive sampling nominate acquaintances they think might be willing to participate in the research
Statistical power
1 - beta, the probability of not making a Type II error; adequate statistical power is needed to avoid false negative results. Depends on factors such as the alpha level, the size of the effect of the IV on the DV, and the size of the research sample
Type I error
The alpha level, usually set at .05, represents the probability of saying there is a relationship when one actually doesn’t exist
Type II error
Beta represents the probability of saying there isn’t a relationship when there actually is one
To determine your sample size, you should be thinking about….?
What effect size are you trying to detect
What alpha level will you use
Will you use a one-tailed or a two-tailed statistical test
What level of power do you want
Critical effect size
Target effect size for your study.
Example - consider the smallest effect you consider important to your theory, such as a critical effect size of r = .25; anything smaller than that is treated as equivalent to a correlation of 0
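One common way to turn a critical effect size into a required sample size is the Fisher z approximation for correlations; this sketch is my own illustration (not from the notes) and plugs in the standard z values for a two-tailed alpha of .05 and power of .80.

```python
import math

# Sketch (standard Fisher-z approximation): approximate sample size
# needed to detect a correlation r at a given alpha and power.
def n_for_correlation(r, z_alpha=1.96, z_power=0.84):
    # z_alpha = 1.96 -> two-tailed alpha = .05; z_power = 0.84 -> power = .80
    z_r = math.atanh(r)  # Fisher z of the critical effect size
    return math.ceil(((z_alpha + z_power) / z_r) ** 2 + 3)

print(n_for_correlation(0.25))
```

So detecting a critical effect of r = .25 with 80% power takes a sample of roughly 120-125 people; smaller critical effect sizes require much larger samples.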