Reliability
the consistency or stability of a measure.
True score
the real, “true” value on a given variable.
Measurement error
the difference between a true score and the measured score.
Pearson product-moment correlation coefficient
a common method of calculating a correlation coefficient to tell how strongly two variables are related to each other
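A minimal sketch of computing r, assuming SciPy is available (the scores are made up for illustration):
```python
# Minimal sketch: Pearson correlation between two sets of scores.
# Assumes SciPy is installed; variable names and data are illustrative only.
from scipy.stats import pearsonr

test_scores = [12, 15, 11, 18, 14, 16, 13, 17]
retest_scores = [13, 14, 12, 17, 15, 16, 12, 18]

r, p_value = pearsonr(test_scores, retest_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # r near +1 or -1 indicates a strong relationship
```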
Test-retest reliability
is assessed by measuring the same individuals at two points in time and comparing results.
A high correlation between test and retest indicates reliability.
Alternate forms reliability
uses two different forms of the same test instead of repeating the identical test
Internal consistency reliability
the assessment of reliability using responses at only one point in time
Because all items measure the same variable, they should yield similar or consistent results
split half reliability
the correlation of the total score on one half of the test with the total score on the other half
High correlation indicates that the questions on the test are measuring the same thing
Cronbach's alpha
the average of all possible split-half reliability coefficients
Item total correlations
correlations of each item score with the total score based on all items
Items with low correlations may be measuring a different variable and can be eliminated
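A minimal sketch of Cronbach's alpha and item-total correlations, assuming NumPy and made-up item data:
```python
# Minimal sketch: Cronbach's alpha and item-total correlations for a small
# respondent-by-item data set. NumPy is assumed; the data are illustrative.
import numpy as np

# rows = respondents, columns = items (e.g., a 4-item scale)
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_scores = items.sum(axis=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_scores.var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")

# Item-total correlations: each item's scores correlated with the total score.
for i in range(k):
    r = np.corrcoef(items[:, i], total_scores)[0, 1]
    print(f"item {i + 1}: r = {r:.2f}")
```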
Construct validity
the extent to which a measure actually measures what it is intended to measure
Face validity
the content of the measure appears to reflect the construct being measured
Predictive validity
scores on the measure predict behavior on a criterion measured at a future time
Concurrent validity
scores on the measure are related to a criterion measured at the same time
Convergent validity
scores on the measure are related to other measures of the same construct
Discriminant validity
scores on the measure are not related to other measures that are theoretically different
Reactivity
a potential problem with measuring behavior: awareness of being measured can change an individual's behavior
Measures of behavior vary in how reactive they are
Interrater reliability
is the correlation between the observations of different raters.
A high correlation indicates raters agree in their ratings.
A commonly used indicator is Cohen’s kappa
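A minimal sketch of Cohen's kappa for two raters, assuming scikit-learn is available (the category codes are illustrative):
```python
# Minimal sketch: agreement between two raters who coded the same 10
# observations into categories. Assumes scikit-learn; data are illustrative.
from sklearn.metrics import cohen_kappa_score

rater_a = ["aggressive", "neutral", "neutral", "helpful", "aggressive",
           "helpful", "neutral", "neutral", "aggressive", "helpful"]
rater_b = ["aggressive", "neutral", "helpful", "helpful", "aggressive",
           "helpful", "neutral", "aggressive", "aggressive", "helpful"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # corrects raw agreement for chance agreement
```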
Nominal scales
have no numerical or quantitative properties; categories or groups simply differ from one another.
Ordinal scales
allow us to order the levels of the variables under study
Interval scales
are numeric scales in which the intervals between numbers on the scale are equal in size
Ratio scales
have an absolute zero point that indicates the absence of the variable being measured.
Quantitative research
Tends to focus on specific behaviors that can be easily quantified
Generally uses large samples
Bases conclusions on statistical analysis of data
Qualitative Research
Focuses on behavioral and natural settings
Collects data about small groups and/or in limited settings
Expresses data in non-numerical terms
Bases conclusions on interpretations drawn by the investigator
Naturalistic Observation
A descriptive method in which observations are made in a natural social setting
It is also called field observation
Researchers study people in social and organizational settings, or animals in their natural habitats
Participant observation
occurs when a researcher takes an active, insider role in the setting he or she is studying.
This can yield data not available to nonparticipant observers
Limits on Naturalistic Observation
Naturalistic observation is most useful when investigating complex social settings
It is less useful for studying well-defined hypotheses under precisely specified conditions
Systematic Observation
refers to the careful observation of one or more specific behaviors in a particular setting
Observations are quantifiable
Coding system
a set of rules used to categorize observations
Decide which behaviors are of interest
Choose a setting in which the behaviors can be observed
Methodological Issues
Equipment must be selected
Paper and pencil, or video and audio recording equipment
case study
is an observational method that provides a detailed description of an individual
Psychobiography
a type of case study in which a researcher applies psychological theory to explain the life of an individual
Archival research
involves using previously compiled information to answer research questions
Three types of Archival research data
Statistical records
Survey archives
Written and mass communication records
Content Analysis
The systematic analysis of existing documents
Like systematic observation, it requires researchers to devise coding systems that raters use to quantify the information.
response set
is a tendency to respond to survey questions from a particular perspective rather than answering questions directly
A bias in the way you answer the questions
e.g., strongly agreeing with everything
or not fully answering the questions
social desirability
the tendency to answer questions in the way that reflects most favorably on the respondent.
Three general types of survey question
Facts and demographics
Age, gender, education
Behaviors
Attitudes and beliefs
Yea-saying
is the tendency to agree consistently
Nay-saying
is the tendency to disagree consistently
Closed-ended questions
a limited number of response alternatives are given
Open-ended questions
respondents are free to answer any way they like
This can yield valuable insights into what people are thinking
More time is required to categorize responses
Rating Scales
assign scores along a numerical dimension and are very common in many areas of research
graphic rating scale
requires a mark along a continuous line
Semantic differential scale
a measure of the meaning of concepts in which respondents rate them on a series of bipolar adjectives
Nonverbal scales
appropriate for populations such as children who may have trouble understanding other scales
Interviewer bias
the interviewer can inadvertently show approval or disapproval of certain answers
A focus group
is an interview with a group of about 6 to 10 individuals brought together for 2 to 3 hours
Sample
the members of a population selected to participate in a research investigation.
Population
all individuals of interest to the researcher
Confidence interval
the interval around the obtained sample value within which, at a given level of confidence (e.g., 95%), the true population value is expected to lie.
Sampling error
the potential deviation from the true population value of the value obtained using sample data.
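A minimal sketch of a 95% confidence interval for a sample mean, assuming SciPy is available (the scores are illustrative):
```python
# Minimal sketch: a 95% confidence interval for a sample mean, using the
# t distribution. SciPy is assumed; the scores are made up.
import numpy as np
from scipy import stats

sample = np.array([72, 68, 75, 71, 69, 74, 70, 73, 76, 67], dtype=float)

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```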
Probability sampling
each member of the population has a specifiable probability (chance) of being chosen
Simple random sampling
every member of the population has an equal probability of being selected.
Stratified random sampling
the population is divided into subgroups (strata), and random samples are taken from each stratum.
Cluster sampling
existing groups or geographic areas, called clusters, are identified; samples are taken from those clusters.
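A minimal sketch contrasting simple random and stratified random sampling, using only Python's standard library (the population and strata are made up):
```python
# Minimal sketch: simple random vs. stratified random sampling.
# Standard library only; the population and strata are illustrative.
import random

population = [{"id": i, "year": random.choice(["freshman", "senior"])}
              for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=100)

# Stratified random sampling: sample separately from each stratum.
stratified_sample = []
for year in ("freshman", "senior"):
    stratum = [p for p in population if p["year"] == year]
    stratified_sample += random.sample(stratum, k=50)

print(len(simple_sample), len(stratified_sample))
```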
Nonprobability sampling
the probability (chance) of any particular member of the population being chosen is unknown.
Convenience sampling
“take-them-where-you-find-them” sampling.
Purposive sampling
the sample meets a predetermined criterion.
Quota sampling
the sample reflects the numerical composition of various subgroups in the population.
Confounding Variable
A variable that varies along with the independent variable
Good experimental design requires eliminating such variables, because they offer alternative explanations for the results
Posttest Only Design
A researcher using this design must
Obtain two equivalent groups of participants
Introduce the independent variable
Measure the effect of the independent variable on the dependent variable
Selection differences
systematic differences between the participants assigned to the groups; these must be avoided
Independent groups design
participants are randomly assigned to the various conditions so that each participant is in only one group
No siblings, twins, or matched pairs
Repeated measures design
all participants are in all conditions
Also called a within subjects design; comparisons are made with the same group of participants
Random Assignment
The decision about which group each participant is assigned to is completely random and not controlled by the researcher
Prevents systematic biases
Yields equivalent groups
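A minimal sketch of random assignment by shuffling, using Python's standard library (participant IDs and condition names are illustrative):
```python
# Minimal sketch: randomly assigning a participant list to two conditions
# by shuffling. Participant IDs and condition names are made up.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)  # random order, not chosen by the researcher

half = len(participants) // 2
groups = {"experimental": participants[:half],
          "control": participants[half:]}
print(groups)
```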
Repeated measures Design Disadvantages: Order effect
the order of presenting the treatments affects the dependent variable
Repeated measures Design Disadvantages: Practice Effect
performance improves due to the practice gained from previous tasks
Repeated measures Design Disadvantages: Fatigue effect
performance deteriorates because of fatigue, boredom, or distraction
Repeated measures Design Disadvantages: Carryover effect
the effects of the first treatment carry over to influence the response to the second treatment.
Solomon four-group design
the experimental and control groups are studied with and without a pretest.
Independent groups design
participants are randomly assigned to the various conditions so that each participates in only one group
between-subjects design
comparisons are made between different groups of participants.
Repeated measures design
all participants are in all conditions
within-subjects design
comparisons are made within the same group of participants.
Matched pairs design
participants are matched based on their similarity on a measure of either the dependent variable or something that is strongly related to the dependent variable.
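A minimal sketch of forming matched pairs on a pretest score and randomly assigning within each pair (all names and scores are illustrative):
```python
# Minimal sketch: rank participants on a pretest related to the dependent
# variable, pair adjacent ranks, then randomly assign within each pair.
import random

pretest = {"P01": 85, "P02": 62, "P03": 78, "P04": 90,
           "P05": 64, "P06": 80, "P07": 88, "P08": 60}

ranked = sorted(pretest, key=pretest.get, reverse=True)
assignments = {"treatment": [], "control": []}
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)  # coin flip within each matched pair
    assignments["treatment"].append(pair[0])
    assignments["control"].append(pair[1])
print(assignments)
```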
Straightforward manipulations
use instructions and other stimuli to manipulate an independent variable
Staged manipulations
try to create a psychological state in participants or simulate a real world situation, often with the help of a confederate who appears to be a participant
Strength of manipulation
the potential amount of impact of the independent variable on the dependent variable.
Self-reports
measures that require participants to describe themselves.
Behavioral measures
direct observations of behaviors.
Physiological measures
recordings of responses of the body.
Galvanic skin response (GSR)
uses the electrical conductance of the skin.
Electromyogram (EMG)
uses the electrical activity of muscles.
Electroencephalogram (EEG)
measures electrical activity in the brain.
Magnetic resonance imaging (MRI)
scans body structures, including the brain, to create images.
Functional MRI (fMRI)
measures blood flow in areas of the brain to create an image of brain activity
Ceiling effect
a problem where the independent variable appears to have no effect on the dependent measure because participants quickly reach the maximum performance level
Floor effect
a problem that occurs when the task is so difficult that hardly anyone can perform well.
Demand characteristics
features of an experiment that could inform participants of the purpose of the study.
Filler items
unrelated items on a questionnaire, used to disguise a dependent variable
placebo group
participants in a drug study who receive an inert substance or a sham procedure instead of the experimental drug.
Expectancy effects, or experimenter bias
The impact of an experimenter’s bias on the outcome of a research study.
The experimenter might unintentionally treat participants differently in the various conditions of the study.
Single-blind experiment
the participant is unaware of whether a placebo or the actual drug is being administered.
Double-blind experiment
neither the participant nor the experimenter knows whether the placebo or actual treatment is being given.
Pilot study
the researcher does a trial run with a small number of participants.
Manipulation check
an attempt to directly measure whether the independent variable manipulation has the intended effect on the participants