Science
systematic process for generating knowledge about the world. Has three important aspects: goals to be achieved, key values to be enacted, and perspectives on the best way to go about generating knowledge. Goals include description, understanding, prediction, and control of behavior.
Epistemology
a set of beliefs about the nature of science (and of knowledge in general)
Logical positivists
dominant epistemological position in modern Western science; holds that knowledge is best generated through empirical observation, tightly controlled experiments, and logical analysis of data
Humanist perspective
perspective usually held in the social sciences; holds that science should produce knowledge that serves people, not just knowledge for its own sake; that people are best understood when studied in their natural environments rather than when isolated in laboratories; and that a full understanding of people comes from empathy and intuition rather than logical analysis
Social constructionists
Believe that people's understanding of the world is tied to a particular time and place and is influenced by the perceiver's social experiences; the scientific process is therefore shaped by the researcher's values, expectations, and social world. Often take a more humanistic approach and emphasize the intersectionality of identities rather than studying gender, race, and sexuality in isolation.
Theory
set of statements about relationships between variables. Most of the statements have either been verified by research or are potentially verifiable
Assumptions
beliefs that are taken as given and are usually not subject to empirical testing
Variable
any thing or concept that can take on more than one value
Independent variable
variable that is manipulated
Dependent variable
variable whose value is caused by another variable; the proposed effect, which is measured rather than manipulated
Extraneous variable
other factors in a research situation that provide alternative explanations for an observed relationship
Mediating variable
Comes between two variables in a causal chain (X leads to the mediator, which leads to Y); addresses what might reduce or increase the direct relationship between X and Y
Moderating variable
changes or limits the direct relationship between an IV and a DV; when a moderator is operating, the causal relationship between the proposed cause X and the proposed effect Y depends on a third variable Z
Hypothetical construct
Terms invented (that is, constructed) to refer to variables that cannot be directly observed (and may or may not really exist), but are useful because we can attribute observable behaviors to them
Operational definition
concrete representations of hypothetical constructs that are developed to be used in research
Unidimensional
simple constructs of only a single component
Ex - Fiedler's view of leadership style: a single dimension with task orientation at one end and relationship orientation at the other; a person cannot score high on both, only high on relationship orientation or high on task orientation
Multidimensional
complex constructs made up of two or more independent components
Ex - another theory of leadership views task and relationship orientation as independent of one another; someone can score high on both, low on both, or high on one and low on the other
Multifaceted
Constructs whose components are correlated rather than independent of one another; can lead to problems in interpretation because we might treat the construct as unidimensional and miss differences among its components
Ex - Type A personality and heart disease: the parts of the Type A construct (competitive, hostile, impatient, job involvement) are correlated with one another, but if we combine these four parts into a single Type A score we miss that their correlations with heart disease differ (ranging from .03 to .20, very different!)
Propositions
the statements about relationships among hypothetical constructs that make up a theory
Causal - "one construct causes another" ex - goal acceptance causes work motivation
Noncausal - "two constructs are correlated, but not that one causes the other"
Evaluation research
conducted to gauge the success of psychological or social interventions
Action research
combines basic, applied, and evaluation research; systematic integration of theory, application, and evaluation
Basic research
goal is to generate knowledge regardless of how it will be used; more theoretically focused, often conducted in laboratory settings with experimental designs; creates a base of knowledge that can later be drawn on by applied research
Applied research
conducted to answer a specific question or solve a specific problem; may draw on theory but does not have to; often carried out in more naturalistic settings
Quantitative data
numerical information, such as test scores or the neural activation exhibited in response to stimuli
Qualitative data
nonnumerical information, such as descriptions of behavior or the content of people's responses to interview questions
Mixed method
Using both qualitative and quantitative methodologies in a research study
Experiment
research design that logical positivists view as the superior way to do research; examines cause-and-effect relationships
Three criteria for experiments
covariation of proposed cause and effect, time precedence of the proposed cause, absence of alternative explanations for the effect
Experimental condition
group where participants are given the intervention or manipulation
Control group
comparison group; does not receive the intervention or receives a placebo
Correlational research strategy
looks for relationships between variables that are consistent across a large number of cases; Also called the passive research strategy because you only observe and measure without manipulation
reverse causation
direction of causality might be the reverse of what we hypothesized
Reciprocal relationship
bi-directional relationship
Third-variable problem
a third variable (a confound) is the cause of the observed relationship between the two variables
Case study
in-depth, usually long term, examination of a single instance of a phenomenon, for either descriptive or hypothesis-testing purposes
Researcher bias
researcher expectations impacting data collection and results
Nomothetic approach
attempts to formulate general principles of behavior that will apply to most people most of the time, uses experimental and correlational research to study the average behavior of large groups of people
Idiographic approach
studies behavior of individuals, case study is an example, addresses the needs of the practitioner, who is more interested in how a particular client behaves than in how people behave in general.
Developmental research
to learn how people change as they move through the lifespan from birth, through childhood, adolescence, and adulthood, into old age
Cross-sectional
Compare groups of people who are different ages at the same time
Cohort effects
effects of differences in experience due to time of birth, can lead to ambiguity in interpreting results of cross-sectional research
Longitudinal
describes research that measures a trait in a particular group of subjects over a long period of time
Attrition
participants drop out of the study over time, can be random or nonrandom (nonrandom creates a biased sample)
Test reactivity
occurs when being asked a question about a behavior affects behavior.
Ex - In a study about dating, half the participants reported that the questions made them think about parts of their relationship that had been unknown or unexplored, and this changed their behavior (some grew closer in the relationship, others broke up)
Test sensitization
Occurs when participants' scores on a test are affected by having taken the test earlier.
Ex - participants become familiar with the questions over repeated testing and answer them similarly each time
History effects
occur when events external to the research affect the behavior being studied so you cannot tell whether changes in the behavior found from one assessment to another are due to age changes or to the events.
Ex - a longitudinal study of drug use finds that use decreases as participants get older; multiple explanations are possible - an age effect, more drug enforcement, higher prices, less access, etc.
Cohort-sequential
combines cross-sectional and longitudinal approaches by starting a new longitudinal cohort each time an assessment is made
Target population
the whole group you want to study or describe
Participant sample
the subset of the target population that is actually studied and used to estimate the behavior of the target population
Generalizability
process of applying sample-based findings to a target population
Convenience sample
choosing individuals who are easiest to reach, whoever happens to be in the setting at the time the research is completed
Theory map
includes information such as the history of the theory, information about why the theory is important, evidence supporting or refuting the theory, and (if applicable) similar and competing theories.
Boundary condition
Conditions under which an effect operates; for example, do effects found in laboratory settings hold up in natural settings?
Literature review
Examining sources relevant to your research question while keeping detailed notes; its purposes are to provide context for the research, avoid duplication of effort, and identify problems in conducting the research
Primary source
Original research report or presentation of a theory written by the people who conducted the research or developed the theory
Secondary source
summarizes information from primary sources, can be inaccurate
Research hypothesis
States an expectation about the relationship between two variables; the expectation is derived from and answers the research question and is grounded in prior theory and research
Statistical hypothesis
transforms research hypothesis into a statement about the expected result of a statistical test
Replication research
repeating research studies to see if equivalent results can be obtained again
Direct replication
Recreates a study as closely as possible
Conceptual replication
Researchers test the same hypothesis or concept as the original research, but use a different setting, different set of operational definitions, or a different participant population
Replication and extension
Replicates an earlier study and adds independent and dependent variables or makes other additions to the original research that expand its scope
Manifest variables
variables we can directly observe
Reliability
degree of consistency in a measure; gives the same result every time it is applied to the same person or object, barring changes in the variable being measured
Validity
degree of accuracy in a measure; assesses the trait it is supposed to, assesses all aspects of the trait, and assesses only that specific trait
Observed score
score we can see, made up of the true score and the measurement error (random and systematic error)
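In classical test theory notation (a standard sketch, not part of the card itself): observed score = true score + error, or X = T + E, where E combines the systematic and random components (E = E_systematic + E_random).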
True score
the actual degree of the trait that characterizes the person being assessed
Measurement error
other things that we did not want to measure, but did anyway because of the imperfections of our measuring instrument
Random error
error that fluctuates each time a measurement is made, sometimes high and sometimes low; the observed score fluctuates as a result, producing instability of measurement and lower reliability estimates
example - the person being distracted during the research, the person's mental or physical state (such as mood), equipment failures
Systematic (nonrandom) error
error that is present in every measurement and affects scores in a consistent way
example - poorly worded questions that elicit a different response than intended, or a measure that assesses the intended trait but also accidentally assesses another trait
Test-retest reliability
assess people's scores on a measure on one occasion, assess the same people's scores on the same measure on a later occasion, and compute the correlation between the two assessments
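A minimal sketch of this computation in Python, using made-up scores for the same five people on two occasions (the data are invented for illustration):

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same five people on the same measure,
# assessed on two occasions
time1 = [12, 18, 15, 22, 9]
time2 = [14, 17, 16, 21, 10]

# Test-retest reliability is the correlation between the two assessments
r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r:.2f}")
```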
Alternate forms reliability
administer two different forms of the same measure and correlate the scores
Interrater reliability
two raters score the same behavior or responses, and their scores are correlated
Cohen’s kappa
Provides an index of agreement between two raters that is corrected for chance agreement
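A minimal sketch of the chance-corrected agreement calculation, assuming two raters have coded the same ten observations into categories (the codes are invented for illustration):

```python
from collections import Counter

# Hypothetical category codes assigned by two raters to the same 10 observations
rater1 = ["A", "A", "B", "B", "A", "C", "B", "A", "C", "B"]
rater2 = ["A", "B", "B", "B", "A", "C", "B", "A", "C", "A"]
n = len(rater1)

# Observed agreement: proportion of observations coded identically by both raters
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected (chance) agreement: based on each rater's marginal category proportions
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))

# Cohen's kappa corrects observed agreement for agreement expected by chance
kappa = (p_o - p_e) / (1 - p_e)
print(f"kappa = {kappa:.2f}")
```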
Split-half reliability
split items into two parts and compute the correlation between the respondents’ total scores on the two parts
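A minimal sketch of the split-half computation, assuming a small matrix of item responses (rows are respondents, columns are items; the data and the odd/even split are invented for illustration, and in practice the resulting correlation is often adjusted, e.g. with the Spearman-Brown formula):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical responses: 6 respondents x 8 items
items = np.array([
    [4, 5, 4, 5, 3, 4, 5, 4],
    [2, 3, 2, 2, 1, 2, 3, 2],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [1, 2, 1, 2, 2, 1, 2, 1],
    [3, 4, 3, 4, 3, 3, 4, 4],
])

# Split the items into two halves (here, odd- vs. even-numbered items)
# and total each respondent's score on each half
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# Correlate respondents' total scores on the two halves
r, _ = pearsonr(half1, half2)
print(f"Split-half correlation: r = {r:.2f}")
```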
Cronbach’s alpha
index of internal consistency based on the pattern of correlations among all the items of a measure
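A minimal sketch of the internal-consistency calculation, assuming a small matrix of item responses (rows are respondents, columns are items; the data are invented for illustration):

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 items on a 1-5 scale
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 4, 3, 4],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores

# Cronbach's alpha rises as the items covary more strongly relative to the
# total-score variance (i.e., as internal consistency increases)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```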
Construct validity
how confident we can be that a measure actually indicates a person’s true score on a hypothetical construct or trait
Convergent validity
extent to which different types of evidence come together (converge) to provide the basis for drawing conclusions about construct validity
Discriminant validity
evidence that a measure is not assessing something it is not supposed to assess
Content validity
when the content of a measure adequately assesses all components of the construct or trait being measured; the content must be relevant (assesses only the trait it is intended to assess) and representative (covers all parts of the trait if the trait has multiple parts)
Structural validity
Dimensionality of a measure reflects the dimensionality of the construct it is measuring
Relational validity
how well scores on a measure correlate with scores on criteria that are conceptually relevant to the construct being measured; also called external validity; examines links between the measure and other constructs or measures, and the degree to which those links provide evidence of construct validity
Substantive validity
people who score differently on a construct should respond to situational variables, such as experimental manipulations, in ways predicted by the theory of the construct; concerns how people who differ on a construct react to experimental manipulations
Generalizability
evidence that a measure is equally valid across time and populations
Differential validity
a lack of generalizability across groups; the measure is more valid for assessing a construct for members of one group than for members of another group; content may be more valid for one group than another (boys vs. girls on a math test that uses baseball scenarios)
Multiple operationism
using multiple modalities to measure variables, especially for hypothetical constructs
Example - depression: negative mood and loss of interest measured by self-report, changes in eating measured with a behavioral or observational modality, and sleep measured physiologically with EEG