Research Methods

86 Terms

1

Science

systematic process for generating knowledge about the world. Has three important aspects: goals to be achieved, key values to be enacted, and perspectives on the best way to go about generating knowledge. Goals include description, understanding, prediction, and control of behavior.

2

Epistemology

a set of beliefs about the nature of science (and of knowledge in general)

3

Logical positivists

the dominant epistemological position in modern Western science; holds that knowledge is best generated through empirical observation, tightly controlled experiments, and logical analysis of data

4

Humanist perspective

a perspective common in the social sciences; holds that science should produce knowledge that serves people rather than knowledge for its own sake, that people are best understood when studied in their natural environments rather than isolated in laboratories, and that a full understanding of people comes from empathy and intuition rather than from logical analysis.

5

Social constructionists

Believe that people's understanding of the world is tied to a particular time and place and is influenced by the perceiver's social experiences. The scientific process itself is shaped by the researcher's values and expectations, which are in turn influenced by their social world. Social constructionists often take a more humanistic approach and emphasize the intersectionality of identities rather than studying gender, race, and sexuality in isolation.

6

Theory

set of statements about relationships between variables. Most of the statements have either been verified by research or are potentially verifiable

7

Assumptions

beliefs that are taken as given and are usually not subject to empirical testing

8

Variable

any thing or concept that can take on more than one value

9

Independent variable

the variable that is manipulated by the researcher; the proposed cause in a cause-and-effect relationship

10

Dependent variable

the variable that is caused (affected) by another variable; the proposed effect that the researcher measures

11

Extraneous variable

other factors in a research situation that provide alternative explanations for an observed relationship

12

Mediating variable

a variable that comes between two other variables in a causal chain (X affects the mediator, which in turn affects Y); it can account for, and so reduce or increase, the direct relationship between X and Y

13

Moderating variable

a variable that changes or limits the direct relationship between an IV and a DV; when a moderator is operating, the strength or direction of the causal relationship between the proposed cause X and the proposed effect Y depends on a third variable Z.
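
A compact way to express moderation (standard regression notation, not taken from these cards) is an interaction term: the effect of X on Y shifts with the level of the moderator Z.

```latex
% Moderation as an interaction term (illustrative sketch, hypothetical coefficients)
Y = \beta_0 + \beta_1 X + \beta_2 Z + \beta_3 (X \times Z) + \varepsilon
% A nonzero \beta_3 means the X-Y relationship depends on Z (moderation).
% Mediation, by contrast, is a causal chain: X \rightarrow M \rightarrow Y.
```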

14

Hypothetical construct

Terms invented (that is, constructed) to refer to variables that cannot be directly observed (and may or may not really exist), but are useful because we can attribute observable behaviors to them

15

Operational definition

concrete representations of hypothetical constructs that are developed to be used in research

16

Unidimensional

simple constructs that consist of only a single component

Ex - Fiedler's view of leadership style as a single dimension, with task orientation at one end and relationship orientation at the other; a person cannot be scored high on both, only high on relationship orientation or high on task orientation

17

Multidimensional

complex constructs made up of two or more independent components

Ex - another theory of leadership views task and relationship orientation as independent of one another; someone can score high or low on either orientation: high on both, low on both, or high on one and low on the other

18

Multifaceted

constructs whose components are correlated rather than independent of one another; this can lead to problems of interpretation because we might treat the construct as unidimensional and miss differences among its components

Ex - Type A personality and heart disease are correlated, but the Type A construct has multiple parts (competitiveness, hostility, impatience, job involvement); if we combine these four parts into a single Type A score, we miss that their correlations with heart disease differ (ranging from .03 to .20, very different!)

19

Propositions

the statements that make up a theory, describing relationships among hypothetical constructs

Causal - one construct causes another; ex - goal acceptance causes work motivation

Noncausal - two constructs are correlated, but neither is said to cause the other

20

Evaluation research

conducted to gauge the success of psychological or social interventions

21

Action research

combines basic, applied, and evaluation research; systematic integration of theory, application, and evaluation

22

basic research

the goal is to generate new knowledge regardless of how it will be used; more theoretically focused, often conducted in laboratory settings with experimental designs; builds a base of knowledge that can later be drawn on by applied research

23

Applied research

conducted to find answers to a specific question or problem; may use theory but does not have to; often carried out in more naturalistic settings

24

Quantitative data

numerical information, such as test scores or the neural activation exhibited in response to stimuli

25

Qualitative data

nonnumerical information, such as descriptions of behavior or the content of people's responses to interview questions

26

Mixed method

Using both qualitative and quantitative methodologies in a research study

27

Experiment

logical positivists view this design as the superior way to do research; it examines cause-and-effect relationships

28

Three criteria for experiments

covariation of proposed cause and effect, time precedence of the proposed cause, absence of alternative explanations for the effect

29

Experimental condition

group where participants are given the intervention or manipulation

30

Control group

the comparison group; participants receive the placebo rather than the intervention or manipulation

31

Correlational research strategy

looks for relationships between variables that are consistent across a large number of cases; Also called the passive research strategy because you only observe and measure without manipulation

32

reverse causation

direction of causality might be the reverse of what we hypothesized

33

Reciprocal relationship

bi-directional relationship

34

Third-variable problem

another variable is the actual cause of the observed relationship between the two variables (a confound)

35

Case study

in-depth, usually long term, examination of a single instance of a phenomenon, for either descriptive or hypothesis-testing purposes

36

Researcher bias

researcher expectations impacting data collection and results

37

Nomothetic approach

attempts to formulate general principles of behavior that will apply to most people most of the time, uses experimental and correlational research to study the average behavior of large groups of people

38

Idiographic approach

studies behavior of individuals, case study is an example, addresses the needs of the practitioner, who is more interested in how a particular client behaves than in how people behave in general.

39

Developmental research

to learn how people change as they move through the lifespan from birth, through childhood, adolescence, and adulthood, into old age

40

Cross-sectional

Compare groups of people who are different ages at the same time

41

Cohort effects

effects of differences in experience due to time of birth, can lead to ambiguity in interpreting results of cross-sectional research

42

Longitudinal

describes research that measures a trait in a particular group of subjects over a long period of time

43

Attrition

participants drop out of the study over time, can be random or nonrandom (nonrandom creates a biased sample)

44

Test reactivity

occurs when being asked questions about a behavior affects the behavior itself.

Ex - In a study about dating, half the participants reported that the questions made them think about parts of their relationship that had been unknown or unexplored, and this changed their behavior (they either grew closer or broke up)

45

Test sensitization

Occurs when participants' scores on a test are affected by having taken the test earlier.

Ex - participants become familiar with the questions over time and answer them similarly on later occasions

46

History effects

occur when events external to the research affect the behavior being studied so you cannot tell whether changes in the behavior found from one assessment to another are due to age changes or to the events.

Ex - a longitudinal study of drug use finds that use decreases as the participants get older; multiple explanations are possible - an age effect, stricter drug enforcement, higher prices, less access, etc.

47

Cohort-sequential

combines cross-sectional and longitudinal approaches by starting a new longitudinal cohort each time an assessment is made

48

Target population

the whole group you want to study or describe

49

Participant sample

the subset of the target population that is studied in order to estimate the behavior of the target population

50

Generalizability

process of applying sample-based findings to a target population

51

Convenience sample

choosing the individuals who are easiest to reach - whoever happens to be in the setting at the time the research is conducted

52

Theory map

includes information such as the history of the theory, information about why the theory is important, evidence supporting or refuting the theory, and (if applicable) similar and competing theories.

53

Boundary condition

the conditions under which an effect operates; for example, do effects found in laboratory settings hold up in natural settings?

54

Literature review

examining sources relevant to your question while keeping detailed notes. Its purposes are to provide context for the research, avoid duplication of effort, and identify problems in conducting the research

55

Primary source

Original research report or presentation of a theory written by the people who conducted the research or developed the theory

56

Secondary source

summarizes information from primary sources, can be inaccurate

57

Research hypothesis

states an expectation about the relationship between two variables; the expectation is derived from, and answers, the research question and is grounded in prior theory and research

58

Statistical hypothesis

transforms research hypothesis into a statement about the expected result of a statistical test
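
As a hypothetical illustration (the study-time example is not from the cards), a research hypothesis such as "study time is positively related to exam scores" can be restated as statistical hypotheses about the population correlation:

```latex
% Illustrative statistical hypotheses for a hypothesized positive correlation \rho
H_0\colon \rho = 0 \qquad H_1\colon \rho > 0
```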

59

Replication research

repeating research studies to see if equivalent results can be obtained again

60

Direct replication

Recreates a study as closely as possible

61

Conceptual replication

Researchers test the same hypothesis or concept as the original research, but use a different setting, different set of operational definitions, or a different participant population

62

Replication and extension

Replicates an earlier study and adds independent or dependent variables, or makes other additions that expand the scope of the original research

63

Manifest variables

variables we can directly observe

64

Reliability

degree of consistency in a measure; gives the same result every time it is applied to the same person or object, barring changes in the variable being measured

65

Validity

degree of accuracy in a measure; assesses the trait it is supposed to, assesses all aspects of the trait, and assesses only that specific trait

66

Observed score

score we can see, made up of the true score and the measurement error (random and systematic error)
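
In the notation of classical test theory (a standard decomposition, not spelled out on the cards), the observed score is the true score plus the two kinds of measurement error:

```latex
X_{\text{observed}} = T_{\text{true}} + E_{\text{systematic}} + E_{\text{random}}
```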

67

True score

the actual degree of the trait that characterizes the person being assessed

68

Measurement error

other things that we did not want to measure, but did anyway because of the imperfections of our measuring instrument

69

Random error

error that fluctuates each time a measurement is made, sometimes high and sometimes low; the observed score fluctuates because of it, producing instability of measurement and lower reliability estimates

examples - the person being distracted during the research, the person's mental or physical state (such as mood), equipment failures

70

Systematic (nonrandom) error

error that is present in every measurement (a constant bias rather than a fluctuation)

examples - poorly worded questions that elicit a different response than intended, or a measure that assesses the intended trait but also accidentally assesses another trait
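
A minimal simulation sketch (hypothetical numbers, not from the cards) showing how the two kinds of error behave: systematic error shifts every observed score by the same amount, while random error makes observed scores fluctuate from one measurement to the next.

```python
# Sketch: observed score = true score + systematic error + random error
import numpy as np

rng = np.random.default_rng(0)
n = 1000

true_score = rng.normal(loc=50, scale=10, size=n)   # the trait itself
systematic_error = 3.0                              # constant bias (e.g., leading questions)
random_error = rng.normal(loc=0, scale=5, size=n)   # fluctuates on each measurement

observed = true_score + systematic_error + random_error

print(observed.mean() - true_score.mean())   # ~3.0: every score shifted by the bias
print(observed.std() > true_score.std())     # True: random error adds extra spread
```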

71

Test-retest reliability

assess people's scores on a measure on one occasion, assess the same people's scores on the same measure on a later occasion, and compute the correlation between the two assessments
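
A minimal sketch of that computation (the eight scores below are hypothetical): the test-retest reliability estimate is the correlation between the two occasions.

```python
# Sketch: test-retest reliability as the correlation between two occasions
import numpy as np

time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # scores at occasion 1
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # same people, later occasion

print(np.corrcoef(time1, time2)[0, 1])  # test-retest reliability estimate
```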

72

Alternate forms reliability

correlate people's scores on two different forms of the same measure

73

Interrater reliability

the scores given by two raters are correlated with each other

74

Cohen’s kappa

Provides an index of agreement between two raters that is corrected for chance agreement
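
A minimal sketch of that correction (the rater labels are hypothetical): kappa compares the observed agreement with the agreement expected by chance from each rater's category proportions.

```python
# Sketch: Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

n = len(rater1)
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement from each rater's marginal proportions per category
c1, c2 = Counter(rater1), Counter(rater2)
p_chance = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"agreement: {p_observed:.2f}, kappa: {kappa:.2f}")  # 0.75 agreement, 0.50 kappa
```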

75

Split-half reliability

split items into two parts and compute the correlation between the respondents’ total scores on the two parts
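
A minimal sketch of a split-half estimate (hypothetical item responses): split the items into two halves, here odd- versus even-numbered items, and correlate respondents' totals on the two halves.

```python
# Sketch: split-half reliability (rows = respondents, columns = items)
import numpy as np

items = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 4],
])

half_a = items[:, 0::2].sum(axis=1)  # totals on items 1, 3, 5
half_b = items[:, 1::2].sum(axis=1)  # totals on items 2, 4, 6
print(np.corrcoef(half_a, half_b)[0, 1])
```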

76

Cronbach’s alpha

an index of internal consistency based on the pattern of correlations among all the items of a measure
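
A minimal sketch using the standard alpha formula (hypothetical item scores; rows are respondents, columns are items): alpha rises as the items covary more strongly relative to their individual variances.

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```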

77

Construct validity

how confident we can be that a measure actually indicates a person’s true score on a hypothetical construct or trait

78

Convergent validity

the extent to which different types of evidence about a measure converge to support the conclusion that it assesses the intended construct

79

Discriminant validity

evidence that a measure is not assessing something it is not supposed to assess

80

Content validity

when the content of a measure adequately assesses all components of the construct or trait being measured; the content must be relevant (assesses only the trait it is intended to assess) and representative (covers all parts of the trait, if the trait has multiple parts)

81

Structural validity

Dimensionality of a measure reflects the dimensionality of the construct it is measuring

82

Relational validity

how well scores on a measure correlate with scores on criteria that are conceptually relevant to the construct being measured; also called criterion-related validity; examines links between the measure and other constructs or measures, and the degree to which those links provide evidence of construct validity

83

Substantive validity

people who score differently on a construct should respond to situational variables such as experimental manipulations in ways predicted by the theory of the construct; how people who differ on a construct react to experimental manipulation

84

Generalizability

evidence that a measure is equally valid across time and populations

85

Differential validity

a lack of generalizability across groups; the measure is more valid for assessing a construct for members of one group than for members of another. The content may be more valid for one group than another (e.g., boys vs. girls on a math test that uses baseball scenarios)

86

Multiple operationism

using multiple modalities to measure variables, especially for hypothetical constructs

Example - for depression, negative mood and loss of interest might be measured by self-report, changes in eating by a behavioral or observational modality, and sleep by a physiological modality such as EEG