Psychoanalysis Midterm


82 Terms

1
New cards

psychological tests

measure latent variables and hypothetical constructs

2
New cards

Hypothetical Construct

explanatory variable that we hypothesize is responsible for the behaviors we observe but cannot be directly observed

3
New cards

examiner

person who administers a test

4
New cards

examinee

person who takes a test

5
New cards

item

questions or content being measured on a test

6
New cards

scale

method to measure psychological variables

7
New cards

measure

tools and techniques used to gather data about psychological constructs

8
New cards

battery

collection of tests and assessments

9
New cards

individual test

a test that is administered to only one person at a time

10
New cards

group test

a test that is administered to multiple people at the same time

11
New cards

reliability

the consistency of test results across different administrations and contexts.

12
New cards

validity

the extent to which a test measures what it intends to measure.

13
New cards

experimental psychology

Sensation and perception (Locke, Herbart, Weber, Fechner), higher mental functions (Wundt and Ebbinghaus), operational definitions (Watson and Stevens)

14
New cards

Individual Differences

Evolution/intelligence (Darwin, Galton), cognitive ability (Binet, Terman, and others)

15
New cards

Primary Qualities

Absolute, objective, immutable, and capable of mathematical description

16
New cards

Secondary qualities

experiences they elicit from the individual

17
New cards

Kallikak Study

Traced family lineage of a man to find the effects of genetics on intelligence. Heavily contributed to the eugenics movement.

18
New cards

Buck v. Bell

Upheld the constitutionality of Virginia’s law allowing state-enforced sterilizations

19
New cards

Popularity of testing and WW1

To assess the mental capacity of recruits and identify those with mental health issues

20
New cards

What led to a decreased popularity of testing?

Nazi Germany, Civil Rights Movement

21
New cards

Historical view of testing

Tests are perfect, tests are essential tools, All psychological qualities can be measured, a test score represents all there is to know about a human being

22
New cards

Modern View

Tests are imperfect, tests are tools to be used in conjunction with other tools, measuring psychological qualities is possible but challenging, a test score is one piece of information about a human being

23
New cards

What is Scaling?

Determining the rules of correspondence between the elements in the real world (physical or psychological) and the elements of the real number system

24
New cards

What is Transforming a score?

Changing the scale of a score

25
New cards

Nominal

Data categorized into groups without ranking

26
New cards

Ordinal

ranking and order of data

27
New cards

interval

a type of quantitative measurement where the order of values is meaningful and the difference between any two values on the scale is consistent and equal

28
New cards

Ratio

a type of quantitative measurement where the order of values is meaningful, the difference between any two values on the scale is consistent and equal, and the scale has a true zero point

29
New cards

Continuous Data

measurements that can take any value within a specific range, meaning there are infinitely many possible values.

30
New cards

Categorical Data

variables represent types of data which may be divided into groups.

31
New cards

Dichotomous Data

Data with only two possible values per item (e.g., true/false, correct/incorrect)

32
New cards

Norm-referenced Tests

Scores based on performance relative to others

33
New cards

Criterion-referenced Tests

Scores based on number of objectives or knowledge achieved

34
New cards

Factor analysis

works with observed variables to discover latent variables

35
New cards

Reliability

Degree to which the observed score approximates the true score

36
New cards

Classical test theory

the mean of the distribution of observed scores is the true score; measurement error is an unsystematic or random deviation of an examinee’s score from a theoretically expected true score

37
New cards

Generalizability Theory

Recognize that there are multiple sources of variance that account for the differences between true scores and observed scores. The use of ANOVA allows us to tease out the relative contribution of multiple sources of variance.

38
New cards

Item Response Theory

Through complex item analysis we can determine the relationship between an examinee’s characteristic and the probability of a correct response to an item

39
New cards

Test-Retest Reliability

1. Administer the test (Time 1).
2. Score the test.
3. After the interval, administer the same test to the same students (Time 2).
4. Score the test.
5. Correlate the two sets of scores.

40
New cards

Alternate forms reliability

1. Administer Form 1 of the test.
2. Administer Form 2 of the test.
3. Score both Forms 1 and 2.
4. Correlate the two sets of scores.

41
New cards

Formula for standard error of measurement

SEM = \sigma_x \sqrt{1 - r_{xx}}, where \sigma_x is the standard deviation of the test scores and r_{xx} is the reliability coefficient
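In classical test theory the standard error of measurement is SEM = σ·√(1 − r), where σ is the standard deviation of test scores and r the reliability. A minimal Python sketch (the function name is illustrative):

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical test theory: SEM = SD * sqrt(1 - r), the expected
    spread of observed scores around an examinee's true score."""
    return sd * math.sqrt(1 - reliability)

# An IQ-style scale with SD = 15 and reliability r = .91
sem = standard_error_of_measurement(15, 0.91)  # 4.5 points
```

Note that a perfectly reliable test (r = 1) has an SEM of zero: every observed score equals the true score.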

42
New cards

Item Variance

variance due to random measurement error

43
New cards

Construct Variance

Variance due to differences in the construct being measured

44
New cards

Total Variance

Variance due to random measurement error and differences in the construct being measured

45
New cards

Reliability estimate

construct variance/Total Variance

46
New cards

Split halves method

1. Determine how the test is to be divided (e.g., even items vs. odd items).
2. Add up the total score for the odd items for each examinee, then the total score for the even items for each examinee.
3. Correlate the “odd-item score” with the “even-item score.”
4. Apply the Spearman-Brown formula to correct the split-half reliability estimate, thus obtaining the reliability estimate of the whole test.
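The steps above can be sketched in Python; function names are illustrative, and Pearson’s r is computed by hand so the sketch stays self-contained:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(responses):
    """Odd/even split with Spearman-Brown correction.
    `responses`: one list of item scores per examinee."""
    odd = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in responses]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)              # Spearman-Brown
```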

47
New cards

Spearman-Brown

r_{SB} = 2r / (1 + r), where r is the correlation between the two half-tests
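As a Python sketch (illustrative function name; the argument is the half-test correlation):

```python
def spearman_brown(r_half):
    """Projects a half-test correlation up to full-test reliability:
    r_full = 2r / (1 + r)."""
    return 2 * r_half / (1 + r_half)

# A split-half correlation of .60 corrects to .75 for the full test
full = spearman_brown(0.6)
```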

48
New cards

KR20

1. Find the proportion of examinees responding correctly to each item (this is “p”).
2. Calculate 1 - p to get “q”, the proportion of examinees responding incorrectly to each item.
3. Multiply the two for each item (pq).
4. Sum up those products.
5. Find the variance of the examinee test scores.
6. Plug the numbers into the KR20 formula.
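A self-contained Python sketch of those steps, assuming dichotomous 0/1 scoring and population variance for the total scores (the function name is illustrative):

```python
from statistics import pvariance

def kr20(responses):
    """KR-20 for dichotomously scored (0/1) items.
    `responses`: one list of 0/1 item scores per examinee."""
    n = len(responses)                  # number of examinees
    k = len(responses[0])               # number of items
    # Steps 1-4: p per item, q = 1 - p, pq, then sum the products
    pq_sum = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n
        pq_sum += p * (1 - p)
    # Step 5: variance of the total test scores
    total_var = pvariance([sum(row) for row in responses])
    # Step 6: KR-20 = (k / (k - 1)) * (1 - sum(pq) / variance)
    return (k / (k - 1)) * (1 - pq_sum / total_var)
```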

49
New cards

Cronbach’s Alpha

statistical measure used to assess the internal consistency reliability of a set of items or questions that are designed to measure the same concept. In simpler terms, it tells us how well the different items within a scale or survey measure the same underlying construct
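Cronbach’s alpha can be computed directly from item and total-score variances; a minimal Python sketch (illustrative function name, population variances assumed):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """alpha = (k / (k - 1)) * (1 - sum of item variances / total variance).
    `responses`: one list of item scores per examinee."""
    k = len(responses[0])
    item_vars = sum(pvariance([row[i] for row in responses]) for i in range(k))
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For dichotomous items this reduces to KR-20, since each 0/1 item’s variance equals pq.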

50
New cards

Cohen’s Kappa

statistical measure used to evaluate the level of agreement between two or more raters who classify items into categories

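A minimal Python sketch of Cohen’s kappa for two raters (illustrative function name):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions per category
    chance = sum((counts_a[c] / n) * (counts_b[c] / n)
                 for c in set(rater_a) | set(rater_b))
    return (observed - chance) / (1 - chance)
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.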
51
New cards

Interrater reliability

measures the consistency of ratings or judgments made by different raters or observers when evaluating the same phenomenon

52
New cards

How to increase reliability

• Consistency in scoring
• Group variability
• Managing difficulty levels
• Number of items
• Quality of items
• Factor analysis (unidimensionality)
• Correction for attenuation

53
New cards

Validity

The test actually measures what it says it does

54
New cards

Content Validity

Construction/Review of Items, Represents the target domain adequately

55
New cards

Criterion-related Validity

Correlation between predictor and criterion. Performance on the criterion

56
New cards

Construct Validity

Multiple methods, Degree of construct possessed by examinee

57
New cards

Face Validity

The test looks, on its surface, like it measures what it claims to measure

58
New cards

Lawshe’s Content Validity Ratio (CVR)

CVR = (n_e - N/2) / (N/2), where n_e equals the number of SMEs rating an item as essential and N equals the total number of SMEs providing ratings.
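Lawshe’s ratio, CVR = (n_e - N/2) / (N/2), is a one-liner in Python (hypothetical function name; the arguments correspond to n_e and N):

```python
def content_validity_ratio(n_essential, n_total):
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1,
    positive when more than half the SMEs rate the item essential."""
    return (n_essential - n_total / 2) / (n_total / 2)

# 30 of 40 SMEs rate the item essential
cvr = content_validity_ratio(30, 40)  # → 0.5
```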
59
New cards

Content Validation Steps

1. Describe and specify the domain of behaviors (knowledge) being measured.
2. Analyze the domain and subcategorize into more specific topics.
3. Draw up a set of test specifications that show not only the content areas but the emphasis placed on each area.
4. Determine how many items should be included to represent coverage of the area.
5. Create or match items to content areas.

60
New cards

Content Validity Ratio Steps

1. Identify a group of subject matter experts (SMEs) to review the items on the GAD (example: 40 clinicians who specialize in treating depression).
2. Have your SMEs rate each item as essential, useful, or not important to the content or construct.
3. Calculate the CVR.
4. Review the ratings for each item.

61
New cards

Criterion-Related Validation Steps

1. Identify suitable criterion behavior and a way to measure it.
2. Identify an appropriate sample of examinees representative of those for whom the test will be used.
3. Administer the test to the sample and record scores.
4. When appropriate, obtain a measure of performance on the criterion for the sample.
5. Determine the strength of the relationship between test scores and criterion performance (use Pearson’s r to generate a validity coefficient).

62
New cards

Predictive Validity

how well a test or measurement predicts future outcomes or behaviors

63
New cards

Concurrent Validity

how well a new test or assessment compares to an established, validated measure when both are taken at the same time

64
New cards

Steps of Construct Validation

1. Formulate a hypothesis about how those who differ on the construct should differ in other respects (behavior, scores on other tests, etc.).
2. Select or develop a measurement instrument which consists of items representing manifestations of this construct.
3. Gather empirical evidence that will test this.
4. Determine if the data are consistent with the hypothesis. Consider alternative explanations.

65
New cards

Types of Evidence of Construct Validity

Developmental Changes

Intervention Changes

Group Differentiation

Convergent Evidence

Discriminant Evidence

Factor Analysis

66
New cards

What Reduces Validity?

Characteristics of the test itself

Errors in test administration and scoring

Variables affecting examinee responses

67
New cards

Factor Analysis Steps

Start with your correlation matrix
◦ Is it appropriate to compute correlations? Are relationships linear? Are item responses continuous? Are correlations sizeable?
Extract factors
◦ Among other things, the extraction method gives rise to factor eigenvalues, used in the next step
Decide how many factors to retain
Rotate the factor pattern matrix to obtain an interpretable solution
Interpret the rotated factor pattern and structure matrix

68
New cards

Criteria for a good Criterion

Relevant, Valid, Uncontaminated

69
New cards

Steps to make a test

Define the test, construct the items, assemble the test, test the test, revise the test, publish the test

70
New cards

Objective tests

dichotomous, polytomous, matching

71
New cards

Subjective tests

short answers, fill in the blank, essay

72
New cards

BRUSO

Brief, Relevant, Unambiguous, Specific, Objective

73
New cards

Likert Format

typically presents respondents with a statement or question and a range of response options, often numbered from 1 to 5 or 1 to 7, to indicate their level of agreement or disagreement.

74
New cards

Category Format

a rating-scale format that usually uses the categories 1 to 10

75
New cards

Calculating Item Difficulty

Count the number of examinees who got the item right then divide by the total number of examinees
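That calculation is a one-liner in Python (hypothetical function name):

```python
def item_difficulty(num_correct, num_examinees):
    """p-value of an item: proportion of examinees answering correctly.
    Higher values mean an easier item."""
    return num_correct / num_examinees

# 18 of 24 examinees answered the item correctly
p = item_difficulty(18, 24)  # → 0.75
```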

76
New cards

Rapport and test scores

establishing a good rapport can improve accuracy, elicit more information, and ease anxiety, potentially leading to better scores

77
New cards

Impact of race of tester on test performance

Studies suggest that Black individuals may perform worse when tested by a White examiner

78
New cards

Stereotype Threat and its remedies

Stereotype threat, the risk of confirming negative stereotypes about one's group, can be reduced through various individual and systemic strategies. Individuals can mitigate the impact by practicing self-affirmation, mindfulness, and cognitive reframing, while educators can promote inclusive classrooms by celebrating diversity, providing positive role models, and promoting a growth mindset

79
New cards

Training of Test Administrators

Properly trained test administrators can significantly impact test scores by ensuring fair and consistent test administration, which in turn enhances the validity and reliability of the results

80
New cards

Expectancy Effects (Rosenthal Effects)

describe how an observer's expectations about a person's performance can influence the person's behavior and actual outcome

81
New cards

Effects of reinforcing responses

strengthens the likelihood of a specific behavior being repeated

82
New cards

Test anxiety

a type of performance anxiety characterized by intense worry and distress before or during exams