Abstract
A concise summary of an empirical article (150-250 words) that includes the research goal, population, methods, findings, and main conclusions/implications. It is written in past tense and emphasizes what is new in the research.
According to classical test theory, how is an observed score (Xo) decomposed?
Xo = Xt + Xe, where Xt is the true score and Xe is measurement error.
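The decomposition can be illustrated with a short simulation; the numbers below (a true score of 100, an error SD of 5) are invented for illustration. It also shows why averaging many measurements approximates the true score: random error centers on zero and cancels out.

```python
import random

random.seed(1)

true_score = 100.0  # Xt: the stable true score (hypothetical value)
# Each observation Xo = Xt + Xe, where Xe is random error centered on zero
observations = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Averaging many observations cancels the random error, so the mean
# of the observed scores approaches the true score
mean_obs = sum(observations) / len(observations)
print(round(mean_obs, 1))
```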
According to common thresholds, what Cronbach’s alpha value is classified as "excellent"?
An alpha (α) greater than 0.90.
According to the attenuation principle, how does reliability limit correlations between two observed variables X and Y?
The observed correlation rXo,Yo cannot exceed √(relX × relY), the square root of the product of the two measures' reliabilities.
According to the functional (goal-based) definition, what is science trying to achieve?
To know more about the world and understand how and why things work and change.
According to the lecture, how can you distinguish between an artifact and a confound?
An artifact is an alternative explanation that can be easily isolated and manipulated independently of the main independent variable, whereas a confound is an inseparable component of the manipulation that offers an alternative theoretical explanation (e.g., experimenter or participant expectations).
According to the lecture, what is the simplest way to improve an instrument’s reliability?
Collect multiple measurements and average them; more measurements tend to approximate the true score.
According to the summary, why must scientific theories always remain open to refutation?
To prevent them from becoming unquestionable authorities and to allow continual progress through new evidence.
Ad-Hoc Additions
Explanations added to a theory in retrospect to save it from refutation, which, according to Popper, makes the theory unscientific.
After finding a significant interaction, what analytical step should follow?
Examine the simple main effects to understand the pattern within each level of the moderating variable.
Artifact
An external variable that undesirably influences the results of an experiment, posing a threat to internal validity.
Authority (שיטת נסמכות)
An approach to acquiring knowledge by relying on people who are socially or politically defined as knowledgeable, such as teachers or experts.
Axiom (אקסיומה)
Basic assumptions stemming from a worldview: factual statements taken as given rather than tested in a study. They are more specific than paradigms and serve as foundational assumptions for research, not theories themselves.
Can an instrument possess true validity even when face validity is low? Give an example.
Yes; for example, the Rorschach test or eye-movement measures during sleep may be valid despite lacking obvious face validity.
Cause and Effect Understanding (Goal of Science)
The objective of determining which variable causes another, moving beyond simple description or prediction of correlation.
Confirmation Bias (הטיית האישוש)
A cognitive bias where one searches for information to confirm their own argument, preferring information that supports a prediction over information that falsifies it.
Confound
A situation in which an unintended aspect of the operational independent variable varies together with the intended manipulation, making it difficult to attribute the effect solely to the intended variable.
Construct
A theoretical concept or phenomenon (e.g., motivation, personality) that exists at an abstract level and requires operationalization to be studied empirically.
Content Validity (תוקף תוכן)
The extent to which the items in a survey or test adequately represent the entire domain or "content world" that the researcher intends to measure. It is assessed by logical considerations and expert judgment rather than statistics.
Contrast lab, field, and natural experiments in terms of IV manipulation.
Lab and field experiments manipulate the IV; natural experiments observe naturally occurring IV differences without manipulation.
Convergent Validity (תוקף מתכנס)
A type of construct validity that assesses whether a measurement tool tests the theoretical variable of interest. It is established by demonstrating a high correlation between the tool and other validated tools that measure the same variable or related variables.
Correlational Research (מחקר מתאמי)
A descriptive research method that observes phenomena and systematically checks how multiple variables covary or change together. It can serve as a basis for prediction but does not allow for strong causal inference.
Cost-effective/Economical (חסכוניות - Theory Evaluation)
A criterion for evaluating theories, where a simpler, more focused theory containing fewer elements is considered better. This concept is also known as Occam's Razor.
Cronbach's Alpha
A method for measuring internal consistency reliability, suitable for tools with multiple items that measure a single variable. It reflects how strongly the items correlate with one another, computed from the item variances relative to the variance of the total score. Satisfactory reliability is typically considered to be α > 0.70.
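As a sketch, Cronbach's alpha can be computed from the item variances and the total-score variance with the standard formula α = k/(k-1) × (1 - Σ item variances / total variance); the 5-respondent, 4-item questionnaire data below are made up:

```python
# Hypothetical responses: 5 respondents (rows) x 4 items (columns)
items = [
    [4, 3, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
]

def variance(xs):
    # Sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(items[0])  # number of items
item_vars = [variance([row[i] for row in items]) for i in range(k)]
total_var = variance([sum(row) for row in items])  # variance of sum scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```

With these invented data the items covary strongly, so alpha lands above the 0.90 "excellent" threshold.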
Deductive Theory
A theory built from general logical arguments before observation, moving from broad principles to specific predictions.
Define ‘Beneficence’ in research ethics.
Maximizing possible benefits while minimizing potential harms to participants.
Define ‘deception’ in research.
Intentionally providing false or incomplete information to participants to avoid biasing results.
Define ‘Questionable Research Practices’ (QRPs).
Methods that inflate false positives, such as selective reporting, optional stopping, outlier manipulation, and post-hoc storytelling.
Define a "main effect" in a factorial design.
The overall effect of one independent variable on the dependent variable, averaged across the levels of the other IV(s).
Define a "simple main effect."
The effect of one IV on the DV at a single level of another IV.
Define a mediator variable.
A variable that transmits the statistical relationship between an independent and a dependent variable; without it, the direct link weakens or disappears.
Define an "interaction effect."
A situation in which the effect of one IV on the DV depends on the level of another IV.
Define ecological validity.
The extent to which laboratory findings generalize to real-world settings and behaviours.
Define external validity and cite one specific interaction that can threaten it.
External validity is the ability to generalise research findings across populations, settings, times and situations; one threat is the interaction of selection and treatment, where results apply only to participants who were initially motivated, skilled, or otherwise unrepresentative.
Dependent Variable
The outcome or response that researchers measure to see how it is influenced by the independent variable.
Describe a cross-sectional (cohort) developmental design.
Different age groups are measured at one point in time and compared, providing quick age differences without long follow-up.
Describe a longitudinal developmental design.
The same individuals are measured repeatedly at different ages to observe intra-individual change over time.
Describe the interaction found in the biased-question example.
Biased questions increased memory errors only when asked by a knowledgeable investigator; with a naïve investigator, question type had little effect.
Description (Goal of Science)
The objective of identifying and describing patterns or occurrences in reality.
Differentiate between specific and nonspecific reactivity and give an example of each.
Specific reactivity stems from knowing the researcher’s hypothesis (e.g., attitude questionnaires influencing later behavior); nonspecific reactivity comes from merely being in an experimental setting (e.g., changes during ERP studies).
Discriminant Validity (תוקף מבחין)
A type of construct validity that assesses whether a tool measures only the intended theoretical variable and does not significantly correlate with other unrelated variables. High discriminant validity is indicated by low correlations with measures of different constructs.
Discussion Section
The part of a scientific article where the authors interpret the significance of their results, link findings to previous research, and discuss theoretical and practical implications, as well as study limitations. It does not include statistical data.
Does establishing face validity require statistical analysis?
No. It is based on subjective impressions and logical assessment.
Empirical (Scientific Method Principle)
A principle stating that scientific research must be based on observations in the world and requires standardization of methods to allow for comparison and evaluation of validity. Also, a criterion for good theories, meaning the extent to which the theory can be measured in practice.
Empirical Article
A type of scientific article that reports the actual results of a research study, whether experimental or correlational. It follows a set structure including an abstract, introduction, methods, results, discussion, and references.
Ethical Considerations
Principles like beneficence, respect for persons, and justice that guide the responsible conduct of research, ensuring participant welfare, autonomy, and fair treatment. Informed consent is a key component.
Experimental Operational Definition
An operational definition detailing the manipulation performed by the researcher, usually for independent variables in an experiment.
Explain the Nonequivalent Control Group Posttest-Only design in one sentence.
Two non-randomly assigned groups are compared after one receives treatment and the other does not, without pretest data.
Explanation, Understanding, and Control (Goal of Science)
The highest goal of science, building upon description, prediction, and cause-and-effect understanding, to develop models for early detection, prevention, and treatment, and to influence behavior.
External Validity (תוקף חיצוני)
The extent to which the findings and causal relationships discovered in a research study (based on a specific sample and conditions) can be generalized to other populations, settings, and times.
Face Validity (תוקף נראה)
The extent to which a measurement tool appears, on its surface, to be testing what it is supposed to measure. It is based on subjective judgment and is a starting point but not sufficient for determining overall validity.
Falsifiability (Popper's Principle)
A fundamental principle of the scientific method, stating that for a theory to be considered scientific, it must be capable of being disproven or refuted through empirical testing. If a theory cannot be falsified, it is considered unscientific.
For what purpose is a Latin Square (counterbalancing) arrangement used?
To ensure every treatment condition appears in every ordinal position and follows every other condition equally often, controlling order effects.
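A balanced Latin square for an even number of conditions can be generated with the classic first-row ordering 1, 2, n, 3, n-1, ...; this sketch assumes four hypothetical conditions A-D:

```python
def balanced_latin_square(conditions):
    # Assumes an even number of conditions.
    n = len(conditions)
    # First-row ordering 0, 1, n-1, 2, n-2, ... guarantees that, across
    # rows, every condition follows every other condition equally often.
    first, lo, hi, take_low = [0], 1, n - 1, True
    while len(first) < n:
        if take_low:
            first.append(lo)
            lo += 1
        else:
            first.append(hi)
            hi -= 1
        take_low = not take_low
    # Each subsequent row shifts the first row's conditions by one.
    return [[conditions[(x + i) % n] for x in first] for i in range(n)]

square = balanced_latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))
```

Each row is one participant's treatment order; every condition appears once per row, once per ordinal position (column), and immediately after every other condition exactly once.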
Formulation (Theory Evaluation)
A criterion for evaluating theories, referring to the extent to which the theory is based on a clear and logical structure, ideally a mathematical one.
Fruitfulness (פוריות - Theory Evaluation)
A criterion for evaluating theories, referring to the extent to which a theory stimulates and advances other theories and research in its field, thereby expanding overall understanding.
Give an example of a “history” threat to internal validity.
Any event occurring between the pre-test and post-test besides the treatment—e.g., an economic recession, war, or school closure due to COVID-19—that could influence participants’ scores and mimic a treatment effect.
Give an example of a basic descriptive research question from the notes.
"What is the absolute threshold of hearing?"
Give an example of a natural experiment listed in the lecture.
Assessing chronic stress exposure by comparing pregnant women living in a frequently shelled region to those living in an untouched area.
Give an example reason for increasing the number of IV levels beyond two.
There may be more than two theoretically relevant conditions to compare, such as superordinate, basic, and subordinate category levels.
Give one advantage and one disadvantage of longitudinal designs.
Advantage: Detects individual developmental trajectories. Disadvantage: subject attrition (loss of participants) and potential selective dropout.
Give one example of an unobtrusive physical trace measure.
Examining the wear patterns on museum carpets to infer visitor traffic.
Give one historical example of physical harm that violated Beneficence.
The Willowbrook hepatitis study (1950s), where healthy children were intentionally infected with hepatitis.
Give one major advantage and one disadvantage of pre-registration.
Advantage: Reduces false positives and QRPs. Disadvantage: May undervalue exploratory research and favor well-known researchers.
Give one major limitation of using test–retest reliability.
Participants may remember or learn from the first administration, creating carry-over effects that bias the second test.
Give one pro and one con of using deception.
Pro: Increases realism and reduces demand characteristics. Con: May cause feelings of mistrust or exploitation.
Give two example pairs of construct, variable, and measuring tool from the lecture.
Arousal – heart rate – pulse meter; Intelligence – performance on the WISC standardized test.
Hard Operationalism (אופרציונליזם קשה)
An approach to operationalization where the theoretical term is expected to have a complete, one-to-one meaning in its operational definition, attempting to encompass all dimensions of the concept.
History (Threat to Internal Validity)
An artifact in research design referring to external events occurring between the pre-measurement and post-measurement of the dependent variable that could influence the subjects and affect results, other than the intended manipulation.
Holistic Criticism (of Popper)
A critique of Popper's falsifiability principle, arguing that a scientific experiment typically tests multiple "background" hypotheses along with a single main hypothesis. Therefore, if a prediction is not confirmed, it is difficult to pinpoint which specific hypothesis was falsified.
How can high face validity contribute to reactivity?
When participants realize what is being measured, their awareness may bias their behavior or responses, threatening validity.
How can increasing IV levels help neutralize confounding variables?
By including intermediate levels that control potential confounds, clarifying the true IV–DV relationship.
How can reliability be expressed using correlations?
It equals the square of the correlation between the true score and the observed score: reliability = r²(Xt, Xo).
How can researchers partially compensate for lack of random assignment in natural experiments?
By matching participants in treatment and control groups on key variables (age, health, SES) and by increasing sample size to reduce variability.
How do exploratory studies differ from cumulative knowledge-building studies?
Exploratory studies are first steps that map unknown phenomena, whereas cumulative studies are planned to build systematically on earlier findings.
How do social sciences address the demand for empiricism?
By employing standardized quantitative and qualitative methods to produce measurable, comparable data.
How does a natural experiment differ from a field experiment?
In a natural experiment, the researcher does not manipulate the independent variable; instead, naturally occurring events (e.g., policy changes, disasters) create conditions comparable to experimental manipulation.
How does high within-group variance affect detection of between-group differences?
It makes it harder to detect true differences between groups because individual variability masks the effect of the IV.
How does large random error variance affect the ability to understand real differences between people?
Greater random error masks true differences, making it harder to interpret results and lowering reliability.
How does random assignment reduce systematic error (bias) between groups?
It distributes extraneous variables equally across groups, making them unlikely sources of between-group differences.
How does systematic error differ from random error?
Systematic error adds a constant bias to all scores, whereas random error varies unpredictably around zero, adding variability without biasing the mean.
How does the scientific approach integrate intuition and authority?
It uses them as starting points but subjects their claims to empirical testing and critical scrutiny.
How is ‘exclusivity’ (internal validity) achieved in an experiment?
By isolating the independent variable and using an equivalent control group that differs only on that variable.
How is face validity typically evaluated?
Through subjective impressions of judges, experts, or respondents rather than statistical analyses.
How is objectivity promoted in contemporary research practice?
Through teamwork in research groups and critical peer-review by fellow scientists.
How is reliability defined in terms of variances?
Reliability = True variance / Observed (total) variance.
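The variance-ratio definition and the squared true-observed correlation can be checked numerically with a small simulation; the population values (true-score SD of 15, error SD of 10) are arbitrary choices:

```python
import random

random.seed(2)

true_scores = [random.gauss(100, 15) for _ in range(5000)]  # Xt
observed = [t + random.gauss(0, 10) for t in true_scores]   # Xo = Xt + Xe

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Definition 1: reliability = true variance / observed variance
rel_var = variance(true_scores) / variance(observed)

# Definition 2: reliability = squared correlation of true and observed
r = covariance(true_scores, observed) / (
    variance(true_scores) * variance(observed)) ** 0.5
rel_corr = r ** 2

# Both estimates approach 15^2 / (15^2 + 10^2), roughly 0.69
print(round(rel_var, 2), round(rel_corr, 2))
```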
How many experimental conditions does a 2 × 2 × 2 design include?
Eight conditions.
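The count is just the Cartesian product of the factor levels; the factor names below are hypothetical:

```python
from itertools import product

iv1 = ["low load", "high load"]  # 2 levels (illustrative names)
iv2 = ["word", "picture"]        # 2 levels
iv3 = ["morning", "evening"]     # 2 levels

# Every combination of one level from each IV is one condition (cell)
conditions = list(product(iv1, iv2, iv3))
print(len(conditions))           # 2 * 2 * 2 = 8
```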
How many observations are recommended per phase in a basic A-B single-case design?
At least five observations (measurements) in each phase to establish stability and detect change.
Hypothesis
A specific, testable idea or question about the correctness of a particular phenomenon, often stating an expected relationship between two or more variables. It is formulated based on previous research and theoretical considerations, and is either supported or rejected, not proven.
If reliabilities for two measures are 0.8 and 0.7, what is the theoretical upper bound on their observed correlation?
√(0.8 × 0.7) = √0.56 ≈ 0.75.
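The bound follows from the attenuation relation, observed r = true r × √(relX × relY), with the true correlation at most 1; a quick check of the arithmetic:

```python
rel_x, rel_y = 0.8, 0.7

# Observed correlation = true correlation * sqrt(rel_x * rel_y),
# so with |true correlation| <= 1 the observed value is bounded by:
upper_bound = (rel_x * rel_y) ** 0.5
print(round(upper_bound, 3))  # sqrt(0.56), roughly 0.748
```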
In a 2 × 2 factorial design, how many experimental conditions are there?
Four conditions (cells).
In a repeated-measures factorial design, how are participants assigned?
Each participant experiences all levels of at least one IV, providing data for multiple cells.
In a single-case B-A design, what does \"A\" represent and what does \"B\" represent?
\"A\" is the baseline phase (pre-intervention) and \"B\" is the post-intervention phase during which the independent variable is introduced.
In an independent-groups factorial design, how are participants assigned?
Each participant is tested in only one cell of the factorial matrix (one combination of IV levels).
In analysis of variance, what do the terms ‘between-group variance’ and ‘within-group variance’ represent?
Between-group variance is systematic variance due to the manipulation; within-group variance is random error originating from individual differences and measurement noise.
In research methods, what does the term \"validity\" mean?
The extent to which a formal measurement tool actually measures what it is intended to measure and the extent to which conclusions and actions based on that measurement are appropriate and accurate.
In statistical conclusion validity, what are Type I and Type II errors?
A Type I error is falsely concluding that a relationship exists when it does not (false positive), whereas a Type II error is failing to detect a real relationship (false negative).
In the biased-question example, what main effect was found for question type?
Biased (misleading) questions produced more memory errors than unbiased questions.
In the same example, what main effect was found for investigator knowledge?
Investigators who knew the crime details elicited more memory errors than naïve investigators.
In Wason’s card-selection task, which cards should be turned to test the rule “If a card has a vowel on one side, it has an even number on the other”?
The ‘E’ card and the ‘7’ card.
Independent Variable
The predictor or explanatory factor that researchers manipulate or measure to determine its effect on another variable.