1. Descriptive Statistics: This branch of statistics involves describing and summarizing the main features of a dataset, such as measures of central tendency, variability, and shape. It provides a concise overview of the data at hand without drawing conclusions that go beyond it.
2. Inferential Statistics: This branch of statistics focuses on making inferences and predictions about a population based on a sample of data. It involves using probability theory to draw conclusions about a larger group or population based on a smaller subset of data.
3. Measures of Central Tendency (Mean, Median, Mode): These are statistical measures that describe the central or typical value of a dataset. The mean is the arithmetic average, the median is the middle value when the data are ordered, and the mode is the most frequently occurring value (the first sketch after this list computes all three).
4. Skew (Positive & Negative): Skew refers to the degree of asymmetry in a distribution. Positive (right) skew means the longer tail stretches toward higher values, while negative (left) skew means the longer tail stretches toward lower values (see the skewness sketch after this list).
5. Normal Curve / Bell Curve: A normal curve, also known as a bell curve, is a symmetrical, bell-shaped distribution with a single peak at which the mean, median, and mode coincide. It is one of the most common and important distributions in statistics (see the sketch after this list).
6. Measures of Variability (Standard Deviation & Range): These measures describe the spread or dispersion of a dataset. The standard deviation measures how much individual data points typically deviate from the mean, while the range is the difference between the largest and smallest values (both are computed in the first sketch after this list).
7. Frequency Distributions: A frequency distribution is a table that summarizes the number of data points that fall within each category or interval.
8. Histogram: A histogram is a graphical representation of a frequency distribution. It divides the data into intervals and uses the height of a bar over each interval to show how many data points fall within it (see the sketch after this list).
9. Scatter Plot: A scatter plot is a graphical representation of the relationship between two variables. Each pair of values is plotted as a single point, with one variable on the x-axis and the other on the y-axis.
10. Statistical Significance: A result is statistically significant when it would be unlikely to occur by chance alone. In practice this is judged with a p-value, the probability of obtaining a result at least as extreme as the one observed if only chance were at work; results with a p-value below a chosen threshold (commonly 0.05) are treated as meaningful rather than chance findings (see the permutation-test sketch after this list).
11. Correlation Coefficient: A correlation coefficient is a statistical measure that describes the strength and direction of the relationship between two variables. It ranges from -1 (perfect negative correlation) to +1 (perfect positive correlation), with values near 0 indicating little or no linear relationship (see the Pearson r sketch after this list).
12. "Third Variable" Problem: The third variable problem is a common issue in correlational research where a third variable may be influencing the relationship between two variables being studied, leading to a spurious correlation.
13. Correlation does not mean Causation: Correlation does not necessarily imply causation. It is possible for two variables to be correlated without one causing the other.
14. Illusory Correlation: An illusory correlation is a perceived relationship between two variables that does not actually exist. It can be caused by selective attention to instances that support the perceived relationship while ignoring instances that contradict it.
15. Percentile Rank: Percentile rank indicates the percentage of scores in a dataset that fall below a given value (see the sketch after this list).
16. Regression Toward the Mean: Regression toward the mean is the tendency for extreme scores to be followed by scores closer to the average on later measurements, largely because unusually extreme scores reflect chance variation that is unlikely to repeat (see the simulation after this list).
17. Law of Averages: The law of averages is the informal idea that, over many independent trials, the average outcome of a random variable tends toward its expected value. Its formal counterpart is the law of large numbers, a basic result in probability theory (see the coin-flip simulation after this list).
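
Below are a few short Python sketches illustrating the computational terms above; every data value in them is made up purely for illustration. First, the measures of central tendency (item 3) and variability (item 6), using the standard-library statistics module:

```python
import statistics

scores = [2, 3, 3, 5, 7, 8, 8, 8, 10]      # made-up sample data

mean = statistics.mean(scores)              # arithmetic average
median = statistics.median(scores)          # middle value of the ordered data
mode = statistics.mode(scores)              # most frequently occurring value

stdev = statistics.pstdev(scores)           # population standard deviation
value_range = max(scores) - min(scores)     # largest minus smallest value

print(f"mean={mean:.2f} median={median} mode={mode}")
print(f"stdev={stdev:.2f} range={value_range}")
```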
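
Skew (item 4) can be quantified in several ways; one common choice is the Fisher-Pearson moment coefficient, which comes out positive for right-skewed data and negative for left-skewed data. A minimal sketch with made-up values:

```python
import statistics

def skewness(data):
    """Fisher-Pearson moment coefficient: mean cubed deviation divided by stdev cubed."""
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.pstdev(data)
    return sum((x - mean) ** 3 for x in data) / (n * sd ** 3)

right_skewed = [1, 2, 2, 3, 3, 4, 9, 15]       # long tail toward high values
left_skewed = [1, 7, 12, 13, 13, 14, 14, 15]   # long tail toward low values

print(skewness(right_skewed))   # positive
print(skewness(left_skewed))    # negative
```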
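
For the normal curve (item 5), the density is f(x) = (1/(σ√(2π))) · exp(-(x-μ)²/(2σ²)), and roughly 68% of values lie within one standard deviation of the mean, 95% within two, and 99.7% within three. A standard-library sketch, using math.erf for the cumulative probability:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Height of the bell curve at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Probability that a normal variable falls at or below x."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Share of the distribution within 1, 2, and 3 standard deviations of the mean.
for k in (1, 2, 3):
    share = normal_cdf(k) - normal_cdf(-k)
    print(f"within {k} SD: {share:.3f}")   # ~0.683, 0.954, 0.997
```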
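
A frequency distribution (item 7) and a histogram (item 8) built from the same made-up exam scores, with collections.Counter doing the counting and text bars standing in for a plotted chart:

```python
from collections import Counter

scores = [62, 67, 71, 73, 74, 78, 78, 81, 83, 85, 88, 91, 95]   # made-up data

# Frequency distribution: count how many scores fall in each 10-point interval.
bins = Counter((score // 10) * 10 for score in scores)

# Histogram: one bar per interval, bar length equal to the frequency.
for lower in sorted(bins):
    print(f"{lower}-{lower + 9}: {'#' * bins[lower]} ({bins[lower]})")
```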
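
For scatter plots (item 9) and the correlation coefficient (item 11), the sketch below computes Pearson's r by hand from made-up hours-studied and exam-score pairs; plotting the same pairs (for example with matplotlib's scatter function) would give the corresponding scatter plot.

```python
import math

# Made-up paired data: hours studied and exam score.
hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [55, 58, 64, 62, 70, 75, 74, 82]

def pearson_r(x, y):
    """Strength and direction of the linear relationship, from -1 to +1."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(hours, scores):.2f}")   # close to +1: strong positive correlation
```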
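
Statistical significance (item 10) is usually judged with a p-value. One transparent way to obtain one, used here purely as an illustration, is a permutation test: shuffle the group labels many times and count how often a difference at least as large as the observed one arises by chance. The two groups of scores are made up:

```python
import random

group_a = [78, 82, 85, 88, 90, 91]   # made-up treatment scores
group_b = [70, 72, 75, 79, 80, 84]   # made-up control scores

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)

random.seed(0)
pooled = group_a + group_b
trials = 10_000
count_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)                                     # randomly reassign group labels
    diff = mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):])
    if abs(diff) >= abs(observed):                             # two-sided test
        count_extreme += 1

p_value = count_extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.3f}")
# A small p (e.g., below 0.05) suggests the difference is unlikely to be due to chance alone.
```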
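
Percentile rank (item 15), under the convention that it counts scores strictly below the given value (some sources use "at or below" instead); the score list is made up:

```python
def percentile_rank(value, data):
    """Percentage of scores in `data` that fall below `value`."""
    below = sum(1 for x in data if x < value)
    return 100.0 * below / len(data)

scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]   # made-up data
print(percentile_rank(85, scores))                   # 60.0: 6 of 10 scores are below 85
```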
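
Regression toward the mean (item 16) shows up in a small simulation in which each score is a stable "ability" component plus random luck: people selected for extreme scores on the first test score closer to the group average on the second, because the luck does not repeat. All parameters are made up:

```python
import random

random.seed(1)
n = 10_000
ability = [random.gauss(100, 10) for _ in range(n)]    # stable component
test1 = [a + random.gauss(0, 10) for a in ability]     # ability + luck
test2 = [a + random.gauss(0, 10) for a in ability]     # same ability, new luck

# Select the people who scored in roughly the top 5% on test 1.
cutoff = sorted(test1)[int(0.95 * n)]
top = [i for i in range(n) if test1[i] >= cutoff]

mean_t1 = sum(test1[i] for i in top) / len(top)
mean_t2 = sum(test2[i] for i in top) / len(top)
print(f"top group on test 1: {mean_t1:.1f} -> on test 2: {mean_t2:.1f}")
# The test-2 average sits noticeably closer to the overall mean of ~100.
```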
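
Finally, the tendency described under the law of averages (item 17) has a formal counterpart, the law of large numbers, which can be watched directly: the running proportion of heads over a growing number of fair coin flips drifts toward the expected value of 0.5.

```python
import random

random.seed(2)
for flips in (100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))   # simulate fair coin flips
    print(f"{flips:>7} flips: proportion of heads = {heads / flips:.3f}")
# The proportion settles near 0.5 as the number of flips grows, even though any
# individual short run can still stray far from it.
```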