Flashcards covering key vocabulary and concepts from the Empirical Methods in Finance lectures.
Asset Returns
The change in the value of an asset over a period.
High Frequency Data
Intraday frequency, for instance tick-by-tick, 5-minute, 15-minute, or 1-hour intervals.
Low Frequency Data
Typically, daily, monthly, quarterly, or annual frequencies.
Opening Price
Price at which a security first trades upon the opening of an exchange on a trading day.
Closing Price
Price at which a security is traded on a given trading day, representing the most up-to-date valuation of a security.
Adjusted Closing Price
Price adjusted for corporate actions (splits, dividend payments, etc.).
Simple Return Formula
R_{t+1} = (P_{t+1} − P_t) / P_t
Log Return Formula
r_{t+1} = log(P_{t+1} / P_t) = p_{t+1} − p_t, where p_t = log P_t
k-period simple return Formula
R_t[k] = (P_{t+k} − P_t) / P_t
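The three return formulas above can be sketched in code; the prices below are hypothetical illustrative values, not real data. The sketch also verifies two standard identities: log returns aggregate over time by simple addition, and the k-period simple return compounds the one-period simple returns.

```python
# Simple, log, and k-period returns from a price series (illustrative values).
import math

prices = [100.0, 102.0, 99.0, 101.0, 105.0]

# Simple return: R_{t+1} = (P_{t+1} - P_t) / P_t
simple = [(prices[t + 1] - prices[t]) / prices[t] for t in range(len(prices) - 1)]

# Log return: r_{t+1} = log(P_{t+1} / P_t)
log_ret = [math.log(prices[t + 1] / prices[t]) for t in range(len(prices) - 1)]

# k-period simple return: R_t[k] = (P_{t+k} - P_t) / P_t
k = len(prices) - 1
R_k = (prices[k] - prices[0]) / prices[0]

# Log returns aggregate by addition over time:
assert abs(sum(log_ret) - math.log(prices[-1] / prices[0])) < 1e-12
# The k-period simple return compounds the one-period simple returns:
assert abs(math.prod(1 + R for R in simple) - (1 + R_k)) < 1e-12
```

This additivity is one reason log returns are preferred when aggregating over time.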
Total Return Index
Includes dividend payments
Price Index
Computed without dividend payments.
Excess Return
The difference between the asset return and the return on the risk-free asset.
Unconditional Volatility estimator
Estimated as the sample standard deviation.
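A minimal sketch of this estimator, using illustrative return values (not real data); the √252 annualisation factor assumes daily observations.

```python
# Unconditional volatility as the sample standard deviation of returns.
import math
import statistics

returns = [0.01, -0.02, 0.015, 0.003, -0.007, 0.012]  # illustrative values

vol = statistics.stdev(returns)      # sample std (n - 1 denominator)
annualised = vol * math.sqrt(252)    # common convention for daily returns
```

Whether to use the n or n − 1 denominator rarely matters for the long samples typical in finance.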
absence of autocorrelations
Linear autocorrelations of asset returns are often insignificant, except at very small intraday time scales (≈ 20 minutes), where microstructure effects come into play.
Heavy tails
the (unconditional) distribution of returns seems to display a power-law or Pareto-like tail, with a finite tail index greater than two.
Gain/loss asymmetry
one observes large drawdowns in stock prices and stock index values but not equally large upward movements.
Aggregational Normality
as one increases the time scale over which returns are calculated, their distribution looks more and more like a normal distribution. However, the shape of the distribution is not the same at different time scales.
Intermittency
returns display, at any time scale, a high degree of variability. This is quantified by the presence of irregular bursts in time series of a wide variety of volatility estimators.
Volatility clustering
different measures of volatility display a positive autocorrelation over several days, which quantifies the fact that high-volatility events tend to cluster in time.
Conditional heavy tails
even after correcting returns for volatility clustering (e.g. via GARCH-type models), the residual time series still exhibit heavy tails. However, the tails are less heavy than in the unconditional distribution of returns.
Slow decay of autocorrelation in absolute returns
the autocorrelation function of absolute returns decays slowly as a function of the time lag, roughly as a power law. This is sometimes interpreted as a sign of long-range dependence.
Leverage effect
most measures of volatility of an asset are negatively correlated with the returns of that asset.
Volume/volatility correlation
trading volume is correlated with volatility.
Asymmetry in time scales
coarse-grained measures of volatility predict fine-scale volatility better than the other way round.
Normality of log-returns
It is a convenient assumption for many applications in Finance (cf. Black-Scholes model for option pricing). For stock-index returns, it is consistent with the Central Limit Theorem if log-returns are i.i.d.
Time independence (i.i.d., or independent and identically distributed, process)
It is, to some extent, an implication of the Efficient Market Hypothesis. In fact, the EMH only imposes unpredictability of returns.
Weak Stationarity
Most return sequences can be modeled as a stochastic process with (at least) time-invariant first two moments.
Autocorrelation
The autocorrelations of asset returns R_t are often insignificant, except for very small intraday time scales (≈ 20 minutes) for which microstructure effects come into play.
Correlogram
A plot of the sample autocorrelations.
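The quantity a correlogram plots, the sample autocorrelation function, can be sketched as below; the series is an illustrative toy sequence, not real return data.

```python
# Sample autocorrelation function (ACF): the quantity a correlogram plots.

def acf(x, max_lag):
    """Sample autocorrelations at lags 1..max_lag (1/n covariance convention)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # lag-0 autocovariance
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / n
        out.append(ck / c0)
    return out

# A smoothly varying series has positive low-order autocorrelation:
series = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0, 2.0]
rhos = acf(series, 3)
```

With the 1/n convention used here, each sample autocorrelation is bounded between −1 and 1.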
Volatility Clustering
Large price changes tend to be followed by large price changes, and periods of tranquility alternate with periods of high volatility.
Volatility Asymmetry
Volatility is more affected by negative news than positive news.
Cross-Correlation
A measure of the dependence between two series.
Skewness
Measures the asymmetry of the distribution
Kurtosis
Measures tail-fatness of distribution
Jarque-Bera Test
A test based on the fact that under normality, skewness and excess kurtosis are jointly equal to zero.
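The Jarque-Bera statistic combines the two moment measures defined above, JB = n/6 · (S² + (K − 3)²/4), where S is sample skewness and K is sample kurtosis; under normality it is asymptotically chi-squared with 2 degrees of freedom. A self-contained sketch:

```python
# Jarque-Bera statistic from sample skewness and kurtosis.

def jarque_bera(x):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4); large values reject normality."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n   # second central moment
    m3 = sum((v - m) ** 3 for v in x) / n   # third central moment
    m4 = sum((v - m) ** 4 for v in x) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# For a symmetric sample the skewness term vanishes:
jb = jarque_bera([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Library implementations (e.g. in statistical packages) also return a p-value from the chi-squared(2) distribution; this sketch stops at the statistic itself.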
Kolmogorov-Smirnov Test
Compares the empirical cdf and the assumed theoretical cdf.
Lilliefors Test
Variant of KS test where mean and variance are estimated from the data.
Strict Stationarity
The joint distribution of (r_{t1}, …, r_{tk}) is identical to that of (r_{t1+h}, …, r_{tk+h}) for all h and all choices of (t1, …, tk).
Weak Stationarity (Covariance Stationarity)
The mean of r_t and the covariance between r_t and r_{t−k} are time-invariant for each lag k.
White Noise
A time series process with zero mean, constant variance, and no autocorrelation; when the observations are additionally i.i.d., it is called independent (strict) white noise.
Autocorrelation Function (ACF)
A measure of the serial correlation between r_t and its lagged values.
Linear Time Series
A time series that can be written as a linear combination of a sequence of uncorrelated random variables.
Partial Autocorrelation Function (PACF)
Measures the additional correlation between r_t and r_{t−k} after adjusting for the correlation with intervening lags.
Moving Average (MA) Process
A process where the current value depends on current and past error terms.
Autoregressive Moving Average (ARMA) Process
A mixed process that combines autoregressive (AR) and moving average (MA) components.
Akaike Information Criterion (AIC)
A criterion for model selection that balances model fit and complexity.
Schwarz (Bayesian) Information Criterion (BIC)
Similar to AIC but with a stronger penalty for model complexity.
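Both criteria above are simple functions of the maximized log-likelihood ln L, the number of parameters k, and (for BIC) the sample size n: AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L. A minimal sketch with hypothetical values:

```python
# AIC and BIC from a fitted model's log-likelihood (lower is better).
import math

def aic(loglik, k):
    """AIC = 2k - 2 ln L, with k the number of estimated parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """BIC = k ln(n) - 2 ln L, with n the number of observations."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fitted model: log-likelihood -100 with 3 parameters on n = 100.
a = aic(-100.0, 3)
b = bic(-100.0, 3, 100)
```

Since ln n > 2 once n ≥ 8, BIC penalizes each extra parameter more heavily than AIC in virtually all financial samples, which is why it tends to select smaller models.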
Maximum Likelihood Estimation (MLE)
A method to estimate the parameters of a model by maximizing the likelihood function.
Vector Autoregressive (VAR) Model
A multivariate time series model that captures the interdependencies among multiple series.
Cross-Correlation Matrices
Describe the lead-lag relationship between multiple time series.
Granger Causality
A statistical concept that tests whether one time series is useful in forecasting another.
Impulse Response Analysis
Examines the response of a system to shocks or innovations. Used to analyze how a system reacts to sudden changes.
Variance Decomposition
Determines the proportion of the variance in one variable that can be explained by other variables in a system. Helps in understanding the relative importance of different factors.
Vector Moving-Average (VMA) Model
An extension of the moving-average model to multiple time series. Useful for capturing relationships based on how past errors affect current variables.
Weak Stationarity and Cross-Correlation Matrices
An n-dimensional time series is weakly stationary if its mean vector and cross-covariance matrices are time-invariant.
Test for Granger Causality
Granger causality is defined in terms of linear predictions
Test for Granger Causality
x does not Granger-cause y if and only if the best linear prediction of y_{t+1} given past values of y and x does not depend on the past values of x.
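A sketch of the idea behind the test (lag 1 only, comparing residual sums of squares rather than computing an F-distribution p-value; data are simulated so that x does drive y, and all helper names are illustrative):

```python
# Granger-causality idea via two OLS regressions:
#   restricted:   y_t = a + b*y_{t-1}
#   unrestricted: y_t = a + b*y_{t-1} + c*x_{t-1}
# If adding lagged x barely reduces the RSS, x does not Granger-cause y.
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_rss(X, y):
    # Solve the normal equations (X'X) beta = X'y, return the residual SS.
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

random.seed(0)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(0.3 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.5))

Xr = [[1.0, y[t - 1]] for t in range(1, 500)]            # restricted
Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, 500)]  # unrestricted
yt = y[1:]
rss_r, rss_u = ols_rss(Xr, yt), ols_rss(Xu, yt)
# Here lagged x should cut the RSS substantially, so x Granger-causes y.
```

In practice one uses more lags and an F- or chi-squared test on the RSS reduction; this sketch only illustrates the nested-regression comparison the test is built on.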