
Empirical Methods in Finance Flashcards

Course Outline and Schedule

  • The course covers various topics in financial time series analysis and econometrics.
  • Characteristics of financial time series (Weeks 1 & 2): Focus on asset returns, distributions, and time dependency.
  • Univariate and Multivariate time series analysis (Weeks 3 & 4): Explore time series models.
  • Non-stationarity and cointegration (Week 5): Analyze non-stationary data.
  • Capital Asset Pricing Model (CAPM) (Week 6): Study the Capital Asset Pricing Model.
  • Multi-factor models (Week 7): Discuss multi-factor models.
  • Efficient markets hypothesis (Week 8): Examine the efficient markets hypothesis.
  • Volatility Modeling (Weeks 9 & 10): Explore ARCH and GARCH models.
  • Non-normality and Extreme Value Theory (Weeks 11 & 12): Investigate non-normal distributions and extreme value theory.
  • Multivariate GARCH models (Week 13): Discuss multivariate GARCH models.
  • Dependence and copula models (Week 14): Analyze dependence using copula models.

Lecture 1: Characteristics of Financial Time Series – 1

Objectives

  • Describe the variables used (Asset Returns).
  • Understand main properties of asset returns:
    • Distribution (Moments).
    • Time dependency (Heteroskedasticity).
    • Correlation across assets.
  • Address issues with data.

Prices vs. Returns

  • Discusses the utility of using prices, log prices, and monthly log-returns of the S&P 500 index to analyze financial data.
  • Statistical properties of financial returns can vary significantly based on the frequency of the data.

Prices

  • Notation: P_t represents the price of a security at time t.
  • Sampling Frequency:
    • Irregular: Prices observed at specific events (e.g., public offerings, dividend payments).
    • Regular: Prices observed at fixed intervals (e.g., daily, weekly).
  • Frequency Types:
    • High frequency: Intraday data (tick-by-tick, 5-minute intervals).
    • Low frequency: Daily, monthly, quarterly, or annual data.
  • Types of Prices:
    • Opening price: First trade of the day.
    • Closing price: Last trade of the day.
    • Lowest and Highest prices.
    • Adjusted closing price: Adjusted for corporate actions like splits and dividends.
  • Typically, the end-of-period price (closing price on the last day of the period) is used.

Simple Return

  • Reasons for using returns instead of prices:
    • Investors focus on returns for investment decisions.
    • Returns are easier to handle statistically.
  • Formula:
    • R_{t+1} = \frac{P_{t+1} - P_t}{P_t}
    • Alternative form: P_{t+1} = P_t(1 + R_{t+1})
    • Where:
      • P_t: Price of the asset at time t.
      • R_{t+1}: One-period simple return from date t to t+1.
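A minimal numpy sketch of the simple-return formula above; the price series is made up for illustration:

```python
import numpy as np

# Hypothetical price series P_0, ..., P_4 (illustrative values only)
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.0])

# R_{t+1} = (P_{t+1} - P_t) / P_t for each consecutive pair of prices
simple_returns = np.diff(prices) / prices[:-1]
print(simple_returns)
```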

Continuously Compounded (Log) Return

  • If an annual interest rate of R_{t+1} is split into m payments in a year, the interest rate for each payment is R_{t+1}/m.
  • The net value of the deposit one year later will be \left(1 + \frac{R_{t+1}}{m}\right)^m
  • In the limit case where the interest is paid continuously (m \rightarrow \infty):
    • \lim_{m \to \infty} \left(1 + \frac{R_{t+1}}{m}\right)^m = \exp(R_{t+1})
    • Compounding continuously at rate r_{t+1} therefore gives P_{t+1} = P_t \times \exp(r_{t+1})
  • The continuously compounded (or log) return r_{t+1} is:
    • r_{t+1} = \log \left(\frac{P_{t+1}}{P_t}\right) = p_{t+1} - p_t
    • Where p_t = \log(P_t) is the log price

Temporal Aggregation (Multiple-Period Return)

  • Holding the asset for k periods from t to t+k yields the k-period simple return:
    • R_t[k] = \frac{P_{t+k} - P_t}{P_t}
    • P_{t+k} = P_t (1 + R_t[k]) = P_t \prod_{j=1}^{k} (1 + R_{t+j})
    • R_t[k] = \left(\prod_{j=1}^{k} (1 + R_{t+j})\right) - 1
  • The k-period log return is simply the sum of continuously compounded one-period returns:
    • r_t[k] = \log(1 + R_t[k]) = \log \left(\prod_{j=1}^{k} (1 + R_{t+j})\right) = \sum_{j=1}^{k} \log(1 + R_{t+j})
    • r_t[k] = \sum_{j=1}^{k} r_{t+j}
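A short numpy check of the aggregation identities above, on a hypothetical price path (values are illustrative):

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.0])   # illustrative prices
R = np.diff(prices) / prices[:-1]                        # one-period simple returns
r = np.diff(np.log(prices))                              # one-period log returns

k_simple = np.prod(1 + R) - 1    # R_t[k]: simple returns compound multiplicatively
k_log = np.sum(r)                # r_t[k]: log returns simply add up

# Both agree with the direct price-based definitions over the full horizon
assert np.isclose(k_simple, prices[-1] / prices[0] - 1)
assert np.isclose(k_log, np.log(prices[-1] / prices[0]))
```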

Contemporaneous Aggregation (Portfolio Return)

  • Let p be the portfolio of N assets with weight \alpha_i on asset i.
  • The simple portfolio return R_{p,t+1} is the weighted average of the simple returns of the assets:
    • R_{p,t+1} = \sum_{i=1}^{N} \alpha_{i} R_{i,t+1}
  • In contrast, with continuous compounding, the log portfolio return r_{p,t+1} is not the weighted average of the log returns of the assets:
    • r_{p,t+1} = \log(1 + R_{p,t+1}) = \log \left(1 + \sum_{i=1}^{N} \alpha_{i} R_{i,t+1}\right) \neq \sum_{i=1}^{N} \alpha_{i} r_{i,t+1}
  • Whether to use simple returns or log returns is an empirical issue that depends on the problem at hand.
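A minimal sketch of the contrast above, with made-up weights and returns: the weighted average of log returns only approximates the portfolio log return.

```python
import numpy as np

alpha = np.array([0.6, 0.4])        # hypothetical portfolio weights (sum to one)
R = np.array([0.05, -0.02])         # hypothetical one-period simple returns

R_p = alpha @ R                     # exact portfolio simple return (weighted average)
r_p = np.log(1 + R_p)               # exact portfolio log return
r_weighted = alpha @ np.log(1 + R)  # weighted average of log returns: not equal to r_p
print(r_p, r_weighted)
```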

Dividend Payments

  • For assets with periodic dividend payments, asset returns must be redefined.
    • R_{t+1} = \frac{P_{t+1} + D_{t+1}}{P_t} - 1
    • r_{t+1} = \log(P_{t+1} + D_{t+1}) - \log(P_t)
    • Where D_{t+1} is the dividend payment of an asset between dates t and t+1
  • P_{t+1} is the price of the asset at the end of period t+1 (the dividend is not included in P_{t+1}).
  • Most reference indices include dividend payments, but not all.
  • Some index providers (e.g., MSCI) compute both versions: without dividends (“price index”) and with dividends (“total return index”).
  • On Datastream, stock prices are available with both definitions.

Excess Returns

  • Excess return is simply the difference between the asset return and the return on the risk-free asset
    • Z_{i,t+1} = R_{i,t+1} - R_{0,t}
    • z_{i,t+1} = r_{i,t+1} - r_{0,t}
    • With R_{0,t} and r_{0,t} the simple return and log return of the risk-free asset.
  • The index t is used because the risk-free rate for the period from t to t+1 is known at time t
  • The risk-free rate should satisfy two constraints:
    • Low risk of default (rating of the issuer)
    • Low risk of a loss of capital (choice of the maturity of the security)
  • In practice, the risk-free rate is often chosen as the return of a Treasury bill with the same maturity as the investment horizon

Definition of Volatility

  • The unconditional volatility is estimated as the sample standard deviation:
    • s = \sqrt{\frac{1}{T-1} \sum_{t=1}^{T} (R_t - \hat{\mu})^2}
    • \hat{\sigma} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} (R_t - \hat{\mu})^2}
    • Where R_t is the simple return on day t and \hat{\mu} is the sample mean over the T days.
  • s is the unbiased estimator
  • \hat{\sigma} is the ML estimator, asymptotically unbiased
  • For the moment, we focus on a single constant estimate; the case of dynamic (time-varying) volatility is discussed later.
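A minimal numpy sketch of the two estimators on simulated returns (the data are illustrative, not from any real series):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(loc=0.0005, scale=0.01, size=1000)   # simulated daily returns

s = np.std(R, ddof=1)          # divides by T-1: unbiased variance estimator
sigma_hat = np.std(R, ddof=0)  # divides by T: maximum-likelihood estimator
print(s, sigma_hat)
```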

Characteristics of Financial Asset Returns (Stylized Facts)

  • Absence of Autocorrelations: Linear autocorrelations of asset returns are often insignificant, except for very small intraday time scales (~20 minutes, microstructure effects).
  • Heavy Tails: The unconditional distribution of returns displays a power-law or Pareto-like tail, with a finite tail index greater than two.
  • Gain/Loss Asymmetry: Large drawdowns are observed in stock prices, but equally large upward movements are not.
  • Aggregational Normality: As the time scale increases, the distribution of returns looks more like a normal distribution. The shape varies across time scales.
  • Intermittency: Returns display high variability, quantified by irregular bursts in time series of volatility estimators.
  • Volatility Clustering: Volatility measures display positive autocorrelation over several days, indicating that high-volatility events tend to cluster in time.
  • Conditional Heavy Tails: Even after correcting for volatility clustering (e.g., via GARCH models), residual time series still exhibit heavy tails, but less heavy than the unconditional distribution.
  • Slow Decay of Autocorrelation in Absolute Returns: Autocorrelation function of absolute returns decays slowly, roughly as a power law, suggesting long-range dependence.
  • Leverage Effect: Volatility measures of an asset are negatively correlated with returns of that asset.
  • Volume/Volatility Correlation: Trading volume is correlated with volatility.
  • Asymmetry in Time Scales: Coarse-grained measures of volatility predict fine-scale volatility better than the other way around.

Early Assumptions in Finance

  • Normality of Log-Returns: Convenient for applications like the Black-Scholes model. Consistent with the Central Limit Theorem if log-returns are i.i.d.
  • Time Independence (i.i.d. Process): A convenient assumption that is stronger than the Efficient Market Hypothesis (EMH), which only requires that returns be unpredictable.

Moments of a Random Variable

  • Definitions
    • Assume X (log-returns) has the following cumulative distribution function (cdf): F_X(x) = Pr[X \leq x] = \int_{-\infty}^{x} f_X(u) du
    • where f_X(x) is the probability density function (pdf)
  • Mean (Expected Value):
    • \mu = E[X] = \int_{-\infty}^{\infty} x f_X(x) dx
  • Variance:
    • \sigma^2 = V[X] = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x) dx
  • k-th Non-Central Moment:
    • m_k = E[X^k] = \int_{-\infty}^{\infty} x^k f_X(x) dx
    • For k = 1, 2, …
    • The first non-central moment m_1 is the expected value of X.
  • k-th Central Moment:
    • \mu_k = E[(X - m_1)^k] = \int_{-\infty}^{\infty} (x - m_1)^k f_X(x) dx
    • For k = 1, 2, …
    • By construction, \mu_1 = 0
    • The second central moment \mu_2 \equiv \sigma^2 = m_2 - \mu^2 is the variance of X
    • The 3rd central moment \mu_3 = E[(X - m_1)^3] measures the asymmetry of the distribution
    • The 4th central moment \mu_4 = E[(X - m_1)^4] measures its tail-fatness.

Skewness and Kurtosis

  • Standardized Skewness:
    • S[X] = E\left[\left(\frac{X - m_1}{\sigma}\right)^3\right] = \frac{\mu_3}{\sigma^3}
  • Standardized Kurtosis:
    • K[X] = E\left[\left(\frac{X - m_1}{\sigma}\right)^4\right] = \frac{\mu_4}{\sigma^4}
  • Interpretation:
    • Negative Skewness (S[X] < 0): Large negative realizations are more likely (crashes are more likely than booms).
    • Large Kurtosis (K[X]): Large realizations (positive or negative) are more likely.
  • Normal Distribution Values:
    • Skewness = 0
    • Standardized Kurtosis = 3
    • Excess Kurtosis = K[X] - 3

Measures of location and dispersion

  • Consider a sample of log returns \{r_t\}, for t = 1, …, T, which are realizations of a random variable.
  • Sample Mean (Average):
    • \hat{\mu} = \bar{r} = \frac{1}{T} \sum_{t=1}^{T} r_t
    • The sample average is the simplest estimate of location. If the data sample is drawn from a normal distribution, the sample average is the optimal measure of location.
  • Variance:
    • It is the optimal measure for normal returns.
    • s^2 = \frac{1}{T-1} \sum_{t=1}^{T} (r_t - \hat{\mu})^2
    • \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^{T} (r_t - \hat{\mu})^2
    • s^2 is the unbiased estimator, while \hat{\sigma}^2 is the ML estimator.
  • Standard Deviation:
    • The square root of the variance.
  • The mean and variance are not robust to outliers.

Higher Moments

  • The sample counterpart of the standardized skewness and kurtosis are
    • \hat{S} = \frac{1}{T} \sum_{t=1}^{T} \left(\frac{r_t - \bar{r}}{\hat{\sigma}}\right)^3
    • \hat{K} = \frac{1}{T} \sum_{t=1}^{T} \left(\frac{r_t - \bar{r}}{\hat{\sigma}}\right)^4
  • For a normal distribution, the skewness and the excess kurtosis are zero.
  • If \hat{S} < 0, the distribution has a fatter left tail.
  • If \hat{S} > 0, the distribution has a fatter right tail.
  • If \hat{K} < 3, the distribution has thinner tails than the normal.
  • If \hat{K} > 3, the distribution has fatter tails than the normal.
  • If we reject normality, we should probably use robust measures for higher moments.
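A minimal sketch of the sample skewness and kurtosis formulas above, applied to simulated heavy-tailed returns (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=5000) * 0.01       # simulated heavy-tailed returns

mu_hat = r.mean()
sigma_hat = np.sqrt(((r - mu_hat) ** 2).mean())  # ML standard deviation (1/T)
z = (r - mu_hat) / sigma_hat                     # standardized returns

S_hat = (z ** 3).mean()    # sample skewness (0 under normality)
K_hat = (z ** 4).mean()    # sample kurtosis (3 under normality)
print(S_hat, K_hat)
```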

Tests for Normality

  • Tests:
    • Jarque-Bera test (based on the moments of the distribution).
    • Kolmogorov-Smirnov test (based on the empirical distribution function).
    • Anderson-Darling test (based on the empirical distribution function).

Jarque-Bera Test

  • Based on the fact that under normality, skewness and excess kurtosis are jointly equal to zero.
  • Hypotheses
    • H_0: Skewness = 0 and Kurtosis = 3
    • The normal distribution is defined by the first two moments, so there is no constraint on the mean and variance under the null.
  • Under normality, the sample skewness and kurtosis are mutually independent.
  • Test Statistic:
    • JB = T \left[\frac{S^2}{6} + \frac{(K - 3)^2}{24}\right]
    • Under the null hypothesis, it is asymptotically distributed as a \chi^2(2).
    • If JB \geq \chi^2_{\alpha}(2), then we reject the null hypothesis at level \alpha.
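A minimal sketch of the Jarque-Bera recipe on simulated data; scipy is assumed for the \chi^2(2) critical value:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
r = rng.standard_t(df=5, size=2000) * 0.01   # simulated heavy-tailed returns
T = r.size

z = (r - r.mean()) / r.std()                 # standardized returns (ML sigma)
S, K = (z ** 3).mean(), (z ** 4).mean()      # sample skewness and kurtosis

JB = T * (S ** 2 / 6 + (K - 3) ** 2 / 24)
reject = JB >= chi2.ppf(0.95, df=2)          # reject normality at the 5% level?
print(JB, reject)
```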

Empirical Distribution Function Goodness-of-Fit Tests

  • Compare the empirical cdf and the assumed theoretical cdf F^*(x; \theta) (here, the normal distribution).
  • The time series (r_1, r_2, …, r_T) is associated with some unknown cdf, F_r(x).
  • As the true distribution is unknown, it is approximated by the empirical cdf, G_T(x), defined as a step function with steps of height 1/T at each observation:
    • G_T(x) = \frac{1}{T} \sum_{t=1}^{T} 1\{r_t \leq x\}
  • The empirical cdf G_T(x) is compared with the assumed cdf F^*(x; \theta) to see if there is good fit.
  • Hypotheses
    • H_0: F_r(x) = F^*(x; \theta) for all x.
    • H_A: F_r(x) \neq F^*(x; \theta) for at least one value of x.
  • EDF tests are based on the result that if F_r is the cdf of X, then the random variable F_r(X) is uniformly distributed between 0 and 1.

Kolmogorov-Smirnov and Lilliefors Tests

  • A measure of the difference between two cdfs is the largest distance between the two functions G_T(x) and F^*(x; \theta).
  • KS Test Statistic:
    • KS = \max_{t=1,\dots,T} |F^*(r_t; \theta) - G_T(r_t)|
  • Recipe for the normal distribution N(\mu, \sigma^2):
    • If \mu and \sigma^2 are unknown, estimate \hat{\mu} and s^2 from the original data (Lilliefors test)
    • Sort the sample data in increasing order and denote the sorted sample \{r_t\}_{t=1}^{T}, with r_1 \leq r_2 \leq … \leq r_T. By construction, we have G_T(r_t) = \frac{t}{T}.
    • Evaluate the assumed theoretical cdf F^*(r_t; \theta) for all values \{r_t\}_{t=1}^{T}. Use (\mu, \sigma^2) if they are known or (\hat{\mu}, s^2) if they are not
  • Compute the KS-Lilliefors test statistic:
    • KS = \max_{t=1,\dots,T} \left| F^*(r_t; \theta) - \frac{t}{T} \right|
    • Critical values: 0.805/\sqrt{T}, 0.886/\sqrt{T}, and 1.031/\sqrt{T} for a 10%, 5%, and 1% level test
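A minimal sketch of the KS-Lilliefors recipe above on simulated data; scipy's normal cdf is assumed:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
r = rng.normal(size=500)                 # simulated sample (illustrative)
T = r.size

mu_hat, s = r.mean(), r.std(ddof=1)      # Lilliefors: parameters estimated from the data
r_sorted = np.sort(r)                    # sorted sample, so G_T(r_t) = t/T

F_star = norm.cdf(r_sorted, loc=mu_hat, scale=s)   # theoretical cdf at each observation
G = np.arange(1, T + 1) / T                        # empirical cdf t/T

KS = np.max(np.abs(F_star - G))
print(KS, 0.886 / np.sqrt(T))            # compare with the 5%-level critical value
```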

Lecture 2: Characteristics of Financial Time Series – 2

Objectives

  • Addresses the time dependency of asset returns.
  • Explores the properties of correlation across asset returns.
  • Discusses issues with data.

Stationarity

  • Prices are generally non-stationary, while returns are typically stationary.
  • Prices are often non-stationary due to economic expansion, productivity increases, or financial crises.
  • Returns tend to fluctuate around a constant level, indicating a constant mean over time (mean-reverting process).
  • Most return sequences can be modeled as a stochastic process with time-invariant first two moments (weak stationarity).

Serial Correlation

  • No serial correlation: Autocorrelations of asset returns R_t are often insignificant, except for very small intraday time scales (≈ 20 minutes) where microstructure effects occur.
  • Absence of serial correlation does not mean the returns are independent.
  • Squared returns and absolute values of returns often display high serial correlation.

Time Dependence of Moments

  • The time dependence of moments is crucial for testing market efficiency.
  • Assuming a weakly (or covariance) stationary time series (r_1, …, r_T), we have:
    • E[r_t] = \mu \quad \forall t
    • V[r_t] = \sigma^2 \quad \forall t
    • Cov[r_t, r_{t-k}] = \gamma_k \quad \forall t, k
  • \gamma_k is the auto-covariance, measuring the dependency of r_t with respect to its past observation r_{t-k}.
    • \gamma_0 = V[r_t] = E[(r_t - \mu)^2]
    • \gamma_k = \gamma_{-k}
  • For a weakly stationary process, the autocovariance \gamma_k depends on the horizon k, but not on the date t.

Auto-Correlation Function (ACF)

  • The auto-correlation function (ACF) is preferred over auto-covariance as a measure of dependency because it is a bounded measure:
    • \rho_k = \frac{Cov[r_t, r_{t-k}]}{\sqrt{V[r_t]V[r_{t-k}]}} = \frac{Cov[r_t, r_{t-k}]}{V[r_t]} = \frac{\gamma_k}{\gamma_0}
    • \rho_k = \rho_{-k}, \quad \rho_0 = 1, \quad -1 \leq \rho_k \leq 1
  • A consistent estimator of \rho_k is:
    • \hat{\rho}_k = \frac{\frac{1}{T-k} \sum_{t=k+1}^{T} (r_t - \bar{r})(r_{t-k} - \bar{r})}{\frac{1}{T-1} \sum_{t=1}^{T} (r_t - \bar{r})^2}
  • Asymptotic properties of \hat{\rho}_k are complex due to its nature as a ratio of quadratic functions of r_t.
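A minimal numpy sketch of the \hat{\rho}_k estimator above, on simulated returns:

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(size=1000)                    # simulated return series (illustrative)
T, r_bar = r.size, r.mean()
denom = ((r - r_bar) ** 2).sum() / (T - 1)   # sample variance (denominator)

def rho_hat(k):
    # numerator: 1/(T-k) times the sum of lag-k cross-products of demeaned returns
    num = ((r[k:] - r_bar) * (r[:-k] - r_bar)).sum() / (T - k)
    return num / denom

print([round(rho_hat(k), 3) for k in range(1, 6)])
```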

Asymptotic Distribution of Serial Correlation

  • General Case:
    • The asymptotic distribution of (\hat{\rho}_1, …, \hat{\rho}_K) is: \sqrt{T} \begin{bmatrix} \hat{\rho}_1 - \rho_1 \\ \vdots \\ \hat{\rho}_K - \rho_K \end{bmatrix} \sim N \left( \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}, V \right)
    • V_{i,j} = \sum_{s=-\infty}^{\infty} (\rho_{s+i} + \rho_{s-i} - 2\rho_i \rho_s)(\rho_{s+j} + \rho_{s-j} - 2\rho_j \rho_s)
  • Case 1: r_t is an i.i.d. process:
    • V[\hat{\rho}_k] = \frac{1}{T}
    • \sqrt{T} \hat{\rho}_k is asymptotically normal N(0,1) for all k > 0.
  • Case 2: r_t is a stationary moving average MA(q) process:
    • r_t = \mu + \sum_{i=0}^{q} \theta_i \epsilon_{t-i}, with \theta_0 = 1 and \rho_k = 0 for k > q.
    • V[\hat{\rho}_k] = \frac{1}{T} \left[1 + 2 \sum_{i=1}^{q} \rho_i^2 \right]
    • \sqrt{T} \hat{\rho}_k is asymptotically normal N(0, V) with V = 1 + 2 \sum_{i=1}^{q} \rho_i^2 for all k > q.
    • We test \hat{\rho}_k = 0 by assuming that the process is an MA(k-1) to estimate V.

Box-Pierce / Ljung-Box Tests

  • Hypothesis:
    • Null hypothesis: H_0 : \rho_1 = … = \rho_p = 0
    • Alternative hypothesis: H_1 : \rho_k \neq 0 for some k = 1, …, p.
  • Box-Pierce Q-Statistic:
    • Q_p = T \sum_{i=1}^{p} \hat{\rho}_i^2
    • Under the null hypothesis H_0 : \rho_1 = … = \rho_p = 0, all the \hat{\rho}_k have V[\hat{\rho}_k] = \frac{1}{T}.
    • Therefore, Q_p is asymptotically distributed as \chi^2(p).
    • If Q_p \geq \chi^2_{\alpha}(p), then we reject the null hypothesis at level \alpha.
  • Ljung-Box Q’ Statistic:
    • Q'_p = T(T + 2) \sum_{j=1}^{p} \frac{1}{T - j} \hat{\rho}_j^2
    • Has better finite-sample properties, with the same asymptotic distribution.
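A minimal sketch of both Q-statistics, built from the sample autocorrelation estimator above (data simulated for illustration; scipy supplies the \chi^2(p) critical value):

```python
import numpy as np
from scipy.stats import chi2

def q_tests(r, p=10, alpha=0.05):
    T, r_bar = r.size, r.mean()
    denom = ((r - r_bar) ** 2).sum() / (T - 1)
    rho = np.array([(((r[k:] - r_bar) * (r[:-k] - r_bar)).sum() / (T - k)) / denom
                    for k in range(1, p + 1)])
    Q = T * (rho ** 2).sum()                                                # Box-Pierce
    Q_prime = T * (T + 2) * ((rho ** 2) / (T - np.arange(1, p + 1))).sum()  # Ljung-Box
    return Q, Q_prime, chi2.ppf(1 - alpha, df=p)

r = np.random.default_rng(5).normal(size=1000)   # simulated returns (illustrative)
print(q_tests(r))                                # reject if Q (or Q') exceeds the critical value
```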

Volatility Clustering

  • Volatility clustering implies that large price changes (returns with large absolute values or squared returns) occur in clusters.
  • Large price changes tend to be followed by large price changes.
  • Periods of tranquility alternate with periods of high volatility.
  • Volatility clustering is a consequence of the autocorrelation of the squared returns.

Serial Correlation in Volatility

  • To test for time dependency in volatility, we need a time-varying measure of volatility.
  • Two possible measures:
    • Mean-adjusted squared returns.
    • Absolute returns.
  • Assume returns have the following dynamics:
    • r_t = \mu + \epsilon_t with \epsilon_t = \sigma_t z_t
      • \mu is the constant mean.
      • \epsilon_t the unexpected returns.
      • \sigma_t the time-varying volatility based on information at date t–1.
      • z_t is an N(0,1) innovation.

Proxies for Conditional Volatility

1. Conditional Expectation of Squared Returns:

  • Given information set I_{t-1}.
    • E[\epsilon_t^2 | I_{t-1}] = E[\sigma_t^2 z_t^2 | I_{t-1}] = \sigma_t^2 E[z_t^2 | I_{t-1}] = \sigma_t^2
    • Because z_t^2 is distributed as a \chi^2(1), whose mean equals 1.
    • \epsilon_t^2 can be viewed as a proxy for volatility at time t.

2. Conditional Expectation of Absolute Returns:

  • If \epsilon_t \sim N(0, \sigma_t^2), then:
    • E[|\epsilon_t| \, | \, I_{t-1}] = \sigma_t \sqrt{2/\pi}.
    • |\epsilon_t| / \sqrt{2/\pi} can be used as a proxy for \sigma_t.
  • Both measures (1 and 2) are noisy estimators of conditional volatility.
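A minimal sketch of the two volatility proxies on simulated returns with a constant mean (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(6)
r = rng.normal(loc=0.0005, scale=0.01, size=1000)   # simulated returns

eps = r - r.mean()                              # unexpected returns (constant-mean model)
proxy_sq = eps ** 2                             # proxy for sigma_t^2
proxy_abs = np.abs(eps) / np.sqrt(2 / np.pi)    # proxy for sigma_t under normality
print(proxy_sq[:3], proxy_abs[:3])
```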

Volatility Asymmetry

  • Volatility is more affected by negative news than positive news.
  • Captured through regressions:
    • \epsilon_t^2 = \omega + \beta \epsilon_{t-1}^2 + \gamma \epsilon_{t-1}^2 \times 1\{\epsilon_{t-1} < 0\} or
    • |\epsilon_t| = \omega + \beta |\epsilon_{t-1}| + \gamma |\epsilon_{t-1}| \times 1\{\epsilon_{t-1} < 0\}
      • \beta: Direct effect of past shocks.
      • \gamma: Additional impact of negative return shocks.
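A minimal OLS sketch of the squared-return regression above, on simulated data; since the simulated returns are i.i.d., the true \beta and \gamma are zero, so the output only illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)
r = rng.normal(size=2000) * 0.01   # simulated returns (illustrative)
eps = r - r.mean()                 # unexpected returns

y = eps[1:] ** 2                               # today's squared shock
x1 = eps[:-1] ** 2                             # yesterday's squared shock
x2 = eps[:-1] ** 2 * (eps[:-1] < 0)            # interaction with the negative-shock dummy
X = np.column_stack([np.ones_like(x1), x1, x2])

omega, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
print(omega, beta, gamma)          # gamma > 0 would signal asymmetry (leverage effect)
```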

Cross-Correlation

  • Covariance between two series r_{i,t} and r_{j,t} is defined as:
    • Cov[r_{i,t}, r_{j,t}] = \frac{1}{T} \sum_{t=1}^{T} (r_{i,t} - \mu_i)(r_{j,t} - \mu_j)
      • \mu_i and \mu_j are the sample means of r_{i,t} and r_{j,t}.
  • Correlation between r_{i,t} and r_{j,t} is defined as:
    • \rho[r_{i,t}, r_{j,t}] = \frac{Cov[r_{i,t}, r_{j,t}]}{\sqrt{V[r_{i,t}]V[r_{j,t}]}}
    • By construction, \rho[r_{i,t}, r_{i,t}] = 1, \rho[r_{i,t}, r_{j,t}] = \rho[r_{j,t}, r_{i,t}], and -1 \leq \rho[r_{i,t}, r_{j,t}] \leq 1
  • The correlation is a measure of linear association. Series may be uncorrelated but not independent.

Estimation of Cross-Correlation

  • The correlation between r_{i,t} and r_{j,t} is estimated as:
    • \hat{\rho}[r_{i,t}, r_{j,t}] = \frac{\sum_{t=1}^{T} (r_{i,t} - \hat{\mu}_i)(r_{j,t} - \hat{\mu}_j)}{\sqrt{\sum_{t=1}^{T} (r_{i,t} - \hat{\mu}_i)^2 \sum_{t=1}^{T} (r_{j,t} - \hat{\mu}_j)^2}}
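A minimal numpy sketch of the cross-correlation estimator, on two simulated series that are correlated by construction:

```python
import numpy as np

rng = np.random.default_rng(8)
r_i = rng.normal(size=1000)
r_j = 0.5 * r_i + rng.normal(size=1000)     # correlated with r_i by construction

di, dj = r_i - r_i.mean(), r_j - r_j.mean()
rho_ij = (di * dj).sum() / np.sqrt((di ** 2).sum() * (dj ** 2).sum())
print(rho_ij, np.corrcoef(r_i, r_j)[0, 1])  # the formula matches np.corrcoef
```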

Rejection of Previous Assumptions

  • For actual returns, the assumptions of normality and time independency have been widely rejected.
    • Asymmetry.
    • Fat tails.
    • Aggregational normality.
    • No serial correlation.
    • Volatility clustering.
    • Asymmetric volatility.
    • Time-varying correlation.

Portfolio

  • N risky securities
  • Security returns: R_{t+1} = (R_{1,t+1}, …, R_{N,t+1})'
  • Expected returns: \mu_{t+1} = E[R_{t+1}] = (\mu_{1,t+1}, …, \mu_{N,t+1})' (measured in t)
  • Covariance matrix: \Sigma_{t+1} = E[(R_{t+1} - \mu_{t+1})(R_{t+1} - \mu_{t+1})'] = \begin{bmatrix} \sigma_{1,t+1}^2 & \cdots & \sigma_{1N,t+1} \\ \vdots & \ddots & \vdots \\ \sigma_{1N,t+1} & \cdots & \sigma_{N,t+1}^2 \end{bmatrix}
  • Portfolio weights: \alpha_t = (\alpha_{1,t}, …, \alpha_{N,t})', with \alpha_t'e = \sum_{i=1}^{N} \alpha_{i,t} = 1, where e = (1, …, 1)'
  • Ex-post portfolio return: R_{p,t+1} = \alpha_t'R_{t+1} = \sum_{i=1}^{N} \alpha_{i,t} R_{i,t+1} (realized return)
  • Ex-ante portfolio expected return: E[R_{p,t+1}] = \mu_{p,t+1} = \alpha_t'\mu_{t+1} = \sum_{i=1}^{N} \alpha_{i,t} \mu_{i,t+1}
  • Ex-ante portfolio variance: V[R_{p,t+1}] = \sigma_{p,t+1}^2 = \alpha_t'\Sigma_{t+1}\alpha_t = \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_{i,t} \alpha_{j,t} \sigma_{ij,t+1}
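A minimal numpy sketch of the ex-ante portfolio moments above; the weights, expected returns, and covariance matrix are illustrative assumptions:

```python
import numpy as np

alpha = np.array([0.5, 0.3, 0.2])            # portfolio weights, alpha'e = 1
mu = np.array([0.08, 0.05, 0.03])            # expected returns mu_{t+1}
Sigma = np.array([[0.040, 0.010, 0.000],
                  [0.010, 0.020, 0.005],
                  [0.000, 0.005, 0.010]])    # covariance matrix Sigma_{t+1}

mu_p = alpha @ mu                # mu_p = alpha' mu
var_p = alpha @ Sigma @ alpha    # sigma_p^2 = alpha' Sigma alpha
print(mu_p, var_p, np.sqrt(var_p))
```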