Parametric Families & Models and Non-Parametric Modeling

Parametric Families & Models

  • Output Error (OE)
    • Equation: y(t) = \frac{B(q)}{A(q)} u(t) + e(t)
    • Where:
      • C, D, F = 1
      • y(t) = G(q; \theta) u(t) + e(t)
    • e(t) is white noise.
    • G(q; \theta) = \frac{B(q; \theta)}{A(q; \theta)}
    • B(q) = b_0 + b_1 q^{-1} + … + b_{n_b} q^{-n_b}
    • The model is IIR: the rational G(q; \theta) has an infinite impulse response.
    • Iterative optimization is required.
    • \hat{y}(t|t-1; \theta) = G(q; \theta) u(t)
    • J(\theta) = \frac{1}{N} \sum_{t=1}^{N} [y(t) - \hat{y}(t|t-1; \theta)]^2
    • J(\theta) is generally a nonlinear function of \theta, and thus non-convex.
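The OE cost can be evaluated directly by filtering the input through the candidate model. A minimal numpy sketch, assuming a hypothetical first-order system with made-up coefficients (b_0 = 2.0, a_1 = -0.5) and a small white measurement noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_G(u, b0, a1):
    """Filter u through G(q) = b0 / (1 + a1 q^{-1})."""
    x = np.zeros_like(u)
    for t in range(len(u)):
        x[t] = b0 * u[t] - a1 * (x[t - 1] if t > 0 else 0.0)
    return x

N = 500
u = rng.standard_normal(N)
b0_true, a1_true = 2.0, -0.5                 # assumed "true" system
y = simulate_G(u, b0_true, a1_true) + 0.1 * rng.standard_normal(N)

def J(theta):
    b0, a1 = theta
    y_hat = simulate_G(u, b0, a1)            # OE predictor: y_hat = G(q; theta) u(t)
    return np.mean((y - y_hat) ** 2)

# Cost at the true parameters is near the noise variance;
# any perturbed theta gives a strictly larger cost.
print(J([b0_true, a1_true]), J([1.0, 0.3]))
```

Because `y_hat` depends on `a1` through the recursion, J is not quadratic in θ, which is why OE needs iterative (e.g. Gauss-Newton) optimization rather than a closed-form solve.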

Auto-Regressive with Exogenous Input (ARX)

  • Equation: A(q) y(t) = B(q) u(t) + e(t)
  • Where:
    • C, D, F = 1
    • y(t) = \frac{B(q)}{A(q)} u(t) + \frac{1}{A(q)} e(t)
    • The noise model H(q) = \frac{1}{A(q)} has poles but no zeros, and shares its poles with G(q).
    • \hat{y}(t|t-1; \theta) = B(q; \theta) u(t) + [1 - A(q; \theta)] y(t)
    • A(q) = 1 + a_1 q^{-1} + … + a_{n_a} q^{-n_a}
    • J(\theta) = \frac{1}{N} \sum_{t=1}^{N} [A(q; \theta) y(t) - B(q; \theta) u(t)]^2
    • Prediction error: [1 + a_1 q^{-1} + … + a_{n_a} q^{-n_a}] y(t) - [b_0 + b_1 q^{-1} + … + b_{n_b} q^{-n_b}] u(t)

ARX Model Details

  • Equation: y(t) = b_0 u(t) + b_1 u(t-1) + … + b_{n_b} u(t-n_b) - a_1 y(t-1) - … - a_{n_a} y(t-n_a) + e(t)
  • Linear in parameters
    • \theta = [b_0, b_1, …, b_{n_b}, a_1, …, a_{n_a}]^T
    • J(\theta) is quadratic in the parameters.
    • Hence J(\theta) is convex with a unique global minimum, found by linear least squares.
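Because the ARX predictor is linear in θ, the least-squares fit is a single `lstsq` call. A sketch with a hypothetical second-order example (a_1 = -0.7, b_0 = 1.0, b_1 = 0.5 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)

# Assumed true ARX system: y(t) = -a1*y(t-1) + b0*u(t) + b1*u(t-1) + e(t)
a1, b0, b1 = -0.7, 1.0, 0.5
y = np.zeros(N)
y[0] = b0 * u[0] + e[0]
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b0 * u[t] + b1 * u[t - 1] + e[t]

# Regressor rows phi(t) = [-y(t-1), u(t), u(t-1)], so y(t) = phi(t) . [a1, b0, b1]
Phi = np.column_stack([-y[:-1], u[1:], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta_hat)   # close to [a1, b0, b1]
```

Since e(t) is white, this least-squares estimate is consistent: the estimates converge to the true parameters as N grows.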

Auto-Regressive Moving Average with Exogenous Input (ARMAX)

  • Equation: A(q) y(t) = B(q) u(t) + C(q) e(t)
  • Where: D, F = 1
    • \hat{y}(t|t-1) = \frac{B(q)}{C(q)} u(t) + \frac{C(q) - A(q)}{C(q)} y(t)
    • C(q) = 1 + c_1 q^{-1} + … + c_{n_c} q^{-n_c}
    • The noise model H(q) = \frac{C(q)}{A(q)} shares its poles with G(q) but has independent zeros.
    • J(\theta) is generally non-convex.
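The ARMAX predictor can be checked numerically: rearranging the predictor, the prediction error ε satisfies C(q) ε(t) = A(q) y(t) - B(q) u(t), and at the true parameters it reproduces the white noise e(t) exactly. A sketch with made-up first-order coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 300
a1, b0, c1 = -0.6, 1.5, 0.4          # assumed true parameters
u = rng.standard_normal(N)
e = rng.standard_normal(N)

# Simulate ARMAX: y(t) = -a1*y(t-1) + b0*u(t) + e(t) + c1*e(t-1)
y = np.zeros(N)
for t in range(N):
    y[t] = b0 * u[t] + e[t]
    if t >= 1:
        y[t] += -a1 * y[t - 1] + c1 * e[t - 1]

# Prediction error recursion: eps(t) = y(t) + a1*y(t-1) - b0*u(t) - c1*eps(t-1)
eps = np.zeros(N)
for t in range(N):
    eps[t] = y[t] - b0 * u[t]
    if t >= 1:
        eps[t] += a1 * y[t - 1] - c1 * eps[t - 1]

print(np.max(np.abs(eps - e)))   # predictor recovers the white noise e
```

Note the recursion for ε involves past values of ε itself (the 1/C(q) filtering), which is what makes J(θ) non-quadratic in the c-parameters.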

Box-Jenkins (BJ)

  • Equation: y(t) = \frac{B(q)}{F(q)} u(t) + \frac{C(q)}{D(q)} e(t)
  • Where: A = 1
  • The most general of these LTI model structures: G(q) and H(q) have fully independent poles and zeros.

Non-Parametric Modeling

  • Data-Generating Process:
    • y(t) = G_0(q) u(t) + \nu(t)
      • Where: G_0(q) is the ground-truth system and \nu(t) is zero-mean noise (not necessarily white).
  • Goal: Learn G_0(q) without a finite parametrization.
  • Approach: G_0(q) = g(0) + g(1) q^{-1} + g(2) q^{-2} + …
    • Learn the impulse response coefficients g(k) of the ground-truth system directly.

Possible Approaches

  • Impulse-Response Analysis
    • y(t) = G_0(q) u(t) + \nu(t)
    • Apply an impulse input: u(t) = \delta(t)
    • Achieving an acceptable signal-to-noise ratio (SNR) requires a large impulse amplitude, which is often infeasible.
    • A large impulse may also drive the system into saturation/nonlinearity.
    • \hat{g}(t) = y(t) for t \geq 0, and \hat{g}(t) = 0 for t < 0.
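A minimal numpy sketch of impulse-response analysis, assuming a hypothetical first-order system with g(k) = 0.8^k and small measurement noise:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
g_true = 0.8 ** np.arange(N)           # assumed ground-truth impulse response

u = np.zeros(N)
u[0] = 1.0                             # unit impulse: u(t) = delta(t)
nu = 0.01 * rng.standard_normal(N)     # small measurement noise
y = np.convolve(u, g_true)[:N] + nu    # y(t) = g(t) + nu(t)

g_hat = y                              # g_hat(t) = y(t) for t >= 0
print(np.max(np.abs(g_hat - g_true)))  # error is at the noise level
```

The estimate is only as good as the SNR of a single experiment; with larger noise the direct readout g_hat(t) = y(t) degrades sample by sample.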

Impulse Response Analysis Details

  • Recall: G_0(q) = g(0) + g(1) q^{-1} + g(2) q^{-2} + …
  • The coefficients g(k) are the impulse response (I.R.).
  • \nu(t) = y(t) - \hat{y}(t)
  • \hat{y}(t) = G_0(q) u(t) = \sum_{k=0}^{\infty} g(k) u(t-k)
  • Step-Response Analysis
    • u(t) = \begin{cases} 0 & t < t_0 \\ 1 & t \geq t_0 \end{cases}
    • Analyze overshoot, gain, rise time, settling time.

Step Response Analysis Equations

  • Equation: y(t) = G_0(q) u(t) + \nu(t) = [g(0) + g(1) q^{-1} + g(2) q^{-2} + …] u(t) + \nu(t)
  • With a unit step applied at t_0 = 0, u(t-k) = 1 for all 0 \leq k \leq t, so:
  • y(t) = \sum_{k=0}^{t} g(k) + \nu(t)
  • \Delta y(t) = y(t) - y(t-1)
  • The first difference is the discrete analogue of the derivative.
  • y(t) - y(t-1) = g(t) + \nu(t) - \nu(t-1)
  • \hat{g}(t) = y(t) - y(t-1)
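The differencing step above can be sketched in numpy, again assuming a hypothetical g(k) = 0.8^k system:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 60
g_true = 0.8 ** np.arange(N)            # assumed ground-truth impulse response

u = np.ones(N)                          # unit step at t = 0
nu = 0.01 * rng.standard_normal(N)
y = np.convolve(u, g_true)[:N] + nu     # y(t) = sum_{k<=t} g(k) + nu(t)

# First difference: g_hat(t) = y(t) - y(t-1), with g_hat(0) = y(0)
g_hat = np.diff(y, prepend=0.0)
print(np.max(np.abs(g_hat - g_true)))
```

The residual error here is ν(t) - ν(t-1), i.e. differenced noise, which motivates the high-pass discussion that follows.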

Step Response Analysis Considerations

  • Equation: \hat{g}(t) = y(t) - y(t-1)
  • This estimate is noisy; further work is needed for an accurate result.
  • Potential problem: differentiating amplifies noise.
  • \Delta \nu(t) = \nu(t) - \nu(t-1)
  • The first difference is an LTI filter.
  • \Phi_{\Delta \nu}(\omega) = |F(e^{j\omega})|^2 \Phi_{\nu}(\omega)
  • Where: F(q) = 1 - q^{-1}
  • F(e^{j\omega}) = 1 - e^{-j\omega} = 1 - \cos(\omega) + j\sin(\omega)
    • |F(e^{j\omega})|^2 = [1 - \cos(\omega)]^2 + [\sin(\omega)]^2 = 2 - 2\cos(\omega)
  • This is a high-pass filter: differencing amplifies high-frequency noise.
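The magnitude identity is easy to verify numerically over a frequency grid:

```python
import numpy as np

w = np.linspace(0, np.pi, 100)
# |F(e^{jw})|^2 for the differencing filter F(q) = 1 - q^{-1}
H2 = np.abs(1 - np.exp(-1j * w)) ** 2
assert np.allclose(H2, 2 - 2 * np.cos(w))

# Gain is 0 at DC and 4 at the Nyquist frequency: high-pass.
print(H2[0], H2[-1])
```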

Correlation Analysis

  • Open-loop experiment:
    • y(t) = G_0(q) u(t) + \nu(t)
    • Let u(t) be any quasi-stationary stochastic process, independent of \nu(t).
    • y(t) = \sum_{k=0}^{\infty} g(k) u(t-k) + \nu(t)
    • y(t) u(t-\tau) = \sum_{k=0}^{\infty} g(k) u(t-k) u(t-\tau) + \nu(t) u(t-\tau)
    • \mathbb{E}[y(t) u(t-\tau)] = \sum_{k=0}^{\infty} g(k) \mathbb{E}[u(t-k) u(t-\tau)] + \mathbb{E}[\nu(t) u(t-\tau)]
    • The expectation averages over time/stochasticity.

Correlation Definitions

  • Auto-correlation: R_y(\tau) = \mathbb{E}[y(t) y(t-\tau)]
  • Cross-correlation: R_{yu}(\tau) = \mathbb{E}[y(t) u(t-\tau)]
  • Back to the equation:
    • \mathbb{E}[y(t) u(t-\tau)] = \sum_{k=0}^{\infty} g(k) \mathbb{E}[u(t-k) u(t-\tau)] + \mathbb{E}[\nu(t) u(t-\tau)]
    • R_{yu}(\tau) = \sum_{k=0}^{\infty} g(k) R_u(\tau - k)
    • R_{yu}(\tau) = G_0(q) R_u(\tau)

Correlation Analysis Assumptions

  • Assumption: \mathbb{E}[\nu(t) u(t-\tau)] = 0
  • In practice, \frac{1}{N} \sum_{t=0}^{N-1} \nu(t) u(t-\tau) \approx 0 for large N.
  • So, R_{yu}(\tau) = G_0(q) R_u(\tau)
  • Recall: If u(t) is zero-mean white noise with variance \lambda, then R_u(\tau) = \lambda \delta(\tau)
  • Therefore: R_{yu}(\tau) = \lambda g(\tau), so g(\tau) = R_{yu}(\tau) / \lambda
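A numpy sketch of correlation analysis with a white-noise input, assuming a hypothetical g(k) = 0.8^k system and λ = 2 (both made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000
lam = 2.0
u = np.sqrt(lam) * rng.standard_normal(N)   # zero-mean white noise, variance lambda
g_true = 0.8 ** np.arange(20)               # assumed ground-truth impulse response

nu = rng.standard_normal(N)                 # noise independent of u
y = np.convolve(u, g_true)[:N] + nu

# Sample cross-correlation R_yu(tau) = (1/N') * sum_t y(t) u(t - tau)
M = 20
Ryu = np.array([np.mean(y[tau:] * u[:N - tau]) for tau in range(M)])
g_hat = Ryu / lam                           # g(tau) = R_yu(tau) / lambda
print(np.max(np.abs(g_hat - g_true)))
```

Unlike impulse-response analysis, the input here stays at a moderate amplitude; accuracy comes from averaging over a long record rather than from one large impulse.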

Practical Considerations

  • Question: What if u(t) = \delta(t)?
    • Then y(t) = g(t) + \nu(t), recovering impulse-response analysis as a special case.
  • Note: In practice, the correlations must be estimated from finite data:
    • \hat{R}_{yu}(\tau) = \frac{1}{N} \sum_{t=\tau}^{N-1} y(t) u(t-\tau)
    • \hat{R}_{u}(\tau) = \frac{1}{N} \sum_{t=\tau}^{N-1} u(t) u(t-\tau)
  • These estimates become very inaccurate when \tau \sim N, since few summands remain.

Linear System of Equations

  • \hat{R}_{yu}(\tau) = G_0(q) \hat{R}_u(\tau)
  • Truncating the impulse response at order M, this is a linear system of equations in g(0), …, g(M) for \tau = 0, …, M.
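When u(t) is not white, the truncated convolution \sum_k g(k) R_u(\tau - k) = R_{yu}(\tau) becomes a Toeplitz linear system (using the symmetry R_u(-k) = R_u(k)). A sketch with a made-up AR(1) input and a short 5-tap ground truth:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200_000

# Colored (non-white) input: u(t) = 0.5 u(t-1) + w(t)
w = rng.standard_normal(N)
u = np.zeros(N)
for t in range(1, N):
    u[t] = 0.5 * u[t - 1] + w[t]

g_true = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])  # assumed 5-tap impulse response
y = np.convolve(u, g_true)[:N] + 0.1 * rng.standard_normal(N)

M = 5
def corr(a, b, tau):
    """Sample correlation estimate R_ab(tau) for tau >= 0."""
    return np.mean(a[tau:] * b[:N - tau])

Ru  = np.array([corr(u, u, k) for k in range(M)])
Ryu = np.array([corr(y, u, k) for k in range(M)])

# Toeplitz system: sum_k g(k) R_u(tau - k) = R_yu(tau), with R_u(-k) = R_u(k)
A = np.array([[Ru[abs(tau - k)] for k in range(M)] for tau in range(M)])
g_hat = np.linalg.solve(A, Ryu)
print(np.max(np.abs(g_hat - g_true)))
```

With a white input the matrix A reduces to λI and this collapses to the g(τ) = R_yu(τ)/λ formula of the previous section.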