Parametric Families & Models and Non-Parametric Modeling
Parametric Families & Models
- Output Error (OE)
- Equation: y(t) = \frac{B(q)}{F(q)}u(t) + e(t)
- Where:
- A, C, D = 1
- y(t) = G(q; \theta)u(t) + e(t)
- e(t) is white noise.
- G(q; \theta) = \frac{B(q; \theta)}{F(q; \theta)}
- B(q) = b_0 + b_1q^{-1} + … + b_{n_b}q^{-n_b}
- IIR: the rational G(q; \theta) has an infinite impulse response.
- Iterative optimization is required.
- \hat{y}(t|t-1; \theta) = G(q; \theta)u(t)
- J(\theta) = \frac{1}{N} \sum_{t=1}^{N} [y(t) - \hat{y}(t|t-1; \theta)]^2
- J(\theta) is generally a nonlinear function of \theta, thus non-convex.
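Since J(\theta) is non-convex, OE models are fit by iterative numerical minimization of the prediction error. A minimal sketch using SciPy's `least_squares`, assuming a made-up first-order system (numerator coefficient 2.0, denominator coefficient -0.5) and noise level:

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Made-up first-order OE system with white output noise e(t)
b0_true, f1_true = 2.0, -0.5
N = 2000
u = rng.standard_normal(N)
y = lfilter([b0_true], [1.0, f1_true], u) + 0.1 * rng.standard_normal(N)

# OE prediction error: y(t) - G(q; theta)u(t) (simulation error)
def residuals(theta):
    b0, f1 = theta
    return y - lfilter([b0], [1.0, f1], u)

# J(theta) is non-convex, so minimize iteratively from an initial guess
sol = least_squares(residuals, x0=[1.0, 0.0])
print(sol.x)  # close to [2.0, -0.5]
```

From a reasonable initial guess the solver typically converges, but a poor initialization can land in a local minimum, which is exactly the non-convexity issue noted above.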
- Autoregressive with Exogenous Input (ARX)
- Equation: A(q)y(t) = B(q)u(t) + e(t)
- Where:
- C, D, F = 1
- y(t) = \frac{B(q)}{A(q)}u(t)
- Noise model H(q) = 1/A(q): shares its poles with G, has no zeros.
- \hat{y}(t|t-1; \theta) = B(q; \theta)u(t) + [1 - A(q; \theta)]y(t)
- A(q) = 1 + a_1q^{-1} + … + a_{n_a}q^{-n_a}
- J(\theta) = \frac{1}{N} \sum_{t=1}^{N} [A(q; \theta)y(t) - B(q; \theta)u(t)]^2
- [1 + a_1q^{-1} + … + a_{n_a}q^{-n_a}]y(t) - [b_0 + b_1q^{-1} + … + b_{n_b}q^{-n_b}]u(t)
ARX Model Details
- Equation: y(t) = b_0u(t) + b_1u(t-1) + … - a_1y(t-1) - … - a_{n_a}y(t-n_a) + e(t)
- Linear in parameters
- \theta = [b_0, b_1, …, b_{n_b}, a_1, …, a_{n_a}]
- J(\theta) is quadratic in parameters.
- Globally convex, global minimum.
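Because the ARX predictor is linear in \theta, the least-squares estimate has a closed form. A minimal sketch with made-up coefficients b_0, b_1, a_1 and first-order polynomials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up ARX system: y(t) = b0*u(t) + b1*u(t-1) - a1*y(t-1) + e(t)
b0, b1, a1 = 1.0, 0.5, -0.7
N = 1000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = b0 * u[t] + b1 * u[t - 1] - a1 * y[t - 1] + e[t]

# Regressor matrix: each row is [u(t), u(t-1), -y(t-1)]
Phi = np.column_stack([u[1:], u[:-1], -y[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # close to [b0, b1, a1]
```

The quadratic cost has a single global minimum, so no iteration or initialization is needed, in contrast to the OE case.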
ARMAX
- Equation: A(q)y(t) = B(q)u(t) + C(q)e(t)
- Where: D, F = 1
- \hat{y}(t|t-1) = \frac{B(q)}{C(q)}u(t) + \frac{C(q) - A(q)}{C(q)}y(t)
- C(q) = 1 + c_1q^{-1} + … + c_{n_c}q^{-n_c}
- Noise model H(q) = C(q)/A(q): shares its poles with G, with independent zeros from C(q).
- J(\theta) is generally non-convex.
Box-Jenkins (BJ)
- Equation: y(t) = \frac{B(q)}{F(q)}u(t) + \frac{C(q)}{D(q)}e(t)
- Where: A = 1
- The most general of these LTI model structures; G(q) = B(q)/F(q) and H(q) = C(q)/D(q) are parametrized independently.
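To illustrate the BJ structure, the following simulates y(t) = [B(q)/F(q)]u(t) + [C(q)/D(q)]e(t) with arbitrary made-up polynomials; note that the plant filter and noise filter share no coefficients:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)

# Box-Jenkins simulation: G = B/F and H = C/D are independent (made-up values)
B, F = [1.0, 0.5], [1.0, -0.7]   # plant numerator / denominator
C, D = [1.0, 0.3], [1.0, -0.2]   # noise-model numerator / denominator

N = 500
u = rng.standard_normal(N)
e = rng.standard_normal(N)       # white driving noise
y = lfilter(B, F, u) + lfilter(C, D, e)
```

Setting F = A, C = D = 1 recovers ARX-like structures; setting C = D = 1 recovers OE, which is why BJ is the most general of the family.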
Non-Parametric Modeling
- Data-Generating Process:
- y(t) = G_0(q)u(t) + \nu(t)
- Where: G_0(q) is the ground truth, and \nu(t) is zero-mean noise (not necessarily white).
- Goal: Learn G_0(q) without parametrization.
- Approach: G_0(q) = g(0) + g(1)q^{-1} + g(2)q^{-2} + …
- Learn the impulse response coefficients g(k) of the ground-truth system directly.
Possible Approaches
- Impulse-Response Analysis
- y(t) = G_0(q)u(t) + \nu(t)
- u(t) = \delta(t)
- Achieving an adequate signal-to-noise ratio (SNR) requires a large impulse amplitude, which is often infeasible.
- A large impulse may also drive the system into saturation or nonlinear behavior.
- \hat{g}(t) = y(t) for t \geq 0
- \hat{g}(t) = 0 for t < 0 (causality)
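A quick numerical sketch of impulse-response analysis, assuming a made-up first-order system with g(k) = 0.8^k and small additive noise:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up ground-truth impulse response: g(k) = 0.8^k
g_true = 0.8 ** np.arange(50)

N = 50
u = np.zeros(N)
u[0] = 1.0                           # u(t) = delta(t)
y = np.convolve(u, g_true)[:N] + 0.01 * rng.standard_normal(N)

g_hat = y.copy()                     # g_hat(t) = y(t) for t >= 0
err = np.max(np.abs(g_hat - g_true))
print(err)                           # limited only by the noise level
```

The estimate is exact up to the additive noise, which is why a good SNR (and hence a large impulse) matters so much for this method.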
Impulse Response Analysis Details
- Recall: G_0(q) = g(0) + g(1)q^{-1} + g(2)q^{-2} + …
- Coefficients are impulse response (I.R.).
- \nu(t) = y(t) - \hat{y}(t)
- \hat{y}(t) = G_0(q)u(t) = \sum_{k=0}^{\infty} g(k)u(t-k)
- Step-Response Analysis
- u(t) = \begin{cases} 0 & t < t_0 \\ 1 & t \geq t_0 \end{cases}
- Analyze overshoot, gain, rise time, settling time.
Step Response Analysis Equations
- Equation: y(t) = G_0(q)u(t) + \nu(t) = [g(0) + g(1)q^{-1} + g(2)q^{-2} + …]u(t) + \nu(t)
- For a unit step at t_0 = 0 and t \geq 0: y(t) = g(0) + g(1) + … + g(t) + \nu(t)
- y(t) = \sum_{k=0}^{t} g(k) + \nu(t)
- \Delta y(t) = y(t) - y(t-1)
- First difference ~ derivative
- y(t) - y(t-1) = g(t) + \nu(t) - \nu(t-1)
- \hat{g}(t) = y(t) - y(t-1)
Step Response Analysis Considerations
- Equation: \hat{g}(t) = y(t) - y(t-1)
- This is only an estimate; the differenced noise term must still be dealt with.
- Potential problem: differentiating noise.
- \Delta \nu(t) = \nu(t) - \nu(t-1)
- First difference (LTI)
- \Phi_{\Delta \nu}(\omega) = |F(e^{j\omega})|^2 \Phi_{\nu}(\omega)
- Where: F(q) = 1 - q^{-1}
- F(e^{j\omega}) = 1 - e^{-j\omega} = 1 - \cos(\omega) + j\sin(\omega)
- |F(e^{j\omega})|^2 = [1 - \cos(\omega)]^2 + \sin^2(\omega) = 2 - 2\cos(\omega)
- High-pass filter.
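The first-difference estimate and the noise-amplification point can both be checked numerically; the system g(k) = 0.8^k and the noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

g_true = 0.8 ** np.arange(100)       # made-up impulse response g(k) = 0.8^k
N = 100
u = np.ones(N)                       # unit step at t0 = 0
y = np.convolve(u, g_true)[:N] + 0.01 * rng.standard_normal(N)

# g_hat(t) = y(t) - y(t-1): the first difference recovers g but also
# differentiates the noise (a high-pass operation on nu).
g_hat = np.empty(N)
g_hat[0] = y[0]
g_hat[1:] = np.diff(y)
err = np.max(np.abs(g_hat - g_true))
print(err)                           # bounded by roughly twice the noise level

# Numerical check of the high-pass gain: |1 - e^{-jw}|^2 = 2 - 2 cos(w)
w = np.linspace(0.0, np.pi, 5)
assert np.allclose(np.abs(1 - np.exp(-1j * w)) ** 2, 2 - 2 * np.cos(w))
```

The gain 2 - 2\cos(\omega) is 0 at \omega = 0 and 4 at \omega = \pi, so differencing roughly doubles the high-frequency noise amplitude.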
Correlation Analysis
- Open-loop experiment:
- y(t) = G_0(q)u(t) + \nu(t)
- Let u(t) be any quasi-stationary stochastic process (independent of \nu(t)).
- y(t) = \sum_{k=0}^{\infty} g(k)u(t-k) + \nu(t)
- y(t)u(t-\tau) = \sum_{k=0}^{\infty} g(k)u(t-k)u(t-\tau) + \nu(t)u(t-\tau)
- \mathbb{E}[y(t)u(t-\tau)] = \sum_{k=0}^{\infty} g(k)\mathbb{E}[u(t-k)u(t-\tau)] + \mathbb{E}[\nu(t)u(t-\tau)]
- Averaging over time/stochasticity.
Correlation Definitions
- Auto-correlation: R_y(\tau) = \mathbb{E}[y(t)y(t-\tau)]
- Cross-correlation: R_{yu}(\tau) = \mathbb{E}[y(t)u(t-\tau)]
- Back to the equation:
- \mathbb{E}[y(t)u(t-\tau)] = \sum_{k=0}^{\infty} g(k)\mathbb{E}[u(t-k)u(t-\tau)] + \mathbb{E}[\nu(t)u(t-\tau)]
- R_{yu}(\tau) = \sum_{k=0}^{\infty} g(k)R_u(\tau-k)
- R_{yu}(\tau) = G_0(q)R_u(\tau)
Correlation Analysis Assumptions
- Assumption: \mathbb{E}[\nu(t)u(t-\tau)] = 0
- \frac{1}{N} \sum_{t=0}^{N-1} \nu(t)u(t-\tau) \to 0 as N \to \infty
- So, R_{yu}(\tau) = G_0(q)R_u(\tau)
- Recall: If u(t) = zero mean white noise, then R_u(\tau) = \lambda \delta(\tau)
- Therefore: R_{yu}(\tau) = \lambda g(\tau), so \hat{g}(\tau) = R_{yu}(\tau)/\lambda
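A sketch of correlation analysis with a white-noise input, using the same made-up g(k) = 0.8^k system; dividing the sample cross-correlation by \lambda recovers the impulse response:

```python
import numpy as np

rng = np.random.default_rng(4)

g_true = 0.8 ** np.arange(30)         # made-up impulse response
lam = 1.0                             # white-noise input variance lambda
N = 200_000
u = rng.normal(0.0, np.sqrt(lam), N)
nu = 0.1 * rng.standard_normal(N)     # measurement noise, independent of u
y = np.convolve(u, g_true)[:N] + nu

# Sample cross-correlation R_yu(tau) = E[y(t) u(t - tau)], then divide by lambda
M = 30
g_hat = np.empty(M)
for tau in range(M):
    g_hat[tau] = np.mean(y[tau:] * u[:N - tau]) / lam
err = np.max(np.abs(g_hat - g_true))
print(err)                            # shrinks as N grows
```

Unlike impulse-response analysis, the input stays small, so saturation is not an issue; the price is that accuracy comes from averaging over a long record.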
Practical Considerations
- Question: What if u(t) = \delta(t)?
- Note: In practice, we need to approximate.
- \hat{R}_{yu}(\tau) = \frac{1}{N} \sum_{t=\tau}^{N-1} y(t)u(t-\tau)
- \hat{R}_u(\tau) = \frac{1}{N} \sum_{t=\tau}^{N-1} u(t)u(t-\tau)
- These become very inaccurate when \tau \sim N
Linear System of Equations
- \hat{R}_{yu}(\tau) = G_0(q)\hat{R}_u(\tau)
- Truncating the impulse response at order M turns this into a linear system of equations in g(0), …, g(M) for \tau = 0, …, M.
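For a non-white input, the convolution R_{yu}(\tau) = \sum_k g(k) R_u(\tau - k) truncated at order M is a Toeplitz linear system. A sketch with a made-up colored input and the same g(k) = 0.8^k system:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)

g_true = 0.8 ** np.arange(20)        # made-up impulse response, truncated
N = 500_000
# Colored (non-white) input: filtered white noise
w = rng.standard_normal(N)
u = np.convolve(w, [1.0, 0.5])[:N]
y = np.convolve(u, g_true)[:N] + 0.1 * rng.standard_normal(N)

M = 20
# Sample correlations (R_u is symmetric: R_u(-tau) = R_u(tau))
R_u = np.array([np.mean(u[tau:] * u[:N - tau]) for tau in range(M)])
R_yu = np.array([np.mean(y[tau:] * u[:N - tau]) for tau in range(M)])

# R_yu(tau) = sum_k g(k) R_u(tau - k): a Toeplitz system solved for g
g_hat = np.linalg.solve(toeplitz(R_u), R_yu)
err = np.max(np.abs(g_hat - g_true))
print(err)                           # small for large N
```

With a white input the Toeplitz matrix is \lambda I and the solve reduces to the division \hat{g}(\tau) = \hat{R}_{yu}(\tau)/\lambda from the previous section.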