Optimization Techniques – Comprehensive Exam Notes

Course Logistics

  • Faculty & Department: Accra Technical University – Applied Mathematics & Statistics
  • Programme / Course Code: M.Tech Data Science & Industrial Analytics – MDS 616
  • Title / Credits / Schedule: “Optimization Techniques”, 3-credit Saturday course (7 a.m.–10 a.m., KF9)
  • Facilitator: Dr. M. O. Amoamah
  • Delivery & Grading
    • Lectures (core theory + worked examples)
    • Assignments / exercises 10 %
    • Presentations 10 %
    • Mid-semester exam 20 %
    • End-semester exam 60 %

Learning Outcomes

  • Apply core principles of optimization across engineering, economics & data-science tasks.
  • Formulate & solve linear and non-linear models (constrained & unconstrained).
  • Deploy specialized sub-fields: integer programming, meta-heuristics, network & assignment models.
  • Operate industry software (Excel-Solver, Python/Matlab, etc.) to obtain, analyse & interpret optimal solutions.

Reading Benchmarks

  • Bazaraa et al., Linear Programming & Network Flows (5 ed.)
  • Nocedal & Wright, Numerical Optimization (2 ed.)
  • Boyd & Vandenberghe, Convex Optimization
  • Hillier & Lieberman, Introduction to Operations Research (10 ed.)

1. Foundations: What is Optimization?

  • Definition: Mathematical process for choosing the best (maximum or minimum) value of an objective function under given constraints.
  • Basic Terminology
    • Objective Function f(x) → maximise or minimise.
    • Decision Vector x=(x_1,\dots,x_n).
    • Constraints g_i(x)\;\{\le,=,\ge\}\; b_i.
    • Feasible Region = all x satisfying constraints.
  • Examples / Industries
    • Business profit maximisation, portfolio risk/return, airline crew scheduling, bridge design (weight vs strength), routing & transportation, inventory, project scheduling.

2. Linear Optimization (Part I)

2.1 Linear Programming (LP) – Model Form

  • Standard form (primal):
    \begin{aligned}
    \text{Optimise}\; &Z = \sum_{j=1}^{n} c_j x_j\\
    \text{s.t.}\; &\sum_{j=1}^{n} a_{ij}x_j \le (=,\ge)\; b_i,\quad i=1,\dots,m\\
    &x_j\ge 0\quad (\text{or integer/binary})
    \end{aligned}
  • Resources b_i interpreted broadly (time, budget, capacity…).
  • Steps to formulate
    1. Identify decision variables.
    2. Collect numerical data (constants/parameters).
    3. Translate real-world limits into linear constraints.
    4. Write linear objective (max profit / min cost).

2.2 Graphical Method (2-variable)

  • Plot each inequality as a half-plane; feasible region = polygon intersection.
  • Optimal corner = any vertex with best Z (convexity of LP ensures optima on extreme points).
  • Procedure bullets: define x,y → list constraints → convert to equalities → draw → shade feasible zone → enumerate corner points → evaluate Z.
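
  The corner-point step can be sketched in plain Python for a small hypothetical LP (maximise Z = 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18, x, y ≥ 0 — illustrative data, not from the notes): intersect every pair of constraint lines, keep the feasible points, evaluate Z at each.

```python
from itertools import combinations

# Hypothetical LP: maximise Z = 3x + 5y
# s.t. x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
# Each constraint boundary (incl. non-negativity) as a line a*x + b*y = c.
lines = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (1, 0, 0), (0, 1, 0)]

def feasible(x, y, tol=1e-9):
    return (x >= -tol and y >= -tol and x <= 4 + tol
            and 2 * y <= 12 + tol and 3 * x + 2 * y <= 18 + tol)

corners = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:          # parallel lines: no intersection
        continue
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        corners.append((x, y))

best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
Z_best = 3 * best[0] + 5 * best[1]   # optimum at (2, 6) with Z = 36
```

  Enumerating all pairwise intersections is only sensible for 2-variable exam problems; the simplex method below scales this idea up.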

2.3 Simplex Algorithm

  • Iterative tableau procedure that moves from one basic feasible solution to an adjacent one with improved Z until no positive (max) or negative (min) reduced cost remains.
  • Slack / surplus variables convert “≤” or “≥” to equalities.
  • Key tableau entries
    • C_B: coefficients of current basis.
    • Z_j, C_j-Z_j: optimality test.
    • Ratio column \theta=\dfrac{\text{RHS}}{\text{pivot column}^{+}} selects leaving variable.
  • Computational support: Excel Solver → choose Simplex LP.
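
  The same simplex-style solve is available in Python via `scipy.optimize.linprog` (HiGHS backend), the counterpart of Solver's Simplex LP. A minimal sketch on a hypothetical problem (maximise 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18); note `linprog` minimises, so the objective is negated:

```python
from scipy.optimize import linprog

c = [-3, -5]                       # maximise 3x + 5y  ->  minimise -3x - 5y
A_ub = [[1, 0], [0, 2], [3, 2]]    # x <= 4, 2y <= 12, 3x + 2y <= 18
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
x_opt = res.x                      # optimal decision vector
Z_opt = -res.fun                   # undo the sign flip on the objective
```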

2.4 Duality & Sensitivity

  • Every LP has a dual; dual variables have economic meaning (shadow prices).
  • If primal is max, dual is min (and vice-versa).
  • Correspondence:
    • Primal variables \leftrightarrow Dual constraints.
    • Primal RHS b \leftrightarrow Dual objective coefficients.
  • Shadow price: change in optimal Z per unit increase in a RHS value (valid within allowable range).
  • Post-optimality / sensitivity analysis: assess how changes in c_j, b_i, or new variables/constraints affect the current optimum without re-solving from scratch (read off the final tableau or the Solver sensitivity report).
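
  Shadow prices can also be read from `scipy.optimize.linprog` with the HiGHS method, which exposes dual values as `res.ineqlin.marginals`. A sketch on a hypothetical LP (maximise 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18); since the max problem is negated into a min, the marginals are negated back:

```python
from scipy.optimize import linprog

# Hypothetical max LP, negated for linprog's min convention.
res = linprog([-3, -5], A_ub=[[1, 0], [0, 2], [3, 2]], b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)], method="highs")

# marginals = d(objective)/d(RHS) for the *minimisation*; flip the sign
# to recover shadow prices of the original maximisation.
shadow_prices = [-m for m in res.ineqlin.marginals]
# first constraint is non-binding -> shadow price 0; the two binding
# constraints carry prices 1.5 and 1 (each extra RHS unit adds that much Z)
```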

3. Non-Linear Optimization (Part II)

3.1 Problem Classes

| Type          | Objective/Constraints        | Example                                            |
|---------------|------------------------------|----------------------------------------------------|
| Linear        | Both linear                  | Resource allocation LP                             |
| Non-linear    | At least one non-linear term | Portfolio variance 0.06x_1^2+0.04x_2^2+0.04x_1x_2  |
| Constrained   | Equality/inequality present  | Beam design h^2 w \ge 200                          |
| Unconstrained | No constraints on x          | Least-squares regression                           |

3.2 Unconstrained Single-Variable

  • Necessary condition: f'(x^*)=0.
  • Second-derivative test:
    • f''(x^*)>0 → local minimum.
    • f''(x^*)<0 → local maximum.
  • Graph notions: local/global extrema, inflection points.
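
  The two-step recipe (solve f'(x)=0, then check f''(x)) can be run on a hypothetical example, f(x) = x^3 - 3x, whose stationary points x = ±1 are classified by f''(x) = 6x:

```python
# Hypothetical function f(x) = x^3 - 3x with f'(x) = 3x^2 - 3.
def fsecond(x):
    return 6 * x          # f''(x)

def classify(x_star):
    s = fsecond(x_star)
    if s > 0:
        return "local minimum"
    if s < 0:
        return "local maximum"
    return "inconclusive"  # f'' = 0: test fails, inspect higher derivatives

# stationary points of f: f'(x) = 0  ->  x = -1 and x = 1
kinds = {x: classify(x) for x in (-1.0, 1.0)}
# x = -1 -> local maximum, x = 1 -> local minimum
```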

3.3 Unconstrained Multivariable

  • Gradient \nabla f(x)=\left[\dfrac{\partial f}{\partial x_1},\dots,\dfrac{\partial f}{\partial x_n}\right].
  • Hessian H(x) is matrix of second partials.
    • Positive definite → local minimum.
    • Negative definite → local maximum.
    • Indefinite → saddle point.
  • Definiteness tests: leading principal minors (Sylvester criteria).
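
  Sylvester's criterion is mechanical enough to code: compute the leading principal minors and check their signs (all positive → positive definite; alternating −, +, −, … → negative definite). A sketch with NumPy on a hypothetical Hessian:

```python
import numpy as np

def classify_hessian(H, tol=1e-12):
    """Sylvester's criterion via leading principal minors of symmetric H."""
    n = H.shape[0]
    minors = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]
    if all(d > tol for d in minors):
        return "positive definite"              # local minimum
    if all((-1) ** k * d > tol for k, d in enumerate(minors, start=1)):
        return "negative definite"              # local maximum
    return "indefinite or semidefinite"         # saddle or inconclusive

# Hessian of f(x, y) = x^2 + x*y + y^2 (a convex quadratic)
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])
kind = classify_hessian(H)                      # "positive definite"
```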

3.4 Constrained NL Optimization

Lagrange Multipliers (Equality)

  • Build L(x,\lambda)=f(x)-\sum_{i=1}^{m}\lambda_i g_i(x).
  • Solve \nabla_x L=0 and g_i(x)=0 simultaneously.
  • \lambda_i measures sensitivity of optimum to RHS of constraint.
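
  For a hypothetical worked case, maximise f(x, y) = xy subject to x + y = 10. Stationarity of L = xy − λ(x + y − 10) gives y − λ = 0, x − λ = 0 together with the constraint; these conditions happen to be linear here, so they solve as a 3×3 system:

```python
import numpy as np

# Stationarity + feasibility for L = x*y - lam*(x + y - 10):
#   dL/dx:  y - lam      = 0
#   dL/dy:  x - lam      = 0
#   g:      x + y        = 10
# Written as A @ [x, y, lam] = b and solved directly.
A = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 10.0])
x, y, lam = np.linalg.solve(A, b)   # x = y = 5, lam = 5
```

  Here λ = 5 means a unit increase in the RHS (10 → 11) raises the optimal f by about 5, matching the sensitivity interpretation above. Nonlinear cases need a numerical root-finder rather than a linear solve.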

Kuhn–Tucker (Inequality)

  • Introduce slack s_i\ge0, set up Lagrangian with multipliers \lambda_i\ge0.
  • Complementary slackness: \lambda_i g_i(x)=0 at optimum.

3.5 Convexity & Global Optimality

  • Set S convex ⇔ any two points x,y\in S imply \theta x+(1-\theta)y\in S for \theta\in[0,1].
  • Function convex ⇔ H(x) positive (semi)-definite ∀ x.
  • In convex problems any local optimum is global.

3.6 Software (Excel Solver)

  • Use GRG Nonlinear (gradient-based) for smooth models.
  • Use Evolutionary when model is non-smooth, discontinuous or multi-modal.
  • Options: multistart, tolerance, convergence diagnostics.

4. Integer Programming (Part III)

  • Variables restricted to integers (0/1, counts, on/off).
  • Types
    1. Pure IP – all variables integer.
    2. Mixed IP – some integer, others continuous.
    3. Zero-One (Binary) – variables \in\{0,1\}.
  • Solution Methods
    • Branch-and-Bound (systematically explore tree, prune via bounds).
    • Cutting Planes (add linear cuts to remove fractional solutions).
  • Typical Applications: capital budgeting, facility location, crew assignment, project selection, knapsack, scheduling.

5. Meta-Heuristic / Heuristic Algorithms (Part IV)

  • Why? Exact methods may be infeasible on huge or highly non-convex spaces.
  • Core concepts: randomisation, neighbourhood search, acceptance of worse moves to escape local minima.
  • Simulated Annealing (SA)
    • Temperature parameter T controls acceptance probability \exp\left(-\Delta f/T\right).
    • Gradual cooling → convergence to near-global optimum.
  • Genetic Algorithm (GA)
    • Population encoding, fitness evaluation, selection, crossover, mutation.
    • Good for parameter tuning, feature subset selection.
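
  The SA acceptance rule is compact enough to sketch in full; the test function below is hypothetical (a multi-modal curve with its global minimum at x = 0), chosen only to illustrate escaping local minima:

```python
import math
import random

random.seed(42)

def simulated_annealing(f, x0, T0=5.0, cooling=0.95, steps=2000, scale=0.5):
    """Minimise f by SA: accept worse moves with probability exp(-delta/T)."""
    x = best = x0
    T = T0
    for _ in range(steps):
        cand = x + random.uniform(-scale, scale)   # neighbourhood move
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = cand                               # accept (possibly worse)
            if f(x) < f(best):
                best = x
        T *= cooling                               # gradual cooling
    return best

# Multi-modal function: many local minima, global minimum at x = 0.
f = lambda x: x**2 + 3 * math.sin(5 * x) ** 2
x_best = simulated_annealing(f, x0=4.0)
# with enough steps the best point found approaches the global minimum
```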

6. Classic Applications & Worked Forms

6.1 Transportation Model

  • Balanced if \sum a_i = \sum b_j; else add dummy source/destination with zero cost.
  • Formulation: variables x_{ij} = units shipped from i to j; minimise \sum c_{ij}x_{ij} subject to row (supply) & column (demand) equalities.
  • Initial solution heuristics: North-West Corner, Least-Cost (Minimum-Cost), Vogel Approximation; test optimality via MODI / stepping-stone.
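
  The North-West Corner rule is purely mechanical, so it codes directly; supplies and demands below are hypothetical balanced data (both sum to 100):

```python
# North-West Corner initial solution for a balanced transportation problem.
supply = [30, 40, 30]
demand = [20, 50, 30]

def north_west_corner(supply, demand):
    s, d = supply[:], demand[:]            # work on copies
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])                # ship the most cell (i, j) allows
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1                         # row exhausted: move down
        else:
            j += 1                         # column satisfied: move right
    return alloc

plan = north_west_corner(supply, demand)   # [[20,10,0],[0,40,0],[0,0,30]]
```

  The result is only a feasible starting plan (it ignores costs); MODI or stepping-stone then iterates toward optimality.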

6.2 Assignment Problem

  • Special transportation with unit supply/demand; binary variables x_{ij}\in\{0,1\}.
  • Hungarian Method or LP with binary constraints; cost/profit matrix.
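
  In Python, `scipy.optimize.linear_sum_assignment` solves the assignment problem exactly (it implements a Hungarian-style algorithm). A sketch on a hypothetical 3×3 cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical data: cost[i][j] = cost of worker i doing task j.
cost = np.array([[9, 2, 7],
                 [6, 4, 3],
                 [5, 8, 1]])

rows, cols = linear_sum_assignment(cost)   # optimal worker -> task pairing
total = cost[rows, cols].sum()             # minimum total cost = 9
# pairing: worker 0 -> task 1, worker 1 -> task 0, worker 2 -> task 2
```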

6.3 Travelling Salesman Problem (TSP)

  • Visit each city once, return to start, minimise total distance.
  • LP with subtour-elimination (Miller–Tucker–Zemlin variables u_i) or heuristics (nearest-neighbour, 2-opt, etc.).
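
  The nearest-neighbour heuristic named above is a one-pass greedy construction; a sketch on a hypothetical 4-city symmetric distance matrix:

```python
# Hypothetical symmetric distances between 4 cities.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP tour: always hop to the closest unvisited city."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)             # close the cycle back to the start
    return tour

tour = nearest_neighbour_tour(dist)                      # [0, 1, 3, 2, 0]
length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))  # 80
```

  Greedy construction gives no optimality guarantee in general; improvement moves such as 2-opt are typically applied afterwards.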

6.4 Knapsack (0-1)

  • \text{Maximise}\;\sum v_i x_i
    \text{s.t. }\sum w_i x_i \le W,\; x_i\in\{0,1\}.
  • Exact: dynamic programming, branch & bound; heuristic: greedy by v_i/w_i ratio.
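
  The dynamic-programming recursion over capacities fits in a few lines; values and weights below are hypothetical illustration data:

```python
# 0-1 knapsack by dynamic programming over remaining capacity.
values = [60, 100, 120]
weights = [10, 20, 30]
W = 50

def knapsack(values, weights, W):
    dp = [0] * (W + 1)                  # dp[c] = best value within capacity c
    for v, w in zip(values, weights):
        for c in range(W, w - 1, -1):   # backwards so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

best = knapsack(values, weights, W)     # 220 (take the 100- and 120-value items)
```

  The backwards capacity loop is what enforces the 0-1 restriction; iterating forwards would allow items to repeat (the unbounded knapsack).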

7. Excel-Solver Operational Guide

  1. Activate add-in: File → Options → Add-ins → Manage Excel Add-ins → Solver.
  2. Sheet layout: decision variables in cells, formulas for objective & constraints in separate cells.
  3. Dialog setup: Set Objective (max/min), By Changing variables, Add constraints, choose method (Simplex LP / GRG Nonlinear / Evolutionary).
  4. Reports:
    • Answer (values, binding status).
    • Sensitivity (shadow prices, reduced costs, allowable ranges).
    • Limits (variable bounds with others fixed).
  5. Interpretations:
    • Binding ⇒ zero slack.
    • Reduced cost 0 on basic vars; non-basic vars show improvement needed to enter.
    • Shadow price valid within allowable increase/decrease of RHS.

8. Practice Problem Map (extracted sets)

  • LP Formulations: data-cleaning vs feature-engineering, ensemble model mix, social-media campaigns, model-training scheduling, forecasting accuracy vs cost, churn-reduction, tech-product mix, text-analytics task assignment, software subscription, experimentation budgeting.
  • Non-linear Excel Tasks: portfolio variance minimisation, Cobb-Douglas production, logistic curve fitting, quadratic advertising returns, circular area vs perimeter.
  • Unconstrained Multivariate: 10 analytical gradient/Hessian exercises.
  • Convexity Proofs: prove convexity & verify optimality at (0,0) for 10 given functions.
  • Integer & Network Applications:
    • Detailed transportation, assignment, TSP & knapsack datasets (warehouse–store, worker–task, city tours, backpack capacity).
    • Students expected to state decision variables → objective → constraints → solve graphically/analytically → confirm with Solver.
  • Solver Report Interpretation: understand answer & sensitivity snapshots (final Z, shadow prices, allowable ranges, reduced costs).

9. Ethical & Practical Insights

  • Optimal solutions must respect real-world rules (labour laws, safety, budgets).
  • Shadow-price info aids negotiation & resource valuation.
  • Integer and heuristic methods accommodate indivisibilities & computational hardness prevalent in industry datasets.

10. Quick-Reference Equations

  • LP Duality Pair (max primal, min dual)
    \begin{aligned}
    \max\; &c^T x  &\qquad \min\; &b^T y\\
    \text{s.t.}\; &Ax \le b  &\text{s.t.}\; &A^T y \ge c\\
    &x \ge 0  & &y \ge 0
    \end{aligned}
  • Portfolio Variance (two assets)
    \sigma^2 = \sigma_A^2 x_1^2 + \sigma_B^2 x_2^2 + 2\,\text{Cov}_{AB}\,x_1 x_2 with x_1+x_2=1.
  • Lagrange First-Order Conditions
    \nabla f(x^*) - \sum_{i}\lambda_i \nabla g_i(x^*) = 0,\quad g_i(x^*)=0.
  • Kuhn-Tucker Complementarity
    \lambda_i\ge0,\; g_i(x^*)\le0,\; \lambda_i g_i(x^*)=0.

11. Exam Tips

  1. Always begin with clear definition of variables; dimensionally check units.
  2. For LP proofs, sketch feasible space even when not required—visual intuition helps.
  3. Memorise standard forms & dual conversion rules (≤ ↔ ≥, max ↔ min).
  4. When using Solver, keep raw coefficients in dedicated cells to leverage sensitivity report.
  5. For non-linear calculus questions: compute gradient, set =0, then Hessian test.
  6. In integer models, justify integrality (people, servers, projects) before applying IP tools.
  7. Interpret every numerical answer (profit, cost, risk) in context—marks awarded for insight, not just numbers.