Model Drift
AI trained on old data becomes progressively less accurate as the world changes — no visible failure, just gradual degradation.
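A minimal sketch of what drift monitoring can look like, assuming you keep a snapshot of training-time feature distributions; the feature name, synthetic numbers, and 0.05 threshold below are illustrative, not details from the Meridian case.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train_data, live_data, p_threshold=0.05):
    """Flag features whose live distribution has shifted since training.

    train_data / live_data: dicts mapping feature name -> 1-D numpy array.
    """
    flagged = []
    for name, train_values in train_data.items():
        stat, p_value = ks_2samp(train_values, live_data[name])
        if p_value < p_threshold:
            flagged.append((name, round(float(stat), 3)))
    return flagged

rng = np.random.default_rng(0)
train = {"shipment_weight": rng.normal(10.0, 2.0, 5000)}
live = {"shipment_weight": rng.normal(11.5, 2.0, 5000)}  # the world moved
print(drifted_features(train, live))  # the shifted feature gets flagged
```

The statistics are the easy part; the governance question is whether a check like this runs continuously and routes to someone accountable.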
Architectural Fragmentation
Teams independently build incompatible AI systems with unclear ownership, making enterprise-wide governance nearly impossible.
Shadow AI
Employees use unauthorized AI tools outside official channels — signals a gap between what people need and what the org provides.
Cascade Failure
One component's failure propagates across every dependent system. The 2021 AWS outage took down Amazon, Ring, airlines, and banks simultaneously.
Operational Governance
Who monitors AI in real time? What triggers human review? Failure example: Meridian's drift alerts routed to an unassigned inbox for 8 months.
Structural Governance
Who has authority to change AI goals, parameters, or training? Failure example: No one at Meridian held authority to update the carrier model post-deployment.
Accountability Governance
Who is responsible when AI causes harm? Can responsibility be traced? Failure example: No one at Meridian could answer who was responsible for carrier deprioritization.
Governance Theater
Policies, committees, and dashboards exist on paper but produce no actual control. The audit passes; governance doesn't function.
Regulatory Capture
Oversight is led by the same teams that built the system. The watchdog is structurally dependent on what it's supposed to watch.
Governance Lag
Oversight was designed for the system at deployment and never updated as the system evolved or expanded into new uses.
Accountability Diffusion
Responsibility is spread across so many roles and vendors that no single actor can be held accountable. Not intentional — just the default outcome of complex shared systems.
Alert Fatigue
Monitoring generates so many signals that reviewers start ignoring them all. Meridian's forecasting model flagged drift for 8 months — nobody acted.
Scope Creep Without Review
An AI system gets applied to new use cases without triggering a governance review. The original oversight was never designed for the current application.
Historical Data Bias
Training data encodes past decisions and inequalities. Meridian's forecasting model was trained on established markets and underestimated demand in new ones.
Proxy Bias
A neutral-seeming variable correlates with a disadvantaged group. Meridian's model optimized for API capability, which correlated with carrier size — discriminating without intending to.
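A synthetic sketch of the mechanism, with assumed variable names and distributions: a score that never mentions carrier size still ranks carriers almost entirely by size, because the proxy correlates with it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
carrier_size = rng.lognormal(mean=3.0, sigma=1.0, size=n)        # attribute the model "never uses"
api_score = 0.8 * np.log(carrier_size) + rng.normal(0, 0.3, n)   # neutral-seeming proxy

preferred = api_score > np.quantile(api_score, 0.75)             # top-quartile carriers win routing
small = carrier_size < np.median(carrier_size)

print("small carriers preferred:", round(float(preferred[small].mean()), 3))   # well under 0.25
print("large carriers preferred:", round(float(preferred[~small].mean()), 3))  # well over 0.25
```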
Feedback Loop Bias
Biased outputs become future training data, amplifying the original pattern over time. More routing to large carriers → more favorable data for large carriers → stronger preference at retraining.
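A toy loop with made-up numbers, only to show the compounding: each retraining cycle sees proportionally more data from large carriers because more traffic was routed to them, which strengthens the preference before the next cycle. The update rule is an assumption for illustration, not Meridian's actual dynamics.

```python
share_large = 0.55  # initial share of shipments routed to large carriers
for cycle in range(1, 7):
    # assumed update rule: the skew in the new training data nudges the
    # learned preference by half of last cycle's deviation from neutral
    share_large = min(0.99, share_large + 0.5 * (share_large - 0.5))
    print(f"after retrain {cycle}: large-carrier share = {share_large:.2f}")
# climbs from roughly 0.57 toward the 0.99 cap: the original tilt compounds
```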
Aggregation Bias
A model trained on aggregated data performs worse for subgroups that weren't separately represented. Good average performance hides bad group-level performance.
Evaluation Bias
Validation uses aggregate accuracy metrics, so bias passes testing undetected. This is why most bias surfaces only in production, at scale.
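A small numeric sketch of both of the cards above, using made-up counts: aggregate accuracy looks healthy while the minority group does badly, which is exactly what an aggregate-only validation step would wave through.

```python
import numpy as np

y_true = np.ones(1000, dtype=int)                      # 900 majority + 100 minority examples
y_pred = np.concatenate([
    np.array([1] * 855 + [0] * 45),                    # 95% correct on the majority group
    np.array([1] * 60 + [0] * 40),                     # 60% correct on the minority group
])
group = np.array(["majority"] * 900 + ["minority"] * 100)

print("aggregate accuracy:", (y_true == y_pred).mean())           # 0.915 -- passes review
for g in ("majority", "minority"):
    mask = group == g
    print(g, "accuracy:", (y_true[mask] == y_pred[mask]).mean())  # 0.95 vs 0.60
```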
Metric Substitution
Leaders track cycle time, throughput, and recovery rates and treat improvement as proof of transformation success. System health goes unmeasured.
Champion Exhaustion
A few internal advocates carry the AI initiative. They absorb resistance, manage vendors, defend the project — and eventually burn out, taking institutional knowledge with them.
Narrative Lock-In
Leaders make public commitments that the transformation is succeeding. Once made, those commitments filter out evidence of problems.
Silence Normalization
Employees see problems but stop escalating because past concerns produced no response. The silence looks like acceptance. It is not.
Skill Atrophy
Humans who defer to AI stop practicing independent judgment and lose the ability to evaluate when the AI is wrong.
Authority Normalization
The AI output becomes the default. Overriding requires justification. The burden of proof flips — humans must explain why they're NOT following the AI.
Psychological Capitulation
Humans who override and are wrong a few times learn to distrust their own judgment relative to the system. Without feedback loops, even correct overrides are never confirmed, so nothing rebuilds that trust.
Override Capability vs. Override Capacity
Capability = the mechanism exists. Capacity = humans have the knowledge, confidence, and authority to actually use it. Capability stays intact while capacity erodes quietly.
Q1: What is actually changing?
Not just the technical update — what is changing in terms of authority, accountability, and risk distribution? CLARITY threshold changes shifted decision authority, not just settlement speed.
Q2: Who is affected, and how?
Map all stakeholders before deciding. Effects outside your measurement system are not absent — they're just uncounted. Customers moved to auto-settlement were never consulted.
Q3: What could go wrong at scale?
AI harms are often distributed across many small incidents, not one visible failure. Consider how systematic failure would manifest and whether monitoring would catch it.
Q4: How will we know if we were wrong?
The most-skipped question. Forces monitoring design before deployment — what data, what granularity, reviewed by whom. If you can't answer this, you can't learn from the decision.
The Pause Condition
Pause an AI system if you can't confidently explain to a regulator, journalist, or affected customer what it's doing and why. The trigger is insufficient visibility, not confirmed harm.
Compliance vs. Governance
Compliance = passing an external audit. Governance = actually being in control. Meridian passed its compliance audit with zero real control over its systems.