Comprehensive AI Governance and Bias Management in Enterprise Systems

32 Terms

1

Model Drift

AI trained on old data becomes progressively less accurate as the world changes — no visible failure, just gradual degradation.
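
A minimal sketch of how drift might be caught before it quietly degrades decisions, assuming a hypothetical monitor that compares recent model scores against a deployment-time baseline with the Population Stability Index (data, threshold, and alert routing are all illustrative):

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; a higher PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)      # avoid log(0) in empty bins
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.60, 0.10, 5_000)   # predictions at validation time
recent_scores = rng.normal(0.52, 0.13, 5_000)     # predictions after the world shifted
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # common rule-of-thumb threshold; tune per system
    print(f"Drift alert: PSI = {psi:.2f}; route to a named, monitored owner")
```

The specific statistic matters less than having a scheduled comparison against a frozen baseline, with a named owner for the resulting alert.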

2

Architectural Fragmentation

Teams independently build incompatible AI systems with unclear ownership, making enterprise-wide governance nearly impossible.

3

Shadow AI

Employees use unauthorized AI tools outside official channels — signals a gap between what people need and what the org provides.

4

Cascade Failure

One component's failure propagates across every dependent system. The 2021 AWS outage took down Amazon, Ring, airlines, and banks simultaneously.

5

Operational Governance

Who monitors AI in real time? What triggers human review? Failure example: Meridian's drift alerts routed to an unassigned inbox for 8 months.

6

Structural Governance

Who has authority to change AI goals, parameters, or training? Failure example: No one at Meridian held authority to update the carrier model post-deployment.

7

Accountability Governance

Who is responsible when AI causes harm? Can responsibility be traced? Failure example: No one at Meridian could answer who was responsible for carrier deprioritization.

8

Governance Theater

Policies, committees, and dashboards exist on paper but produce no actual control. The audit passes; governance doesn't function.

9

Regulatory Capture

Oversight is led by the same teams that built the system. The watchdog is structurally dependent on what it's supposed to watch.

10

Governance Lag

Oversight was designed for the system at deployment and never updated as the system evolved or expanded into new uses.

11

Accountability Diffusion

Responsibility is spread across so many roles and vendors that no single actor can be held accountable. Not intentional — just the default outcome of complex shared systems.

12

Alert Fatigue

Monitoring generates so many signals that reviewers start ignoring them all. Meridian's forecasting model flagged drift for 8 months — nobody acted.

13

Scope Creep Without Review

An AI system gets applied to new use cases without triggering a governance review. The original oversight was never designed for the current application.

14

Historical Data Bias

Training data encodes past decisions and inequalities. Meridian's forecasting model was trained on established markets and underestimated demand in new ones.

15

Proxy Bias

A neutral-seeming variable correlates with a disadvantaged group. Meridian's model optimized for API capability, which correlated with carrier size — discriminating without intending to.
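
A small illustrative check, with invented data and a hypothetical api_capability feature: measure how strongly the "neutral" variable the model optimizes for tracks the group attribute it is not supposed to consider.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: a "neutral" feature the model optimizes for, and the
# group attribute (carrier size) it is not supposed to take into account.
carrier_size = rng.lognormal(mean=3.0, sigma=1.0, size=2_000)               # fleet size
api_capability = 0.8 * np.log(carrier_size) + rng.normal(0, 0.5, 2_000)     # richer APIs at scale

# If the "neutral" input is strongly correlated with the group attribute,
# optimizing on it reproduces the size disparity without ever naming it.
r = np.corrcoef(api_capability, np.log(carrier_size))[0, 1]
print(f"Correlation between API capability and log carrier size: r = {r:.2f}")
if abs(r) > 0.5:  # illustrative threshold
    print("Potential proxy: review whether this feature should drive routing")
```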

16

Feedback Loop Bias

Biased outputs become future training data, amplifying the original pattern over time. More routing to large carriers → more favorable data for large carriers → stronger preference at retraining.
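
A toy simulation of that loop, with invented numbers: each retraining cycle ingests the previous cycle's routing decisions, so the learned preference and the routing share ratchet upward together.

```python
# Toy feedback-loop simulation (numbers are illustrative only).
share_to_large = 0.55        # initial share of loads routed to large carriers
learned_preference = 0.55    # model's learned prior toward large carriers

for cycle in range(1, 6):
    # Retraining data reflects last cycle's routing, so the new model's
    # preference drifts toward the share it observed.
    learned_preference = 0.5 * learned_preference + 0.5 * share_to_large
    # The retrained model then routes a bit more traffic where it already prefers.
    share_to_large = min(0.95, learned_preference + 0.05)
    print(f"Retraining cycle {cycle}: share to large carriers = {share_to_large:.2f}")
```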

17

Aggregation Bias

A model trained on aggregated data performs worse for subgroups that weren't separately represented. Good average performance hides bad group-level performance.
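
A minimal numeric sketch, with invented counts, of how a strong aggregate metric can coexist with poor performance for an under-represented subgroup:

```python
# Invented evaluation results: the overall number looks healthy,
# but the smaller subgroup is served far worse.
results = {
    "established_markets": {"n": 9_000, "correct": 8_370},  # 93% accurate
    "new_markets":         {"n": 1_000, "correct":   610},  # 61% accurate
}

total_n = sum(g["n"] for g in results.values())
total_correct = sum(g["correct"] for g in results.values())
print(f"Aggregate accuracy: {total_correct / total_n:.1%}")   # ~89.8%, looks fine

for name, g in results.items():
    print(f"  {name}: {g['correct'] / g['n']:.1%} over {g['n']} cases")
```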

18

Evaluation Bias

Validation uses aggregate accuracy metrics, so bias passes testing undetected. This is why most bias surfaces only in production, at scale.
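
One way the testing gap could be closed, sketched as a hypothetical release gate that fails whenever any subgroup falls below a floor instead of gating only on the aggregate figure (function name, floors, and numbers are assumptions):

```python
def release_gate(groups: dict[str, tuple[int, float]],
                 aggregate_floor: float = 0.85,
                 group_floor: float = 0.80) -> bool:
    """groups maps name -> (case_count, accuracy). Pass only if the
    case-weighted aggregate AND every subgroup clear their floors."""
    total = sum(n for n, _ in groups.values())
    aggregate = sum(n * acc for n, acc in groups.values()) / total
    for name, (n, acc) in groups.items():
        if acc < group_floor:
            print(f"FAIL: {name} at {acc:.0%} over {n} cases, "
                  f"below the {group_floor:.0%} subgroup floor")
            return False
    return aggregate >= aggregate_floor

# Aggregate accuracy here is about 89.8%, so an aggregate-only check would
# ship this model; the per-group floor blocks the release instead.
print(release_gate({"established_markets": (9_000, 0.93),
                    "new_markets": (1_000, 0.61)}))
```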

19

Metric Substitution

Leaders track cycle time, throughput, and recovery rates and treat improvement as proof of transformation success. System health goes unmeasured.

20

Champion Exhaustion

A few internal advocates carry the AI initiative. They absorb resistance, manage vendors, defend the project — and eventually burn out, taking institutional knowledge with them.

21

Narrative Lock-In

Leaders make public commitments that the transformation is succeeding. Once made, those commitments filter out evidence of problems.

22

Silence Normalization

Employees see problems but stop escalating because past concerns produced no response. The silence looks like acceptance. It is not.

23

Skill Atrophy

Humans who defer to AI stop practicing independent judgment and lose the ability to evaluate when the AI is wrong.

24

Authority Normalization

The AI output becomes the default. Overriding requires justification. The burden of proof flips — humans must explain why they're NOT following the AI.

25

Psychological Capitulation

Humans who override and are wrong a few times learn to distrust their own judgment relative to the system. Without feedback loops, even correct overrides are never confirmed, so nothing is learned from them.

26

Override Capability vs. Override Capacity

Capability = the mechanism exists. Capacity = humans have the knowledge, confidence, and authority to actually use it. Capability stays intact while capacity erodes quietly.

27

Q1: What is actually changing?

Not just the technical update — what is changing in terms of authority, accountability, and risk distribution? CLARITY threshold changes shifted decision authority, not just settlement speed.

28

Q2: Who is affected, and how?

Map all stakeholders before deciding. Effects outside your measurement system are not absent — they're just uncounted. Customers moved to auto-settlement were never consulted.

29

Q3: What could go wrong at scale?

AI harms are often distributed across many small incidents, not one visible failure. Consider how systematic failure would manifest and whether monitoring would catch it.

30

Q4: How will we know if we were wrong?

The most-skipped question. Forces monitoring design before deployment — what data, what granularity, reviewed by whom. If you can't answer this, you can't learn from the decision.

31

The Pause Condition

Pause an AI system if you can't confidently explain to a regulator, journalist, or affected customer what it's doing and why. The trigger is insufficient visibility, not confirmed harm.

32

Compliance vs. Governance

Compliance = passing an external audit. Governance = actually being in control. Meridian passed its compliance audit with zero real control over its systems.