Week 6
AI - Definition
Artificial intelligence is the set of technologies that improve our ability to find patterns in data, make predictions, and recommend actions without explicit human instruction.
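A minimal sketch of this idea, assuming scikit-learn (the customer-renewal data and features are hypothetical): the model learns a pattern from examples rather than following hand-written rules.

```python
# Minimal sketch: a model learns a pattern from example data rather than
# from explicit instructions (illustrative only; data and features are made up).
from sklearn.linear_model import LogisticRegression

# Hypothetical examples: [hours_active, purchases_last_month] -> will_renew (1/0)
X = [[1, 0], [2, 1], [8, 4], [10, 5], [0, 0], [9, 3]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)   # the pattern is learned, not coded
print(model.predict([[7, 2]]))           # prediction for a new customer
print(model.predict_proba([[7, 2]]))     # with a degree of confidence
```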
XAI - Definition
Explainable AI is an emerging field that aims to make the black box of AI transparent so that clients can trust AI-generated insights.
XAI - Answered questions
XAI answers questions like:
Why did the AI system make a specific prediction or decision?
Why didn’t the AI system do something else?
XAI - Solved AI characteristics
AI is opaque
AI is malleable
AI is iffy
AI is unproven
XAI - AI is opaque
Issue: AI models rely on sophisticated math and statistics that make it hard for users to understand a model's mechanics and outputs.
Solution: Decision tracing
XAI exposes how the model makes decisions about individual cases or subsets of cases, helping to identify and resolve unfairness, errors, and other issues.
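A minimal decision-tracing sketch, assuming scikit-learn and a linear model with hypothetical loan-approval features; more general tools such as SHAP or LIME play the same role for complex models.

```python
# Minimal decision-tracing sketch: break one prediction down into per-feature
# contributions (coefficient * feature value) for a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments"]    # hypothetical
X = np.array([[60, 0.2, 0], [25, 0.8, 3], [80, 0.1, 0], [30, 0.9, 4]])
y = np.array([1, 0, 1, 0])                                    # approved / rejected

model = LogisticRegression().fit(X, y)

case = np.array([40, 0.7, 2])                                 # one individual case
contributions = model.coef_[0] * case                         # per-feature effect
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")                                # why this decision?
print("intercept:", model.intercept_[0])
print("prediction:", model.predict(case.reshape(1, -1))[0])
```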
XAI - AI is malleable (easily influenced)
Issue: AI models learn from the data they are given - good or bad - without pushback or judgment. Therefore, they can produce biased results.
Solution: Bias remediation
Use high-quality training data to avoid biased results
Ensure that training data represents reality (including both common occurrences and rare events)
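A minimal sketch of one such representativeness check, assuming pandas and hypothetical group labels: compare how often each group appears in the training data against its assumed real-world share.

```python
# Minimal sketch: check whether training data represents reality by comparing
# group frequencies in the data against (hypothetical) real-world shares.
import pandas as pd

train = pd.DataFrame({"group": ["A", "A", "A", "A", "B", "A", "A", "B"]})
real_world_share = {"A": 0.6, "B": 0.4}        # assumed reference distribution

observed = train["group"].value_counts(normalize=True)
for group, expected in real_world_share.items():
    gap = observed.get(group, 0.0) - expected
    print(f"{group}: observed {observed.get(group, 0.0):.2f}, "
          f"expected {expected:.2f}, gap {gap:+.2f}")
# Large gaps flag under- or over-represented groups before training.
```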
XAI - AI is iffy
Issue: AI models produce probabilistic results: they apply patterns and insights learned from training data to new instances with some degree of confidence.
Solution: Boundary setting
Determine how AI model outcomes need to be scoped, limited, or interpreted.
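A minimal boundary-setting sketch, assuming scikit-learn: only act on predictions above a chosen confidence threshold and route the rest to a human (the threshold and data are hypothetical).

```python
# Minimal boundary-setting sketch: accept the model's answer only when its
# confidence clears a threshold; otherwise defer to a human reviewer.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1], [0.4], [0.6], [0.9], [0.2], [0.8]])
y = np.array([0, 0, 1, 1, 0, 1])
model = LogisticRegression().fit(X, y)

THRESHOLD = 0.8                                  # chosen scope/limit (assumption)
new_cases = np.array([[0.5], [0.95], [0.05]])
probas = model.predict_proba(new_cases)

for x, p in zip(new_cases, probas):
    confidence = p.max()
    if confidence >= THRESHOLD:
        print(x, "-> automated decision:", p.argmax(), f"({confidence:.2f})")
    else:
        print(x, "-> defer to human review", f"({confidence:.2f})")
```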
XAI - AI is unproven
Issue: Managers are uncertain whether and how their company can create meaningful returns from AI investments.
Solution: Value formulation
Articulate how AI model outcomes influence decisions, processes, and actions (in terms of value, cost, and risk)
Scalable AI - Definition
Scaling AI means growing the value created by a trained model (scaling up) and its adaptations (scaling out).
Scalable AI - Scaling up
Increase volume of core model use
Model should be accurate and trusted.
How has the model been deployed?
How many users use it?
How often is it used?
How often is it retrained?
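A minimal sketch of tracking the scaling-up questions above, with a hypothetical wrapper around a deployed model (all names are assumptions, not a real monitoring API).

```python
# Minimal sketch: wrap a deployed model with counters that answer the
# scaling-up questions (usage volume, retraining cadence).
from datetime import datetime, timezone

class MonitoredModel:
    def __init__(self, model):
        self.model = model
        self.prediction_count = 0
        self.last_retrained = datetime.now(timezone.utc)

    def predict(self, X):
        self.prediction_count += len(X)           # how often is it used?
        return self.model.predict(X)

    def retrain(self, X, y):
        self.model.fit(X, y)                      # how often is it retrained?
        self.last_retrained = datetime.now(timezone.utc)
```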
Scalable AI - Scaling out
Increase number of recontextualized models
Model should be transferable and inspiring.
Does the model apply to new contexts?
Geographies?
Subject areas?
Different products?
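A minimal scaling-out sketch, assuming scikit-learn: the same pipeline definition is reused and refit on data from a new context (here a hypothetical second geography).

```python
# Minimal scaling-out sketch: the core pipeline is recontextualized by
# refitting it on data from a new geography. Data and features are made up.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def build_pipeline():
    # One shared definition of the core model
    return make_pipeline(StandardScaler(), LogisticRegression())

# Original context (e.g., home market)
X_home, y_home = np.array([[1.0], [2.0], [3.0], [4.0]]), np.array([0, 0, 1, 1])
home_model = build_pipeline().fit(X_home, y_home)

# New context (e.g., a different geography with its own data)
X_new, y_new = np.array([[10.0], [20.0], [30.0], [40.0]]), np.array([0, 1, 1, 1])
new_geo_model = build_pipeline().fit(X_new, y_new)   # scaled out, not copied
```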