Ethics in Engineering & AI
Engineers must consider the ethical implications of AI systems, focusing on people, safety, fairness, and accountability.
Predictive maintenance
Using AI to predict when machines will fail, with ethical concerns about overreliance on imperfect predictions.
Cybersecurity
AI's role in defending or attacking systems, with potential misuse or bias endangering user data.
Water quality / environmental monitoring
Concerns about sensor failures or algorithm misreporting that could harm people.
Traffic optimization
AI must balance efficiency and fairness in determining priority.
Job displacement
Automation may replace workers, necessitating consideration of social impacts.
Stakeholders
All parties affected by AI systems, including workers, the public, and regulators.
Duration of concern
The balance between short-term benefits and long-term consequences, such as data privacy erosion.
Predictability
The challenge of unforeseen outcomes requiring caution and transparency in AI systems.
Prioritization
The process of ranking what matters most in tradeoffs between efficiency and fairness.
Knowledge vs. Data
Data are facts or measurements, while knowledge is meaning extracted from data through inference and interpretation.
Quantitative data
Data derived from sensors, surveys, budgets, and experiments.
Qualitative data
Data obtained from interviews, reports, comments, and policy documents.
Data trustworthiness
Questions to assess data validity, honesty, completeness, noise, and bias.
Data Cleaning & Repair
Methods to handle unreliable data, including managing outliers, interpolating missing values, and smoothing noisy data.
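The three repair steps above can be sketched in pandas. This is a minimal illustration on made-up sensor readings (the values, the 5-unit outlier threshold, and the window size are all arbitrary choices for the example):

```python
import pandas as pd

# Hypothetical sensor series: one missing reading and one wild outlier (99.0)
s = pd.Series([10.0, 10.2, None, 10.1, 99.0, 10.3])

s = s.mask((s - s.median()).abs() > 5)  # treat far-off points as missing (outlier handling)
s = s.interpolate()                     # fill gaps linearly (missing-value repair)
smooth = s.rolling(window=3, min_periods=1, center=True).mean()  # smooth remaining noise
```

Masking the outlier first lets a single `interpolate()` call repair both kinds of gaps.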
Data representation formats
Different ways to represent data, including matrices, graphs, and tables (DataFrames).
Example of knowledge extraction
Data: sensor readings from a bridge; Knowledge: 'Bridge stress is increasing; failure is likely in 3 years.'
Self-driving car data bias
A self-driving car trained mostly on data from sunny cities will perform poorly in snow, highlighting that honest data can still be biased.
Engineers' tradeoffs
Engineers must weigh the tradeoffs between availability, quality, and cost of data.
Nominal
Labels, identifiers
Ordinal
Ordered categories
Interval
Ordered with consistent spacing, no true zero
Ratio
Interval + meaningful zero
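The nominal/ordinal distinction shows up directly in code: ordered categories support comparisons, unordered labels do not. A small pandas sketch with an invented severity scale:

```python
import pandas as pd

# Ordinal data: categories with a meaningful order but no fixed spacing
severity = pd.Categorical(
    ["low", "high", "medium", "low"],
    categories=["low", "medium", "high"],
    ordered=True,
)
below_high = severity < "high"  # comparison is valid only because the scale is ordered
```

With `ordered=False` (nominal labels), the same comparison would raise a `TypeError`.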
Data Science
About understanding data — extracting patterns, meaning, and knowledge.
Artificial Intelligence
About acting intelligently — making decisions or predictions from knowledge.
Data Preparation
Essential for ethical and high-performing AI.
Cleaning
Remove errors, duplicates, outliers
Labeling
Assign correct outcomes or categories
Augmentation
Generate synthetic data to reduce bias or balance classes
Feature Engineering
Combine or transform variables to uncover relationships AI wouldn't see on its own.
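A tiny example of the idea, using an invented machine log: neither raw column reveals load intensity, but their ratio does.

```python
import pandas as pd

# Hypothetical machine log: energy used and runtime per job
df = pd.DataFrame({"energy_kwh": [5.0, 9.0, 4.0], "runtime_h": [2.0, 3.0, 4.0]})

# Engineered feature: average power draw, a relationship the raw columns hide
df["avg_power_kw"] = df["energy_kwh"] / df["runtime_h"]
```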
Data Science EDiCtS Cycle
Adapted from the AI E.D.I.T. mindset (Explore, Direct, Imagine, Test).
Exploratory Data Analysis (EDA)
Goal: understand the shape and structure of your data.
Univariate Analysis
Look at one variable at a time.
Multivariate Analysis
Examine relationships between multiple variables.
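Both kinds of EDA are one-liners in pandas. A sketch on a made-up temperature/demand table (here demand is exactly twice temperature, so the correlation comes out as 1.0):

```python
import pandas as pd

df = pd.DataFrame({"temp": [20, 22, 25, 30], "demand": [40, 44, 50, 60]})

temp_stats = df["temp"].describe()  # univariate: distribution of one column
corr = df.corr()                    # multivariate: pairwise correlations
```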
Unsupervised Methods
Used when there are no labels.
Clustering
Groups similar data points (e.g., K-Means)
Dimensionality reduction
Simplifies datasets (e.g., PCA)
Association rules
Finds 'if-then' relationships (e.g., market basket analysis)
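To make clustering concrete, here is a toy K-Means loop in plain NumPy on 1-D points (real work would use a library such as scikit-learn; the points and initial centers are invented):

```python
import numpy as np

points = np.array([1.0, 1.1, 0.9, 8.0, 8.2, 7.9])
centers = np.array([0.0, 10.0])  # assumed initial guesses, k=2

for _ in range(10):
    # assignment step: each point joins its nearest center
    labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    # update step: each center moves to the mean of its assigned points
    centers = np.array([points[labels == k].mean() for k in range(2)])
```

The two centers settle near 1.0 and 8.0, recovering the two obvious groups without any labels.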
Data Augmentation
Artificially expand data using transformations (rotation, scaling, synthesis).
Lists
Ordered data
Matrices
2D arrays (NumPy)
DataFrames
Labeled tables (pandas)
File I/O
Reading/writing data files
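The four representations above connect naturally: a list becomes a NumPy matrix, a matrix becomes a labeled DataFrame, and file I/O round-trips the table. A self-contained sketch (an in-memory buffer stands in for a real file; the column names are invented):

```python
import io
import numpy as np
import pandas as pd

readings = [1.2, 3.4, 5.6, 7.8]                # list: ordered data
matrix = np.array(readings).reshape(2, 2)      # NumPy: 2-D matrix
df = pd.DataFrame(matrix, columns=["a", "b"])  # pandas: labeled table (DataFrame)

# File I/O: write to CSV and read it back
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
df2 = pd.read_csv(buf)
```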
AI
Acts on a model of the world made up of states and actions.
Perception
senses what's happening (camera, sensor, input)
Knowledge
stores what's known about the world
Reasoning
computes what to do next
Action
performs behavior to change the world
State
external description of the world at a moment
Example of State
A robot's state includes position and orientation; its knowledge includes a map of the environment.
Abstraction
means throwing out information — but doing it thoughtfully.
Example of Abstraction
A self-driving car ignores tree colors but tracks road lines and nearby objects.
Good State Characteristics
Should completely describe everything necessary for the AI to make a decision.
Unambiguous State
multiple world configurations shouldn't map to the same state (if possible).
Compact State
avoid redundancy.
Example Errors in State
Missing data (sensor error → incomplete state); redundant attributes (wastes computation); overly complex representation (slows search)
Chess State Representation
All piece locations + turn info {White_King:E1, Black_Queen:D8, Turn:White}
Robot Pathing State Representation
X, Y, and direction (15, 22, North)
Warehouse Robot State Representation
Position, empty/full, goal location (15,22,Empty=True,Goal=(107,62))
Traffic Management State Representation
Time, event status, vehicle count, avg speed (14:43, Event=False, 578, 27.2)
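The four state representations above map directly onto plain Python data structures, e.g.:

```python
# State sketches for the four examples, as dicts and tuples
chess_state = {"White_King": "E1", "Black_Queen": "D8", "Turn": "White"}
robot_state = (15, 22, "North")  # x, y, heading
warehouse_state = {"pos": (15, 22), "empty": True, "goal": (107, 62)}
traffic_state = {"time": "14:43", "event": False, "vehicles": 578, "avg_speed": 27.2}
```

Each one is complete (everything needed to decide), unambiguous, and compact, matching the good-state criteria above.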
AI as Search
AI problems are often formulated as search problems; much of AI reduces to state search or traversal.
Search Examples
Pathfinding: robot moving from start to goal; Planning: sequence of actions (manufacturing schedule, game moves); Optimization: minimizing cost or risk; Machine learning: searching over parameters or functions
Search Trade-offs
More knowledge vs. less knowledge; faster vs. slower computation; expensive vs. cheap and easy to build; more assumptions vs. more flexibility
Search as Planning
To formalize a problem as a search: define what you're optimizing (time, energy, cost, risk); define your state representation; define your cost function (how you compare options); define what to keep in memory (visited states, paths); define how quickly you must solve it (real-time vs. offline)
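The planning checklist can be made concrete with a minimal breadth-first search over a 4-connected grid (an invented 3x3 map where 1 marks a wall): the tuple `(row, col)` is the state representation, path length is the cost, and `visited` is what we keep in memory.

```python
from collections import deque

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]

def bfs(start, goal):
    frontier = deque([(start, [start])])  # frontier of (state, path so far)
    visited = {start}                     # memory: states already expanded
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3 and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

path = bfs((0, 0), (2, 0))  # forced around the wall in the middle row
```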
Ethics
the study of what is the right thing to do (rules, duties, consequences, fairness).
Moral Epistemology
how we know it's right — what evidence, reasoning, or procedures justify our claim.
Engineering mindset
The engineering process (systematic, iterative problem solving) transfers directly to ethical analysis.
Define problem and stakeholders
Identify who is affected by a problem and how (users, bystanders, regulators).
Model trade-offs
Evaluate competing factors such as safety vs. cost and accuracy vs. privacy.
Iterate & test
Validate assumptions and designs through repeated testing and refinement.
Concrete example of predictive maintenance model
Identify workers (safety), owners (cost), and regulators (compliance) while weighing false negatives vs. false positives.
Training data bias
Occurs when the sample isn't representative, such as a face dataset with few darker-skinned individuals leading to poor performance.
Objective / proxy bias
Happens when the optimization target is a proxy that correlates with protected attributes, like using past spending as a proxy for creditworthiness.
Non-technical root causes of bias
Include design choices, labeling decisions, deployment context, and feedback loops.
EV battery predictor example
A 2% low prediction may seem small, but systematic underestimation can lead to customer distrust or incorrect range estimates affecting travel safety.
Tank identification urban legend
A dataset contained pictures of tanks on different backgrounds; the model learned the background instead of the tanks.
COMPAS / recidivism risk
An example where algorithmic predictions affected sentencing, with proxies correlated with race/socioeconomics leading to unfair outcomes.
Testing for bias
Measure performance across subgroups (precision/recall per group) and use counterfactual tests to change only the protected attribute.
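Per-group metrics are straightforward to compute by hand. A sketch on invented predictions for two groups, comparing recall (the same pattern works for precision):

```python
# (group, true label, predicted label) -- hypothetical classifier output
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]

def recall(group):
    # fraction of true positives the model actually caught, within one group
    pos = [(y, p) for g, y, p in records if g == group and y == 1]
    return sum(p for _, p in pos) / len(pos)

gap = recall("A") - recall("B")  # a large gap flags possible subgroup bias
```

Here group A gets recall 2/3 and group B only 1/3, exactly the kind of disparity subgroup testing is meant to surface.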
Bias severity
A bias can be harmless in isolation but harmful in context, affecting certain groups disproportionately or triggering cascading decisions.
Privacy risks in Big Data
Seemingly innocuous telemetry can reconstruct location traces, leading to privacy risks.
Digital divide
Data-driven tools can advantage those who can pay for data/compute, disadvantaging smaller entities and creating systemic inequity.
Professional vs. software ethics
Professional engineers are bound by legal codes and duty to public safety, while software engineers often lack universal legally binding licenses.
Legal & contractual limits
Most EULAs disclaim warranties and limit damages, shifting risk to users while not absolving engineers morally.
E.D.I.T. framework
A four-step process for responsible AI development: Explore, Direct, Imagine, Test.
Explore in E.D.I.T.
Learn the tool including dataset, architecture, and failure modes; run probes for edge cases.
Actionable steps for Explore
Generate adversarial or out-of-distribution inputs; look for hallucinations, biases, and missing coverage.
Direct in E.D.I.T.
Constrain model behavior through prompt engineering, model guardrails, post-processing filters, and rule-based checks.
Actionable steps for Direct
Add fairness constraints, thresholding, human-in-the-loop points.
Imagine
Ask: what should ideal output look like? Consider stakeholders and possible misuse.
Test
Rigorous testing: metrics across subgroups, stress tests, monitoring in production. Record procedures and results.
Exam tip
The EDIT acronym is an easy 4-step answer for a short-response question on responsible AI.
Define problem & scope
What system, what decisions.
List stakeholders
Primary, secondary.
Enumerate harms & benefits
Safety, privacy, fairness, economic.
Identify evidence
Data sources, validation steps, metrics to measure harms.
Mitigation plan
Data augmentation, fairness constraints, human oversight, opt-outs.
Testing plan
Per-group metrics, counterfactual tests, simulated deployment.
Ethics vs moral epistemology
Definition & example.
E.D.I.T. steps
Explore, Direct, Imagine, Test.