AI
What is AI?
Definition: AI is the branch of computer science focused on automating intelligent behavior, that is, behavior we associate with humans, animals, and other living things.
Understanding Intelligence in AI
Turing Test: Proposed by Alan Turing in 1950 as a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.
Example: A dialogue in which the machine plays up human-like limitations in arithmetic ("I can't even multiply two-digit numbers!"), illustrating that appearing human can mean imitating human weaknesses.
Agents in AI
Definition: Artificial intelligence studies rational agents, which are entities that
Perceive their environment through sensors
Act upon the environment through actuators.
Types of Agents:
Robotic Agent: Cameras serve as sensors; motors serve as actuators.
Software Agent: Keystrokes and file contents serve as sensors; screen output and written files serve as actuators.
Human Agent: Senses serve as sensors; body parts serve as actuators.
Rational Agent: Selects the actions expected to do best according to a performance measure that defines the success criteria.
Examples of Agents
Self-Driving Car:
Performance: Safety, time, comfort
Environment: Roads, vehicles, signs, pedestrians
Sensors: Camera, GPS, speedometer, accelerometer
Actuators: Steering, accelerator, brakes, signals
Game of Sokoban:
Performance, environment, sensors, and actuators must likewise be specified, e.g., performance: all boxes on storage locations in as few moves as possible; environment: the grid of walls, boxes, and storage cells; sensors: the observed board state; actuators: moves in the four directions.
Types of Agents
Simple Reflex Agent: Operates based on defined condition-action rules without considering past actions or states.
Model-Based Agent: Maintains an internal representation of the environment and makes decisions accordingly.
Goal-Based Agent: Identifies paths to reach a defined goal state from various alternatives.
Utility-Based Agent: Uses a utility function to determine the best path based on preferences and goals.
Learning Agent: Capable of adjusting actions based on past experiences and feedback.
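The simplest of these, the simple reflex agent, can be sketched in a few lines. This is a minimal illustration using the classic two-location vacuum world (a hypothetical example, not from the notes above): the agent acts only on its current percept via condition-action rules, with no memory of past states.

```python
# Simple reflex agent sketch: a hypothetical two-location vacuum world.
# The agent maps the current percept directly to an action via
# condition-action rules; it keeps no internal state.

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair; returns an action string."""
    location, status = percept
    if status == "dirty":
        return "suck"          # rule: current square dirty -> clean it
    if location == "A":
        return "move_right"    # rule: clean at A -> go check B
    return "move_left"         # rule: clean at B -> go check A

print(reflex_vacuum_agent(("A", "dirty")))  # suck
print(reflex_vacuum_agent(("A", "clean")))  # move_right
```

Because the agent ignores history, it cannot tell whether it has already cleaned a square; that limitation is exactly what model-based agents address with an internal representation.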
Sokoban Game as a Case Study
Elements: Player, boxes, walls, storage locations.
Rules: Boxes can only be pushed (never pulled), and the player can move in four directions.
Representation: Utilizes a 2D array with distinct values to represent various game elements (e.g., box, wall, person).
Search Space: The set of configurations reachable from the initial state, often visualized as a graph of states connected by moves.
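The 2D-array representation described above can be sketched as follows. The cell codes (0 for floor, 1 for wall, and so on) and the tiny level are illustrative choices for this sketch, not a fixed convention.

```python
# Sketch of the 2D-array representation of a Sokoban board.
# Cell codes are illustrative, not a standard encoding.

FLOOR, WALL, BOX, STORAGE, PLAYER = 0, 1, 2, 3, 4

level = [
    [WALL, WALL,   WALL,  WALL,    WALL],
    [WALL, PLAYER, BOX,   STORAGE, WALL],
    [WALL, FLOOR,  FLOOR, FLOOR,   WALL],
    [WALL, WALL,   WALL,  WALL,    WALL],
]

def find_player(grid):
    """Scan the array for the player's (row, col) position."""
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == PLAYER:
                return r, c

def is_solved(grid):
    # Solved when no storage cell remains uncovered; a fuller encoding
    # would need a combined code for "box on storage".
    return all(cell != STORAGE for row in grid for cell in row)

print(find_player(level))  # (1, 1)
```

Each distinct board configuration is one state; pushing the box one cell to the right yields a neighboring state in the search space.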
Search Space Concepts
State Space: The set of all possible states in a search problem.
Goal State: The desired end state of the search.
Branches: The actions that lead from one state to another.
Search Strategies
Depth-First Search (DFS): Explores the deepest nodes first, implemented using a LIFO stack.
Breadth-First Search (BFS): Explores the shallowest nodes first, implemented using a FIFO queue.
Properties of Search Algorithms: Completeness, optimality, and time and space complexity.
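The stack-versus-queue distinction is the only difference between the two strategies, which the sketch below makes explicit on a small hypothetical state graph: the same loop runs depth-first when the frontier is popped LIFO and breadth-first when popped FIFO.

```python
# DFS vs. BFS sketch: identical search loop, differing only in how the
# frontier is popped (LIFO stack -> DFS, FIFO queue -> BFS).
# The four-state graph is a hypothetical example.

from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def search(start, goal, frontier_pop):
    frontier = deque([[start]])        # frontier holds whole paths
    visited = set()
    while frontier:
        path = frontier_pop(frontier)  # LIFO -> DFS, FIFO -> BFS
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in graph[state]:
            frontier.append(path + [nxt])
    return None

dfs_path = search("A", "D", lambda f: f.pop())      # stack: deepest first
bfs_path = search("A", "D", lambda f: f.popleft())  # queue: shallowest first
print(dfs_path, bfs_path)  # ['A', 'C', 'D'] ['A', 'B', 'D']
```

Both reach the goal here (both are complete on finite graphs), but they explore in different orders; only BFS guarantees the shallowest solution.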
Heuristic Search
Definition: A search strategy that uses heuristics to guide the search process more intelligently than blind search.
Heuristic Functions: Functions that estimate how close a state is to the goal, with examples including Manhattan and Euclidean distances.
Traveling Salesman Problem: A classic example illustrating the complexity of searches and possible heuristics to optimize the search, such as selecting the nearest city next (greedy approach).
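The two distance heuristics and the greedy nearest-city strategy mentioned above can be sketched directly; the greedy tour is a fast heuristic, not an optimal TSP solver, and the city coordinates are illustrative.

```python
# Manhattan and Euclidean distance heuristics, plus a greedy
# nearest-city tour for TSP (a heuristic, not an optimal solver).

import math

def manhattan(a, b):
    """Grid distance: sum of absolute coordinate differences."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Straight-line distance between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_tour(cities):
    """Start at the first city; always visit the nearest unvisited city next."""
    tour = [cities[0]]
    remaining = set(cities[1:])
    while remaining:
        nearest = min(remaining, key=lambda c: euclidean(tour[-1], c))
        tour.append(nearest)
        remaining.remove(nearest)
    return tour

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

The greedy choice is locally cheap but can produce a globally longer tour, which is precisely the trade-off heuristic search accepts for speed.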
Key Heuristic Concepts
Admissibility: A heuristic is admissible if it never overestimates the true cost to the goal; with an admissible heuristic, A* is guaranteed to find an optimal path if one exists.
Consistency: A heuristic is consistent if, for every state n and neighbor n', h(n) ≤ c(n, n') + h(n'); consistency implies admissibility.
A* Search: Combines the path cost g(n) and the heuristic h(n), expanding nodes in order of f(n) = g(n) + h(n) to find an optimal path efficiently.
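A* can be sketched compactly on a 4-connected grid using the Manhattan-distance heuristic; f(n) = g(n) + h(n) orders the priority queue. The grid and coordinates below are illustrative.

```python
# A* sketch on a 4-connected grid with the Manhattan-distance heuristic.
# Nodes are expanded in order of f(n) = g(n) + h(n).

import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns optimal path length or None."""
    frontier = [(manhattan(start, goal), 0, start)]  # (f, g, state)
    best_g = {start: 0}                              # cheapest g found so far
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + manhattan((nr, nc), goal), ng, (nr, nc)),
                    )
    return None

grid = [
    "....",
    ".##.",
    "....",
]
print(astar(grid, (0, 0), (2, 3)))  # 5
```

Because Manhattan distance on this grid never overestimates the remaining moves, it is admissible, so the returned length is optimal.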