Artificial Intelligence – Quick Revision
Artificial Intelligence Overview
Definition: Artificial Intelligence (AI) is a branch of computer science focused on creating intelligent systems that can perform tasks typically requiring human intelligence. These tasks include problem-solving, decision-making, pattern recognition, and natural language processing.
Main application areas:
Robotics: AI is used to control robots for tasks in manufacturing (e.g., assembly lines), exploration (e.g., Mars rovers), and even surgery. Self-driving cars are another prominent example.
Natural Language Processing (NLP): Enables computers to understand, interpret, and generate human language. Examples include language translation (e.g., Google Translate), speech recognition (e.g., Siri, Alexa), and sentiment analysis.
Expert systems: AI programs designed to mimic the decision-making ability of a human expert in a specific domain, often used in medical diagnosis or financial advising.
Machine learning: A core method of AI that allows systems to learn from data without being explicitly programmed for every scenario.
Machine learning (in more detail): A subset of AI that allows computers to learn and improve performance from data by identifying patterns and relationships, rather than following rigid, pre-programmed instructions. For example, an ML system can learn to detect spam emails by analysing a large dataset of emails labelled as 'spam' or 'not spam'.
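The spam example above can be sketched as a minimal word-count classifier. This is a toy illustration with invented training emails, not a production technique (real filters typically use methods such as naive Bayes), but it shows the key idea: the behaviour comes from the labelled data, not from hand-written spam rules.

```python
from collections import Counter

def train(emails):
    """Count how often each word appears in spam vs. non-spam emails."""
    spam_words, ham_words = Counter(), Counter()
    for text, label in emails:
        target = spam_words if label == "spam" else ham_words
        target.update(text.lower().split())
    return spam_words, ham_words

def classify(text, spam_words, ham_words):
    """Label text 'spam' if its words were seen more often in spam examples."""
    words = text.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return "spam" if spam_score > ham_score else "not spam"

# Tiny invented training set of labelled emails.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("project status report attached", "not spam"),
]
spam_w, ham_w = train(training)
print(classify("free prize money", spam_w, ham_w))       # spam
print(classify("monday meeting report", spam_w, ham_w))  # not spam
```

Note that nothing in the code mentions "prize" or "meeting" explicitly; the classifier's decisions come entirely from the labelled examples.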
Categories of AI
Weak (narrow) AI: Designed and trained for a single, specific task; it cannot perform tasks outside its specialty. Examples include game-playing AI (like chess programs), recommendation systems (e.g., Netflix suggestions), and virtual assistants that perform specific commands (setting alarms, answering factual questions).
Strong AI / Artificial General Intelligence (AGI): This aims to create AI that can understand, learn, and apply intelligence to any intellectual task that a human can, across various domains. AGI would possess human-like cognitive functions, including reasoning, problem-solving, and abstract thinking. It is currently theoretical.
Super-intelligence: A hypothetical stage where AI would surpass human capability in every cognitive domain, including creativity, general knowledge, and problem-solving. This level of AI is currently beyond our technological reach.
Advantages:
Greater efficiency: AI systems can automate repetitive, tedious, or dangerous tasks, allowing humans to focus on more complex or creative work. They can operate 24/7 without fatigue.
Accuracy: AI can process vast amounts of data with high precision, reducing human error in analysis and decision-making, particularly in fields like data analysis or quality control.
Scalability: AI systems can be deployed to handle large volumes of work simultaneously, processing more data and performing more tasks than humans, leading to increased productivity and output.
New Opportunities: AI drives innovation, enabling new products, services, and entire industries (e.g., autonomous vehicles, personalized medicine).
Disadvantages:
Possible job displacement: Automation by AI might lead to job losses in sectors involving routine, repetitive tasks, requiring a shift in the workforce towards new roles.
Biased decisions: AI systems learn from data; if the training data contains human biases (e.g., historical racial or gender biases), the AI can perpetuate or even amplify them, leading to unfair or discriminatory outcomes.
Ethical concerns: Issues arise regarding privacy (how AI collects and uses personal data), accountability (who is responsible when an AI makes a mistake or causes harm?), and the potential for misuse (e.g., autonomous weapons).
High costs: The development, deployment, and maintenance of advanced AI systems can be very expensive.
Lack of human-like empathy/intuition: AI cannot fully understand or replicate complex human emotions, social nuances, or intuitive judgments, which are crucial in many human interactions and decision-making processes.
Core characteristics
Collection of extensive data plus rule-based processing: AI systems require large datasets to learn from and often employ predefined rules or sophisticated algorithms to process this information and derive insights.
Ability to reason logically: AI can follow a sequence of logical steps and rules to draw conclusions, make deductions, or solve problems, particularly evident in expert systems.
Capacity to learn and adapt by revising its rules and parameters with experience: Through machine learning techniques, AI systems can automatically modify their internal parameters, improve their understanding, and refine their decision-making over time as they are exposed to new data and experiences.
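This "learning from experience" can be illustrated with one of the simplest learning algorithms, a perceptron, which nudges its internal weights whenever a prediction is wrong. The data below (the logical AND function) is chosen purely for illustration.

```python
def predict(weights, bias, x):
    """Output 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train_perceptron(samples, lr=1.0, epochs=20):
    """Adjust weights whenever the prediction disagrees with the label."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            error = label - predict(weights, bias, x)  # 0 when correct
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

The program is never told the AND rule; the correct behaviour emerges from repeated small corrections to its parameters.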
Components of AI Systems
Expert systems
Knowledge base: This is the repository of domain-specific facts, rules, and heuristics (rule-of-thumb knowledge) gathered from human experts. For example, in a medical diagnosis expert system, it would contain information about symptoms, diseases, and treatments.
Rule base: A set of IF-THEN rules that represent the logical relationships and decision-making processes used by the human expert. For instance, "IF patient has a fever AND cough THEN consider flu."
Inference engine: This is the processing component of the expert system. It applies the rules from the rule base to the facts in the knowledge base (and any new input from the user) to draw conclusions, provide explanations, or make recommendations. It uses methods like forward chaining (data-driven) or backward chaining (goal-driven).
User interface: The component that allows the user to interact with the system, input queries or data, and receive the system's conclusions and explanations in a user-friendly format.
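How the inference engine applies IF-THEN rules to the known facts can be sketched as a toy forward-chaining loop. The medical rules below are invented for illustration and are far simpler than a real diagnostic rule base.

```python
# Each rule: (set of IF-conditions, THEN-conclusion). Hypothetical examples.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "body_aches"}, "recommend_flu_test"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new fact is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if every IF-condition is a known fact
            # and the conclusion is new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"fever", "cough", "body_aches"}, rules)
print(result)  # includes 'possible_flu' and 'recommend_flu_test'
```

This is forward chaining (data-driven): the engine starts from the facts and derives conclusions. Backward chaining would instead start from a goal such as "recommend_flu_test" and work backwards to check whether the supporting facts hold.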
Machine-learning systems
Automatically modify internal processes or data: ML algorithms are designed to adjust their internal models or parameters without human intervention in the adjustment process itself. This allows them to improve their performance as they encounter more data.
Algorithms analyse data to find patterns and improve over time: Machine learning algorithms employ statistical and mathematical methods to identify correlations, classify data points, or detect trends within large datasets. The more data they process, the more accurate and robust their predictions or classifications become.
Training modes:
Supervised learning: This mode uses "labelled data," where each input data point is paired with a corresponding "correct" output label. The algorithm learns by finding patterns that map inputs to their known outputs. For example, training a system to recognize different animals using images already tagged as "cat," "dog," or "bird."
Unsupervised learning: This mode operates with "unlabelled data," meaning the algorithm works with raw data that has no predefined output categories. The algorithm explores the data itself to discover inherent structures, patterns, or groupings within it, such as clustering similar customers based on purchasing behavior without prior knowledge of customer segments.
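The two training modes can be contrasted on tiny invented 1-D datasets: nearest-neighbour prediction for supervised learning, and 2-means clustering for unsupervised learning. This is a sketch under those simplifying choices, not a library implementation.

```python
# Supervised: labelled data -> learn a mapping from input to known label.
# Here, 1-nearest-neighbour: predict the label of the closest training point.
labelled = [(1.0, "cat"), (1.2, "cat"), (5.0, "dog"), (5.3, "dog")]

def nearest_label(x):
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

print(nearest_label(1.1))  # cat
print(nearest_label(5.1))  # dog

# Unsupervised: unlabelled data -> discover groupings on its own.
# Here, 2-means clustering: alternately assign each point to its nearest
# centre, then move each centre to the mean of its assigned points.
points = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
centres = [points[0], points[3]]  # initial guesses
for _ in range(10):
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centres[i]))
        clusters[nearest].append(p)
    centres = [sum(c) / len(c) for c in clusters]

print(sorted(clusters[0]), sorted(clusters[1]))  # [0.9, 1.0, 1.2] [4.8, 5.0, 5.3]
```

The supervised half needs the "cat"/"dog" labels in advance; the unsupervised half receives only raw numbers yet still recovers the two groups, mirroring the customer-segmentation example above.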