CS P3 (General Vocab)

General Chatbot Concepts

  • Chatbot: A software application that interacts with users through natural language.

  • Latency: The delay between a user's query and the chatbot’s response.

  • Natural Language Processing (NLP): A field of AI that enables machines to understand and respond to human language.

  • Natural Language Understanding (NLU): A component of NLP focused on interpreting user inputs.

  • Critical Path: The longest chain of dependent models that must run in sequence to answer a query; it sets the minimum possible response time (see the latency sketch after this list).

  • Dataset: A collection of data used to train and evaluate machine learning models.
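
A minimal Python sketch of the latency and critical-path ideas above. The understand and generate_response functions are hypothetical stubs (the sleep calls stand in for real model inference); the point is that in a sequential pipeline, end-to-end latency is the sum of every step on the critical path.

```python
import time

# Hypothetical stand-ins for the models on a chatbot's critical path;
# a real system would call NLU and response-generation models here.
def understand(query):
    time.sleep(0.02)              # simulate ~20 ms of NLU inference
    return {"intent": "greeting", "text": query}

def generate_response(parsed):
    time.sleep(0.05)              # simulate ~50 ms of generation inference
    return "Hello! How can I help you?"

def handle_query(query):
    # Sequential pipeline: latency is the sum of every step on the critical path.
    parsed = understand(query)
    return generate_response(parsed)

start = time.perf_counter()
reply = handle_query("hi there")
latency_ms = (time.perf_counter() - start) * 1000
print(f"reply={reply!r}  latency={latency_ms:.1f} ms")   # roughly 70 ms for this toy pipeline
```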

Latency Reduction

  • Complex NLP Models: Large AI models that can slow response time due to high computational complexity.

  • High Query Volume: When a chatbot receives too many simultaneous requests, increasing latency.

  • Dependencies Among ML Models: When models in the pipeline rely on each other's outputs, one slow model delays the entire chatbot response; independent models can instead run concurrently (see the sketch after this list).

  • GPUs (Graphics Processing Units): Specialized hardware that speeds up complex computations.

  • TPUs (Tensor Processing Units): Hardware optimized for deep learning tasks, improving response speed.
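
A hedged sketch of the dependency point above: when two models do not need each other's output, they can run concurrently, so only the slower one contributes to latency. intent_model and sentiment_model are hypothetical stubs, with sleeps simulating inference time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent models: neither needs the other's output,
# so they do not have to run one after the other.
def intent_model(query):
    time.sleep(0.04)              # simulate ~40 ms of inference
    return "intent: question"

def sentiment_model(query):
    time.sleep(0.03)              # simulate ~30 ms of inference
    return "sentiment: neutral"

query = "why is my order late?"

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Submitting both calls lets them overlap, so the end-to-end time is
    # close to max(40 ms, 30 ms) rather than 40 ms + 30 ms.
    intent_future = pool.submit(intent_model, query)
    sentiment_future = pool.submit(sentiment_model, query)
    intent, sentiment = intent_future.result(), sentiment_future.result()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{intent}, {sentiment}  ({elapsed_ms:.0f} ms)")
```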

Linguistic Nuances

  • Linguistic Nuances: Subtle differences and complexities in language that affect chatbot responses.

  • Emotion and Tone Detection: Identifying the emotional context of user messages.

  • Contextual Understanding: Taking past interactions into account for coherent chatbot responses.

  • Ambiguity Handling: The ability to manage statements with multiple possible meanings.

  • Lexical Analysis: Breaking text into individual words and sentences (tokenization); the first stage of the pipeline sketched after this list.

  • Syntactic Analysis (Parsing): Analyzing the grammatical structure of a sentence.

  • Semantic Analysis: Understanding the meaning of words and sentences.

  • Discourse Integration: Integrating a sentence into the broader conversation.

  • Pragmatic Analysis: Considering social, legal, and cultural context for accurate responses.
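
The lexical, syntactic, and semantic stages above can be seen in a few lines of spaCy. This is a sketch under the assumption that spaCy and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm).

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Book me a flight to Paris tomorrow.")

# Lexical analysis: break the text into individual tokens.
print([token.text for token in doc])

# Syntactic analysis (parsing): part-of-speech tags and dependency relations.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# A rough semantic signal: named entities the model recognises in the sentence.
print([(ent.text, ent.label_) for ent in doc.ents])
```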

Chatbot Architecture

  • Chatbot Architecture: The structure and components that enable a chatbot to process and generate responses.

  • Recurrent Neural Networks (RNNs): A type of neural network designed to handle sequential data.

  • Long Short-Term Memory (LSTM) Networks: A type of RNN that overcomes the vanishing gradient problem.

  • Transformer Neural Networks: A modern architecture that uses self-attention for parallel processing.

  • Self-Attention Mechanism: A method in transformers that lets each word weigh its relevance to every other word in the sequence (see the sketch after this list).
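
A minimal NumPy sketch of scaled dot-product self-attention, the core operation named above. The toy sizes and random projection matrices are illustration-only assumptions; real transformers learn Wq, Wk, Wv during training and stack many attention heads and layers.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)      # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of word vectors."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv             # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # how strongly each word attends to every other word
    weights = softmax(scores, axis=-1)           # attention weights sum to 1 per word
    return weights @ V                           # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                          # toy sizes: 4 "words", 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one contextualised vector per word
```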

Backpropagation & Learning

  • Backpropagation: A training algorithm for neural networks where errors are propagated backward to update weights (see the worked example after this list).

  • Forward Pass: The phase where input data moves through the network to generate a prediction.

  • Loss Function: A measure of the difference between the chatbot’s predicted response and the actual correct response.

  • Gradient Descent: An optimization algorithm that adjusts neural network weights to minimize the loss function.

  • Vanishing Gradient Problem: A challenge in deep learning where gradients become too small to update earlier layers in deep networks.
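
A tiny worked example tying these terms together: a single-weight model is trained with a forward pass, a mean-squared-error loss, a hand-derived gradient (backpropagation via the chain rule), and a gradient-descent update. The data, learning rate, and step count are arbitrary illustration choices.

```python
import numpy as np

# Toy data: learn y = 2x with a single weight w (no bias), using mean squared error.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0                 # initial weight
lr = 0.05               # learning rate for gradient descent

for step in range(50):
    y_pred = w * x                          # forward pass: prediction
    loss = np.mean((y_pred - y) ** 2)       # loss function: mean squared error
    grad = np.mean(2 * (y_pred - y) * x)    # backpropagation: dLoss/dw via the chain rule
    w -= lr * grad                          # gradient descent: step against the gradient

print(f"learned w = {w:.3f}, final loss = {loss:.6f}")   # w approaches 2.0
```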

Datasets & Biases

  • Dataset Diversity: Inclusion of various topics, languages, and user intents for better generalization.

  • Bias in AI: Systematic errors in data or algorithms that lead to unfair outcomes.

  • Synthetic Data: Artificially generated data to supplement real-world data.

  • Data Augmentation: Methods like paraphrasing or synonym replacement that expand training datasets (see the sketch after this list).

  • Historical Bias: Bias introduced by outdated or historically skewed training data.

  • Selection Bias: Errors caused by non-randomly chosen training data.
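
A toy sketch of data augmentation by synonym replacement. The synonym table is hand-written for illustration; a real pipeline might draw on a thesaurus, word embeddings, or a paraphrasing model.

```python
import random

random.seed(0)  # deterministic choices for the example

# Toy synonym table; real pipelines would use a much larger resource.
SYNONYMS = {
    "buy": ["purchase", "order"],
    "cheap": ["inexpensive", "affordable"],
    "fast": ["quick", "speedy"],
}

def augment(sentence):
    """Create a new training example by swapping known words for random synonyms."""
    words = sentence.split()
    return " ".join(random.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words)

original = "i want to buy a cheap and fast laptop"
print(augment(original))   # e.g. "i want to purchase a inexpensive and quick laptop"
```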

Processing Power & Optimization

  • Processing Power: The computational capacity of a system to perform tasks efficiently.

  • Central Processing Unit (CPU): The main processor responsible for executing chatbot-related tasks.

  • Graphics Processing Unit (GPU): Hardware optimized for parallel computation, crucial for NLP tasks (see the sketch after this list).

  • Tensor Processing Unit (TPU): Specialized AI hardware designed to accelerate deep learning tasks.

  • Cloud Computing: Using remote servers to store and process data.

  • Distributed Computing: Parallel processing across multiple machines to handle high query loads.

  • Parallel Processing: Splitting tasks into smaller parts to process them simultaneously.
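
A short sketch of why GPUs matter for this kind of workload, assuming PyTorch is installed: the same matrix multiplication is dispatched to a GPU when one is available and falls back to the CPU otherwise.

```python
# Requires: pip install torch  (runs on CPU if no GPU is available)
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"running on: {device}")

# A large matrix multiplication, the kind of highly parallel workload GPUs excel at.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

start = time.perf_counter()
c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()            # wait for the GPU kernel to finish before timing
print(f"matmul took {(time.perf_counter() - start) * 1000:.1f} ms on {device}")
```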

Ethical Considerations

  • Data Privacy: Ensuring user data remains confidential and secure.

  • Fairness in AI: Ensuring chatbots provide equitable service to all users.

  • Accountability in AI: Determining who is responsible for chatbot actions and decisions.

  • Transparency in AI: Providing clarity on how chatbots make decisions.

  • Misinformation in AI: The spread of false or misleading information by a chatbot, which must be detected and prevented.

  • Explainable AI: Making AI decision-making processes understandable for users.
