Data-Driven Decisions
Relying on insights from data rather than intuition or guesswork.
Performance Improvement
Enhancing efficiency, productivity, and outcomes using measurable evidence.
Predictive Power
Forecasting future trends or behaviors (e.g., customer churn, sales, demand).
Competitive Differentiation
Gaining an edge through smarter operations, marketing, customer targeting, or innovation.
Descriptive analytics
What happened (reports, stats, visualizations).
Diagnostic analytics
Why it happened (causal inference vs. correlation).
Predictive analytics
What will happen (forecast models).
Prescriptive analytics
What should we do (e.g., dynamic pricing recommendations, optimal supply chain routes).
Customer Segmentation
Grouping customers based on shared characteristics to tailor marketing or product strategies.
Personalization
Delivering individualized experiences, often powered by AI recommendations.
Demand Forecasting
Predicting future customer demand using historical data and AI models.
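A minimal sketch of what such a forecast can look like, assuming scikit-learn and an invented monthly demand series; the lag features and linear model are illustrative choices, not a prescribed method.

```python
# Hypothetical example: forecast next month's demand from lagged values.
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up monthly demand history (units sold).
demand = np.array([120, 130, 128, 150, 165, 170, 180, 175, 190, 210, 220, 235])

# Build lag features: predict month t from months t-3, t-2, t-1.
X = np.column_stack([demand[i:i + len(demand) - 3] for i in range(3)])
y = demand[3:]

model = LinearRegression().fit(X, y)

# Forecast the next month from the three most recent observations.
next_month = model.predict(demand[-3:].reshape(1, -1))
print(f"Forecast for next month: {next_month[0]:.0f} units")
```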
Algorithmic Trading
Automated financial trading using pre-defined AI rules and models.
Fraud Detection
Using data and machine learning to identify unusual patterns that may indicate fraud.
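A minimal sketch of this idea using an Isolation Forest from scikit-learn on invented transaction features; the features, values, and contamination rate are purely illustrative.

```python
# Hypothetical example: flag anomalous transactions with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Made-up features per transaction: [amount, hour of day].
normal = np.column_stack([rng.normal(50, 15, 500), rng.normal(14, 3, 500)])
suspect = np.array([[4800.0, 3.0], [3900.0, 4.0]])  # unusually large, late-night
transactions = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
labels = model.predict(transactions)  # -1 = anomaly, 1 = normal

print("Flagged rows:", np.where(labels == -1)[0])
```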
Strong AI
Refers to a computer system that can simulate human reasoning and behavior, aiming to match human cognitive abilities.
Weak AI
Refers to computer systems that are designed to perform specific tasks traditionally done by humans, without replicating general reasoning.
Competitive Advantage
A condition that puts a company in a favorable position.
Porter's Five Forces
A framework for analyzing competitive environments: rivalry, threat of new entrants, threat of substitutes, bargaining power of buyers, and bargaining power of suppliers.
Competitive Rivalry
Assesses the intensity of competition among existing firms in the industry.
Threat of New Entrants
Looks at how easy or difficult it is for new competitors to enter the market.
Bargaining Power of Suppliers
Evaluates how much power suppliers have to drive up prices or reduce the quality of goods/services.
Bargaining Power of Buyers
Measures the ability of customers to influence pricing and terms.
Threat of Substitutes
Considers the availability of alternative products or services that can perform the same function.
Network Effects
Occur when the value of a product, service, or platform increases as more people use it.
Direct Network Effects
Value increases as more users join the same side of the platform.
Indirect Network Effects
Value increases as participation grows on the other side of the platform.
Learning Effects
Value grows as data accumulates and systems improve.
Complementor Contributions
External firms or individuals add value by creating content, apps, or services that enhance the platform.
Factors Affecting Appropriability
Control of key assets - data, algorithms, user base; Switching and multihoming costs - whether users or suppliers can easily move elsewhere; Regulation and bargaining power - governments can limit how much value platforms extract.
Multihoming
Refers to the practice of users or firms participating in more than one platform at the same time.
Example of Multihoming
A consumer might use both Uber and Lyft to compare prices; A content creator might post videos on both YouTube and TikTok.
Disintermediation
When platforms cut out middlemen, creating direct producer-consumer connections and often reshaping entire industries.
Example of Disintermediation
Travel booking: online platforms like Expedia or Airbnb; Music: streaming services like Spotify reduce the need for record stores.
Network Bridging
Refers to when a platform connects two or more otherwise separate networks, allowing value to flow between them.
Examples of Network Bridging
LinkedIn, Amazon Marketplace, Apple's App Store.
Rethinking The Firm
Firms must shift from human-centric to algorithm-centric workflows, which demands reimagining roles, processes, and accountability.
AI Readiness
Refers to an organization's ability to successfully adopt, implement, and scale artificial intelligence technologies to enhance operations and value creation.
Digital Infrastructure
Having the cloud platforms, computing power, and integration tools to support AI systems.
Data Maturity
Accessible, high-quality, well-organized data pipelines that AI systems can learn from.
Talent & Culture
Teams with AI/ML skills and a culture that supports experimentation, agility, and data-driven decision-making.
Leadership Commitment
Executives who understand AI's strategic importance and invest accordingly.
Experimentation & Learning
A mindset and capability to test, measure, and refine AI applications over time.
AI Readiness and the 350 Firm Study
A study of 350 firms that assessed readiness based on digital infrastructure, data integration, analytics use, and AI deployment, and demonstrated a strong positive correlation between higher AI maturity and superior financial performance.
AI Maturity Index
An index built from about 40 business processes that tracks a firm's progression from siloed data to integrated AI factories.
Performance Metrics
Leaders in AI maturity significantly outperformed laggards in metrics like gross margin, net income, and earnings before taxes (e.g., top firms had 55% gross margin vs. 37% for laggards).
Digital Infrastructure (AI)
Scalable, cloud-based systems for real-time data + AI deployment.
Data Accessibility & Quality
Centralized, governed, secure data for AI use.
Talent & Leadership (AI)
Cross-functional teams aligned to drive transformation.
Experimentation (AI)
Agile testing, modular design, culture of learning + adaptation.
Traditional Operating Model
Optimized for efficiency in production and coordination; key features include physical supply chains and human decision-making.
AI-Driven Operating Model
Core = AI factory: data → algorithms → learning → action; Operations embedded in digital platforms.
Growth in AI-Driven Model
Growth comes from user interaction generating data and automated decisions at scale.
Scale Economies
Cost advantages that companies gain as they increase production, often enhanced by AI automation.
Scope Economies
Efficiencies gained from variety (offering multiple products or services), where AI helps leverage shared data and infrastructure.
Example of Scale Without Mass
Ant Financial → handles millions of loans without adding staff.
Example of Scope Without Complexity
Amazon uses AI for retail, AWS, logistics, streaming.
Continuous Learning Model
Run frequent A/B tests on products, pricing, or interfaces.
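A minimal sketch of how one such test might be evaluated, using a chi-square test from SciPy on invented conversion counts for two interface variants; the numbers and the 0.05 threshold are illustrative.

```python
# Hypothetical example: compare conversion rates of two interface variants.
import numpy as np
from scipy.stats import chi2_contingency

# Invented results: [converted, did not convert] per variant.
variant_a = [480, 9520]   # 4.8% conversion
variant_b = [600, 9400]   # 6.0% conversion

chi2, p_value, _, _ = chi2_contingency(np.array([variant_a, variant_b]))
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant; consider shipping variant B.")
else:
    print("No significant difference detected; keep testing.")
```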
Positive Feedback Loops
More users → more data → better service → more users.
Agility as a Competitive Advantage
Firms can pivot quickly as markets, customer preferences, and technologies change.
Data-Informed Strategy
Strategic choices are validated with real-world results, not assumptions.
Scalable Learning Loops
Experimentation feeds back into product design, operations, and business models.
Reduced Risk of Large Failures
Small, fast experiments minimize costly mistakes while accelerating innovation.
Cultural Shift
Leaders and teams adopt a mindset where 'failing fast' is acceptable if it creates learning.
Removing the Human Bottleneck
AI removes bottlenecks caused by human limits, enabling speed, scale, and consistency.
Tacit use of AI (Incremental, Operational)
Using AI for specific, short-term goals, improving operations rather than shaping overall strategy.
AI as a Tool
Value is immediate but limited; it doesn't fundamentally change the business model.
Cross-Modal Embeddings
Representations that link semantic meaning across modalities (e.g., linking an image to a descriptive sentence).
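A hedged sketch of this idea using the openly available CLIP model via Hugging Face transformers: an image and candidate captions are embedded in a shared space, and closer embeddings score higher. The model name and the placeholder image are illustrative, and running this downloads pretrained weights.

```python
# Sketch: score how well captions match an image in a shared embedding space.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="red")  # placeholder image
captions = ["a red square", "a photo of a dog", "a city skyline at night"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher scores mean the caption embedding is closer to the image embedding.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.2f}")
```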
Modality Encoder
The component in MM-LLMs responsible for converting input from a specific data modality (e.g., audio, video) into structured features the model can understand.
Input Projector
The mechanism that aligns encoded features from non-text modalities with the LLM's native text space, allowing multimodal integration for processing.
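A minimal PyTorch sketch of an input projector as a linear layer mapping hypothetical vision-encoder features into the LLM's hidden dimension; the dimensions are made up, and real systems may use MLPs or cross-attention resamplers instead.

```python
# Sketch: project vision-encoder features into the LLM's token-embedding space.
import torch
import torch.nn as nn

vision_dim = 1024   # hypothetical output size of the modality encoder
llm_dim = 4096      # hypothetical hidden size of the LLM backbone

projector = nn.Linear(vision_dim, llm_dim)

image_features = torch.randn(1, 256, vision_dim)   # 256 image patch features
projected = projector(image_features)              # now shaped like text embeddings

print(projected.shape)  # torch.Size([1, 256, 4096])
```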
LLM Backbone
The central neural architecture responsible for understanding, inferring, and generating outputs based on input representations across different modalities.
Modality Generator
The output component that produces multimodal content such as images, audio, or video using tools like Stable Diffusion or AudioLDM, guided by the LLM's decisions.
LLM Hallucinations
Situations where the model generates plausible but false or fabricated content, often due to gaps in training data or the probabilistic nature of text prediction.
LLM Architecture Components
The five key parts of MM-LLMs: the modality encoder, input projector, LLM backbone, output projector, and modality generator—each contributing to multimodal understanding and generation.
Generative AI
A type of AI that creates new content (text, audio, images) based on patterns learned from training data.
Predictive AI
AI that forecasts outcomes from historical data.
Artificial Intelligence
Broad term for the ability of a machine to exhibit human-like abilities such as reasoning, learning, and perception.
Machine Learning
The set of algorithms that enable machines to improve at tasks by learning from data.
Deep Learning
Type of ML based on 'deep' neural networks made of multiple layers of processing.
Language Model (LM)
A core branch of GenAI focused on predicting and generating text based on patterns in language data.
Large Language Model (LLM)
Scaled-up LMs trained on massive datasets with billions of parameters, enabling advanced capabilities in reasoning, summarization, translation, and dialogue.
Multimodal Large Language Model (MLLM)
Refers to a special kind of LLM that can work with more than just text; it can also process and produce images, audio, and video.
Game Theory
The adversarial setup behind GANs: two players, the generator and the discriminator, compete. The generator tries to create realistic data; the discriminator tries to distinguish real data from generated (fake) data.
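A compact PyTorch sketch of that adversarial game on toy 1-D data (a Gaussian around 4.0); the architecture and hyperparameters are illustrative only.

```python
# Sketch: generator vs. discriminator on toy 1-D data.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0    # real samples
    fake = generator(torch.randn(64, 8))      # generated samples

    # Discriminator: label real as 1, fake as 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator output 1 for fake samples.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```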
Data-Driven Workflow
The process starts with diverse datasets (text, images, sounds). Training involves iterative learning of patterns from this data. Fine-tuning further adapts models to specific tasks or domains.
Real-World application path
After training and fine-tuning, the model is used for inference, generating outputs from new inputs. These outputs can power apps, APIs, and digital platforms.
1960s Origins
The earliest chatbots were rule-based systems using predefined keyword responses from expert knowledge bases (e.g., ELIZA). Not scalable or flexible—responses were rigid and failed in open-ended or dynamic conversations.
Rise of Statistical AI (1990s)
Introduced machine learning for pattern recognition from labeled text. Enabled more adaptive and context-aware text classification.
Neural Networks & NLP Breakthroughs (2010s)
Deep learning and Recurrent Neural Networks (RNNs) enhanced language understanding. Improved contextual awareness in sentence-level processing.
Tokenization
The process of breaking text into tokens (subwords or characters), which are then used as inputs for LLMs to understand and generate language.
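A small sketch using the Hugging Face GPT-2 tokenizer (the vocabulary is downloaded on first run); the sentence is arbitrary.

```python
# Sketch: split a sentence into subword tokens and map them to integer IDs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization breaks text into subwords."
tokens = tokenizer.tokenize(text)
ids = tokenizer.encode(text)

print(tokens)  # the subword pieces
print(ids)     # the integer IDs the LLM actually consumes
```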
Attention Mechanisms
Allow models to focus on the most relevant parts of an input when generating outputs; critical for LLMs and multimodal systems.
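A minimal NumPy sketch of scaled dot-product attention, the building block behind these mechanisms; the shapes are illustrative.

```python
# Sketch: scaled dot-product attention over a tiny sequence.
import numpy as np

def attention(Q, K, V):
    # Similarity of each query with every key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the values: the model "focuses"
    # most on the positions with the highest weights.
    return weights @ V, weights

d = 4
Q = np.random.randn(3, d)   # 3 query positions
K = np.random.randn(5, d)   # 5 key positions
V = np.random.randn(5, d)
out, w = attention(Q, K, V)
print(out.shape, w.shape)   # (3, 4) (3, 5)
```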
GPU
A specialized processor originally designed to accelerate the rendering of images, animations, and video, and now widely used to accelerate the training and inference of AI models.
GenOS (Open Source Generative AI)
A comprehensive tracker that ranks and evaluates open-source generative AI projects across various modalities and applications.
Connecting GPU (engine) to GenOS (driver)
A Generative AI Operating System (GenOS) builds on GPU-enabled model power.
Data Collection
Training begins with collecting massive, diverse, and high-quality datasets from sources like web text, books, code, and forums to ensure the model learns varied language patterns.
Pretraining
The model undergoes unsupervised learning using objectives like next-word prediction (causal language modeling) or masked-word prediction (as in BERT), enabling it to learn general language understanding.
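A toy PyTorch sketch of the next-word-prediction objective; the vocabulary and "model" are deliberately tiny stand-ins (a real LLM would add transformer layers between embedding and output).

```python
# Sketch: next-token prediction loss on a toy sequence.
import torch
import torch.nn as nn

vocab_size, hidden = 100, 32
embed = nn.Embedding(vocab_size, hidden)
lm_head = nn.Linear(hidden, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))   # one toy sequence of 16 token IDs

hidden_states = embed(tokens)                    # transformer layers would go here
logits = lm_head(hidden_states)

# Predict token t+1 from position t: shift logits and labels by one.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print("Causal LM loss:", loss.item())
```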
Fine Tuning
Training the pretrained model on a smaller labeled dataset (supervised learning) to adapt it to a specific task or domain.
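A hedged sketch of a single supervised fine-tuning step: a labeled batch, a forward pass, a cross-entropy loss, and an optimizer update. The backbone here is a randomly initialized stand-in for a model that would normally be loaded from a pretrained checkpoint.

```python
# Sketch: one supervised fine-tuning step on a small labeled batch.
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone (in practice, loaded from a checkpoint).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
classifier = nn.Linear(64, 3)            # new task head: 3 labels

optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(classifier.parameters()), lr=2e-5
)

features = torch.randn(8, 128)           # batch of 8 labeled examples
labels = torch.randint(0, 3, (8,))       # their task labels

logits = classifier(backbone(features))
loss = nn.functional.cross_entropy(logits, labels)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Fine-tuning loss:", loss.item())
```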
Reinforcement Learning with Human Feedback (RLHF)
A training loop in which human preferences guide the model's responses, typically by training a reward model on human rankings and then optimizing the LLM against that reward.
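A minimal sketch of the reward-modeling piece of that loop: given representations of a human-preferred ("chosen") and a "rejected" response, train a reward model so the chosen one scores higher (a pairwise preference loss). The representations and dimensions are invented; the full RLHF loop would then optimize the LLM against this reward, e.g. with PPO.

```python
# Sketch: pairwise preference loss used to train a reward model from human rankings.
import torch
import torch.nn as nn

reward_model = nn.Linear(64, 1)          # maps a response representation to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Invented representations of a preferred and a rejected response.
chosen = torch.randn(16, 64)
rejected = torch.randn(16, 64)

r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)

# Bradley-Terry style loss: push the chosen reward above the rejected one.
loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Preference loss:", loss.item())
```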
RLHF vs Prompt Engineering
Prompt engineering builds on RLHF: Once the model has been aligned with RLHF, prompt engineering is how users leverage that alignment in real-world queries.
Prompt engineering
How users leverage model alignment in real-world queries after the model has been aligned with RLHF.
PreTrain
Build a model from scratch with new data.
Add non-parametric knowledge
Supplement the model with external tools, databases, or retrieval methods.
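A minimal retrieval sketch of this idea: embed a few documents and a query, pick the closest document, and prepend it to the prompt. Scikit-learn's TF-IDF stands in for a real embedding model, and the documents and query are invented.

```python
# Sketch: retrieve the most relevant snippet and prepend it to the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping to Europe typically takes 5-7 business days.",
    "Premium support is available 24/7 for enterprise customers.",
]
query = "What is your refund policy for returns?"

vectorizer = TfidfVectorizer().fit(documents + [query])
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform([query])

best = cosine_similarity(query_vector, doc_vectors).argmax()

prompt = f"Context: {documents[best]}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this augmented prompt would then be sent to the LLM
```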