These flashcards cover key concepts and foundational knowledge for the course on Applied Artificial Intelligence, focusing on data handling, statistical methods, and machine learning principles.
What is the main focus of ACCT 331: Introduction to Applied Artificial Intelligence?
Exploration of artificial intelligence concepts, foundational skills, and applications in business.
What are the types of data in AI?
Structured, unstructured, qualitative, and quantitative.
What is a data set in the context of AI?
A collection of entities (rows) described in terms of attributes (columns), organized as an n × m data matrix.
Why is data quality important in AI?
Data quality impacts system performance and the reliability of machine learning models.
What does EDA stand for, and what is its purpose?
Exploratory Data Analysis; it's used to analyze data sets to summarize their main characteristics.
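A minimal sketch of the kind of summary EDA produces, using only the Python standard library; the attribute name and values are illustrative, not course data:

```python
import statistics

def summarize(values):
    """Basic EDA summary for one numeric attribute.

    Missing values (None) are counted and excluded from the statistics.
    """
    missing = sum(1 for v in values if v is None)
    clean = [v for v in values if v is not None]
    return {
        "count": len(clean),
        "missing": missing,
        "mean": statistics.mean(clean),
        "stdev": statistics.stdev(clean),
        "min": min(clean),
        "max": max(clean),
    }

# Hypothetical attribute: customer ages, one value missing.
print(summarize([23, 35, None, 41, 29, 35]))
```

In practice a library such as pandas (`DataFrame.describe()`) computes these summaries for every column at once.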
What role does probability play in AI?
Probability helps in handling uncertainty, making predictions, and modeling randomness in data.
What is linear regression used for in AI?
It models the relationship between a dependent variable and one or more independent variables for predictions.
What is the purpose of the Sigmoid function in logistic regression?
It transforms a linear prediction into a probability between 0 and 1.
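The transformation can be sketched in a few lines; the coefficients below are made up for illustration:

```python
import math

def sigmoid(z):
    """Map a linear prediction z = b0 + b1*x to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logistic regression coefficients (not from the course).
b0, b1 = -3.0, 0.8
x = 5.0
p = sigmoid(b0 + b1 * x)  # estimated probability of the positive class
```

Note that `sigmoid(0) == 0.5`, so the sign of the linear prediction decides which class is more likely.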
What are the components of a linear regression model?
Dependent variable, independent variables, and coefficients representing the relationship strengths.
How does conditional probability relate to AI applications?
It is used to update the likelihood of an outcome based on new evidence, crucial for decision-making in AI systems.
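One common way to perform such an update is Bayes' rule; the fraud-detection numbers below are invented for illustration:

```python
def bayes_update(prior, likelihood, false_alarm):
    """Posterior P(H | E) via Bayes' rule.

    prior       = P(H)        belief before seeing evidence E
    likelihood  = P(E | H)    chance of E if H is true
    false_alarm = P(E | ~H)   chance of E if H is false
    """
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical fraud detector: 1% of transactions are fraudulent,
# the alert fires on 95% of frauds and 5% of legitimate transactions.
posterior = bayes_update(prior=0.01, likelihood=0.95, false_alarm=0.05)
```

Even with a strong detector, the low prior keeps the posterior modest (about 0.16 here), which is why base rates matter in AI decision-making.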
What is the difference between linear regression and logistic regression?
Linear regression predicts continuous outcomes, while logistic regression predicts binary outcomes.
What technique is commonly used to minimize error in linear regression?
The least squares method, which minimizes the residual sum of squares between observed and predicted values.
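For simple linear regression the least squares solution has a closed form; here is a minimal sketch with made-up data points:

```python
def fit_least_squares(xs, ys):
    """Fit y = b0 + b1*x by minimizing the residual sum of squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    b0 = mean_y - b1 * mean_x  # intercept passes through the means
    return b0, b1

# Illustrative data (assumed): roughly y = 2x with noise.
b0, b1 = fit_least_squares([1, 2, 3, 4], [2.1, 4.2, 5.9, 8.1])
```

With more than one independent variable the same idea is solved in matrix form, typically via a library such as NumPy or scikit-learn.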
What is overfitting?
It occurs when a model learns the noise in the training data instead of the underlying pattern, so it performs well on training data but generalizes poorly to new data.
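An extreme, deliberately silly sketch of overfitting: a "model" that memorizes every training pair achieves zero training error but has learned no pattern it can apply to unseen inputs (the fallback constant here is an arbitrary assumption):

```python
def memorizing_model(train_x, train_y):
    """Overfitting taken to the limit: memorize the training set exactly."""
    table = dict(zip(train_x, train_y))
    # Perfect on training inputs, but an unseen input gets an
    # arbitrary constant -- no underlying pattern was learned.
    return lambda x: table.get(x, 0.0)

model = memorizing_model([1, 2, 3], [10.0, 20.0, 30.0])
```

Real overfitting is subtler (e.g. a high-degree polynomial chasing noise), but the failure mode is the same: low training error, poor generalization.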
What is the role of hypothesis testing in statistical analysis for AI?
It is used to assess whether the observed data provide enough evidence to reject a null hypothesis in favor of an alternative.
Why is the Central Limit Theorem important in statistics for AI?
It allows the application of normal distribution properties to sample means, facilitating inference.
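A quick simulation sketch of the theorem, with assumed parameters: even when individual draws come from a skewed exponential population, the means of repeated samples cluster around the population mean with spread shrinking like 1/√n:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# Draw many sample means from a skewed population (exponential, mean 1).
n, trials = 50, 2000
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(n))
    for _ in range(trials)
]

center = statistics.mean(sample_means)   # close to the population mean, 1.0
spread = statistics.stdev(sample_means)  # close to sigma / sqrt(n) = 1/sqrt(50)
```

This is what licenses normal-based confidence intervals and tests on sample means even for non-normal data.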
What challenges can 'messy' data pose in AI?
Messy data may contain missing values, inconsistent coding, contradictory records, and inaccuracies, all of which must be cleaned before reliable models can be built.