Artificial intelligence (AI): Technology designed to simulate or exceed aspects of human intelligence, such as reasoning, learning, and problem-solving.
Generative AI (GenAI): A subset of artificial intelligence that uses generative models to produce text, video, images, or other data forms.
AI application integration: The process of combining AI capabilities with existing software systems to enhance functionality. Examples include search engines (Google Search + Gemini), email applications, and graphics editors (generating AI images to enhance designs).
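As a rough sketch of what such an integration can look like, the snippet below wires a generative AI capability into an email application's "suggest reply" feature. The `GenAIClient` class and its `generate` method are hypothetical stand-ins for a real provider SDK, not an actual API.

```python
# Sketch of integrating a generative AI capability into an email app.
# GenAIClient and generate() are hypothetical stand-ins for a real
# provider SDK (e.g., a hosted chat-completion API).

class GenAIClient:
    """Hypothetical generative AI client (offline stub for illustration)."""
    def generate(self, prompt: str) -> str:
        # A real client would call a hosted model here; this stub just
        # echoes a canned draft so the example runs without a network.
        return f"Draft reply based on: {prompt[:60]}"

def suggest_reply(client: GenAIClient, email_body: str) -> str:
    """Ask the AI for a suggested reply to an incoming email."""
    prompt = f"Write a polite reply to this email: {email_body}"
    return client.generate(prompt)

reply = suggest_reply(GenAIClient(), "Can we move the meeting to 3pm?")
print(reply)
```

The application keeps its existing workflow (compose, send) and treats the AI output as a suggestion the user can edit, which is the typical integration pattern.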
Appropriate AI use: Ethical/useful ways to use artificial intelligence - includes processing large data sets, automation, healthcare diagnostics, translation, and proofreading.
Inappropriate AI use: Unethical/illegal ways to use AI - includes fraud/deepfake creation, plagiarism, and invasion of privacy through unauthorized data access.
AI bias: The tendency of AI systems to produce unfair or prejudiced outcomes, often due to biased training data or flawed algorithms that do not represent diverse perspectives.
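A minimal illustration of how skewed training data alone produces unfair outcomes, using a deliberately trivial "model" (a majority-class predictor; the labels and groups are invented for the example):

```python
# Bias from unrepresentative training data, shown with a trivial model
# that always predicts the most frequent label it saw during training.
from collections import Counter

def train_majority_model(labels):
    """Return a 'model' that predicts the most common training label."""
    most_common = Counter(labels).most_common(1)[0][0]
    return lambda _features: most_common

# Training data skewed heavily toward one outcome:
training_labels = ["approve"] * 95 + ["deny"] * 5
model = train_majority_model(training_labels)

# The model now outputs "approve" for every input, regardless of the
# applicant's actual features -- the skew in the data, not the
# algorithm itself, drives the outcome.
print(model({"risk": "high"}))
```

The same effect appears in real systems when one demographic dominates the training set: the model optimizes for the majority pattern and misserves everyone else.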
AI hallucinations: Instances where AI misinterprets data or generates false information that is not grounded in reality, which can spread misinformation and reduce trust in AI systems.
AI accuracy: The degree to which AI systems correctly interpret data and produce reliable outputs, influencing user confidence and the overall effectiveness of AI applications.
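Accuracy is commonly quantified as the fraction of outputs that match the ground truth. A minimal sketch (the labels here are invented for illustration):

```python
# Accuracy as a simple ratio: correct predictions / total predictions.
def accuracy(predictions, ground_truth):
    """Fraction of predictions that match the corresponding true labels."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

preds = ["cat", "dog", "cat", "cat"]
truth = ["cat", "dog", "dog", "cat"]
print(accuracy(preds, truth))  # 3 of 4 correct -> 0.75
```

Note that raw accuracy can mislead on imbalanced data (a model that always predicts the majority class scores well), which is one reason bias and accuracy are assessed together.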
Public AI: Openly available generative AI systems. Examples include ChatGPT and Google Gemini.
Private AI: Generative AI systems run internally by an organization, giving it complete control over the AI model and its data.
Data security (AI): The risk that sensitive information held in an AI engine (e.g., passwords, encryption keys) is retrieved by unauthorized parties. Private AI deployments can limit these breaches.
Data source (AI): Refers to how accessible AI training data is - private data can only be accessed by a single entity, while public data is openly accessible.
Data privacy (AI): The amount and sensitivity of data AI collects about an organization or individual - AI models may learn where you live and what your habits are.