CompTIA SecAI+ Certification Exam Objectives
CompTIA SecAI+ Certification Exam Objectives CY0-001 V1
About the Exam
The CompTIA SecAI+ certification exam validates a candidate's knowledge and skills across several critical areas related to Artificial Intelligence (AI) and cybersecurity.
Key Skills Certified:
Understanding fundamental AI concepts.
Securing AI systems through diverse technical controls.
Leveraging AI to bolster corporate security posture and automate security tasks.
Comprehension of how Governance, Risk, and Compliance (GRC) influence AI technologies globally.
Experience Equivalent: This certification is designed for professionals with an equivalent of years of IT experience, including approximately years of hands-on cybersecurity experience.
Content Examples: The provided examples serve to clarify test objectives and are not an exhaustive list of all content on the examination.
Exam Accreditation and Development
Accreditation: The CompTIA SecAI+ exam is accredited by the American National Standards Institute (ANSI) in adherence to the International Organization for Standardization (ISO) standard. This ensures regular reviews and updates to exam objectives.
Development: CompTIA exams are developed through subject matter expert workshops and industry-wide surveys to reflect the current skills and knowledge required of IT professionals.
CompTIA Authorized Materials Use Policy
CompTIA Certifications, LLC strictly prohibits the use of content from unauthorized third-party training sites (often referred to as “brain dumps”).
Candidates found using such materials will face revocation of their certifications and suspension from future testing, in line with the CompTIA Candidate Agreement.
Candidates are advised to review all CompTIA Certification Exam Policies before studying. For questions regarding unauthorized study materials, contact CompTIA at
examsecurity@comptia.org.
Exam Content Updates
The bulleted lists of examples within the exam objectives are not exhaustive; other relevant technologies, processes, or tasks may appear on the exam.
CompTIA continuously reviews and updates exam content and questions to maintain currency and security.
Updated exams, when published, will be based on existing objectives, ensuring all related exam preparation materials remain valid.
Test Details
Required Exam: SecAI+ CY0-001
Number of questions: Undisclosed, but implied to be standard CompTIA exam quantity.
Types of questions: Multiple-choice and performance-based.
Length of test: Undisclosed.
Recommended experience: years of IT experience and approximately years of hands-on cybersecurity experience.
Exam Objectives (Domains)
The examination is structured around four main domains, each weighted by percentage:
Domain 1.0: Basic AI Concepts Related to Cybersecurity - 17 ext{%}
Domain 2.0: Securing AI Systems - 40 ext{%}
Domain 3.0: AI-assisted Security - 24 ext{%}
Domain 4.0: AI Governance, Risk, and Compliance - 19 ext{%}
Total: 100 ext{%}
1.0 Basic AI Concepts Related to Cybersecurity
1.1 Compare and contrast various AI types and techniques used in cybersecurity.
Types of AI:
Generative AI
Machine learning (ML)
Statistical learning
Transformers
Deep learning
Natural language processing (NLP)
Large language models (LLMs)
Small language models (SLMs)
Generative adversarial networks (GANs)
Model training techniques:
Model validation
Supervised learning
Unsupervised learning
Reinforcement learning
Fine-tuning
Epoch: A complete pass through the entire training dataset.
Pruning: Reducing the size of a model by removing less important connections/neurons.
Quantization: Reducing the precision of numbers in a model, often to integers, to decrease memory and computational requirements.
Prompt engineering:
System prompts: Instructions given to the AI system to guide its overall behavior.
User prompts: Input provided by the end-user to the AI.
One-shot prompting: Providing a single example within the prompt to guide the AI's response.
Multi-shot prompting: Including multiple examples in the prompt for improved guidance.
Zero-shot prompting: Asking the AI to perform a task without any examples.
System roles: Predefined roles or personas assigned to the AI system.
Templates: Pre-structured formats for prompts to ensure consistency and guide output.
1.2 Explain the importance of data security in relation to AI.
Data processing:
Data cleansing: Processes to detect and correct (or remove) corrupt or inaccurate records from a dataset.
Data verification: Ensuring the accuracy and truthfulness of data.
Data lineage: Tracking the origin and journey of data from source to consumption.
Data integrity: Maintaining and assuring the accuracy and consistency of data over its entire lifecycle.
Data provenance: Recording the history of data, including who accessed, modified, and used it.
Data augmentation: Increasing the amount of data by adding slightly modified copies of already existing data or newly created synthetic data.
Data balancing: Adjusting class distributions in imbalanced datasets to prevent models from being biased towards majority classes.
Data types:
Structured data: Highly organized and formatted data that resides in fixed fields within a record or file (e.g., relational databases).
Semi-structured data: Data that does not conform to a formal data model but has some organizational properties (e.g., JSON, XML).
Unstructured data: Data that does not have a predefined format or organization (e.g., text documents, images, audio, video).
Watermarking: Embedding covert identifiable information into data (e.g., images, text) to ascertain its origin or detect unauthorized use.
Retrieval-augmented generation (RAG): An AI architecture that enhances generative models by retrieving relevant information from an external knowledge base to inform its output.
Vector storage: Databases optimized for storing and querying high-dimensional vectors (embeddings).
Embeddings: Numerical representations of text or other data, capturing semantic meaning, allowing for similarity comparisons.
1.3 Explain the importance of security throughout the life cycle of AI.
Business use case: Ensuring the AI solution aligns with corporate objectives and ethical guidelines.
Data collection: Focus on trustworthiness and authenticity of data sources to prevent biased or malicious input.
Data preparation: Securely transforming and cleaning data, often involving anonymization and privacy controls.
Model development/selection: Choosing or developing models that are robust, secure, and resilient against attacks.
Model evaluation: Rigorous testing of the model for performance, accuracy, fairness, and security vulnerabilities.
Deployment: Securely integrating the AI model into production environments.
Validation: Continuous testing and verification of the deployed model's performance and security.
Monitoring and maintenance: Ongoing observation of the model's behavior, identifying anomalies, and addressing performance degradation or security issues.
Feedback and iteration: Incorporating user feedback and performance data to improve the model and its security over time.
Human-centric AI design principles:
Human-in-the-loop: Designing systems where humans intervene to review, validate, or override AI decisions.
Human oversight: Maintaining human responsibility and control over AI systems, even in autonomous operations.
Human validation: Ensuring human experts verify the AI's logic, outputs, and ethical adherence.
2.0 Securing AI Systems
2.1 Given a scenario, use AI threat-modeling resources.
Open Web Application Security Project (OWASP) Top 10:
LLM Top 10: Specific to vulnerabilities in Large Language Models.
Machine Learning (ML) Security Top 10: Focused on general ML system vulnerabilities.
Massachusetts Institute of Technology (MIT) AI Risk Repository: A database of risks associated with AI systems.
MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS): A knowledge base of adversary tactics, techniques, and common knowledge for AI systems.
Common Vulnerabilities and Exposures (CVE) AI Working Group: A group dedicated to identifying and standardizing AI-specific vulnerabilities.
Threat-modeling frameworks: Structured approaches like STRIDE, DREAD, or PASTA applied to AI systems to identify potential threats.
2.2 Given a set of requirements, implement security controls for AI systems.
Model controls:
Model evaluation: Regular assessment of model robustness, bias, and performance under adversarial conditions.
Model guardrails: Implementing programmatic and policy-based constraints on a model's behavior and output.
Prompt templates: Structured inputs that guide the model to produce desired outputs and avoid unwanted ones.
Gateway controls: Security measures implemented at the entry/exit points of an AI system.
Prompt firewalls: Filters that analyze and sanitize user prompts before they reach the AI model, blocking malicious or unsafe inputs.
Rate limits: Restricting the number of requests an entity can make to the AI system within a specific time frame to prevent abuse or DoS attacks.
Token limits: Restricting the maximum number of tokens (words/sub-words) in an AI's input or output to manage computational resources and prevent prompt injection.
Input quotas: Limiting the amount of data or number of inputs allowed.
Data size: Restricting the total volume of data.
Quantity: Limiting the number of inputs.
Modality limits: Restricting the types of media or input formats an AI can process.
Endpoint access controls: Securing the API endpoints through which users or other systems interact with the AI.
Guardrail testing and validation: Continuously testing the effectiveness of implemented guardrails against known and emerging attack vectors.
2.3 Given a scenario, implement appropriate access controls for AI systems.
Model access: Restricting who can view, modify, or retrain the AI model.
Data access: Controlling who can access the training data, inference data, and generated data.
Agent access: Managing permissions for AI agents or automated processes that interact with the system.
Network/application programming interface (API) access: Securing the network paths and API endpoints used to communicate with the AI system.
2.4 Given a scenario, implement data security controls for AI systems.
Encryption requirements: Protecting data confidentiality.
In transit: Encrypting data as it moves across networks.
At rest: Encrypting data stored on disks or in databases.
In use: Protecting data while it is being processed in memory (e.g., homomorphic encryption, confidential computing).
Data safety: Techniques to minimize risks associated with sensitive data.
Data anonymization: Removing or encrypting personally identifiable information (PII) from datasets.
Data classification labels: Tagging data based on its sensitivity to enforce appropriate handling policies.
Data redaction: Permanently removing sensitive information from a document or dataset.
Data masking: Obscuring sensitive data with realistic but false information for testing or development purposes.
Data minimization: Collecting and processing only the absolutely necessary data required for a specific purpose.
2.5 Given a scenario, implement monitoring and auditing for AI systems.
Prompt monitoring:
Query: Analyzing user input prompts for malicious intent, sensitive information, or policy violations.
Response: Examining the AI's generated outputs for undesirable content, hallucinations, or data leakage.
Log monitoring: Collecting and analyzing system logs for security events, performance issues, and anomalies.
Log sanitization: Removing sensitive or personally identifiable information from logs before storage or analysis.
Log protection: Securing log files from unauthorized access or tampering.
Response confidence level: Tracking the AI model's certainty in its predictions or generated output to identify potential errors or weaknesses.
Rate monitoring: Tracking the frequency of requests or operations to detect Denial of Service (DoS) attacks or misuse.
AI cost monitoring: Tracking resource consumption associated with AI operations.
Prompts: Cost associated with processing user inputs.
Storage: Cost of storing datasets, models, and outputs.
Response: Cost associated with generating AI outputs.
Processing: Computational cost of model training and inference.
Auditing for quality and compliance: Regularly assessing the AI system against predefined standards.
Hallucinations: Detecting instances where the AI generates plausible but false information.
Accuracy: Verifying the correctness of the AI's predictions or outputs.
Bias and fairness: Testing for and mitigating unfair or discriminatory outcomes.
Access: Auditing who accessed the AI system, when, and for what purpose.
2.6 Given a scenario, analyze the evidence of an attack and suggest compensating controls for AI systems.
Attacks:
Prompt injection: Crafting malicious inputs to override or manipulate the AI's intended behavior.
Poisoning:
Model poisoning: Introducing malicious data during training to compromise the model's integrity or performance.
Data poisoning: Injecting corrupted or misleading data into the training dataset.
Jailbreaking: Bypassing an AI's safety filters or ethical guardrails to elicit prohibited responses.
Hallucinations: AI generating confident but incorrect or fabricated information.
Input manipulation: Tampering with input data to deceive the model.
Introducing biases: Deliberately feeding biased data into a model to produce unfair outcomes.
Circumventing AI guardrails: Finding ways around the protective measures implemented in an AI system.
Manipulating application integrations: Exploiting vulnerabilities in how an AI system connects with other applications.
Model inversion: Reconstructing sensitive training data from the model's outputs.
Model theft: Illegally acquiring or copying an AI model.
AI supply chain attacks: Targeting any stage of the AI development and deployment pipeline (e.g., data, libraries, infrastructure).
Transfer learning attacks: Exploiting models trained on one task and fine-tuned for another.
Model skewing: Causing the model to drift in its predictions over time due to manipulated data or environment.
Output integrity attacks: Tampering with the AI's output after generation but before it reaches the user.
Membership inference: Determining if a specific data record was part of the model's training dataset.
Insecure output handling: Improper storage or display of AI outputs, leading to vulnerabilities.
Model denial of service (DoS): Overwhelming an AI model with requests or complex inputs to reduce its availability or performance.
Sensitive information disclosure: AI inadvertently revealing confidential data.
Insecure plug-in design: Vulnerabilities arising from poorly designed or implemented AI plug-ins.
Excessive agency: AI agents performing actions beyond their intended scope or authority.
Overreliance: Humans excessively trusting or depending on AI outputs without critical judgment, leading to errors.
Compensating controls:
Prompt firewalls: Filtering and validating prompt inputs.
Model guardrails: Implementing behavioral constraints and safety policies.
Access controls: Limiting who can interact with the AI model and its data.
Data integrity controls: Ensuring the accuracy and consistency of data throughout its lifecycle.
Encryption: Protecting data in all states (at rest, in transit, in use).
Prompt templates: Guiding valid inputs and preventing malicious ones.
Rate limiting: Controlling the frequency of interactions to prevent abuse.
Least privilege: Granting only the minimum necessary permissions to users and systems.
3.0 AI-assisted Security
3.1 Given a scenario, use AI-enabled tools to facilitate security tasks.
Tools/applications:
Integrated development environment (IDE) plug-ins: AI tools integrated into coding environments for security analysis.
Browser plug-ins: AI-powered tools within web browsers for security-related tasks.
Command-line interface (CLI) plug-ins: AI tools accessible and operable from the command line.
Chatbots: Conversational AI agents assisting with security queries, initial incident response, or information retrieval.
Personal assistants: AI-powered digital assistants for security-related tasks, scheduling, or alerts.
Use cases:
Signature matching: Automated detection of known malware signatures.
Code quality and linting: AI-assisted analysis of code for errors, style violations, and potential vulnerabilities.
Vulnerability analysis: Identifying security weaknesses in systems and applications.
Automated penetration testing: AI-driven simulations of attacks to find exploitable vulnerabilities.
Anomaly detection: Identifying unusual patterns that may indicate a security incident.
Pattern recognition: Discovering recurring behaviors or data sequences indicative of threats.
Incident management: Automating parts of the incident response lifecycle, such as triage and escalation.
Threat modeling: Assisting in identifying potential threats and vulnerabilities in systems.
Fraud detection: Identifying fraudulent transactions or activities using behavioral analysis.
Translation: Translating security intelligence or incident reports across languages.
Summarization: Condensing large security logs or reports into concise summaries.
3.2 Explain how AI enables or enhances attack vectors.
AI-generated content (deepfake):
Impersonation: Creating realistic fake audio/video to impersonate individuals for scams or social engineering.
Misinformation: Spreading inaccurate or misleading information.
Disinformation: Deliberately spreading false information to deceive.
Adversarial networks: Using GANs or similar models to generate adversarial examples or learn optimal attack strategies.
Reconnaissance: AI automating collection and analysis of target information (e.g., open-source intelligence).
Social engineering: AI crafting highly convincing phishing emails, voice messages, or social media interactions.
Obfuscation: AI generating complex and evasive code or techniques to hide malicious activity from security tools.
Automated data correlation: AI systems quickly identifying relationships and patterns across vast datasets to find vulnerabilities or attack pathways.
Automated attack generation: AI creating sophisticated and novel attacks.
Attack vector discovery: Identifying new ways to exploit systems.
Payloads: Generating custom malware payloads.
Malware: Creating polymorphic or metamorphic malware to evade detection.
Honeypot: AI designing and managing deception environments to trap attackers or gather intelligence.
Distributed denial of service (DDoS): Orchestrating and coordinating large-scale attacks from distributed sources.
3.3 Given a scenario, use AI to automate security tasks.
Scripting tools:
Low-code: Tools allowing developers to create applications with minimal manual coding, often using visual interfaces, enabling faster security automation.
No-code: Tools allowing non-developers to create applications with no coding, empowering broader security automation.
Document synthesis and summarization: AI-powered tools to automatically generate security reports or summarize incident logs.
Incident response ticket management: AI automating the creation, categorization, assignment, and tracking of security incident tickets.
Change management:
AI-assisted approvals: AI reviewing change requests for potential security risks and recommending approvals or rejections.
Automated deployment/rollback: AI managing the automated deployment of security patches or configuration changes and enabling quick rollbacks in case of issues.
AI agents: Autonomous AI entities performing security tasks like monitoring, threat hunting, or vulnerability remediation.
Continuous integration/continuous deployment (CI/CD): Integrating AI into the software development pipeline to enhance security automation.
Code scanning: Automated analysis of source code for security vulnerabilities.
Software composition analysis (SCA): Identifying known vulnerabilities in open-source and third-party components.
Unit testing: Automating tests for individual code components.
Regression testing: Ensuring new code changes don't break existing functionalities or introduce new vulnerabilities.
Model testing: Automated testing of AI models for biases, robustness, and performance.
Automated deployment/rollback: AI managing the automated deployment of applications and ensuring secure rollbacks if problems arise.
4.0 AI Governance, Risk, and Compliance
4.1 Explain organizational governance structures that support AI.
Organizational structures:
AI Center of Excellence: A dedicated team or department focused on guiding AI strategy, best practices, and governance across an organization.
AI policies and procedures: Formal documents outlining rules, guidelines, and processes for the responsible and secure use of AI.
AI-related roles:
Data scientist: Designs and implements AI models, focusing on data analysis and model development.
AI architect: Designs the overall architecture and infrastructure for AI systems.
Machine learning engineer: Builds, deploys, and maintains ML models in production.
Platform engineer: Develops and maintains the underlying infrastructure and platforms that support AI development and deployment.
MLOps engineer: Specializes in operationalizing ML models, focusing on CI/CD, monitoring, and scaling of AI systems.
AI security architect: Designs and implements security measures for AI systems and infrastructure.
AI governance engineer: Ensures AI systems comply with ethical guidelines, regulations, and internal policies.
AI risk analyst: Identifies, assesses, and mitigates risks associated with AI deployment.
AI auditor: Evaluates AI systems for compliance, fairness, and performance against established standards.
Data engineer: Develops and maintains data pipelines and data infrastructure to ensure data availability and quality for AI applications.
4.2 Explain risks associated with AI.
Responsible AI: A framework emphasizing ethical and societal considerations in AI development and deployment.
Fairness: Ensuring AI systems do not produce discriminatory or biased outcomes.
Reliability and safety: Ensuring AI systems perform consistently and safely without unintended harm.
Transparency: Making AI decision-making processes understandable and explainable.
Privacy and security: Protecting sensitive data and securing AI systems from attacks.
Explainability: The ability to describe why an AI system made a particular decision or prediction.
Inclusiveness: Designing AI systems to be accessible and beneficial to all users, regardless of background.
Accountability: Establishing clear responsibility for the actions and impacts of AI systems.
Consistency: Ensuring AI systems behave predictably and do not deviate arbitrarily.
R* Risks:Introduction of bias: AI models reflecting and amplifying biases present in training data.
Accidental data leakage: Inadvertent exposure of sensitive data by AI systems.
Reputational loss: Damage to an organization's public image due to AI's unethical behavior or security failures.
Accuracy and performance of the model: Issues leading to incorrect decisions or failures, potentially with severe consequences.
Intellectual Property (IP)-related risks: AI systems potentially infringing on existing IP or having their own IP stolen.
Autonomous systems: Risks associated with AI systems operating independently, including unexpected behavior or loss of human control.
4.3 Summarize the impact of compliance on business use and development of AI.
European Union (EU) AI Act: Landmark regulation aiming to ensure AI systems used in the EU are safe, transparent, non-discriminatory, and environmentally friendly.
Organisation for Economic Co-operation and Development (OECD) standards: International principles and recommendations for responsible AI innovation and stewardship.
ISO AI standards: International Organization for Standardization documents providing guidance and frameworks for AI systems, including quality, security, and ethics.
National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF): A voluntary framework that helps organizations manage risks associated with AI systems.
Corporate policies: Internal guidelines dictating AI usage.
Sanctioned vs. unsanctioned: Differentiating between officially approved and unapproved AI tools/practices.
Private vs. public models: Guidelines for using internal, proprietary models versus publicly available ones.
Sensitive data governance: Policies for handling, storing, and processing sensitive data with AI.
Third-party compliance evaluations: Assessments by external auditors to ensure an organization's AI practices meet regulatory and ethical standards.
CompTIA SecAI+ Acronym List
AI: Artificial Intelligence
API: Application Programming Interface
ATLAS: Adversarial Threat Landscape for Artificial-Intelligence Systems
CI/CD: Continuous Integration/Continuous Deployment
CLI: Command-line Interface
CVE: Common Vulnerabilities and Exposures
DDoS: Distributed Denial of Service
DoS: Denial of Service
EU: European Union
GAN: Generative Adversarial Network
GPU: Graphics Processing Unit
GRC: Governance, Risk, and Compliance
IDE: Integrated Development Environment
IP: Intellectual Property
ISO: International Organization for Standardization
LAN: Local Area Network
LLM: Large Language Model
MIT: Massachusetts Institute of Technology
ML: Machine Learning
NIST: National Institute of Standards and Technology
NLP: Natural Language Processing
OECD: Organisation for Economic Co-operation and Development
OWASP: Open Web Application Security Project
RAG: Retrieval-augmented Generation
RMF: Risk Management Framework
SCA: Software Composition Analysis
SLM: Small Language Model
CompTIA SecAI+ Hardware and Software List (Sample)
Equipment:
Laptops
Cloud VMs (Virtual Machines)
Graphics processing units (GPUs)
NVidia Jetson Nano Orin
Mobile devices
Sandbox environment
Local area network (LAN)
Software:
Virtual containers
Large data sets
Test data sets
Python environment
R environment
IDE (Integrated Development Environment)
Jupyter environment
Chatbots
LLMs (Large Language Models)
Open-source tools:
GitHub
Ollama
Cloud-based environment
Cloud-based AI studios
Vector database
NoSQL Database
Neo4j Graph Database