You are training a deep learning model to recognize faces. What type of neural network is best suited for this task?
Recurrent Neural Network (RNN)
Feedforward Neural Network (FNN)
Autoencoder
Convolutional Neural Network (CNN)
Face recognition relies on extracting spatial features from an image, and CNNs are well suited for this purpose.
Convolutional Neural Network (CNN)
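The core CNN operation can be sketched as sliding a small filter over an image to extract local features. This is a minimal illustration in NumPy with a hand-picked vertical-edge filter (real CNNs learn their filters during training):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image (valid padding, stride 1).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 4x4 "image" with a vertical edge between columns 1 and 2.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# Sobel-like vertical edge filter.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map)  # strong response wherever the edge falls in the window
```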
Which OCI Vision feature is useful for identifying whether a document is an invoice, receipt, or resume, based on its appearance and keywords?
OCR (Optical Character Recognition)
Document classification
Image classification
Table extraction
Document classification
How does Select AI enhance the interaction with Oracle Autonomous Database?
By providing advanced data visualization tools
By improving network security protocols
By enabling natural language prompts instead of SQL code
By eliminating the need for database administrators
Select AI lets users query Oracle Autonomous Database with natural language prompts instead of writing SQL code. This enables faster data retrieval and makes querying more accessible to non-technical users.
By enabling natural language prompts instead of SQL code
Which of these is NOT a common application of unsupervised machine learning?
Spam detection
Outlier detection
Targeted marketing campaigns
Customer segmentation
Spam detection is a supervised machine learning problem, not an unsupervised learning example.
Spam detection
You need a suitable GPU for small or medium scale AI training and inference workloads.
Which NVIDIA GPU are you most likely to choose?
GB200
H200
B200
A100
The NVIDIA A100 GPU is widely used for small to medium-scale AI training and inference workloads, offering high-performance compute capabilities, tensor cores, and scalability. While H200 is a newer high-memory variant, the A100 remains a strong choice for efficient AI workloads.
A100
You need a suitable GPU for massive-scale (HPC) AI training and inference workloads. Which NVIDIA GPU are you most likely to choose?
A10
GB200
H100
A100
The GB200 is a next-generation Grace Blackwell superchip designed for exascale AI and HPC workloads, making it the most suitable choice for massive-scale AI training and inference.
GB200
You are working on a deep learning project to generate music. Which type of neural network is best suited for this task?
Recurrent Neural Network (RNN)
Autoencoder
Feedforward Neural Network (FNN)
Convolutional Neural Network (CNN)
Predicting the next note in a piece of music requires the context of prior notes, and RNNs are well suited to such sequential data.
Recurrent Neural Network (RNN)
You are writing poems. You need your computer to help you complete your lines by suggesting right words. Which deep learning model is best suited for this task?
Convolutional Neural Network (CNN)
Variational Autoencoder (VAE)
Recurrent Neural Network (RNN)
Generative Adversarial Network (GAN)
Predicting the next word given a sequence of words requires the context of the prior words in the sequence. RNNs are well suited for this.
Recurrent Neural Network (RNN)
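The "context of prior words" idea can be sketched in NumPy: an RNN carries a hidden state forward, so the next-word prediction depends on every word seen so far. The sizes and random weights below are arbitrary illustrative values, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 5, 4
W_xh = rng.normal(0, 0.5, (hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(0, 0.5, (hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(0, 0.5, (vocab_size, hidden_size))   # hidden -> output

def one_hot(i):
    v = np.zeros(vocab_size)
    v[i] = 1.0
    return v

h = np.zeros(hidden_size)
for word_id in [0, 3, 1]:                 # a 3-word "line" of a poem
    h = np.tanh(W_xh @ one_hot(word_id) + W_hh @ h)  # state carries context

logits = W_hy @ h
probs = np.exp(logits) / np.exp(logits).sum()        # next-word distribution
print(probs.argmax())  # index of the most likely next word
```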
What type of data is most likely to be used with deep learning algorithms?
Complex data with non-human interpretable features
Time series data
Only string data
Data with human interpretable features
Deep learning is suitable for complex data such as images, audio, or video. This kind of data is usually not human interpretable; for example, a few pixels picked out of an image may not be interpretable individually.
Complex data with non-human interpretable features
What is the purpose of the Model Catalog in OCI Data Science?
It is used to deploy models as API endpoints.
It only stores raw datasets for training ML models.
It functions as a real-time data processing engine.
It serves as a repository for storing, tracking, and managing machine learning models.
The Model Catalog in OCI Data Science acts as a centralized repository where users can store, track, and share models with their team. It enables collaboration and easy retrieval of models for future use.
You're developing an image classification software that can identify specific objects. Which AI subset would you use?
Deep Learning
Reinforcement Learning
Machine Learning
Natural Language Processing
Identifying specific objects in images is implemented using deep learning, hence the answer is Deep Learning.
Deep Learning
A streaming service wants to recommend TV shows based on user behavior. Which machine learning approach should be used?
Deep Learning
Reinforcement Learning
Supervised Learning
Unsupervised Learning
Recommendations are based on a user's past choices or on the choices of similar users and products, so this is an example of supervised machine learning.
Supervised Learning
Which technique involves providing explicit examples in a prompt to guide an LLM’s response?
Zero-shot learning
Few-shot prompting
Self-supervised learning
Chain-of-thought prompting
Few-shot prompting involves giving k examples of a task before asking the model to generate an answer, helping improve its performance. For example, showing translations before requesting a new one.
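The structure of a few-shot prompt can be sketched in plain Python: k worked examples are placed in the prompt before the real query. The translation pairs and task wording below are made up for illustration, and no specific LLM API is assumed:

```python
# Illustrative example pairs (assumptions, not from any real dataset).
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def few_shot_prompt(examples, query):
    # Task description, then the worked examples, then the real query.
    lines = ["Translate English to French."]
    for en, fr in examples:
        lines.append(f"English: {en}\nFrench: {fr}")
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "bread")
print(prompt)  # the model would be asked to complete the final "French:" line
```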
How does normalization improve the readability of transcriptions in OCI Speech?
It translates transcriptions into multiple languages.
It replaces all uppercase letters with lowercase.
It converts elements like numbers, dates, and URLs into standard readable formats.
It removes unnecessary words from the transcription.
Normalization ensures that addresses, times, numbers, and URLs are formatted correctly, making the text more readable and suitable for further processing.
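The idea behind this kind of normalization (often called inverse text normalization) can be sketched with a tiny rule table: spoken-form tokens are rewritten into standard written forms. The rules below are hypothetical illustrations, not OCI Speech's actual rules:

```python
import re

# Hypothetical spoken-form -> written-form rules (illustrative only).
RULES = [
    (r"\btwenty five\b", "25"),
    (r"\bten dollars\b", "$10"),
    (r"\bw w w dot\b", "www."),
]

def normalize(text):
    # Apply each rewrite rule in order.
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(normalize("meet me at twenty five main street"))
# -> "meet me at 25 main street"
```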
Emma works for a media company that produces video content for online platforms. She needs to add closed captions to their videos for accessibility. Which OCI Speech feature should Emma use?
Batching support
SRT file support
Profanity filtering
Confidence scoring
For closed captions, OCI Speech's SRT file support is the best choice. It allows captions to be added to videos in an industry-standard format.
SRT file support
What is the primary function of the inference process in machine learning?
Labeling the training data
Adjusting the weights of a neural network
Predicting outcomes from new data points
Collecting training data
Inference refers to predicting an outcome given an input using a trained model.
What role do tokens play in Large Language Models (LLMs)?
They determine the size of the model's memory.
They are used to define the architecture of the model's neural network.
They are individual units into which a piece of text is divided during processing by the model.
They represent the numerical values of model parameters.
Tokens are the fundamental units of text that Large Language Models (LLMs) process. A token can be a word, subword, or character, depending on the tokenization method used. The model interprets and generates text based on these tokens rather than entire sentences or paragraphs at once.
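The core idea of tokenization can be sketched in a few lines: text is split into discrete units and each unit is mapped to an integer ID. Real LLMs use subword schemes such as BPE; the word-level vocabulary here is a made-up simplification:

```python
# Made-up toy vocabulary mapping tokens to integer IDs.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text):
    # Split on whitespace and look each word up; unknown words map to <unk>.
    words = text.lower().split()
    return [vocab.get(w, vocab["<unk>"]) for w in words]

ids = tokenize("The cat sat on the mat")
print(ids)  # [0, 1, 2, 3, 0, 4]
```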
A self-driving car needs to detect pedestrians and make safe lane changes. Which AI concept is being applied here?
Machine Learning
Deep Learning
Artificial Intelligence
Natural Language Processing
Detecting pedestrians and making safe lane changes mimics overall human behaviour, so the broad concept being applied is Artificial Intelligence.
Artificial Intelligence
T-Few fine-tuning in OCI Generative AI Service reduces cost and training time as compared to traditional fine-tuning. Which statement correctly explains the reason behind it?
It selectively updates only a fraction of the model's weights.
It does not allow customization of pretrained models.
It trains the entire model from scratch for each task.
It requires manually configuring each layer of the model for optimization.
T-Few fine-tuning adds new layers to the pretrained model and modifies only a fraction of the model's weights, significantly reducing the time and cost compared to full fine-tuning.
It selectively updates only a fraction of the model's weights.
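The "update only a fraction of the weights" idea can be sketched numerically. This is an illustration of parameter-efficient fine-tuning in general, not the actual T-Few algorithm: a boolean mask marks the small trainable subset, and the rest of the weights stay frozen during the gradient step:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(100,))     # pretend pretrained weights
gradients = rng.normal(size=(100,))   # pretend gradients from a task batch

# Only 10% of the weights are marked trainable; the rest are frozen.
trainable_mask = np.zeros(100, dtype=bool)
trainable_mask[:10] = True

lr = 0.01
before = weights.copy()
weights[trainable_mask] -= lr * gradients[trainable_mask]  # masked update

# Frozen weights are untouched; only the masked fraction changed.
print(np.count_nonzero(weights != before))  # at most 10
```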
Which statement best describes the pretraining process of a Generative AI model?
It learns from labeled data and maps inputs to corresponding labels.
It exclusively relies on reinforcement learning from human feedback.
It learns patterns in unstructured data without requiring labeled training data.
It only memorizes examples and reproduces them exactly.
Generative AI models do not require labeled data in the pre-training stage. Instead, they learn patterns from vast amounts of unstructured data, enabling them to generate new, unique outputs.
It learns patterns in unstructured data without requiring labeled training data.
What is the primary goal of fine-tuning a Large Language Model (LLM)?
To train the model from scratch using a completely new dataset
To adjust the pretrained model's parameters using a smaller, task-specific dataset, improving its performance on specific tasks
To convert the model into a different type of neural network architecture
To decrease the number of parameters in the model to reduce computational cost
Fine-tuning involves adapting a pre-trained LLM to a specific domain or task by training it on a smaller, specialized dataset. This allows the model to retain its general knowledge while improving accuracy and relevance for particular applications.
To adjust the pretrained model's parameters using a smaller, task-specific dataset, improving its performance on specific tasks
John has successfully trained a machine learning model using OCI. He now needs to deploy it for real-time predictions where it can process user inputs and generate responses. Which OCI service should he use for deployment?
OCI Object Storage
OCI Data Science
OCI Speech
OCI Language
Once a model is trained, it needs to be deployed for real-time inferencing using OCI Data Science and GPU Compute. This allows the model to process new data efficiently.
OCI Data Science
How does Oracle Database 23ai allow the use of pretrained AI models for vector search?
By loading ONNX models directly into the database
By restricting models to only Oracle-provided APIs
By manually converting AI models into SQL queries
By storing only raw image files for direct comparison
Oracle Database 23ai allows ONNX models to be loaded into the database, enabling vector embedding generation and similarity searches.
By loading ONNX models directly into the database
Which algorithm is a non-parametric approach for supervised learning?
Linear Regression
Decision Trees
Random Forest
K-Nearest Neighbors (KNN)
K-Nearest Neighbors (KNN) is considered a non-parametric algorithm: unlike parametric models (e.g., linear regression, neural networks), KNN does not learn parameters from the data. The only hyperparameter to tune is the number of nearest neighbors (K).
K-Nearest Neighbors (KNN)
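A from-scratch sketch makes the non-parametric point concrete: there is no training step that fits parameters. The "model" is just the stored data, and K is the only knob. The toy points below are made up:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every stored point
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = y_train[nearest]
    return np.bincount(votes).argmax()           # majority vote among neighbors

# Two well-separated toy clusters (class 0 near the origin, class 1 near (5, 5)).
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # 1
```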
How does Select AI generate SQL queries from natural language questions?
It only works with structured datasets that contain predefined SQL commands.
It provides the best SQL query based on predefined templates.
It requires users to manually input query parameters.
It connects to an LLM, infers the query intent, and formulates the SQL command.
Select AI translates natural language into SQL by leveraging large language models (LLMs) to infer intent and construct the required SQL query.
It connects to an LLM, infers the query intent, and formulates the SQL command.
What is the primary limitation of Recurrent Neural Networks (RNNs) when processing long sequences?
RNNs process words in parallel, making them less efficient than transformers.
RNNs can only process numerical data, not text.
RNNs can only understand words that appear at the beginning of a sentence.
RNNs struggle with long-range dependencies due to the vanishing gradient problem.
RNNs process text sequentially and struggle to retain context from distant words due to vanishing gradient issues. This limits their ability to capture long-range dependencies.
RNNs struggle with long-range dependencies due to the vanishing gradient problem.
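The vanishing gradient effect can be demonstrated numerically. Backpropagating through time multiplies one Jacobian factor per step; with tanh units (derivative at most 1) and a modest recurrent weight, the product shrinks toward zero, so early inputs barely influence the loss. The scalar weight below is an illustrative stand-in for a recurrent weight matrix:

```python
import numpy as np

w = 0.5        # illustrative recurrent weight (a scalar stand-in)
h = 1.0        # initial hidden state
grad = 1.0     # gradient of the final state w.r.t. the initial state

for step in range(50):
    h = np.tanh(w * h)
    grad *= w * (1 - h ** 2)   # d tanh(w*h)/dh = w * (1 - tanh(w*h)^2)

print(grad)  # vanishingly small after 50 steps
```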
Lisa runs an automated security system that monitors parking lots using cameras. She wants to locate and label vehicles and license plates in each frame. Which OCI Vision feature should she use?
Speech-to-text
Object detection
Image classification
Document Classification
To detect and label specific objects (vehicles and license plates) within images, object detection is the best choice.
Object detection
Which statement best describes the primary difference between Large Language Models (LLMs) and traditional Machine Learning (ML) models?
Traditional ML models are better at understanding and generating natural language, while LLMs are primarily used for numerical data analysis.
LLMs have a limited number of parameters compared to ML models.
LLMs are pretrained on a large text corpus whereas ML models need to be trained on custom data.
LLMs require extensive feature engineering, while traditional ML models can generate human-like text with minimal feature engineering.
Large Language Models (LLMs) are trained on massive datasets containing diverse text sources, enabling them to generalize across a wide range of language tasks. In contrast, traditional ML models often require training on specific, task-oriented datasets and may not generalize as well across different domains without retraining.
LLMs are pretrained on a large text corpus whereas ML models need to be trained on custom data.
Which of these components is NOT a part of OCI AI Infrastructure?
RDMA Network
OCI Storage
NVIDIA GPUs
OCI Vault
OCI AI Infrastructure includes NVIDIA GPUs, OCI Storage, and RDMA Networking for high-performance AI and ML workloads. However, OCI Vault is primarily used for securing and managing cryptographic keys and secrets, not AI infrastructure.
OCI Vault
What is the purpose of the hidden layer in an artificial neural network?
This layer applies filters to the image data.
This layer is optional, and it processes and transforms inputs from the network's weights and activation functions.
This layer receives and passes input data to the subsequent layers; it does not perform any computations.
This layer produces the final outputs of the network based on the processed information from the other layers.
Hidden layers take input from the input layer or from other hidden layers and transform it through weights and activation functions. The input layer accepts the input, and the output layer produces the final result.
This layer is optional, and it processes and transforms inputs from the network's weights and activation functions.
What is the role of the loss function in supervised learning algorithms?
It evaluates the model's complexity.
It quantifies the cost of incorrect predictions.
It measures the similarity between predictions and actual targets.
It helps in feature scaling.
The loss function measures the difference between the actual value and the predicted value, quantifying the cost of incorrect predictions.
It quantifies the cost of incorrect predictions.
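A standard example of a loss function is mean squared error, the usual choice for regression. The toy values below are made up:

```python
import numpy as np

def mse(y_true, y_pred):
    # Average of the squared differences between targets and predictions.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
print(mse(y_true, y_pred))  # (0.25 + 0 + 1) / 3 = 0.4166...
```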
A company wants to automate its email filtering system to reduce spam. Which AI technique would you recommend?
Natural Language Processing
Deep Learning
Reinforcement Learning
Machine Learning
Detecting spam is a classification problem, which falls under supervised machine learning. Hence the answer is Machine Learning.
Machine Learning
What technique is used to predict the price of a house based on its features?
Clustering
Classification
Regression
Time Series Analysis
Predicting a house price, which is a numerical value, is an example of supervised machine learning, more specifically regression.
Regression
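Regression on house features can be sketched with ordinary least squares: fit a line mapping square footage to price, then predict a new house. The data below is made up and deliberately exactly linear:

```python
import numpy as np

# Toy data: price (in $1000s) as a function of square footage.
sqft = np.array([1000, 1500, 2000, 2500], dtype=float)
price = np.array([200, 300, 400, 500], dtype=float)

# Design matrix with an intercept column; solve for [slope, intercept].
A = np.column_stack([sqft, np.ones_like(sqft)])
coef, *_ = np.linalg.lstsq(A, price, rcond=None)

predicted = coef[0] * 1800 + coef[1]
print(predicted)  # ~360, i.e. $360k for an 1800 sqft house
```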
David is transcribing a customer support call using OCI Speech. The call contains some profane language, and he wants to retain the original words but mark them as inappropriate rather than discarding them. Which profanity filtering option should David use?
Normalization
Masking
Tagging
Removing
To retain the words but mark them, tagging is the correct choice. This method leaves the words in place while adding labels to indicate profanity.
Tagging
John works in a news aggregation platform and wants to automatically categorize articles into topics like "Politics", "Technology", and "Sports". Which feature of OCI Language would help him?
Text classification
Language detection
Named entity recognition
Sentiment analysis
The text classification feature of OCI Language is the best choice. It assigns content to one of 600+ categories.
Text classification
You are training a deep learning model to predict stock prices. What type of data is this an example of?
Time series data
Image data
Text data
Sequential data
Stock data consists of a series of (timestamp, price) pairs, so it is considered time series data.
Time series data
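Framing time series data for training can be sketched with a sliding window: each sample is a window of past prices and the target is the next price. The prices below are made-up toy values:

```python
import numpy as np

prices = np.array([10.0, 11.0, 12.5, 12.0, 13.0, 14.5])

def make_windows(series, window=3):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # last `window` prices as features
        y.append(series[i + window])     # the next price as the target
    return np.array(X), np.array(y)

X, y = make_windows(prices)
print(X.shape, y.shape)  # (3, 3) (3,)
```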
Which of these summarizes the three guiding principles for AI to be trustworthy?
AI should be cost-effective, cloud-based, and user-friendly.
AI should replace human oversight, be independent, and highly scalable.
AI should be fast, unbiased, and autonomous.
AI should be lawful, ethical, and robust
For AI to be trustworthy, it must comply with laws and regulations (lawful), adhere to ethical values (ethical), and be reliable both technically and socially (robust).
AI should be lawful, ethical, and robust
Mark is analyzing customer receipts and wants to automatically find and save details such as merchant name, transaction date, and total amount for record-keeping. Which OCI Vision feature should he use?
OCR (Optical Character Recognition)
Key-value extraction
Table extraction
Document classification
To extract specific values from receipts, key-value extraction is the best feature, as it identifies and extracts predefined fields like merchant name and transaction date.
Key-value extraction
What is the role of a target variable in supervised learning?
It is used to split the dataset.
It contains the desired output or class labels.
It helps in feature selection.
It represents the input data.
The target variable refers to the desired outcome. It could be a numerical value or a label, e.g. spam vs. not spam, or predicted rainfall in millimeters.
It contains the desired output or class labels.
John needs to analyze the accuracy of OCI Speech transcriptions for a legal case. He wants to evaluate how sure the model is about each word in the transcription. Which feature should he use?
Profanity filtering
Normalization
Confidence scoring
Batch support
To analyze accuracy, confidence scoring is the best option. It provides word-by-word and overall transcription confidence levels.
Confidence scoring