artificial intelligence (AI)
a broad field focused on the development of intelligent computer systems capable of performing humanlike tasks
machine learning (ML)
a type of AI for training machines to perform complex tasks without explicit instructions
how does ML work
training finds patterns in historical data to produce an ML model, which can then be applied to new data to make predictions based on the patterns it has learned
Q: Machine learning (ML) is a type of AI for training machines to perform complex tasks without explicit instructions. This training process involves finding patterns in vast amounts of historical data.
What is produced as a result of the ML training process?
ML training produces a model that can be applied to new data to make predictions or decisions based on the patterns it has learned.
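A minimal sketch of this train-then-predict flow, using scikit-learn rather than any AWS service; the feature names and data values are illustrative assumptions:

```python
# Minimal train-then-predict sketch: fit a model on historical data,
# then apply it to new data to make predictions.
from sklearn.linear_model import LinearRegression

# Historical data (illustrative): ad spend in $1k -> units sold
X_train = [[1.0], [2.0], [3.0], [4.0]]
y_train = [10, 19, 31, 40]

model = LinearRegression()
model.fit(X_train, y_train)          # training finds the patterns

X_new = [[5.0]]                      # new, unseen data
print(model.predict(X_new))          # prediction based on learned patterns
```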
ML business use cases
predict trends
make decisions
detect anomalies
AWS AI/ML solutions
AI services — pre-built models that are already trained to perform specific functions
ML services — a more customized approach with Amazon SageMaker AI where you build, train, and deploy your own ML model with fully managed infrastructure
ML frameworks and infrastructure — a completely custom approach to building models using purpose-built chips that integrate with popular ML frameworks
Tier 1: pre-built AWS AI services
managed services offering pre-built, ready-to-use models that are already trained to perform specific functions
Amazon Comprehend
a tier 1 language service that uses NLP to extract key insights from documents
Amazon Comprehend use cases
content classification
customer sentiment analysis
compliance monitoring
Q: The owner of a car dealership wants to determine why her service department has lost business over the past year. She wants to analyze a large number of documented customer comments to better understand customer sentiment.
Which AWS service would work well for this use case?
Amazon Comprehend can extract key insights, such as customer sentiment, from documents. This can help the owner better understand her customers.
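A hedged sketch of calling Amazon Comprehend for sentiment analysis with boto3; the sample comment is an assumption, and region/credentials come from your AWS configuration:

```python
import boto3

# Comprehend client; region and credentials come from your AWS config.
comprehend = boto3.client("comprehend")

comment = "The service department took three weeks to return my car."  # illustrative
response = comprehend.detect_sentiment(Text=comment, LanguageCode="en")

print(response["Sentiment"])        # e.g. NEGATIVE
print(response["SentimentScore"])   # confidence scores per sentiment
```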
Amazon Polly
a tier 1 language service that converts text into lifelike speech
Amazon Polly use cases
virtual assistants
e-learning applications
accessibility enhancements
Q: An instructional designer is developing a new course on customer service skills. He wants to include several simulated calls to reinforce the learning. Because he doesn't have access to a recording studio, he needs a quick way to convert his scripts to speech.
Which service would work well for this use case?
Amazon Polly converts text into lifelike speech. It supports multiple languages, different genders, and a variety of accents. It is an ideal match for this use case.
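A minimal sketch of converting a script to speech with Amazon Polly via boto3; the voice and output file name are assumptions:

```python
import boto3

polly = boto3.client("polly")

# Convert a short script to lifelike speech; the voice choice is illustrative.
response = polly.synthesize_speech(
    Text="Thank you for calling. How can I help you today?",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("simulated_call.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```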
Amazon Transcribe
a tier 1 language service that converts speech to text
Amazon Transcribe use cases
customer call transcription
automated subtitling
metadata generation for media content
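A hedged sketch of a call-transcription job with Amazon Transcribe via boto3; the job name, bucket, and object key are hypothetical placeholders:

```python
import boto3

transcribe = boto3.client("transcribe")

# Start an asynchronous transcription job for a call recording in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="customer-call-0001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll later; the transcript URI appears when the job completes.
job = transcribe.get_transcription_job(TranscriptionJobName="customer-call-0001")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```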
Amazon Translate
a tier 1 language service that translates text
Amazon Translate use cases
document translation
multi-language application integrations
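A minimal sketch of text translation with Amazon Translate via boto3; the sample sentence and language pair are assumptions:

```python
import boto3

translate = boto3.client("translate")

# Translate a customer notification from English to Spanish.
result = translate.translate_text(
    Text="Your order has shipped and will arrive in two days.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])
```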
Amazon Kendra
a tier 1 computer vision and search service that uses NLP to search for answers within large amounts of enterprise content
Amazon Kendra use cases
intelligent search
chatbots
application search integration
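A hedged sketch of an intelligent-search query against an existing Amazon Kendra index via boto3; the index ID and question are hypothetical placeholders:

```python
import boto3

kendra = boto3.client("kendra")

# Query an existing Kendra index with a natural language question.
response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",
    QueryText="How do I submit an expense report?",
)

for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
```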
Amazon Rekognition
a tier 1 computer vision and search service that analyzes images and videos stored in S3
Amazon Rekognition use cases
content moderation
ID verification
media analysis
home automation experiences
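A hedged sketch of image analysis with Amazon Rekognition via boto3; the bucket and object key are hypothetical placeholders:

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect labels in an image stored in S3 (e.g., for a home automation feed).
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photos/front-door.jpg"}},
    MaxLabels=10,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```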
Amazon Textract
a tier 1 computer vision and search service that detects and extracts printed and handwritten text from documents, forms, and even tables within documents
Amazon Textract use cases
financial, healthcare, and government form text extraction for quick processing
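A hedged sketch of form extraction with Amazon Textract via boto3; the bucket and document key are hypothetical placeholders:

```python
import boto3

textract = boto3.client("textract")

# Analyze a scanned claim form in S3, extracting form fields and tables.
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-bucket", "Name": "forms/claim-001.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Print the detected lines of text (printed or handwritten).
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```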
Amazon Lex
a tier 1 conversational AI and personalization service that adds voice and text conversation interfaces to your applications
uses natural language understanding (NLU) and automatic speech recognition (ASR) to create lifelike conversations
Amazon Lex use cases
virtual assistants
natural language search for FAQs
automated application bots
Q: A healthcare company wants to add a conversational interface to its customer support application using a ready-made solution.
Which AWS service could they choose?
With Amazon Lex, the company can add voice and text conversational interfaces to their applications to create lifelike conversations. It can enhance the healthcare company's customer support app.
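A hedged sketch of sending a text turn to a deployed Amazon Lex V2 bot via boto3; the bot ID, alias ID, and session ID are hypothetical placeholders for a bot you have already built:

```python
import boto3

# Lex V2 runtime client; bot ID, alias ID, and session ID are placeholders.
lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="EXAMPLEBOTID",
    botAliasId="EXAMPLEALIAS",
    localeId="en_US",
    sessionId="support-session-001",
    text="I need to reschedule my appointment.",
)

for message in response.get("messages", []):
    print(message["content"])   # the bot's conversational reply
```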
Amazon Personalize
a tier 1 conversational AI and personalization service that uses historical data to build intelligent applications with personalized recommendations for your customers
Amazon Personalize use cases
personalized streaming, product, and trending recommendations
Q: An e-commerce company wants to add a product recommendation engine to its online application to increase sales. The development team wants the recommendations to be relevant for each individual customer.
Which pre-built AWS AI service would work well for this use case?
Amazon Personalize can be used to add personalized customer recommendations to applications. It is a good choice for this use case.
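A hedged sketch of fetching recommendations from an already-deployed Amazon Personalize campaign via boto3; the campaign ARN and user ID are hypothetical placeholders:

```python
import boto3

# Personalize runtime client; assumes a campaign you have already trained and deployed.
personalize = boto3.client("personalize-runtime")

response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/product-recs",
    userId="customer-42",
)

for item in response["itemList"]:
    print(item["itemId"])   # recommended product IDs, most relevant first
```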
Tier 2: ML services
provides a more customized approach for customers who want more control over their ML solutions without having to manage infrastructure
Amazon SageMaker AI
a tier 2 service that provides an IDE with simplified access control and transparency over your ML projects
Amazon SageMaker AI benefits
your choice of ML tools
fully managed infrastructure
repeatable ML workflows
Q: Which AWS service can be used to build, train, and deploy a customized machine learning (ML) model without worrying about the underlying infrastructure?
Amazon SageMaker AI provides a customized approach to ML model development without having to worry about the underlying infrastructure.
Q: A small tech company wants to develop their own customized machine learning (ML) model without managing the underlying infrastructure. The company is looking for a solution that both their data scientists and business analysts can use.
Which AWS service should they choose?
They can use SageMaker AI to develop their ML models without worrying about infrastructure. Data scientists can use the IDE, and business analysts can use the no-code interface.
Tier 3: ML frameworks and infrastructure
for organizations that need complete control over the ML training process
ML framework
a software library or tool that provides experienced ML practitioners with pre-built, optimized components for building ML models
AWS supports ML frameworks like PyTorch, Apache MXNet, and TensorFlow
AWS ML infrastructure
infrastructure such as ML-optimized Amazon EC2 instances, Amazon EMR, and Amazon ECS can support these custom solutions
Q: A team of machine learning (ML) engineers is developing a new ML model for a highly specialized application. They need complete control over the ML training process. So, they are developing their own custom solution using the PyTorch ML framework.
What is an ML framework?
An ML framework is a software library or tool that provides experienced ML practitioners with pre-built, optimized components for building machine learning models.
deep learning (DL)
a subset of ML where models are trained using layers of artificial neurons that mimic the human brain
each layer of the neural network processes its inputs and feeds the results to the next layer until a final prediction is made
generative AI (gen AI)
a type of DL powered by extremely large ML models (foundation models)
foundation model (FM)
a model that is pre-trained on vast collections of data and can be adapted to perform multiple tasks
Q: Generative AI is a type of deep learning powered by extremely large ML models that are pre-trained on vast collections of data.
What are these models called?
Generative AI is powered by extremely large ML models known as foundation models (FMs). FMs are pre-trained on vast collections of data. FMs can be adapted to perform multiple tasks.
Q: Generative AI is a type of deep learning powered by extremely large machine learning (ML) models known as foundation models (FMs).
What are characteristics of FMs? (Select TWO.)
FMs are programmed with explicit rules.
FMs can be adapted to perform multiple tasks.
FMs are only used to create images.
FMs are trained to perform singular tasks.
FMs are pre-trained on vast collections of data.
FMs can be adapted to perform multiple tasks.
FMs are pre-trained on vast collections of data.
large language model (LLM)
a popular type of FM trained to understand and generate language
Amazon SageMaker JumpStart
an ML hub with foundation models and pre-built ML solutions deployable with a few clicks
offers a library of pre-built ML solutions across various domains that can be fine-tuned to suit your specific needs
Amazon SageMaker JumpStart use cases
rapid ML model deployments
custom fine-tuned solutions
ML experiments and prototypes
Amazon Bedrock
a fully managed service for adapting and deploying foundation models from Amazon and other leading AI companies through a single unified API
designed for working with large foundation models and building gen AI applications
Amazon Bedrock use cases
enterprise-grade gen AI
multimodal content generation
advanced conversational AI
Q: Amazon Bedrock is a fully managed service that was specifically designed for working with large foundation models (FMs) and building generative AI applications.
What does the service provide to access FMs from Amazon and leading AI startups?
Amazon Bedrock provides access to FMs from Amazon and leading AI startups, such as Anthropic's Claude and Stability AI's Stable Diffusion models, all through a single unified API.
Q: A large advertising agency wants to quickly integrate a new content generation feature into its existing enterprise-wide design application. The new feature needs to be able to generate both text and images. The agency doesn't want to manage any new infrastructure.
Which service would work best for this use case?
Amazon Bedrock would work well for fully managed, enterprise-grade, multimodal generative AI.
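A hedged sketch of invoking an FM through Amazon Bedrock's unified Converse API via boto3; the model ID is a placeholder for an FM you have been granted access to in your account:

```python
import boto3

# Bedrock runtime client; the model ID below is a placeholder.
bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Write a one-line tagline for a running shoe."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```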
Amazon Q
an interactive gen AI assistant that can be integrated with a company’s information repositories
Amazon Q Business
can answer questions, help solve problems, and take actions using the data and expertise found in your company’s information repositories
Amazon Q Business use cases
information requests
automated workflows
insight extraction
Amazon Q Developer
provides code recommendations to accelerate coding development
Amazon Q Developer use cases
faster code generation
improved reliability and security
automated code reviews
Q: A large healthcare organization wants to improve employee productivity. The company is searching for a pre-built generative AI assistant that can answer questions, help solve problems, and take actions using the data and expertise found in its information repositories.
Which AWS service would work well for this use case?
Amazon Q Business can answer pressing questions, help solve problems, and take actions using the data and expertise found in your company's information repositories. It is an ideal choice for this use case.
Q: A software development company is working on a new product with a very tight deadline. The company needs a way to develop code faster without sacrificing reliability or security.
Which service could best help this company meet its deadline?
Amazon Q Developer provides code recommendations to accelerate development of C#, Java, JavaScript, Python, and TypeScript applications. It's a good fit for this use case.
data pipelines for ETL processes
a process used to produce clean, accessible data in a format that is usable by analytics tools and AI algorithms
how data pipelines for ETL processes work
Extract the data from various sources and store it
Transform it into a consistent, usable format for downstream tools to consume
Load it into a destination system (e.g., a data warehouse, an analytics platform)
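A minimal ETL sketch in Python with pandas, assuming hypothetical file paths and column names; each step maps to one stage above:

```python
import pandas as pd

# Extract: read raw data from a source system (path is a placeholder).
raw = pd.read_csv("raw/orders.csv")

# Transform: clean and reshape into a consistent, usable format.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .assign(order_date=lambda df: pd.to_datetime(df["order_date"]))
)

# Load: write to a destination the downstream tools can consume
# (here a Parquet file standing in for a data warehouse table).
clean.to_parquet("curated/orders.parquet", index=False)
```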
data pipeline
an automated assembly line used to make the ETL process efficient and repeatable
data analytics
the process of transforming raw historical data to uncover valuable insights and trends
Q: Data pipelines are automated assembly lines used to make the ETL process efficient and repeatable.
What does ETL stand for?
In data analytics, ETL stands for extract, transform, load. Sometimes, an ELT process is followed in which the transform step is last.
Q: The extract, transform, load (ETL) process is often used to provide clean and accessible data in a format that is usable by analytics tools and AI algorithms.
How does a data pipeline improve this process?
Data pipelines are automated assembly lines used to make the ETL process efficient and repeatable. AWS has a suite of integrated services so you can build your own data pipelines.
data ingestion services
services that move data from source systems into your chosen storage solution
real-time ingestion
for when data is needed immediately
batch ingestion
for when some data latency is tolerable
Amazon Kinesis Data Streams
a serverless data ingestion service that does real-time ingestion of terabytes of data from applications, streams, and sensors
provides automatic provisioning and scaling
delivers data within seconds to destination systems
Q: A financial services company is developing an application to analyze real-time stock data so its team of analysts can make immediate trading decisions. The company needs to ingest real-time stock market data without worrying about servers or scaling capacity.
Which AWS service would meet their needs?
Amazon Kinesis Data Streams is a serverless service that can be used for real-time ingestion of terabytes of data from applications, streams, and sensors. It even provides automatic provisioning and scaling.
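A minimal sketch of sending one record into an existing Kinesis data stream via boto3; the stream name and payload are hypothetical placeholders:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# Send one stock tick into an existing stream.
tick = {"symbol": "EXMPL", "price": 101.25}
kinesis.put_record(
    StreamName="stock-ticks",
    Data=json.dumps(tick).encode("utf-8"),
    PartitionKey=tick["symbol"],   # records with the same key go to the same shard
)
```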
Amazon Data Firehose
a fully managed service that provides near real-time data ingestion
provides automatic provisioning and scaling
delivers data within seconds to destination systems
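A hedged sketch of sending a record to an existing Amazon Data Firehose delivery stream via boto3; the stream name and payload are hypothetical placeholders:

```python
import boto3
import json

firehose = boto3.client("firehose")

# Send a record to an existing delivery stream; Firehose batches it
# and delivers it to the destination within seconds.
event = {"device_id": "sensor-7", "temperature_c": 21.4}
firehose.put_record(
    DeliveryStreamName="sensor-events",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```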
data storage services
services where data from many sources is consolidated into a single location
flexible data lakes
store vast amounts of raw data
structured data warehouses
optimized for business intelligence
Amazon S3
a popular data lake choice that can store any amount of structured/unstructured data
automatically scales
Amazon Redshift
a fully managed data warehouse service that can store petabytes of structured/semistructured data
scalable and pay-as-you-go
Q: Data can come from many different sources. To provide insights, the data must be consolidated in a single location. There are two storage options for this. Data lakes store vast amounts of raw data, and data warehouses are optimized for business intelligence.
Which AWS services are typically used as a data lake and data warehouse?
Amazon S3 can store virtually any amount of structured or unstructured data, making it a good choice for data lakes. Amazon Redshift is a fully managed data warehouse service optimized for business intelligence.
data cataloging services
services that catalog data with metadata
AWS Glue Data Catalog
a centralized, scalable, and managed metadata repository that enhances data discovery
data processing services
services that clean and transform the data before analysis
AWS Glue
a fully managed ETL data processing service that makes data preparation simpler, faster, and more cost-effective
can use the AWS Glue Data Catalog to get metadata to help inform transformations
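A hedged sketch of reading a table's schema from the AWS Glue Data Catalog via boto3; the database and table names are hypothetical placeholders:

```python
import boto3

glue = boto3.client("glue")

# Look up a table's schema in the AWS Glue Data Catalog to inform transformations.
table = glue.get_table(DatabaseName="sales_db", Name="orders")

for column in table["Table"]["StorageDescriptor"]["Columns"]:
    print(column["Name"], column["Type"])
```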
Amazon EMR
a data processing service that automatically handles infrastructure provisioning, cluster management, and scaling
data analysis and visualization services
services that provide queries and visualization tools to help you develop important insights about your data
Amazon Athena
a fully managed serverless data analysis service that can be used to run SQL queries to analyze data in relational, nonrelational, object, and custom data sources
can access data in Amazon S3, on-premises, or multicloud environments
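A hedged sketch of running a SQL query with Amazon Athena via boto3; the database, query, and S3 output location are hypothetical placeholders:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query against data cataloged in Glue and stored in S3.
query = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

print(query["QueryExecutionId"])  # poll get_query_results with this ID once it finishes
```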
Amazon Redshift
a fully managed data warehouse solution, whose columnar storage and massively parallel processing architecture make it ideal for analyzing large datasets
Amazon QuickSight
a data visualization service that can create dashboards and reports from various data sources without managing infrastructure
Amazon OpenSearch Service
a data analysis and visualization service where you can search for relevant content through precise keyword matching and natural language queries
unified dashboards provide real-time data visualization
Q: A data analytics team is creating an automated data pipeline on AWS.
Which AWS services could they choose for data ingestion? (Select TWO.)
Amazon Redshift
Amazon Kinesis Data Streams
Amazon EMR
AWS Glue Data Catalog
Amazon Data Firehose
Amazon Kinesis Data Streams
Amazon Data Firehose
Amazon Kinesis Data Streams can be used to ingest real-time data, and Amazon Data Firehose can be used to ingest near real-time data.
Q: The data analytics team must ingest vast amounts of unstructured data into its pipeline.
Which AWS service is the BEST choice for storing this data?
Amazon S3 can store virtually unlimited amounts of unstructured data. This makes it a popular data lake choice and the best storage option for the team.
Q: Which AWS service is BEST suited for data processing in a data pipeline?
AWS Glue is used to process data by using the AWS Glue Data Catalog as a reference.
Q: Which AWS services could the data analytics team choose for data visualization? (Select TWO.)
Amazon Data Firehose
Amazon QuickSight
Amazon Athena
AWS Glue
Amazon OpenSearch Service
Amazon QuickSight
Amazon OpenSearch Service
The data analytics team can use both QuickSight and OpenSearch Service to visualize data in a data pipeline.