SAA-C03 Notes

  1. The shared responsibility model
    1. Customer is responsible for security IN the cloud
    2. AWS is responsible for security OF the cloud
  2. 6 Pillars of the Well-Architected Framework
    1. Operational Excellence
      1. Running and monitoring systems to deliver business value, and continually improving processes and procedures
    2. Performance Efficiency
      1. Using IT and computing resources efficiently
    3. Security
      1. Protecting info and systems
    4. Cost Optimization
      1. Avoiding unnecessary costs
    5. Reliability
      1. Ensuring a workload performs its intended function correctly and consistently
    6. Sustainability
      1. Minimizing the environmental impacts of running cloud workloads
  3. IAM
    1. To secure the root account:
      1. Enable MFA on root acct
      2. Create an admin group for admins and assign appropriate privileges
      3. Create user accounts for admins - don't share
      4. Add appropriate users to admin groups
    2. IAM Policy documents
      1. JSON documents made up of key-value pairs
    3. IAM does not work at regional level, it works at global level
    4. Identity Providers > Federation Services
      1. AWS SSO
      2. Can add a provider/configure a provider
        1. Most common provider type is SAML
          1. Uses: AD Federation Services
          2. SAML provider establishes a trust between AWS and AD Federation Services
      3. IAM Federation
        1. Uses the SAML 2.0 standard, which is what Active Directory Federation Services uses
    5. How to apply policies:
      1. EAR: Effect, Action, Resource
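      2. Example - a minimal sketch (Python/boto3) of the EAR structure in a policy document; the policy name and bucket ARN are hypothetical:
        import json
        import boto3

        policy_document = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",                               # E: Effect
                "Action": ["s3:GetObject", "s3:ListBucket"],     # A: Action
                "Resource": ["arn:aws:s3:::example-bucket",      # R: Resource
                             "arn:aws:s3:::example-bucket/*"],
            }],
        }

        iam = boto3.client("iam")
        iam.create_policy(PolicyName="ExampleReadOnlyS3",
                          PolicyDocument=json.dumps(policy_document))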
    6. Why are IAM users considered “permanent”?
      1. Because once their password, access key, or secret key is set, these credentials do not update or change without human interaction
    7. IAM Roles
      1. = an identity in IAM with specific permissions
      2. Temporary
        1. When you assume a role, it provides you with temporary security credentials for your role session
      3. Assign policy to role
      4. More secure to use roles instead of credentials - don't have to hardcode credentials
      5. Preferred option from security perspective
      6. Can attach/detach roles to running EC2 instances without having to stop or terminate those instances
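      7. Example - a minimal sketch (Python/boto3) of assuming a role to get temporary credentials; the role ARN and session name are hypothetical:
        import boto3

        sts = boto3.client("sts")
        resp = sts.assume_role(
            RoleArn="arn:aws:iam::123456789012:role/ExampleAppRole",
            RoleSessionName="example-session",
            DurationSeconds=3600,        # temporary: the credentials expire
        )
        creds = resp["Credentials"]      # AccessKeyId, SecretAccessKey, SessionToken, Expiration

        # use the temporary credentials instead of hardcoding long-lived keys
        s3 = boto3.client(
            "s3",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )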
  4. Simple Storage Service (S3)
    1. S3 is object storage
      1. Manages data as objects rather than in file systems or data blocks
      2. Any type of file
      3. Cannot be used to run OS or DB, only static files
    2. Unlimited S3 storage
      1. Individual objects can be up to 5 TBs in size
    3. Universal Namespace
      1. All AWS accounts share S3 namespace so buckets must be globally unique
    4. S3 URLS:
      1. https://bucket-name.s3.Region.amazonaws.com/key-name
    5. When you successfully upload a file to an S3 bucket, you receive an HTTP 200 code
    6. S3 works off of a key-value store
      1. Key = name of object
      2. Value = data itself
      3. Version id
      4. Metadata
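      5. Example - a minimal sketch (Python/boto3) of the key-value model and the HTTP 200 response on upload; bucket and key names are hypothetical:
        import boto3

        s3 = boto3.client("s3")
        resp = s3.put_object(
            Bucket="example-bucket",
            Key="reports/2023/summary.txt",     # key = name of the object
            Body=b"hello world",                # value = the data itself
            Metadata={"owner": "team-a"},       # user-defined metadata
        )
        assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200   # successful upload returns HTTP 200

        obj = s3.get_object(Bucket="example-bucket", Key="reports/2023/summary.txt")
        print(obj.get("VersionId"), obj["Metadata"])   # version id is present if versioning is enabled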
    7. Lifecycle management
    8. Versioning
      1. All versions of an object are stored and can be retrieved, including deleted ones
      2. Once versioning is enabled, you cannot disable it, only suspend it
    9. Way to protect your objects from being accidentally deleted
      1. Turn versioning on
      2. Enable MFA
    10. Securing your S3 data
      1. Server-Side Encryption
        1. Can set default encryption on a bucket to encrypt all new objects when they are stored in that bucket
      2. Access Control Lists (ACLs)
        1. Define which AWS accounts or groups are granted access and type of access
        2. Way to get fine-grained access control - can assign ACLs to individual objects within a bucket
      3. Bucket Policies
        1. Bucket-wide policies that define what actions are allowed or denied on buckets
        2. In JSON format
    11. Data Consistency Model with S3
      1. Strong Read-After-Write Consistency
        1. After a successful write of a new object (PUT) or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object
      2. Strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with all changes reflected
    12. ACLs vs Bucket Policies
      1. ACLs
        1. Work at an individual object level
        2. Ie: public or private object
      2. Bucket Policies
        1. Apply bucket-wide
    13. Storage Classes in S3
      1. S3 standard
        1. Default
        2. Data is stored redundantly across 3 or more AZs
        3. Frequent access
      2. S3 Standard - Infrequently Accessed (IA)
        1. Infrequently accessed data, but data must be rapidly accessed when needed
        2. Pay to access data - per GB storage price and per-GB retrieval fee
      3. S3 One Zone - IA
        1. Like S3 standard IA but data is stored redundantly within single AZ
        2. Great for long-lived, infrequently accessed, non-critical data
      4. S3 Intelligent Tiering
        1. 2 Tiers:
          1. Frequent Access
          2. Infrequent Access
        2. Optimizes Costs - automatically moves data to most cost-effective tier
      5. Glacier
        1. Way of archiving your data long-term
        2. Pay for each time you access your data
        3. Cheap storage
        4. 3 Glacier options:
          1. Glacier Instant Retrieval
            1. Long-term data archiving with instant retrieval
          2. Glacier Flexible Retrieval
            1. Ideal storage class for archive data that does not require immediate access but needs the flexibility to retrieve large data sets at no cost, such as backup or DR
            2. Retrieval: minutes to 12 hours
          3. Glacier Deep Archive
            1. Cheapest storage class
            2. Retain data sets for 7-10 years or longer to meet customer needs and regulatory requirements
            3. Retrieval is 12 hours for standard and 48 hours for bulk
    1. Lifecycle mgmt in S3
      1. Automates moving objects between different storage tiers to maximize cost-effectiveness
      2. Can be used with versioning
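      3. Example - a minimal sketch (Python/boto3) of a lifecycle rule that tiers objects down and then expires them; the bucket name, prefix, and day counts are hypothetical:
        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_lifecycle_configuration(
            Bucket="example-bucket",
            LifecycleConfiguration={"Rules": [{
                "ID": "tier-down-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},   # after 30 days -> Standard-IA
                    {"Days": 90, "StorageClass": "GLACIER"},       # after 90 days -> Glacier
                ],
                "Expiration": {"Days": 365},                       # delete after a year
            }]},
        )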
    2. S3 Object Lock
      1. Can use object lock to store objects using a Write Once Read Many (WORM) model
        1. Can help prevent objects from being deleted or modified for a fixed amount of time OR indefinitely
      2. 2 modes of S3 Object Lock:
        1. Governance Mode
          1. Users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions
        2. Compliance Mode
          1. A protected object version cannot be overwritten or deleted by any user
          2. When object is locked in compliance mode, its retention cannot be changed/object cannot be overwritten/deleted for duration of period
          3. Retention Period:
            1. Protects an object version for a fixed period of time
          4. Legal Holds:
            1. Enables you to place a lock/hold on an object without an expiration period - remains in effect until removed
    1. Glacier Vault Locks
      1. Easily deploy and enforce compliance controls for individual Glacier vaults with a vault lock policy
      2. Can specify controls, such as WORM, in a vault lock policy and lock the policy from future edits
      3. Once locked, the policy can no longer be changed
    2. S3 Encryption
      1. Types of encryption available:
        1. Encryption in transit
          1. HTTPS-SSL/TLS
        2. Encryption at rest: Server Side Encryption
          1. Enabled by default with SSE-S3
            1. This setting applies to all new objects within S3 buckets
            2. If the file is to be encrypted at upload time, the x-amz-server-side-encryption parameter will be included in the request header
            3. You can create a bucket policy that denies any S3 PUT (upload) that does not include this parameter in the request header (see the example policy at the end of this section)
          2. 3 Types:
            1. SSE-S3
              1. S3-managed keys, using AES 256-bit encryption
              2. Most common
            2. SSE-KMS
              1. AWS Key Management Service-managed keys
              2. If you use SSE-KMS to encrypt your objects in S3, you must keep in mind the KMS request limits for the Region
              3. Uploading AND downloading will count towards the limit
            3. SSE-C
              1. Customer-provided keys
        3. Encryption at rest: Client-Side Encryption
          1. You encrypt the files yourself before you upload them to S3
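      2. Example - a minimal sketch (Python/boto3) of the deny-unencrypted-PUT bucket policy referenced above; the bucket name is hypothetical and the exact condition you require may vary with your SSE setup:
        import json
        import boto3

        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "DenyUnencryptedPuts",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::example-bucket/*",
                "Condition": {
                    # deny any PUT whose x-amz-server-side-encryption header is missing
                    "Null": {"s3:x-amz-server-side-encryption": "true"}
                },
            }],
        }
        boto3.client("s3").put_bucket_policy(Bucket="example-bucket",
                                             Policy=json.dumps(policy))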
    1. The more prefixes (folders/subfolders) you use in S3, the better the performance, because request-rate limits apply per prefix
    2. S3 Performance:
      1. Uploads
        1. Multipart Uploads
          1. Recommended for files over 100 MB
          2. Required for files over 5GB
          3. = Parallelize uploads to increase efficiency
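          4. Example - a minimal sketch (Python/boto3); upload_file switches to parallel multipart uploads once the file crosses multipart_threshold; file and bucket names are hypothetical:
            import boto3
            from boto3.s3.transfer import TransferConfig

            config = TransferConfig(
                multipart_threshold=100 * 1024 * 1024,   # use multipart for files > 100 MB
                multipart_chunksize=16 * 1024 * 1024,    # 16 MB parts
                max_concurrency=10,                      # parts uploaded in parallel
            )
            boto3.client("s3").upload_file("backup.tar.gz", "example-bucket",
                                           "backups/backup.tar.gz", Config=config)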
      2. Downloads
        1. S3 Byte-Range Fetches
          1. Parallelize downloads by specifying byte ranges
          2. If there is a failure in the download, it is only for that specific byte range
          3. Used to speed up downloads
          4. Can be used to download partial amounts of a file - for ex: header info
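          5. Example - a minimal sketch (Python/boto3) of a byte-range fetch that grabs just the first 1 KB of an object (e.g. header info); names are hypothetical:
            import boto3

            s3 = boto3.client("s3")
            resp = s3.get_object(
                Bucket="example-bucket",
                Key="backups/backup.tar.gz",
                Range="bytes=0-1023",        # only this byte range is downloaded
            )
            header_bytes = resp["Body"].read()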
    3. S3 Replication
      1. Can replicate objects from one bucket to another
      2. Versioning MUST be enabled on both buckets (source and destination buckets)
      3. Turn on replication, then replication is automatic afterwards
      4. S3 Batch Replication
        1. Allows replication of existing objects to different buckets on demand
      5. Delete markers are NOT replicated by default
        1. Can enable it when creating the replication rule
  1. EC2: Elastic Compute Cloud
    1. Pricing Options
      1. On-Demand
        1. Pay by hour or second, depending on instance
        2. Flexible - low cost without upfront cost or commitment
        3. Use Cases:
          1. Apps with short-term, spiky, or unpredictable workloads that cannot be interrupted
          2. Compliance
          3. Licensing
      2. Reserved
        1. For 1-3 years
        2. Up to 72% discount compared to on-demand
        3. Use Cases:
          1. Predictable usage
          2. Specific capacity requirements
        4. Types of RIs:
          1. Standard RIs
          2. Convertible RIs
            1. Up to 54% off on-demand
            2. You have the option to change to a different class of RI type of equal or greater value
          3. Scheduled RIs
            1. Launch within the timeframe you define
            2. Match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day/week/month
        5. Reserved Instances operate at a REGIONAL level
      3. Spot
        1. Purchase unused capacity at a discount of up to 90%
        2. Prices fluctuate with supply and demand
        3. You specify the maximum price you are willing to pay; while the spot price is at or below that price you have your instance, and when it rises above that price you lose it
        4. Use Cases:
          1. Flexible start and end times
          2. Cost sensitive
          3. Urgent need for large amounts of additional capacity
        5. Spot Fleet
          1. A collection of spot instances and (optionally) on-demand instances
          2. Attempts to launch the number of spot instances and on-demand instances needed to meet the target capacity you specified
            1. The request is fulfilled if there is available capacity and the max price you specified in the request exceeds the spot price
            2. Launch pools - the different combinations (instance type, OS, AZ) the fleet can launch into
          3. 4 strategies available with spot fleets:
            1. capacityOptimized
              1. Spot instances come from the pool with optimal capacity for the number of instances launched
            2. diversified
              1. Spot instances are distributed across all pools
            3. lowestPrice
              1. Spot instances come from the pool with the lowest price
              2. This is the default strategy
            4. InstancePoolsToUseCount
              1. Spot instances are distributed across the number of spot instance pools you specify
              2. This parameter is only valid when used in combination with lowestPrice
      4. Dedicated Hosts
        1. A physical EC2 server dedicated for your use
        2. Most expensive option
    1. Pricing Calculator
      1. Can use to estimate what your infrastructure will cost in AWS
    2. Bootstrap Scripts
      1. Script that runs when the instance first runs, has root privileges
      2. Starts with a shebang : #!/bin/bash
      3. EC2 Metadata
        1. Can use curl command in bootstrap to save metadata into a text file, for example
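      4. Example - a minimal sketch (Python/boto3) of launching an instance with a bootstrap (user data) script that saves the instance ID from the metadata service; the AMI ID and instance type are hypothetical:
        import boto3

        user_data = """#!/bin/bash
        yum update -y
        curl http://169.254.169.254/latest/meta-data/instance-id > /tmp/instance-id.txt
        """

        ec2 = boto3.client("ec2")
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="t3.micro",
            MinCount=1, MaxCount=1,
            UserData=user_data,        # runs once, as root, at first boot
        )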
    3. Networking with EC2
      1. 3 different types of networking cards
        1. Elastic Network Interface (ENI)
          1. For basic, day-to-day networking
          2. Use cases:
            1. Create a management network
            2. Use network and security appliances in a VPC
            3. Create dual-homed instances with workloads and roles on distinct subnets
            4. Create a low-budget, HA solution
          3. EC2 instances will have an ENI attached by default
        2. Enhanced Networking (EN)
          1. Uses single root I/O virtualization (SR-IOV) to provide high performance
          2. For high-performance networking between 10 Gbps and 100 Gbps
          3. Types of EN:
            1. Elastic Network Adapter (ENA)
              1. Supports network speeds of up to 100 Gbps for supported instance types
            2. Intel 82599 Virtual Function (VF) interface
              1. Used in older instances
              2. Always choose ENA over VF
        3. Elastic Fabric Adapter (EFA)
          1. Accelerates High Performance Computing (HPC) and ML apps
          2. Can also use OS-bypass
            1. Enables HPC and ML apps to bypass the OS kernel and communicate directly with the EFA device - Linux only
    1. Optimizing EC2 with Placement Groups
      1. 3 types of placement groups
        1. Cluster Placement Groups
          1. Grouping of instances within a single AZ
          2. Recommended for apps that need low network latency, high network throughput or both
          3. Only certain instance types can be launched into a cluster PG
          4. Cannot span multiple AZs
          5. AWS recommends homogenous instances within cluster
        2. Spread Placement Group
          1. Each placed on distinct underlying hardware
          2. Recommended for apps that have small number of critical instances that should be kept separate
          3. Used for individual instances
          4. Can span multiple AZs
        3. Partition Placement Group
          1. Each partition PG has its own set of racks, each rack has its own network and power source
          2. No two partitions within PG share the same racks, allowing you to isolate impact of HW failure
          3. EC2 divides each group into logical segments called partitions (basically = a rack)
          4. Can span multiple AZs
      2. You can't merge PGs
      3. You can move an existing instance into a PG
        1. Must be in stopped state
        2. Has to be done via CLI or SDK
    2. EC2 Hibernation
      1. When you hibernate an EC2 instance, the OS is told to perform suspend-to-disk
        1. Saves the contents from the instance memory (RAM) to your EBS root volume
        2. We persist the instance EBS root volume and any attached EBS data volumes
      2. Instance RAM must be less than 150GB
      3. Instance families include - C, M, and R instance families
      4. Available for Windows, Linux 2 AMI, Ubuntu
      5. Instances cannot be hibernated for more than 60 days
      6. Available for on-demand and reserved instances
    3. Deploying vCenter in AWS with VMWare Cloud on AWS
      1. Used by orgs for private cloud deployment
      2. Use Cases - why VMWare on AWS
        1. Hybrid Cloud
        2. Cloud Migration
        3. Disaster Recovery
        4. Leverage AWS Services
    4. AWS Outposts
      1. Brings the AWS data center directly to you, on-prem
      2. Allows you to have AWS services in your data center
      3. Benefits:
        1. Allows for hybrid cloud
        2. Fully managed by AWS
        3. Consistency
      4. Outposts Family members
        1. Outposts Racks - large
        2. Outposts Servers - smaller
  1. Elastic Block Store (EBS)
    1. EBS overview
      1. Virtual disk, storage volume you can attach to EC2 instances
      2. Use it like any system disk: install apps and operating systems, run DBs, store data, create file systems
      3. Designed for mission-critical workloads - highly available and automatically replicated within a single AZ
    2. Different EBS Volume Types
      1. General Purpose SSD
        1. gp2/3
        2. Balance of prices and performance
        3. Good for boot volumes and general apps
      2. Provisioned IOPS SSD (PIOPS)
        1. io1/2
        2. Super fast, high performance, most expensive
        3. IO intensive apps, high durability
      3. Throughput Optimized HDD (ST1)
        1. Low-cost HDD volume
        2. Frequently accessed, throughput-intensive workloads
          1. Throughput = used more for big data, data warehouses, ETL, and log processing
        3. Cost effective way to store mountains of data
        4. CANNOT be a boot volume
      4. Cold HDD (SC1)
        1. Lowest cost option
        2. Good choice for colder data requiring fewer scans per day
        3. Good for apps that need lowest cost and performance is not a factor
        4. CANNOT be a boot volume
          1. Only static images, file system
    3. IOPS vs Throughput
      1. IOPS
        1. Measures the number of read and write Operations/second
        2. Important for quick transactions, low-latency apps, transactional workloads
        3. Choose provisioned IOPS SSD (io1/2)
      2. Throughput
        1. Measures the amount of data read or written per second (MB/s)
        2. Important metrics for large datasets, large IO sizes, complex queries
        3. Ability to deal with large datasets
        4. Choose throughput optimized HDD (ST1)
    4. Volumes vs Snapshots
      1. Volumes exist on EBS
        1. Must have a minimum of 1 volume per EC2 instance - called root device volume
      2. Snapshots exist on S3
        1. Point in time copy of a volume
        2. Are incremental
        3. For consistent snapshots: stop instance
        4. Can only share snapshots within region they were created, if want to share outside, have to copy to destination region first
    5. Things to know about EBS’s:
      1. Can resize on the fly, just resize the filesystem
      2. Can change volume types on the fly
      3. EBS will always be in the same AZ as EC2
      4. If we stop an instance, data is kept on EBS disk
      5. EBS volumes are NOT encrypted by default
    6. EBS Encryption
      1. Uses KMS customer master keys (CMK) when creating encrypted volumes and snapshots
      2. Data at rest is encrypted in volume
      3. Data inflight between instance and volume is encrypted
      4. All volumes created from the snapshot are encrypted
      5. End-to-end encryption
      6. Important to remember: copying an unencrypted snapshot allows encryption
        1. 4 steps to encrypt an unencrypted volume:
          1. Create a snapshot of the unencrypted volume
          2. Create a copy of the snapshot and select the encrypt option
          3. Create an AMI from the encrypted snapshot
          4. Use that AMI to launch new encrypted instances
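      7. Example - a minimal sketch (Python/boto3) of steps 1-2 above (snapshot the unencrypted volume, then copy the snapshot with encryption); the volume ID and region are hypothetical:
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                   Description="unencrypted source")
        ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

        encrypted_copy = ec2.copy_snapshot(
            SourceSnapshotId=snap["SnapshotId"],
            SourceRegion="us-east-1",
            Encrypted=True,   # the copy is encrypted (default KMS key unless KmsKeyId is given)
        )
        # From here: create an AMI from the encrypted snapshot and launch new encrypted instances from it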
  2. Elastic File System (EFS)
    1. Managed NFS (Network File System) that can be mounted on many EC2 instances
    2. Shared storage
    3. EFS provides NFS-based (NFSv4) network-attached storage for EC2 instances
    4. EC2 instances connect to EFS through mount targets
    5. Use Cases:
      1. Web server farms, content mgmt systems, shared db access
    6. Uses NFSv4 protocol
    7. Linux-Based AMIs only (not windows)
    8. Encryption at rest with KMS
    9. Performance
      1. Amazing performance capabilities
      2. 1000s concurrent connections
      3. 10 Gbps throughput
      4. Scales to petabytes
    10. Set the performance characteristics:
      1. General Purpose - web servers, content management
      2. Max I/O - big data, media processing
    11. Storage Tiers for EFS
      1. Standard - frequently accessed files
      2. Infrequently Accessed
    12. FSx For Windows
      1. Provides a fully managed native Microsoft Windows file system so you can easily move your windows-based apps that require file storage to AWS
      2. Built on Windows servers
      3. If you see a scenario mentioning any of the following, think FSx for Windows:
        1. SharePoint
        2. Shared storage for Windows
        3. Active Directory migration
      4. Managed Windows Server that runs Windows Server Message Block (SMB)-based file services
      5. Supports AD users, ACLs, groups, and security policies, along with Distributed File System (DFS) namespaces and replication
    13. FSx for Lustre:
      1. Managed file system that is optimized for compute-intensive workloads
      2. Use Cases:
        1. High performance computing, AI, ML, Media Data processing workflows, electronic design automation
      3. With FSx for Lustre, you can launch and run a Lustre file system that can process massive datasets at up to 100s of Gbps of throughput, millions of IOPS, and sub-millisecond latencies
    14. When To pick EFS vs FSx for Windows vs FSx for Lustre
      1. EFS
        1. Need distributed, highly resilient storage for Linux
      2. FSx for Windows
        1. Central storage for windows (IIS server, AD, SQL Server, Sharepoint)
      3. FSx for Lustre
        1. High speed, high-capacity, AI, ML
        2. IMPORTANT: Can store data directly on S3
  3. Amazon Machine Images: EBS vs Instance Store
    1. An AMI provides the info required to launch an instance
    2. *AMIs are region-specific
    3. 5 things you can base your AMIs on:
      1. Region
      2. OS
      3. Architecture (32 vs 64-bit)
      4. Launch permissions
      5. Storage for the root device (root volume)
    4. All AMIs are categorized as either backed by one of these:
      1. EBS
        1. The root device for an instance launched from the AMI is an EBS volume created from EBS snapshot
        2. CAN be stopped
        3. Will not lose data if instance is stopped
        4. By default, the root volume will be deleted on termination, but you can tell AWS to keep the root EBS volume
        5. PERMANENT storage
      2. Instance Store
        1. Root device for an instance launched from the AMI is an instance store volume created from a template stored in S3
        2. Instance store volumes are ephemeral storage
          1. Meaning the instance cannot be stopped
          2. If the underlying host fails, you will lose your data
          3. CAN reboot the instance without losing your data
          4. If you terminate your instance, you will lose the instance store volume
  1. AWS Backup
    1. Allows you to consolidate your backups across multiple AWS Services such as EC2, EBS, EFS, FSx for Lustre, FSx for Windows file server and AWS Storage Gateway
    2. Backups can be used with AWS Organizations to backup multiple AWS accounts in your org
    3. Gives you centralized control across all services, in multiple AWS accounts across the entire AWS org
    4. Benefits
      1. Central management
      2. Automation
      3. Improved Compliance
        1. Policies can be enforced, and encryption
        2. Auditing is easy
  2. Relational Database Service
    1. 6 different RDS engines
      1. SQL Server
      2. Oracle
      3. MySQL
      4. PostgreSQL
      5. MariaDB
      6. Aurora
    2. When to use RDS’s:
      1. Generally used for Online Transaction Processing (OLTP) workloads
        1. OLTP: transaction
          1. Large numbers of small transactions in real-time
        2. Different than OLAP (Online Analytical Processing)
          1. OLAP:
            1. Processes complex queries to analyze historical data
            2. All about data analysis using large amounts of data as well as complex queries that take a long time
            3. RDS is NOT suitable for this purpose → use a data warehouse option like Redshift, which is optimized for OLAP
    1. Multi-AZ RDSs
      1. Aurora cannot be single AZ
        1. All others can be configured to be multi-AZ
      2. Creates an exact copy of your prod db in another AZ, automatically
        1. When you write to your prod db, this write will automatically synchronize to the standby db
      3. Unplanned Failure or Maintenance:
        1. Amazon auto detects any issues and will auto failover to the standby db via updating DNS
      4. Multi-AZ is for DISASTER RECOVERY, not for performance
        1. CANNOT connect to standby db when primary db is active
    2. Increase read performance with read replicas
      1. Read replica is a read-only copy of the primary db
      2. You run queries against the read-only copy and not the primary db
      3. Read replicas are for PERFORMANCE boosting
      4. Each read replica has its own unique DNS endpoint
      5. Read replicas can be promoted as their own dbs, but it breaks the replication
        1. For analytics for example
      6. Multiple read replicas are supported = up to 5 to each db instance
      7. Read replicas require auto backups to be turned on
    3. Aurora
      1. MySQL- and PostgreSQL-compatible relational database engine that combines the speed and availability of high-end commercial dbs with the simplicity and cost-effectiveness of open-source dbs
      2. 2 copies of data in each AZ with minimum of 3 AZs → 6 copies of data
      3. Aurora storage is self-healing
        1. Data blocks and disks are continuously scanned for errors and repaired automatically
      4. 3 types of Aurora Replicas Available:
        1. Aurora Replicas = 15 read replicas
        2. MySQL Replicas = 5 read replicas with Aurora MySQL
        3. PostgreSQL = 5 read replicas with Aurora PostgreSQL
      5. Aurora Serverless
        1. An on-demand, auto-scaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora
        2. An Aurora serverless db cluster automatically starts up, shuts down, and scales capacity up or down based on your app’s needs
        3. Use Cases:
          1. For spikey workloads
          2. Relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
  1. DynamoDB
    1. Proprietary NON-relational DB
    2. Fast and flexible NoSQL db service for all applications that need consistent, single-digit millisecond latency at any scale
    3. Fully managed db that supports both document and key-value data models
    4. Use Cases:
      1. Flexible data model and reliable performance make it great fit for mobile, web, gaming, ad-tech, IoT, etc
    5. 4 facts on DynamoDB:
      1. All stored on SSD Storage
      2. Spread across 3 geographically distinct data centers
      3. Eventually consistent reads by default
        1. This means that consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data. Best read performance
      4. Can opt for strongly consistent reads
        1. This means that all copies return a result that reflects all writes that received a successful response prior to that read
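        2. Example - a minimal sketch (Python/boto3) of the same GetItem as an eventually consistent read (default) vs a strongly consistent read; table and key names are hypothetical:
          import boto3

          ddb = boto3.client("dynamodb")

          # eventually consistent read (the default)
          ddb.get_item(TableName="Orders", Key={"order_id": {"S": "1234"}})

          # strongly consistent read
          ddb.get_item(TableName="Orders", Key={"order_id": {"S": "1234"}},
                       ConsistentRead=True)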
    6. DynamoDB Accelerator (DAX)
      1. Fully managed, HA, in-memory cache
      2. 10x performance improvement
      3. Reduces request time from milliseconds to microseconds
      4. Compatible with DynamoDB API calls
      5. Sits in front of DynamoDB
    7. DynamoDB Security
      1. Encryption at rest with KMS
      2. Can connect with site-to-site VPN
      3. Can connect with Direct Connect (DX)
      4. Works with IAM policies and roles
        1. Fine-grained access
      5. Integrates with CloudWatch and CloudTrail
      6. VPC endpoints-compatible
    8. DynamoDB Transactions
      1. ACID Diagram/Methodology
        1. Atomic
          1. All changes to the data must be performed successfully or not at all
        2. Consistent
          1. Data must be in a consistent state before and after the transaction
        3. Isolated
          1. No other process can change the data while the transaction is running
        4. Durable
          1. The changes made by a transaction must persist
      2. ACID basically means if anything fails, it all rolls back
      3. DynamoDB transactions provide developers ACID across 1 or more tables within a single AWS account and region
      4. You can use transactions when building apps that require coordinated inserts, deletes, or updates to multiple items as part of a single logical business operation (see the sketch at the end of this section)
      5. To get ACID guarantees, you have to use the DynamoDB transactional APIs rather than the standard read/write operations
      6. Use Cases:
        1. Financial Transactions, fulfilling orders
      7. 3 options for reads
        1. Eventual consistency
        2. Strong consistency
        3. Transactional
      8. 2 options for writes:
        1. Standard
        2. Transactional
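      9. Example - a minimal sketch (Python/boto3) of the transaction referenced above: both writes succeed or neither does; table names, keys, and attributes are hypothetical:
        import boto3

        ddb = boto3.client("dynamodb")
        ddb.transact_write_items(TransactItems=[
            {"Put": {
                "TableName": "Orders",
                "Item": {"order_id": {"S": "1234"}, "status": {"S": "PLACED"}},
            }},
            {"Update": {
                "TableName": "Inventory",
                "Key": {"sku": {"S": "widget-1"}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",   # fail (and roll back both) if out of stock
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }},
        ])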
    9. DynamoDB Backups
      1. On-Demand Backup and Restore
      2. Point-In-Time Recovery (PITR)
        1. Protects against accidental writes or deletes
        2. Restore to any point in the last 35 days
        3. Incremental backups
        4. NOT enabled by default
        5. Latest restorable: 5 minutes in the past
    10. DynamoDB Streams
      1. A time-ordered sequence of item-level changes in a table (FIFO)
      2. Data is completely sequenced
      3. These sequences are stored in DynamoDB Streams
        1. Stored for 24 hours
      4. Sequences are broken up into shards
        1. A shard is a bunch of data that has sequential sequence numbers
      5. Every time you make a change to a DynamoDB table, that change is stored sequentially in a stream record, which is broken up into shards
      6. Can combine streams with Lambda functions for functionality like stored procedures
    11. DynamoDB Global Tables
      1. Managed multi-master, multi-region replication
      2. Way of replicating your DynamoDB tables from one region to another
      3. Great for globally distributed apps
      4. This is based on DynamoDB streams
        1. Streams must be turned on to enable Global Tables
      5. Multi-region redundancy for disaster recovery or HA
      6. Natively built into DynamoDB
    12. MongoDB-compatible DBs with Amazon DocumentDB
      1. DocumentDB
        1. Allows you to run MongoDB in the AWS cloud
        2. A managed db service that scales with your workloads and safely and durably stores your db info
        3. NoSQL
        4. Direct move for MongoDB
        5. Cannot run Mongo workloads on DynamoDB so MUST use DocumentDB
    13. Amazon Keyspaces
      1. Run Apache Cassandra Workloads with Keyspaces
      2. Cassandra is a distributed (runs on many machines) database that uses NoSQL, primarily for big data solutions
      3. Keyspaces allows you to run cassandra workloads on AWS and is fully managed and serverless, auto-scaled
    14. Amazon Neptune
      1. Implements graph DBs - stores nodes and relationships instead of tables or documents
    15. Amazon Quantum Ledger DB (QLDB)
      1. For Ledger DB
        1. Are NoSQL dbs that are immutable, transparent, and have a cryptographically verifiable transaction log that is owned by one authority
      2. QLDB is fully managed ledger db
    16. Amazon Timestream
      1. Time-series data are data points that are logged over a series of time, allowing you to track your data
      2. A serverless, fully managed db service for time-series data
      3. Can analyze trillions of events/day up to 1000x faster and at 1/10th the cost of traditional RDSs
  2. Virtual Private Cloud (VPC) Networking
    1. VPC Overview
      1. Virtual data center in the cloud
      2. Logically isolated part of AWS cloud where you can define your own network with complete control of your virtual network
      3. Can additionally create a hardware VPN connection between your corporate data center and your VPC and leverage the AWS cloud as an extension of your corporate data center
      4. Attach a virtual private gateway to our VPC to establish a VPN and connect to our instances from our private corporate data center
      5. By default we have 1 VPC in each region
    2. What can we do with a VPC
      1. Use route tables to configure routing between subnets
      2. Use Internet Gateway to create secure access to internet
      3. Use Network Access Control Lists (NACLs) to block specific IP addresses
      4. Default VPC
        1. User friendly
        2. All subnets in default VPC have a route out to the internet
        3. Each EC2 instance has a public and private IP address
        4. Has route table and NACL associated with it
      5. Custom VPC
    3. Steps to set up a custom VPC (sketched at the end of this list):
      1. Choose an IPv4 CIDR block
        1. Note: the first 4 IP addresses and the last IP address in each subnet CIDR block are reserved by AWS
      2. Choose Tenancy
      3. By default, creates:
        1. Security Group
        2. Route Table
        3. NACL
      4. Create subnet associations
      5. Create internet gateway and attach to VPC
      6. Set up route table with route out to internet
      7. Associate the subnet with the route table
      8. Create Security group
        1. With inbound, outbound rules
        2. Associate EC2 instance with Security Group
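      9. Example - a minimal sketch (Python/boto3) of the steps above; CIDR blocks and names are hypothetical:
        import boto3

        ec2 = boto3.client("ec2")

        vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
        subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

        igw = ec2.create_internet_gateway()["InternetGateway"]
        ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

        rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
        ec2.create_route(RouteTableId=rt["RouteTableId"],
                         DestinationCidrBlock="0.0.0.0/0",      # route out to the internet
                         GatewayId=igw["InternetGatewayId"])
        ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])

        sg = ec2.create_security_group(GroupName="web-sg", Description="allow https",
                                       VpcId=vpc["VpcId"])
        ec2.authorize_security_group_ingress(GroupId=sg["GroupId"], IpProtocol="tcp",
                                             FromPort=443, ToPort=443, CidrIp="0.0.0.0/0")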
  3. Using NAT Gateways for internet access within private subnet
    1. For example, we need to patch db server
    2. NAT Gateway:
      1. You can use a Network Address Translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection with those instances
    3. How to do this:
      1. Create a NAT Gateway in our public subnet
      2. Allow our EC2 instance (in private subnet) to connect to the NAT Gateway
    4. 5 facts to remember:
      1. Redundant inside the AZ
      2. Starts at 5 Gbps and scales to 45 Gbps
      3. No need to patch
      4. Not associated with any security groups
      5. Automatically assigned a public IP address
  4. Security Groups
    1. Are virtual firewalls for EC2 instances
      1. By default, everything is blocked
    2. Are stateful - this means that if you send a request from your instance, the response traffic for that request is allowed to flow in, regardless of inbound security group rules
      1. Ie: responses to allowed inbound traffic are allowed to flow out regardless of outbound rules
  5. Network ACLs
    1. The first line of defense: an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets
    2. You might set up NACL rules similar to your security group rules as an added layer of security
    3. Overview:
      1. Default NACLs
        1. VPC automatically comes with default NACL and by default it allows all inbound and outbound traffic
      2. Custom NACLs
        1. By default block all inbound and outbound traffic until you add rules
      3. Each subnet in your VPC must be associated with a NACL
        1. If you don't explicitly associate a subnet with a NACL, the subnet is auto associated with the default NACL
      4. Can associate a NACL with multiple subnets, but each subnet can only have a single NACL associated with it at a time
      5. Have separate inbound and outbound NACLs
    4. Block IP addresses with NACLs NOT with Security Groups
      1. NACLs contain a numbered list of rules that are evaluated in order, starting with lowest numbered rule
        1. Once a match is found, the rule is applied and the remaining rules are not evaluated
        2. If you want to deny a single IP address, the deny rule must be numbered BEFORE the allow-all rule (see the sketch at the end of this section)
      2. NACLs are stateless
        1. This means that responses to allowed inbound traffic are subject to the rules for outbound traffic and vice versa
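    5. Example - a minimal sketch (Python/boto3) of the deny-then-allow rule ordering described above; the NACL ID and IP address are hypothetical:
      import boto3

      ec2 = boto3.client("ec2")

      # rule 90: deny one IP address - evaluated first because it has the lower number
      ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=90,
                                   Protocol="-1", RuleAction="deny", Egress=False,
                                   CidrBlock="203.0.113.10/32")

      # rule 100: allow everything else
      ec2.create_network_acl_entry(NetworkAclId="acl-0123456789abcdef0", RuleNumber=100,
                                   Protocol="-1", RuleAction="allow", Egress=False,
                                   CidrBlock="0.0.0.0/0")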
  6. VPC Endpoints
    1. Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection
    2. Like a NAT gateway, but it doesn't use the public internet; it uses Amazon's backbone network - traffic stays within the AWS environment
    3. Endpoints are virtual devices
      1. Horizontally scaled, redundant, and HA VPC components that allow communication between instances on your VPC and services without imposing availability risks or bandwidth constraints on your network traffic
    4. Remember that NAT gateways have a 5-45 Gbps restriction - you don't want that restriction if you have an EC2 instance writing to S3, so you may have it go through the virtual endpoint (VPC endpoint) instead
    5. 2 Types of endpoints:
      1. Interface Endpoint
        1. An ENI with a private IP address that serves as an entry point for traffic headed to a supported service
      2. Gateway Endpoint
        1. Similar to NAT gateway
        2. Virtual device that supports connection to S3 and DynamoDB
    6. Use Case: you want to connect to AWS services without leaving the Amazon internal network = VPC endpoint
  7. VPC Peering
    1. Allows you to connect 1 VPC with another via a direct network route using private IP addresses
      1. Instances behave as if they were on the same private network
      2. Can peer VPCs with other AWS accounts as well as other VPCs in same account
      3. Can peer between regions
    2. Is in a star configuration with one central VPC
      1. No transitive peering
    3. Cannot have overlapping CIDR address ranges between peered VPCs
  8. PrivateLink
    1. Opening up your services in a VPC to another VPC can be done in two ways:
      1. Open VPC up to internet
      2. Use VPC Peering - whole network is accessible to peer
    2. The best way to expose a service VPC to tens, hundreds, or thousands of customer VPCs is through PrivateLink
      1. Does NOT require peering, route tables, NAT gateways, internet gateways, etc
      2. DOES require a Network Load Balancer on the service VPC and an ENI on the customer VPC
  9. CloudHub
    1. Useful if you have multiple sites, each with its own VPN connection, use CH to connect those sites together
    2. Overview:
      1. Hub and spoke model
      2. Low cost and easy to manage
      3. Operates over public internet, but all traffic between customer gateway and CloudHub is encrypted
      4. Essentially aggregating VPN connections to single entry point
  10. Direct Connect (DX)
    1. A cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS
    2. Private connectivity
    3. Can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections
    4. Instead of VPN
    5. Two types of Direct Connect Connections:
      1. Dedicated Connection
        1. Physical ethernet connection associated with a single customer
      2. Hosted Connection
        1. Physical ethernet connection that an AWS Direct Connect Partner (verizon, etc) provisions on behalf of a customer
  11. Transit Gateway
    1. Connects VPCs and on-prem networks through a central hub
    2. Simplifies network by ending complex peering relationships
    3. Acts as a cloud router - each connection is only made once
    4. Connect VPCs to Transit Gateway
      1. Everything connected to TG will be able to talk directly
    5. Facts
      1. Allows you to have transitive peering between thousands of VPCs and on-prem datacenters
      2. Works on regional basis, but can have it across multiple regions
      3. Can use it across multiple AWS accounts using RAM (Resource Access Manager)
      4. Use route table to limit how VPCs talk to one another
      5. Supports IP Multicast which is not supported by any other AWS Service
  12. Wavelength
    1. Embeds AWS compute and storage service within 5G networks, providing mobile edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications
  13. Route53
    1. Overview
      1. Domain Registrars: are authorities that can assign domain names directly under one or more top-level Domain
    2. Common DNS Record Types
      1. SOA: Start Of Authority Record
        1. Stores info about:
          1. Name of server that supplied the data for the zone
          2. Administrator of the zone
          3. Current version of the data file
          4. The default number of seconds for the TTL file on resource records
        2. How it works:
          1. Starts with NS (Name Server) records
            1. NS records are used by top-level domain servers to direct traffic to the content DNS server that contains the authoritative DNS records
          2. So the browser goes to the top-level domain first (.com) and looks up 'ACG'
          3. The TLD gives the browser an NS record indicating where the SOA is stored
          4. The browser then queries the name server from the NS record and gets the SOA
          5. The Start of Authority contains all of our DNS records
      2. A Record (or address record)
        1. The fundamental type of DNS record
        2. Used by a computer to translate the name of the domain to an IP address
        3. Most common type of DNS record
        4. TTL = time to live
          1. Length that a DNS record is cached on either the resolving server or the user’s own local PC
          2. The lower the TTL, the faster changes to DNS records propagate through the internet
      3. CNAME
        1. Canonical name can be used to resolve one domain name to another
          1. Ex: m.acg.com and mobile.acg.com resolve to the same address
      4. Alias Records
        1. Used to map resource sets in your hosted zone to load balancers, CloudFront distros, or S3 buckets that are configured as websites
        2. Works like a CNAME record in that it can map one dns name to another, but
          1. CNAMES cannot be used for naked domain names/zone apex record
          2. Alias Records CAN be used to map naked domain names/zone apex record
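      5. Example - a minimal sketch (Python/boto3) of an alias A record on the zone apex pointing at a load balancer; the hosted zone IDs and DNS names are hypothetical:
        import boto3

        route53 = boto3.client("route53")
        route53.change_resource_record_sets(
            HostedZoneId="Z111111QQQQQQQ",                     # your hosted zone
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",                     # naked domain / zone apex
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",      # the load balancer's own zone ID
                        "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]},
        )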
    3. Route53 Overview
      1. Amazon’s DNS service, that allows you to register domain names, create hosted zones, and manage and create DNS records
      2. 7 Routing Policies available with Route53:
        1. Simple Routing Policy
          1. Can only have one record with multiple IP addresses
          2. If you specify multiple values in a record, route53 returns all values to the user in a random order
        2. Weighted Routing Policy
          1. Allows you to split your traffic based on assigned weights
          2. Health Checks
            1. Can set health checks on individual record sets/servers
            2. If a record set/server fails a health check, it will be removed from Route53 until it passes the check
            3. While it is down, no traffic will be routed to it, but traffic will resume when it passes
          3. Create a health check for each weighted route that we are going to create to monitor the endpoint (monitor by IP address)
        3. Failover Routing Policy
          1. When you want to create an active/passive setup
          2. Route53 will monitor the health of your primary site using health checks and auto-route traffic if primary site fails the check
        4. Geolocation Routing
          1. Lets you choose where your traffic will be sent based on the geographical location of your users
          2. Based on the location from which DNS queries originate; the end location of your user
        5. Geoproximity Routing Policy
          1. Uses Route53 Traffic Flow to build a routing system that uses a combination of:
            1. Geographic location
            2. Latency
            3. Availability
          2. Routes traffic from your users to your cloud or on-prem endpoints
          3. Can build from scratch or use templates and customize
        6. Latency Routing Policy
          1. Allows you to route your traffic based on the lowest network latency for your end user
          2. Create a latency resource record set for the EC2 (or ELB) resource in each region that hosts your website
            1. When Route53 receives a query for your site, it selects the latency resource record set for the region that gives you the lowest latency
        7. Multivalue Answer Routing Policy
          1. Lets you configure route53 to return multiple values, such as IP addresses for your web server, in response to DNS queries
          2. Basically similar to simple routing, however, it allows you to put health checks on each record set
  1. Elastic Load Balancers (ELBs)
    1. Auto distributes incoming traffic across multiple targets
      1. Can also be done across AZs
    2. 3 types of ELBs
      1. Application Load Balancer
        1. Best suited for balancing of HTTP and HTTPs Traffic
        2. Operates at layer 7
        3. Application-aware load balancer
        4. Intelligent load balancer
      2. Network Load Balancer
        1. Operates at the connection level (Layer 4)
        2. Capable of handling millions of requests/sec, low latencies
        3. A performance load balancer
      3. Classic Load Balancer
        1. Legacy load balancer
        2. Can load balance HTTP/HTTPS applications and use Layer 7-specific features such as X-Forwarded-For headers and sticky sessions
        3. For Test/Dev
    3. ELBs can be configured with Health Checks
      1. They periodically send requests to the load balancer’s registered instances to test their states [InService vs OutOfService returns]
    4. Application Load Balancers
      1. Layer 7, App-Aware Load Balancing
        1. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action
      2. Listeners
        1. A listener checks for connection requests from clients using the protocol and port you configure
        2. You define the rules that determine how the load balancer routes requests to its registered targets
          1. Each rule consists of priority, one or more actions, and one or more conditions
      3. Rules
        1. When conditions of rule are met, actions are performed
        2. Must define a default rule for each listener
      4. Target Group
        1. Each target group routes requests to one or more registered targets using the protocol and port you specify
      5. Path-Based Routing
        1. Enable path patterns to make load balancing decisions based on the URL path
        2. e.g. /image → forward to a specific target group (sketched at the end of this section)
      6. Limitations of App Load Balancers:
        1. Can ONLY support HTTP/HTTPS listeners
      7. Can enable sticky sessions with app load balancers, but traffic will be sent at the target group level, not specific EC2 instance
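      8. Example - a minimal sketch (Python/boto3) of a path-based listener rule like the /image case above; the listener and target group ARNs are hypothetical:
        import boto3

        elbv2 = boto3.client("elbv2")
        elbv2.create_rule(
            ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
            Priority=10,                     # rules are evaluated in priority order
            Conditions=[{"Field": "path-pattern", "Values": ["/image/*"]}],
            Actions=[{"Type": "forward",
                      "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/images/123"}],
        )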
    5. Network Load Balancer
      1. Layer 4, Connection layer
      2. Can handle millions of requests/sec
      3. When network load balancer has only unhealthy registered targets, it routes requests to ALL the registered targets - known as fail-open mode
      4. How it works
        1. Connection request received
          1. Load balancer selects a target from the target group for the default rule
          2. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration
        2. Listeners
          1. A listener checks for connection requests from clients using the protocol and port you configure
          2. The listener on a network load balancer then forwards the request to the target group
            1. There are NO rules, unlike the Application Load Balancer - cannot do intelligent routing at Layer 4
        3. Target Groups
          1. Each target group routes requests to one or more registered targets
        4. Supported protocols: TCP, TLS, UDP, TCP_UDP
        5. Encryption
          1. You can use a TLS listener to offload the work of encryption and decryption to your load balancer
      5. Use Cases:
        1. Best for load balancing TCP traffic when extreme performance is required
        2. Or if you need to use protocols that aren't supported by the Application Load Balancer
    1. Classic Load Balancer
      1. Legacy
      2. Can load balance HTTP/HTTPS apps and use Layer 7-specific features
      3. Can also use strict layer 4 load balancing for apps that rely purely on TCP protocol
      4. X-Forwarded-For Header
        1. When traffic is sent from a load balancer, the server access logs contain the IP address of the proxy or load balancer only
        2. To see the original IP address of the client the x-forwarded-for request header is used
      5. Gateway Timeouts with Classic load balancer
        1. If your application stops responding, the classic load balancer responds with a 504 error
          1. This means that the application is having issues
          2. Means the gateway has timed out
      6. Sticky Sessions
        1. Typically, the classic load balancer routes each request independently to the registered EC2 instance with the smallest load
        2. But with sticky sessions enabled, user will be sent to the same EC2 instance
        3. Problem could occur if we remove one of our EC2 instances while the user still has a sticky session going
          1. Load balancer will still try to route our user to that EC2 instance and they will get an error
          2. To fix this, we have to disable sticky sessions
    2. Deregistration Delay
      1. Aka Connection Draining with Classic load balancers
      2. Allows load balancers to keep existing connections open if the EC2 instances are deregistered or become unhealthy
      3. Can disable this if you want your load balancer to immediately close connections
  1. CloudWatch
    1. Monitoring and observability platform to give us insight into our AWS architecture
    2. Features
      1. System metrics
        1. The more managed a service is, the more you get out of the box
      2. Application Metrics
        1. By installing CloudWatch agent, you can get info from inside your EC2 instances
      3. Alarms
        1. No default alarms
        2. Can create an alarm to stop, terminate, reboot, or recover EC2 instances
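        3. Example - a minimal sketch (Python/boto3) of an alarm with an EC2 stop action for an idle (low CPU) instance; the instance ID, region, and thresholds are hypothetical:
          import boto3

          cw = boto3.client("cloudwatch")
          cw.put_metric_alarm(
              AlarmName="stop-idle-instance",
              Namespace="AWS/EC2", MetricName="CPUUtilization",
              Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
              Statistic="Average", Period=300, EvaluationPeriods=2,   # two 5-minute periods
              Threshold=5.0, ComparisonOperator="LessThanOrEqualToThreshold",
              AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],   # built-in EC2 stop action
          )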
    3. 2 kinds of metrics:
      1. Default
        1. CPU util, network throughput
      2. Custom
        1. Will need to be provided with CloudWatch agent installed on the host and reported back to CloudWatch because AWS cannot see past the hypervisor level for EC2 instances
        2. Ex: EC2 memory util, EBS storage capacity
    4. Standard vs Detailed monitoring
      1. Standard/Basic monitoring for EC2 provides metrics for your instances every 5 minutes
      2. Detailed monitoring provides metrics every 1 minute
    5. A period is the length of time associated with a specific CloudWatch stat - default period is 60 seconds
    6. CloudWatch Logs
      1. Tool that allows you to monitor, store, and access log files from a variety of different sources
      2. Gives you the ability to query your logs to look for potential issues or relevant data
      3. Terms:
        1. Log Event: data point, contains timestamp and data
        2. Log Stream: collection of log events from a single source
        3. Log Group: collection of log streams
          1. Ex may group all Apache web server host logs
      4. Features:
        1. Filter Patterns
        2. CloudWatch Log Insights
          1. Allows you to query all your logs using a SQL-like interactive query language
        3. Alarms
      5. What services act as a source for CloudWatch logs?
        1. EC2, Lambda, CloudTrail, RDS
      6. CloudWatch is our go-to log tool, except when the exam asks for a real-time solution (then it will be Kinesis)
  2. Amazon Managed Grafana
    1. Fully managed service that lets us securely and instantly query, correlate, and visualize operational metrics, logs, and traces from different sources
    2. Overview
      1. Grafana made easy
      2. Logical separation with workspaces
        1. Workspaces are logical Grafana servers that allow for separation of data visualizations and querying
      3. Data Sources for Grafana: CloudWatch, Managed Service for Prometheus, OpenSearch Service, Timestream
    3. Use Cases:
      1. Container metrics visualizations
        1. Connect to data sources like Prometheus for visualizing EKS, ECS, or own Kube cluster metrics
      2. IoT
  3. Amazon Managed Service for Prometheus
    1. Serverless, Prometheus-compatible service used for securely monitoring container metrics at scale
    2. Overview
      1. Still use open-source prometheus, but gives you AWS managed scaling and HA
      2. Auto Scaling
      3. HA- replicates data across three AZs in same region
      4. EKS and self-managed Kubernetes clusters
      5. PromQL: the open source query language for exploring and extracting data
      6. Data Retention:
        1. Data is stored in workspaces for 150 days and deleted after that
  4. VPC Flow Logs
    1. Capture information about the IP traffic going to and from network interfaces in your VPC; can be configured to send the logs to an S3 bucket
  5. Horizontal and Vertical Scaling
    1. Launch Templates
      1. Specifies all of the needed settings that go into building out an EC2 instance
      2. More than just auto-scaling
      3. More granularity
      4. AWS recommends Launch templates over Configurations
        1. Launch configurations are only for auto scaling, are immutable, and have limited configuration options - don't use them
    2. Create template for Auto Scaling Group
  6. Auto Scaling
    1. Auto Scaling Groups
      1. Contains a collection of EC2 instances that are treated as a collective group for the purposes of scaling and management
      2. What goes into auto scaling group?
        1. Define your template
          1. Pick from available launch templates or launch configurations
        2. Pick your networking and purchasing
          1. Pick networking space and purchasing options
        3. ELB configuration
          1. ELB sits in front of auto scaling group
          2. EC2 instances are registered behind a load balancer
          3. Auto scaling can be set to respect the load balancer health checks
        4. Set Scaling policies
          1. Min/Max/Desired capacity
        5. Notifications
          1. SNS can act as notification tool, alert when a scaling event occurs
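        6. Example - a minimal sketch (Python/boto3) pulling these pieces together into one create_auto_scaling_group call; the names, subnets, and ARNs are hypothetical:
          import boto3

          asg = boto3.client("autoscaling")
          asg.create_auto_scaling_group(
              AutoScalingGroupName="web-asg",
              LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
              MinSize=2, MaxSize=10, DesiredCapacity=2,
              VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # networking
              TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/123"],
              HealthCheckType="ELB",          # respect the load balancer health checks
              HealthCheckGracePeriod=300,     # warm-up time before health checks count
          )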
    2. Step Scaling Policies
      1. Increase or decrease the current capacity of a scalable target based on scaling adjustments, known as step adjustments
      2. Adjustments vary based on the size of the alarm breach
        1. All alarms that are breached are evaluated by application auto scaling
    3. Instance Warm-Up and Cooldown
      1. Warm-up period - time for EC2s to come up before being placed behind LB
      2. Cooldown - pauses auto scaling for a set amount of time (default is 5 minutes)
      3. Warmup and cooldown help to avoid thrashing
    4. Scaling types
      1. Reactive Scaling
        1. Once the load is there, you measure it and then determine if you need to create more or less resources
        2. Respond to data points in real-time, react
      2. Scheduled Scaling
        1. Predictable workload, create a scaling event to handle
      3. Predictive Scaling
        1. AWS uses ML algorithms to determine when to scale
        2. They are reevaluated every 24 hours to create a forecast for the next 48
    5. Steady-State Scaling
      1. Allows us to create a situation where a legacy codebase or resource that can't be scaled can automatically recover from failure
      2. Set Min/Max/Desired = 1
    6. CloudWatch is your number one tool for alerting auto scaling that you need more or less of something
    7. Scaling Relational DBs
      1. Most scaling options
      2. 4 ways to scale Relational DBs/4 types of scaling we can use to adjust our RD performance
        1. Vertical Scaling
          1. Resizing the db from one size to another, can create greater performance, increase power
        2. Scaling Storage
          1. Storage can be resized up, not down
          2. Except aurora which auto scales
        3. Read Replicas
          1. Way to scale “horizontally” - create read only copies of our data
        4. Aurora Serverless
          1. Can offload scaling to AWS - excels with unpredictable workloads
    8. Scaling Non Relational DBs
      1. DynamoDB
        1. AWS managed - simplified
        2. Provisioned Model
          1. Use case: predictable workload
          2. Need to overview past usage to predict and set limits
          3. Most cost effective
        3. On-Demand
          1. Use case: sporadic workload
          2. Pay more
        4. Can switch from on-demand to provisioned only once per 24 hours per table
      2. Non-Relational DB scaling
        1. Access patterns
        2. Design matters
          1. Avoiding hot keys will lead to better performance
  7. Simple Queue Service (SQS)
    1. Fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless apps
    2. Can sit between frontend and backend and kind of replace the Load Balancer
      1. Web front end dumps messages into the queue and then backend resources can poll that queue looking for that data whenever it is ready
      2. Does not require that active connection that the load balancer requires
    3. Poll-Based Messaging
      1. We have a producer of messages, consumer comes and gets message when ready
      2. Messaging queue that allows asynchronous processing of work
    4. SQS Settings
      1. Delivery Delay - default 0, up to 15 minutes
      2. Message Size - up to 256 KB of text in any format
      3. Encryption - encrypted in transit by default; encryption at rest is now enabled by default with SSE-SQS
        1. SSE-SQS = Server-Side Encryption using SQS-owned encryption keys
          1. Encryption at rest using the default SSE-SQS is supported at no charge for both standard and FIFO using HTTPS endpoints
      4. Message Retention
        1. Default retention is 4 days, can be set from 1 minute - 14 days, then purged
      5. Long vs Short Polling
        1. Long polling is not the default, but should be
        2. Short Polling
          1. Connect, checks if work, immediately disconnects if no work
          2. Burns CPU, additional API calls cost money
        3. Long Polling
          1. Connect, check if work, waits a bit
          2. Mostly will be the right answer
      6. Queue Depth
        1. This value can be a trigger for auto scaling
      7. Visibility Timeout
        1. Used to ensure proper handling of the message by backend EC2 instances
        2. The backend polls for the message, sees it, and downloads it from SQS to do the work - once the message is downloaded, SQS puts a lock on it called the visibility timeout, where the message remains in the queue but no one else can see it
        3. So if other instances are polling that queue, they will not see the locked message
        4. Default visibility timeout is 30 seconds, but can be changed
        5. If the EC2 instance that downloaded the message fails to process it and never tells SQS to delete (purge) it, the message reappears in the queue once the visibility timeout expires (see the sketch at the end of this SQS section)
      8. Dead-Letter Queues
        1. Without a DLQ, a message that cannot be processed by our backend just cycles: a backend EC2 pulls the message, hits an error, the 30-second visibility timeout expires, the message unlocks, another EC2 picks it up, and so on, until the message retention period is reached and the message is deleted
        2. By implementing DLQ: we create another SQS Queue that we can temporarily sideline messages into
        3. How it works:
          1. Set up a new queue and select it as the DLQ when setting up the primary SQS queue
          2. Set a number for retries in the primary queue
          3. Once the message hits that retry limit, it gets moved to the DLQ, where it stays until the message retention period expires, then it is deleted
        4. Can create an SQS DLQ for SNS topics
      9. SQS FIFO
        1. Standard SQS offers best effort ordering and tries not to duplicate, but may - nearly unlimited transactions/sec
        2. SQS FIFO guarantees the order and that no duplication will occur
        3. Limited to 300 messages/sec
        4. How it works:
          1. Message group ID field is a tag that specifies that a message belongs to a specific message group
          2. Message Deduplication ID is the token used to ensure no duplication within the deduplication interval: a unique value that ensures that your messages are unique
        5. More expensive than standard SQS
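  An illustrative boto3 sketch (not from the original notes) of long polling, the visibility timeout, and message deletion; the queue URL and the process() function are hypothetical placeholders:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

    # Long poll: wait up to 20 seconds for a message instead of returning immediately.
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,      # long polling
        VisibilityTimeout=60,    # lock the message for 60 seconds while we work on it
    )

    for message in response.get("Messages", []):
        process(message["Body"])  # placeholder for your own processing logic
        # Deleting the message tells SQS the work is done; otherwise it reappears
        # in the queue once the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])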
  1. Simple Notification Service (SNS)
    1. Used to push out notifications - proactively deliver the notification to an end-point rather than leaving it in a queue
    2. Fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication
    3. Texts and emails to users
    4. Push-Based Messaging
      1. Consumer does not have control to receive when ready, the sender sends it all the way to the consumer
    5. Will proactively deliver messages to the endpoints that are subscribed to it (see the publish sketch at the end of this section)
    6. SNS topics are subscribed to
    7. SNS Settings
      1. Subscribers
        1. what/who is going to receive the data from the topic
          1. Ex: Kinesis firehose, SQS, Lambda, email, HTTP(S), etc
      2. Message Size
        1. Max size of 256 KB of text in any format
      3. SNS does not retry messages, even if they fail to deliver
        1. Can store in an SQS DLQ to handle
      4. SNS FIFO only supports SQS as a subscriber
      5. Messages are encrypted in transit by default, and you can add encryption at rest
      6. Access Policies
        1. Can control who/what can publish to those SNS topics
        2. A resource policy can be added to a topic, similar to S3
          1. Have to make sure the access policy is set up properly between SNS and SQS so that SNS has access to the SQS queue
    8. CloudWatch uses SNS to deliver alarms
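  A minimal boto3 sketch (illustrative, not from the original notes) of publishing to a topic; the topic ARN is a placeholder:

    import boto3

    sns = boto3.client("sns")

    # Every subscriber attached to the topic (SQS queue, Lambda, email, etc.)
    # receives the pushed message.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:example-alerts",
        Subject="Scaling event",
        Message="An auto scaling event occurred in the production environment.",
    )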
  2. API Gateway
    1. Fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale
    2. “Front-door” to our apps so we can control what users talk to our resources
    3. Key features:
      1. Security
        1. Add security- one of the main reasons for using API Gateway in front of our applications
        2. Allows you to easily protect your endpoints by attaching a WAF; Can front API Gateway with a WAF - security at the edge
      2. Stop Abuse
        1. Set up rate limiting, DDoS protection
      3. Static stuff to S3, basically everything else to API Gateway
    4. Preferred method for getting API calls into your application and AWS environment
    5. Avoid hardcoding our access keys/secret keys with API Gateway
      1. Do not have to generate an IAM user to make calls to the backend, just send API call to API Gateway in front
    6. API Gateway supports versioning
  3. AWS Batch
    1. AWS Managed service that allows us to run batch computing workloads within AWS- these workloads run on either EC2 or Fargate/ECS
      1. Capable of provisioning accurately sized compute resources based on number of jobs submitted and optimizes the distribution of workloads
    2. Removes any heavy lifting for configuration and management of infrastructure required for computing
    3. Components:
      1. Jobs = units of work that are submitted to Batch (ie: shell scripts, executables, docker images)
      2. Job Definitions = specify how your jobs are to be run, essentially the blueprint for the resources in the job
      3. Job Queues = jobs get submitted to specific queues and reside there until scheduled to run in a compute environment
      4. Compute Environment = set of managed or unmanaged compute resources used to run your jobs (EC2 or ECS/Fargate)
    4. Fargate or EC2 Compute Environments
      1. Fargate is the recommended way of launching most batch jobs
        1. Scales and matches your needs with less likelihood of over provisioning
      2. EC2 is sometimes the best choice, though:
        1. When you need a custom AMI (can only be run via EC2)
        2. When you have high vCPU requirements
        3. When you have high GiB requirements
        4. Need GPU or Graviton CPU requirement
        5. When you need to use the linuxParameters parameter field
        6. When you have a large number of jobs, best to run on EC2 because jobs are dispatched at a higher rate than Fargate
    5. Batch vs Lambda
      1. Lambda has a 15 minute execution time limit, batch does not
      2. Lambda has limited disk space
      3. Lambda has limited runtimes, batch uses docker so any runtime can be used
  4. Amazon MQ
    1. Managed message broker service allowing easier migration of existing applications to the AWS Cloud
    2. Makes it easy for users to migrate to a message broker in the cloud from an existing application
      1. Can use a variety of programming languages, OS’s, and messaging protocols
    3. MQ Engine types:
      1. Currently supports both Apache ActiveMQ or RabbitMQ engine types
    4. SNS with SQS vs AmazonMQ
      1. Both have topics and queues
        1. Both allow for one-to-one or one-to-many messaging designs
      2. MQ is easy application migration: so if you are migrating an existing application, likely want MQ
      3. If you are starting with new Application - easier and better to use SNS with SQS
      4. AmazonMQ requires that you have private networking like VPC, Direct Connect, or VPN while SNS and SQS are publicly accessible by default
      5. MQ has NO default AWS integrations and does not integrate as easily with other services
    5. Configuring Brokers
      1. Single-Instance Broker
        1. One broker lives within one AZ
        2. RabbitMQ has a network load balancer in front in a single instance broker environment
      2. MQ Brokers
        1. Offers HA architectures to minimize downtime during maintenance
        2. Architecture depends on broker engine type
        3. AmazonMQ for Apache ActiveMQ
          1. With active/standby deployments, one instance will remain available at all times
          2. Configure network of brokers with separate maintenance windows
        4. AmazonMQ for RabbitMQ
          1. Cluster deployments are logical groupings of three broker nodes across multiple AZs sitting behind a Network LB
      3. MQ is good for specific messaging protocols: JMS or messaging protocols like AMQP0-9-1, AMQP 1.0, MQTT, OpenWire, and STOMP
  5. AWS Step Functions
    1. A serverless orchestration service combining different AWS services for business applications
    2. Provides a graphical console for easier application workflow views and flows
    3. Components
      1. State Machine: a particular workflow with different event-driven steps
      2. Tasks: specific states within a workflow (state machine) representing a single unit of work
      3. States: every single step within a workflow = a state
    4. Two different types of workflows with Step Functions:
      1. Standard
        1. Exactly-once execution
        2. Can run for up to 1 year
        3. Useful for long-running workflows that need to have auditable history
        4. Rates up to 2000 executions/sec
        5. Pricing based per state transition
      2. Express
        1. Have an ‘at-least-once’ execution → means possible duplication you have to handle
        2. Only run for up to 5 minutes
        3. Useful for high-event-rate workloads
        4. Use Case: IoT data streaming
        5. Pricing based on number of executions, durations, and memory consumed
    5. States and State Machines
      1. Individual states are flexible
        1. Leverage states to either make decisions based on input, perform certain actions, or pass input
      2. Amazon States Language (ASL)
        1. States and workflows are defined in ASL
      3. States are elements within your state machines
        1. States are referred to by name, which must be unique within the workflow
    6. Integrates with Lambda, Batch, Dynamo, SNS, Fargate, API Gateway, etc (a sketch for starting an execution via the SDK follows this section)
    7. Different States that exist:
      1. Pass - no work
      2. Task - single unit of work performed
      3. Choice - adds branching logic to state machines
      4. Wait - time delay
      5. Succeed - stops executions successfully
      6. Fail - stops executions and marks them as failures
      7. Parallel - runs parallel branches of executions within state machines
      8. Map - runs a set of steps based on elements of an input array
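  An illustrative boto3 sketch (not from the original notes) of starting a Standard workflow execution; the state machine ARN and input are hypothetical placeholders:

    import boto3
    import json

    sfn = boto3.client("stepfunctions")

    # Standard workflows return an execution ARN you can use to track the run.
    response = sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:order-pipeline",
        input=json.dumps({"orderId": "12345"}),
    )
    print(response["executionArn"])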
  6. Amazon AppFlow
    1. Fully managed service that allows us to securely exchange data between a SaaS App and AWS
      1. Ex: Salesforce migrating data to S3
    2. Entire purpose is to ingest data
      1. Pulls data records from third-party SaaS vendors and stores them in S3
    3. Bi-directional: allows for bi-directional data transfers with some combinations of source and destination
    4. Concepts:
      1. Flow: transfer data between sources and destinations
      2. Data Mapping: determines how your source data is stored within your destination
      3. Filters: criteria set to control which data is transferred
      4. Trigger: determines how the flow is started
        1. Multiple options/supported types:
          1. Run on demand
          2. Run on event
          3. Run on schedule
  7. Redshift Databases
    1. Fully managed, petabyte-scale data warehouse service in the cloud
    2. Very large relational db traditionally used in big data
      1. Because it is relational, you can use standard SQL and BI tools to interact with it
      2. Best use is for BI applications
    3. Can store massive amounts of data - up to 16 PB of data
      1. Means you do not have to split up your large datasets
    4. Not a replacement for a traditional RDS - it would fall apart as the backend of your web app, for example
  8. Elastic Map Reduce (EMR)
    1. ETL tool
    2. Managed big data platform that allows you to process vast amounts of data using open-source tools, such as Spark, Hive, HBase, Flink, Hudi, and Presto
      1. Quickly use open source tools and get them running in our environment
    3. For this exam, EMR will be run on EC2 instances and you pick the open-source tool for AWS to manage on them
    4. Open-source cluster, managed fleet of EC2 instances running open-source tools
  9. Amazon Kinesis
    1. Allows you to ingest, process, and analyze real-time streaming data
    2. 2 forms of Kinesis:
      1. Data Streams
        1. Real-time streaming for ingesting data
        2. You are responsible for creating the consumer and scaling the stream
        3. Process for Kinesis Data Streams:
          1. Producers creating data
          2. Connect producers to Data Stream
          3. Decide how many shards you are going to create
            1. Shards can only handle a certain amount of data
          4. Consumer takes data in, processes it, and puts it into endpoints
            1. You have to create the consumer
            2. Endpoint could be S3, Dynamo, Redshift, EMR, …
            3. You have to use the Kinesis SDK to build the consumer application
            4. Handle scaling with the amount of shards
      2. Data Firehose
        1. Data transfer tool to get info into S3, Redshift, ElasticSearch, or Splunk
        2. Near-real-time
        3. Plug and play with AWS architecture
        4. Process for Kinesis Data Firehose:
          1. Limited supported endpoints- ElasticSearch service, S3, and Redshift, some 3rd party endpoints supported as well
          2. Place Data Firehose in between input and endpoint
          3. Handles the scaling and the building out of the consumer
    3. Kinesis Data Analytics
      1. Paired with Data Stream/Firehose, it does the analysis using standard SQL
      2. Makes it easy to tie Data Analytics into your pipeline
        1. Data comes in with Streams/Firehose and Data Analytics can transform/sanitize/format data in real-time as it gets pushed through
      3. Serverless, fully managed, auto scaling
    4. Kinesis vs SQS
      1. SQS does NOT provide real-time message delivery
      2. Kinesis DOES provide real-time message delivery (see the producer sketch after this section)
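  A minimal boto3 sketch (illustrative, not from the original notes) of a producer writing to a Kinesis Data Stream; the stream name and record contents are placeholders:

    import boto3
    import json

    kinesis = boto3.client("kinesis")

    # The partition key determines which shard receives the record.
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps({"user": "abc", "action": "page_view"}).encode("utf-8"),
        PartitionKey="abc",
    )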
  1. Amazon Athena
    1. An interactive query service that makes it easy to analyze data in S3 using standard SQL (see the sketch after this section)
    2. Allows you to directly query data in your S3 buckets without loading it into a database
    3. “Serverless SQL”
      1. Can use Athena to query logs stored in S3
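  An illustrative boto3 sketch (not from the original notes) of running an Athena query; the database, table, and results bucket are hypothetical placeholders:

    import boto3

    athena = boto3.client("athena")

    # Query results land in the S3 output location you specify.
    athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )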
  2. AWS Glue
    1. A serverless data ingestion service that makes it easy to discover, prepare, and combine data
    2. Allows you to perform ETL workloads without managing underlying servers
    3. Effectively replaces EMR - and with Glue, you don't have to spin up EC2 instances or use 3rd party tools to ETL
    4. Using Athena and Glue together:
      1. AWS S3 data is unstructured, unformatted – deploy Glue Crawlers to build a catalog/structure for that data
        1. Glue produces Data Catalog
      2. After glue, we have some options:
        1. Can use Amazon Redshift Spectrum - allows us to use Redshift without having to load data into Redshift db
        2. Athena - use to query the data catalog, and can even use Quicksight to visualize data
  3. Amazon QuickSight
    1. Amazon's version of Tableau
    2. Fully managed BI data visualization service, easily create dashboards
  4. AWS Data Pipeline
    1. A managed ETL service for automating management and transformation of your data, automatic retries for data-driven workflow
    2. Data driven web-service that allows you to define data-driven workflows
    3. Steps are dependent on previous tasks completing successfully
    4. Define parameters for data transformations - enforces your chosen logic
    5. Auto retries failed attempts
    6. Configure notifications via SNS
    7. Integrates easily with Dynamo, Redshift, RDS, S3 for data storage, and integrates with EC2 and EMR for compute needs
    8. Components:
      1. Pipeline Definition = specify the logic of your data management
      2. Managed Compute = service will create EC2 instances to perform your activities or leverage existing EC2
      3. Task Runners = (EC2) poll for different tasks and perform them when found
      4. Data Nodes= define the locations and types of data that will be input and output
      5. Activities = pipeline components that define the work to perform
    9. Use Cases:
      1. Processing data in EMR using Hadoop streaming
      2. Importing or exporting DynamoDB data
      3. Copying CSV files or data between S3 buckets
      4. Exporting RDS data to S3
      5. Copying data to Redshift
  5. Amazon Managed Streaming for Apache Kafka (Amazon MSK)
    1. Fully managed service for running data streaming apps that leverage Apache Kafka
    2. Provides control-plane operations; creates, updates, and deletes clusters as required
    3. Can leverage the Kafka data-plane operations for production and consuming streaming data
    4. Good for existing operations; allows support for existing apps, tools, and plugins
    5. Components:
      1. Broker Nodes
        1. Specify the amount of broker nodes per AZ you want at time of cluster creation
      2. Zookeeper Nodes
        1. Created for you
      3. Producers, Consumers, and Topics
        1. Kafka data-plane operations allow creation of topics and ability to produce/consume data
      4. Flexible Cluster Operations
        1. Perform cluster operations with the console, AWS CLI, or APIs within any SDK
    6. Resiliency in AmazonMSK:
      1. Auto Recovery
      2. Detected broker failures result in mitigation or replacement of unhealthy nodes
      3. Tries to reuse storage from the failed broker during failures to reduce the amount of data needing replication
      4. Impact time is limited to however long it takes MSK to complete detection and recovery
      5. After successful recovery, producer and consumer apps continue to communicate with the same IP as before
    7. Features:
      1. MSK Serverless
        1. Cluster type within AmazonMSK offering serverless cluster management - auto provisioning and scaling
        2. Fully compatible with Apache Kafka - use the same client apps for prod/cons data
      2. MSK Connect
        1. Allows developers to easily stream data to and from Apache Kafka clusters
    8. Security
      1. Integrates with Amazon KMS for SSE requirements
      2. Will always encrypt data at rest by default
      3. TLS1.2 by default in transit between brokers in clusters
    9. Logging
      1. Broker logs can be delivered to services like CloudWatch, S3, Data Firehose
      2. By default, metrics are gathered and sent to CloudWatch
      3. MSK API calls are logged to CloudTrail
  6. Amazon OpenSearch Service
    1. Managed service allowing you to run search and analytics engines for various use cases
    2. It is the successor to Amazon ElasticSearch Service
    3. Features:
      1. Allows you to perform quick analysis - quickly ingest, search, and analyze data in your clusters - commonly a part of an ETL process
      2. Easily scale cluster infrastructure running the OpenSearch services
    4. Security: leverage IAM for access control, VPC security groups, encryption at rest and in transit, and field-level security
    5. Multi-AZ capable service with Master nodes and automated snapshots
    6. Allows for SQL support for BI apps
    7. Integrates with CloudWatch, CloudTrail, S3, Kinesis - can set log streams to OpenSearch Service
      1. Common logging solution: create visualizations of log file analytics or feed BI tools/imports
  7. Serverless Overview
    1. Benefits
      1. Ease of use: we bring code, AWS handles everything else
      2. Event-Based: can be brought online in response to an event then go back offline
      3. True “pay for what you use” architecture: pay for provisioned resources and the length of runtime
    2. Example serverless services: Lambda, Fargate
  8. Lambda
    1. Serverless compute service that lets you run code without provisioning or managing the underlying server
    2. How to build a lambda function:
      1. Runtime selection: pick from an available run-time or bring your own. This is the environment your code will run in
      2. Set permissions: if your lambda function needs to make an API call in AWS, you need to attach a role
      3. Networking definitions: optionally, you can define the VPC, subnet, and security groups your functions are a part of
      4. Resource definitions: define the amount of available memory; this allocation determines how much CPU and RAM your code gets (a minimal handler sketch follows this section)
      5. Define Trigger: select what is going to kick off your lambda function to start
    3. Lambda has built-in logging and monitoring using CloudWatch
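  A minimal Python handler sketch (illustrative, not from the original notes); the event shape depends on whichever trigger you configure:

    import json

    def lambda_handler(event, context):
        # 'event' carries the trigger payload (e.g. an S3 or API Gateway event);
        # print output is captured automatically in CloudWatch Logs.
        print(json.dumps(event))
        return {"statusCode": 200, "body": json.dumps({"message": "Hello from Lambda"})}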
  9. AWS Serverless Application Repository
    1. Serverless App Repository
      1. Service that makes it easy for users to find, deploy, or even publish their own serverless apps
      2. Can share privately within organization or publicly
      3. How it works:
        1. You upload your application code and a Manifest File
          1. Manifest File: is known as the Serverless Application Model (SAM) Template
          2. SAM templates are basically CloudFormation templates
      4. Deeply integrated with the AWS lambda service - actually appears in the console
    2. 2 Options in Serverless App Repository:
      1. Publish: define apps with the SAM templates and make them available for others to find and deploy
        1. When you first publish your app, it is set to private by default
      2. Deploy: find and deploy published apps
  10. Container Overview
    1. Container
      1. Standard unit of software that packages up code and all its dependencies
    2. Terms:
      1. Docker File: text document that contains all the commands or instructions that will be used to build an image
      2. Image: Docker files build images, immutable file that contains the code, libraries, dependencies, and configuration files needed to build an app
      3. Registry: like GitHub for images, stores docker images for distribution
      4. Container: a running copy of an image
    3. ECS: Elastic Container Service
      1. Management of containers at scale
      2. Integrates natively with ELB
      3. Easy integration with roles to get permissions for containers - containers can have individual roles attached to them
      4. ECS only works in AWS
    4. EKS: Elastic Kubernetes Service
      1. Kubernetes: open source container manager, can be used on-prem and in the cloud
      2. EKS is the AWS-managed version
    5. ECS vs EKS
      1. ECS - simple, easy to integrate, but it does not work on-prem
      2. EKS - flexible, works in cloud and on-prem, but it is more work to configure and integrate with AWS
    6. Fargate
      1. Serverless compute engine for containers that works with both ECS and EKS (requires one of them)
      2. EC2 vs Fargate for container management
        1. If you use EC2:
          1. You are responsible for underlying OS
          2. Can better deal with long-running containers
          3. Multiple containers can share the same host
        2. If you use Fargate:
          1. No OS access - don’t have to manage
          2. Pay based on resources allocated and time run
          3. Better for short-running tasks
          4. Isolated environments
      3. Fargate vs Lambda
        1. Use fargate when you have more consistent workloads, allows for docker use across the organization and a greater level of control for developers
        2. Use lambda when you have unpredictable or inconsistent workloads, use for applications that can be expressed as a single function (lambda function responds to event and shuts down)
  11. EventBridge (formerly known as CloudWatch Events)
    1. Serverless event bus, allows you to pass events from a server to an endpoint
      1. Essentially the glue that holds your serverless apps together
    2. Creating an EventBridge Rule:
      1. Define Pattern: scheduled/invoked/etc?
      2. Select Event Bus: AWS-based event/Custom event/Partner event?
      3. Select your target: what happens when this event kicks off?
      4. Remember to tag it
    3. Remember:
      1. EventBridge is the glue - it triggers an action based on some event in AWS, holds together a serverless application and Lambda functions
        1. An API call in AWS can alert a variety of endpoints (see the put-events sketch after this section)
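  An illustrative boto3 sketch (not from the original notes) of putting a custom event on the default event bus; the source, detail-type, and detail values are hypothetical:

    import boto3
    import json

    events = boto3.client("events")

    # An EventBridge rule matching this source/detail-type would then invoke its
    # target (Lambda, SQS, Step Functions, etc.).
    events.put_events(
        Entries=[
            {
                "Source": "com.example.orders",
                "DetailType": "OrderPlaced",
                "Detail": json.dumps({"orderId": "12345", "total": 42.50}),
            }
        ]
    )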
  12. Amazon Elastic Container Registry (ECR)
    1. AWS-managed container image registry that offers secure, scalable, and reliable infrastructure
    2. Private container image repositories with resource-based permissions via IAM
    3. Supported formats include: Open Container Initiative (OCI) images, docker images, and OCI artifact
    4. Components:
      1. Registry
        1. A private registry provided to each AWS account, regional
        2. Can create one or more repositories in that registry for image storage
      2. Authentication Token
        1. Required for pushing/pulling images to/from registries
      3. Repository
        1. Contains all of your Docker Images, OCI Images, and Artifacts
      4. Repository Policy
        1. Control all access to repository and images
      5. Image
        1. Container images that get pushed to and pulled from your repositories
    5. Amazon ECR Public is a similar service for public image repository
    6. Features:
      1. Lifecycle policies
        1. Helps management of images in your repository
        2. Defines rules for cleaning up unused images
        3. It does give you the ability to test your rules before applying them to repository
      2. Image Scanning
        1. Helps identify software vulnerabilities in your container images
        2. Repositories can be set to scan on push
        3. Retrieve results of scans for each image
      3. Sharing
        1. Cross-Region Support
        2. Cross-Account support
        3. Both are configured per repository and per region > each registry is regional for each account
      4. Cache Rules
        1. Pull through cache rules allow for caching public repositories privately
        2. ECR periodically reaches out to check current caching status
      5. Tag Mutability
        1. Prevents image tags from being overwritten
        2. Configure this setting per repository
    7. Service Integrations:
      1. Bring your own containers - can integrate with your own container infrastructure
      2. ECS - use container images in ECS container definitions
      3. EKS - pull images from EKS clusters
      4. Amazon Linux Containers - can be used locally for your software development
  13. EKS Distro
    1. EKS Distro aka EKS-D
      1. Kubernetes distribution based on and used by Amazon EKS
      2. Same versions and dependencies deployed by EKS
    2. EKS-D is fully your responsibility, fully managed by you, unlike EKS
    3. Can run EKS-D anywhere, on-prem, cloud, etc
  14. EKS Anywhere and ECS Anywhere
    1. EKS Anywhere
      1. An on-prem way to manage Kubernetes clusters with the same practices used in EKS, with these clusters on-prem
      2. Based on EKS-Distro allows for deployment, usage, and management methods for clusters in data centers
      3. Can use lifecycle management of multiple kubernetes clusters and operate independently of AWS Services
      4. Concepts:
        1. Kubernetes control plane management operated completely by customer
        2. Control plane location within the customer data center
        3. Updates are done entirely via manual CLI or Flux
    2. ECS Anywhere
      1. Feature of ECS allowing the management of container-based apps on-prem
      2. No orchestration needed: no need to install and operate local container orchestration software, meaning more operational efficiency
      3. Completely managed solution enabling standardization of container management across environment
      4. Inbound Traffic
        1. No ELB support - customer managed, on-prem
      5. EXTERNAL = new launch type noted as ‘EXTERNAL’ for creating services or running tasks
      6. Requirements for ECS Anywhere:
        1. On local server, must have the following installed:
          1. SSM Agent
          2. ECS Agent
          3. Docker
        2. Must first register external instances as SSM Managed Instances
          1. Can easily create an installation script within ECS console to run on your instances
          2. Scripts contain SSM activation keys and commands for required software
  15. Auto Scaling DBs on Demand with Aurora Serverless
    1. Aurora Provisioned vs Aurora Serverless
      1. Provisioned is typical Aurora service
      2. Aurora Serverless
        1. On-demand and Auto Scaling configuration for Aurora db service
        2. Automation of monitoring workloads and adjusting capacity for dbs
        3. Based on demand - capacity adjusted
        4. Billed per-second only for resources consumed by db clusters
    2. Concepts for Aurora Serverless
      1. Aurora Capacity Units (ACUs) = a measurement on how your clusters scale
      2. Set minimum and maximum of ACUs for scaling → can be 0
      3. Capacity is allocated quickly from AWS-managed warm pools
      4. Each ACU is a combination of about 2 GiB of memory with matching CPU and networking
      5. Same data resiliency as provisioned - 6 copies of data across 3 AZs
    3. Use Cases:
      1. Variable workloads
      2. Multi-tenant apps - let the service manage db capacity for each individual app
      3. New apps
      4. Dev and Test
      5. Mixed-Use Apps: apps that might serve more than one purpose with different traffic spikes
      6. Capacity planning: easily swap from provisioned to serverless or vice versa
  16. Amazon X-Ray
    1. Application Insights - collects application data for viewing, filtering, and gaining insights about requests and responses
    2. View calls to downstream AWS resources and other microservices/APIs or dbs
    3. Receives traces from your applications for allowing insights
    4. Integrated services can add tracing headers, send trace data, or run the X-Ray daemon
    5. Concepts:
      1. Segments: data containing resource names, request details, etc
      2. Subsegments: segments providing more granular timing info and data
      3. Service graph: graphical representation of interacting services in requests
      4. Traces: trace ID tracks paths of requests and traces collect all segments in a request
      5. Tracing Header: extra HTTP header containing sampling decisions and trace ID
        1. The tracing header containing the added info is named X-Amzn-Trace-Id
    6. X-Ray Daemon
      1. AWS Software application that listens on UDP port 2000. It collects raw segment data and sends it to the X-Ray API
      2. When daemon is running, it works alongside the X-Ray SDK
    7. Integrations:
      1. EC2- installed, running agent
      2. ECS- installed within tasks
      3. Lambda- on/off toggle, built-in/available for functions
      4. Beanstalk- a configuration option
      5. API Gateway- can add to stages as desired
      6. SNS and SQS- view time taken for messages in queues within topics
  17. GraphQL interfaces in AppSync
    1. AppSync
      1. Robust, scalable GraphQL interface for app developers
      2. Combines data from multiple sources
      3. Enables interaction for developers via GraphQL, which is a data language that enables apps to fetch data from servers
      4. Seamless integration with React, ReactNative, iOS, and Android
      5. Especially used for fetching app data, declarative coding, and frontend app data fetching
  18. Layer 4 DDoS Attacks aka SYN flood
    1. Work at the transport layer
    2. How it works:
      1. SYN flood overwhelms the server by sending a large number of SYN packets and then ignoring the SYN-ACKs returned by the server
        1. Causes the server to use up resources waiting for a set amount of time for the ACK
        2. There are only so many concurrent TCP connections that a web app server can have open - so attacker could take all the allowed connections causing the server to not be able to respond to legitimate traffic
  19. Amplification Attacks aka Reflection Attacks
    1. When an attacker sends a third-party server (such as an NTP server) a request using a spoofed IP address. That server then responds to the request with a payload 28-54 times larger than the initial request, sent to the spoofed IP
      1. Attackers can coordinate many such servers per second to send legitimate NTP traffic to the target
      2. Include things such as NTP, SSDP, DNS, CharGEN, SNMP attacks, etc
  20. Layer 7 Attack
    1. Occurs when a web server receives a flood of GET or POST requests, usually from a botnet or large number of compromised computers
      1. Causes legitimate users to not be able to connect to the web server because it is busy responding to the flood of requests from the botnet
  21. Logging API Calls using CloudTrail
    1. CloudTrail Overview:
      1. Increases visibility into your user and resource activity by recording AWS Management Console actions and API calls
      2. Can identify which users and accounts called AWS, the source IP from which the calls were made and when
      3. Just tracks API calls:
        1. Every call is logged into an S3 bucket by CloudTrail
        2. RDP and SSH traffic is NOT logged
        3. DOES include anything done in the console
    2. What is logged in a CloudTrail logged event?
      1. Metadata around the API calls
      2. Id of the API caller
      3. Time of call
      4. Source IP of API caller
      5. Request parameters
      6. Response elements returned by the service
    3. What CloudTrail allows for:
      1. After-the-fact incident investigation
      2. Near real-time intrusion detection → integrate with Lambda function to create an intrusion detection system that you can customize
      3. Logging for industry and regulatory compliance
  22. Amazon Shield
    1. AWS Shield
      1. Free DDoS Protection
      2. Protects all AWS customers on ELBs, CloudFront, and Route53
      3. Protects against SYN/UDP floods, reflection attacks and other Layer 3 and Layer 4 attacks
    2. AWS Shield Advanced
      1. Provides enhanced protections for apps running on ELB, CloudFront, Route53 against larger and more sophisticated attacks
      2. Offers always-on, flow-based monitoring of network traffic and active application monitoring to provide near real-time notifications of DDoS attacks
      3. 24/7 access to the DDoS Response Team (DRT) to help mitigate and manage app-layer DDoS attacks
      4. Protects your AWS bill against higher fees due to ELB, CloudFront, and Route53 usage spikes during a DDoS attack
      5. Costs $3000/month
  23. Web Application Firewall
    1. Web Application Firewall that allows you to monitor the HTTP and HTTPS requests that are forwarded on to CloudFront or Application Load Balancer
    2. Lets you control access to your content
      1. Can configure conditions such as what IP addresses are allowed to make this request or what query string parameters need to be passed for the request to be allowed
      2. The Application Load Balancer or CloudFront will either allow this content to be received or give an HTTP 403 status code
    3. Operates at Layer 7
    4. At the most basic level, WAF allows 3 behaviors:
      1. Allow all requests except the ones you specify
      2. Block all requests except for the ones you specify
      3. Count the requests that match the properties you specify
    5. Can define conditions by using characteristics of web requests such as:
      1. IP addresses that the requests originate from
      2. Country that the requests originate from
      3. Values in requests headers
      4. Presence of SQL code that is likely to be malicious (ie: SQL injection)
      5. Presence of a script that is likely to be malicious (ie: cross-site scripting)
      6. Strings that appear in requests - either specific strings or strings that match regex patterns
    6. WAF can:
      1. Can protect against Layer 7 DDoS attacks like cross-site scripting, SQL injections
      2. Can block specific countries or specific IP addresses
  24. GuardDuty
    1. Threat detection service that uses ML to continuously monitor for malicious behavior
      1. Unusual API calls, calls from a known malicious IP
      2. Attempts to disable CloudTrail logging
      3. Unauthorized Deployments
      4. Compromised Instances
      5. Recon by would-be attackers
      6. Port scanning and failed logins
    2. Features
      1. Alerts appear in GuardDuty console and CloudWatch events
      2. Receives feeds from 3rd parties like Proofpoint and CrowdStrike, as well as AWS Security, about known malicious domains and IP addresses
      3. Monitors CloudTrail logs, VPC flow logs and DNS logs
      4. Allows you to centralize threat detection across multiple AWS Accounts
      5. Automated response using CloudWatch Events and Lambda
      6. Gives you ML and anomaly detection
      7. Basically threat detection with AI
    3. Setting up GuardDuty:
      1. 7-14 days to set a baseline = normal behavior
      2. You will only see findings that GuardDuty detects as a threat
    4. Cost
      1. 30 days free
      2. Charges based on:
        1. Quantity of CloudTrail events
        2. Volume of DNS and VPC Flow logs data
  25. Firewall Manager
    1. A security management service in a single pane of glass
    2. Allows you to centrally set up and manage firewall rules across multiple AWS accounts and apps in AWS Organizations
      1. Can create new AWS WAF Rules for your App Load Balancers, API Gateways, and CloudFront distributions
      2. Can also mitigate DDoS attacks using shield Advanced for your App Load Balancers, Elastic IP addresses, CloudFront distributions, and more
    3. Benefits:
      1. Simplifies management of firewall rules across accounts
      2. Ensure compliance of existing and new apps
  26. Monitoring S3 Buckets with Macie
    1. Macie
      1. Automated analysis of data - uses ML and pattern matching to discover sensitive data stored in S3
      2. Uses AI to recognize if your S3 objects contain sensitive data, such as PII, PHI, and financial data
      3. Buckets:
        1. Alerts you to unencrypted buckets
        2. Alerts you about public buckets
        3. Can also alert you about buckets shared with AWS accounts outside of those defined in your AWS Orgs
      4. Great for frameworks like HIPAA
      5. Macie Alerts
        1. You can filter and search Macie alerts in AWS console
        2. Alerts sent to Amazon EventBridge can be integrated with your security incident and event management (SIEM) system
        3. Can be integrated with AWS Security Hub for a broader analysis of your organization's security posture
        4. Can also be integrated with other AWS Services, such as Step Functions, to automatically take remediation actions
  27. Inspector
    1. An automated security assessment service that helps improve the security and compliance of apps deployed on AWS
    2. Auto assesses apps for vulnerabilities or deviations from best practices
    3. Inspects EC2 instances and networks
    4. Assessment findings:
      1. After performing an assessment, Inspector produces a detailed list of security findings prioritized by level of severity
      2. These findings can be reviewed directly or as part of detailed assessment reports available via the Inspector console or API
      3. 2 types of assessments:
        1. Network Assessments
          1. Network configuration analysis to check for ports reachable from outside the VPC
          2. Inspector agent not required
        2. Host Assessments
          1. Vulnerable software (CVE), host hardening (CIS Benchmarks), and security best practices to review
          2. Inspector agent is required
    5. How does it work:
      1. Create assessment target
      2. Install agents on EC2 instances
        1. AWS will auto install the agent for instances that allow Systems Manager run commands
      3. Create Assessment Templates
      4. Perform Assessment run
      5. Review findings against the rules
  28. Key Management Service (KMS) and CloudHSM
    1. KMS
      1. AWS KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data
      2. KMS Integrations (an encrypt/decrypt sketch follows the KMS portion of this section)
        1. Integrates with other services (EBS, S3, and RDS, etc) to make it simple to encrypt your data with encryption keys you manage
        2. Controlling your keys
          1. Provides you with centralized control over the lifecycle and permissions of your keys
          2. Can create new keys whenever you wish and you can control who can manage keys separately from who can use them
      3. CMK = Customer Master Key
        1. Logical representation of a master key
        2. CMK includes metadata such as the key ID, creation date, description, and key state
        3. CMK also contains the key material used to encrypt/decrypt data
        4. Getting started with CMK:
          1. You start the service by requesting the creation of a CMK
          2. You control the lifecycle of a CMK as well as who can use or manage it
      4. HSM - Hardware Security Module
        1. A physical computing device that safeguards and manages digital keys and performs encryption and decryption functions
        2. HSM contains one or more secure cryptoprocessor chips
        3. 3 ways to generate a CMK:
          1. AWS creates the CMK for you
            1. Key material for the CMK is generated within HSMs managed by AWS KMS
          2. Import key material from your own key management infrastructure and associate it with a CMK
          3. Have the key material generated and used in an AWS CloudHSM Cluster as part of the custom key store feature in KMS
      5. Key Rotation:
        1. If KMS HSMs were used to generate your keys, you can have AWS KMS auto rotate CMKs every year
          1. Auto key rotation is not supported for imported keys, asymmetric keys or keys generated in an AWS CloudHSM cluster using KMS custom key store feature
      6. Policies
        1. Primary way to manage access to your KMS CMKs is with policies
          1. Policies are documents that describe who has access to what
        2. Policies attached to an IAM Identity = identity-based policies (IAM policies), policies attached to other kinds of resources are called resource-based policies
        3. Key Policies
          1. In KMS, you must attach resource-based policies to your customer master keys (CMKs) → these are key policies
          2. All CMKs must have a key policy
        4. 3 ways to control permissions:
          1. Use the key policy- controlling access this way means the full scope of access to the CMK is defined in a single document (the key policy)
          2. Use IAM policies in combo with key policy- controlling access this way enables you to manage all the permissions for your IAM identities in IAM
          3. Use grants in combo with key policy- enables you to allow access to CMK in the key policy, as well as allow users to delegate their access to others
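  A minimal boto3 sketch (illustrative, not from the original notes) of encrypting and decrypting a small value with a KMS key; the key alias and plaintext are hypothetical placeholders:

    import boto3

    kms = boto3.client("kms")
    key_id = "alias/my-app-key"  # placeholder key alias

    # Encrypt a small payload directly with the key; KMS returns the ciphertext blob.
    ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"db-password")["CiphertextBlob"]

    # Decrypt does not need the key ID; KMS derives it from the ciphertext metadata.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]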
    1. CloudHSM
      1. Cloud-based Hardware Security Module that enables you to easily generate and use your own encryption keys on the AWS Cloud
        1. Basically renting physical device from AWS
    2. KMS vs CloudHSM
      1. KSM
        1. Shared tenancy of underlying Hardware
        2. Auto key rotation
        3. Auto key generation
      2. CloudHSM
        1. Dedicated HSM to you
        2. Full control of underlying hardware
        3. Full control of users, groups, keys, etc
        4. No auto key rotation
  1. Secrets Manager
    1. Service that securely stores, encrypts, and rotates your db credentials and other secrets
      1. Encryption in transit and at rest using KMS
      2. Auto rotates credentials
      3. Apply fine-grained access control using IAM policies
      4. Costs money, but highly scalable
    2. What else can it do
      1. Your app makes an API call to Secrets Manager to retrieve the secret programmatically (see the sketch after this section)
      2. Reduces the risk of credentials being compromised
    3. What can be stored?
      1. RDS credentials
      2. Credentials for non-RDS dbs
      3. Any other type of secret, provided you can store it as a key-value pair (SSH keys, API keys)
    4. Important: If you enable rotation, Secrets Manager immediately rotates the secret once to test the configuration
      1. You have to ensure that all of your apps that use these creds are updated to retrieve the creds from this secret using Secrets Manager
      2. If your apps are still using embedded creds, do not enable rotation
      3. Only enable rotation once your apps are no longer using embedded creds
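  An illustrative boto3 sketch (not from the original notes) of retrieving a secret at runtime; the secret name is a hypothetical placeholder:

    import boto3
    import json

    secrets = boto3.client("secretsmanager")

    # Retrieving the secret at runtime avoids embedding credentials in application code.
    secret_value = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    credentials = json.loads(secret_value["SecretString"])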
  2. Parameter Store
    1. A capability of AWS Systems Manager that provides secure, hierarchical storage for config data management and secrets management
    2. Can store things like passwords, db strings, AMI IDs, and license credentials as parameter values - can store as plain text or encrypted (see the sketch after this section)
    3. Parameter store is free
    4. 2 Big limits to Parameter Store:
      1. Limit to number of parameters you can store (current max is 10k)
      2. No key rotation
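  A minimal boto3 sketch (illustrative, not from the original notes) of reading a parameter; the parameter name is a hypothetical placeholder:

    import boto3

    ssm = boto3.client("ssm")

    # WithDecryption handles SecureString values that were encrypted with KMS.
    parameter = ssm.get_parameter(Name="/prod/app/db-connection-string", WithDecryption=True)
    print(parameter["Parameter"]["Value"])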
  3. Parameter Store vs Secrets Manager
    1. Minimize cost → Parameter Store
    2. Need more than 10k secrets, key rotation, or the ability to generate passwords using CloudFormation → Secrets Manager
  4. Pre Signed URLs or Cookies
    1. All objects in S3 are private by default- only object owner has permissions to access
    2. Pre Signed URLS
      1. Owner can share objects with others by creating a pre-signed URL, using their own credentials, to grant the time-limited permission to download the objects
          1. When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name and an object key, and indicate the HTTP method (for example, GET to download the object) as well as an expiration date and time
        2. URLs are only valid for specified duration
      2. To generate a pre-signed URL for an object from the CLI (an SDK sketch follows this sub-section):
        1. > aws s3 presign s3://nameofbucket/objectname --expires-in 3600
      3. Way to share an object in a private bucket?
        1. Pre signed URLs
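  An illustrative boto3 sketch (not from the original notes) of generating a pre-signed URL; the bucket and key are hypothetical placeholders:

    import boto3

    s3 = boto3.client("s3")

    # The URL grants time-limited GET access, signed with the caller's own credentials.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-private-bucket", "Key": "reports/2023-q1.pdf"},
        ExpiresIn=3600,  # seconds
    )
    print(url)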
    3. Pre signed Cookies
      1. Useful when you want to provide access to multiple restricted files
      2. The cookie will be saved on the user’s computer and they will be able to browse the entire contents of the restricted content
      3. Use case:
        1. Subscription to download files
  5. IAM Policy Documents
    1. Amazon Resource Names (ARNs)
      1. These uniquely ID a resource within Amazon
      2. All ARNs begin with:
        1. arn:partition:service:region:account_id
          1. Ex: arn:aws:ec2:eu-central-1:123456789012
      3. And end with:
        1. Resource
        2. resource_type/resource
          1. Ex: my_awesome_bucket/image.jpg
        3. resource_type/resource/qualifier
        4. resource_type:resource
        5. Resource_type:resource:qualifier
      4. Note: for global services, for example, we will have no region, so there will be a :: aka omitted value in the arn
    2. IAM Policies
      1. Are JSON docs that define permissions
      2. IAM/Identity Policy = applying policies to users/groups
      3. Resource Policy = apply to S3, CMKs, etc
      4. They are basically a list of statements:
        1. {"Version": "2012-10-17",
            "Statement": [
              {...},
              {...}
            ]
           }
        2. Each statement matches an AWS API request
        3. Each statement has an effect of Allow or Deny
        4. Matched based on action
      5. Permission boundaries
        1. Used to delegate admin to other users
        2. Prevent privilege escalation or unnecessarily broad permissions
        3. Control max permissions an IAM policy can grant
    1. Exam tips:
      1. If permission is not explicitly allowed, it is implicitly denied
      2. An explicit deny > anything else
      3. AWS joins all applicable policies
      4. AWS managed vs customer managed
  1. AWS Certificate Manager
    1. Allows you to create, manage, and deploy public and private SSL certificates for use with other services
      1. Integrates with other services - such as ELB, CloudFront distros, API Gateway - allowing you to easily manage and deploy SSL certs in environment
    2. Benefits:
      1. Do not have to pay for SSL certificates
        1. Provisions both public and private certificates for free
        2. You will still pay for the resources that utilize your certificates - such as ELBs
      2. Automated Renewals and Deployment
        1. Can automate the renewal of your SSL certificate and then auto update the new certificate with ACM-integrated services, such as ELB, CloudFront, API Gateway
      3. Easy to set up
  2. AWS Audit Manager
    1. Continuously audit your AWS usage and make sure you stay compliant with industry standards and regulations
    2. It is an automated service that produces reports specific to auditors for PCI compliance, etc
    3. Use Cases:
      1. Transition from Manual to Automated Evidence Collection
        1. Allows you to produce automated reports for auditors and reduce manual effort
      2. Continuous Auditing and Compliance
        1. Continuous basis, as your environment evolves and adapts, you can produce automated reports to evaluate your environment against industry standards
      3. Internal Risk Assessments
        1. Can create a new framework from the beginning or customize pre built frameworks
        2. Can launch assessments to auto collect evidence, helping you validate if your internal policies are being followed
  3. AWS Artifact
    1. Single source you can visit to get the compliance-related info that matters to you, such as security and compliance reports
    2. What is available?
      1. Huge number of reports available
        1. Service Organization control (SOC) reports
        2. Payment Card Industry (PCI) reports
        3. As well as other certifications - HIPAA, etc
  4. AWS Cognito
    1. Provides authentication, authorization, and user management for your web and mobile apps in a single service without the need for custom code
      1. Users can sign-in directly with a UN/PW they create or through a third party (FB, Amazon, Google, etc)
      2. ⇒ authorization engine
    2. Provides the following features:
      1. Sign-up and sign-in options for your apps
      2. Access for guest users
      3. Acts as an identity broker between your application and web ID providers, so you don’t have to write any custom code
      4. Synchronizes user data across multiple devices
      5. Recommended for all mobile apps that call AWS Services
    3. Use cases:
      1. Authentication
        1. Users can sign in using a user pool or a 3rd party identity provider, such as FB
      2. 3rd Party Authentication
        1. Users can authenticate using identity pools that require an identity provider (IdP) token
      3. Access Server-Side Resources
        1. A signed-in user is given a token that allows them access to resources that you specify
      4. Access AWS AppSync Resources
        1. Users can be given access to AppSync resources with tokens received from a user or identity pool in Cognito
    4. User Pools and Identity Pools
      1. Two main components of Cognito
        1. User Pools
          1. Directories of users that provide sign-up and sign-in options for your application users
        2. Identity Pools
          1. Allows you to give your users access to other AWS Services
          2. You can use identity pools and user pools together or separately
    5. How it works - broadly
      1. When you use the basic authflow, your app first presents an ID token from an authorized Amazon Cognito user pool or third-party identity provider in a GetID request.
      2. The app exchanges the token for an identity ID in your identity pool.
      3. The identity ID is then used with the same identity provider token in a GetOpenIdToken request.
      4. GetOpenIdToken returns a new OAuth 2.0 token that is issued by your identity pool.
      5. You can then use the new token in an AssumeRoleWithWebIdentity request to retrieve AWS API credentials.
      6. The basic workflow gives you more granular control over the credentials that you distribute to your users.
        1. The GetCredentialsForIdentity request of the enhanced authflow requests a role based on the contents of an access token.
        2. The AssumeRoleWithWebIdentity request in the classic workflow grants your app a greater ability to request credentials for any AWS Identity and Access Management role that you have configured with a sufficient trust policy.
        3. You can also request a custom role session duration.
    6. Cognito Sequence
      1. Device/App connects to a User Pool in Cognito - You are authenticating and getting tokens
      2. Once you've got that token, your device exchanges it with an identity pool, and the identity pool hands over some AWS credentials (see the sketch after this section)
      3. Then you can use those credentials to access your AWS Services
      4. Basic Cognito Sequence:
        1. Request to user pool, authenticates and gets token
        2. Exchanges token and get AWS creds
        3. Use AWS creds to access AWS services
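  An illustrative boto3 sketch of the basic authflow described above (not from the original notes); the identity pool ID, user pool provider name, ID token, and role ARN are hypothetical placeholders:

    import boto3

    identity = boto3.client("cognito-identity")
    sts = boto3.client("sts")

    # The ID token would come from signing in against a user pool or third-party IdP.
    logins = {"cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": "ID_TOKEN_FROM_SIGN_IN"}

    # Exchange the token for an identity ID, then for an OpenID token from the identity pool.
    identity_id = identity.get_id(IdentityPoolId="us-east-1:example-pool-id", Logins=logins)["IdentityId"]
    token = identity.get_open_id_token(IdentityId=identity_id, Logins=logins)["Token"]

    # Exchange the OpenID token for temporary AWS credentials via an IAM role.
    credentials = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/CognitoAuthorizedRole",
        RoleSessionName="app-user-session",
        WebIdentityToken=token,
    )["Credentials"]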
  1. Amazon Detective
    1. You can analyze, investigate and quickly identify the root cause of potential security issues or suspicious activities
    2. Detective pulls data from your AWS resources and uses ML, statistical analysis, and graph theory to build a linked set of data that enables you to quickly figure out the root cause of your security issues
      1. To auto create an overview of users, resources, and the interactions between them over time
    3. Sources for Detective:
      1. VPC flow logs, CloudTrail logs, EKS audit logs, and GuardDuty findings
    4. Use Cases:
      1. Triage Security Findings - generates visualizations
      2. Threat Hunting
    5. Exam Tips:
      1. Operates across multiple services and analyzes root cause of an event
      2. If you see “root cause” or “graph theory”, think Detective
      3. Don’t confuse with Inspector
        1. Inspector = Automated vulnerability management service that continually scans EC2 and container workloads for software vulnerabilities and unintended network exposure
  2. AWS Network Firewall
    1. Physical firewall protection - a managed service that makes it easy to deploy physical firewall protection across your VPCs
      1. Managed infrastructure
    2. Includes a firewall rules engine that gives you complete control over your network traffic
      1. Allowing you to do things such as block outbound Server Message Block (SMB) requests to stop the spread of malicious activity
    3. Benefits:
      1. Physical infrastructure in the AWS datacenter that is managed by AWS
      2. Network Firewall works with Firewall Manager
        1. FW Manager with Network Firewall added: Allows you to centrally manage security policies across existing and newly created accounts and VPC
      3. Also provides an intrusion prevention system (IPS) that gives you active traffic flow inspection
        1. See IPS, think Network Firewall
    4. Use Cases:
      1. Filter Internet Traffic
        1. Use methods like ACL rules, stateful inspection, protocol detection, and intrusion prevention to filter your internet traffic
      2. Filter Outbound Traffic
        1. Provide the URL/domain name, IP address, and content-based outbound traffic filtering
        2. Help you stop possible data loss and block known malicious communicators
      3. Inspect VPC-to-VPC Traffic
        1. Auto inspect traffic moving from one VPC to another as well as across multiple accounts
    5. Exam Tips:
      1. Scenario about filtering your network traffic before it reaches your internet gateway
      2. Or if you require IPS or any hardware firewall requirements
  3. AWS Security Hub
    1. Single place to view all of your security alerts from services like GuardDuty, Inspector, Macie, and Firewall Manager
      1. Works across multiple accounts
    2. Use Cases:
      1. Conduct Cloud Security Posture Management (CSPM)- use automated checks that comply with common frameworks (for ex: Center for Internet Security (CIS) or PCI DSS) to help reduce your risk
      2. Correlate security findings to discover new insights
        1. Aggregate all your security findings in one place, allowing security staff to more easily identify threats and alerts
  4. CloudFormation
    1. Overview:
      1. Templates are written in a declarative language and support either JSON or YAML formatting
      2. Creates immutable architecture - easily create/destroy architecture
      3. Makes the same API calls that you would make manually
    2. Steps in CloudFormation:
      1. Step 1: write code
      2. Step 2: Deploy your template
        1. When you upload your template, CloudFormation engine will go through the process of making the needed AWS API calls on your behalf
    3. Create CloudFormation Stack
      1. Set parameters that are defined in your template and allow you to input custom values when you create or update a stack
        1. Parameters come from the code in the template
    4. 3 sections of CloudFormation template:
      1. Parameters
      2. Mappings = values that fill themselves in during formation
      3. Resource Section
    5. If CloudFormation finds an error, it rolls back to the last known good state (a minimal template and stack-creation sketch follows below)
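A minimal sketch of writing and deploying a template with boto3; the stack name, parameter, and bucket name are placeholders:

```python
import boto3

# Hypothetical minimal template: a Parameters section and a Resources section
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  BucketNameParam:
    Type: String
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketNameParam
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-stack",  # hypothetical stack name
    TemplateBody=TEMPLATE,
    Parameters=[{"ParameterKey": "BucketNameParam", "ParameterValue": "my-unique-demo-bucket-123"}],
)
# CloudFormation now makes the underlying API calls on your behalf; if anything
# fails, the stack rolls back to the last known good state.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```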
  5. Elastic Beanstalk
    1. The Amazon PaaS tool - a one-stop shop for deploying your application on AWS
    2. Automation
      1. Automates all of your deployments
      2. You can templatize what you would like your environment to look like
      3. Deployments handled for us- upload code, test your code in a staging environment, then deploy to production
      4. Handles building out the insides of your EC2 instances for you
    3. Configuring Elastic Beanstalk
      1. Pick your platform
        1. Pick language - supports docker which means we can run all sorts of languages/environments inside a container on Elastic Beanstalk
      2. Additional Configurations
        1. Basically bundles all the wizards from across AWS services and gives you a place to configure all these in Beanstalk
    4. Exam Tips:
      1. Bring your code and that is all
      2. Elastic Beanstalk = PaaS tool
        1. It builds the platform and stacks your application on top
      3. Not serverless - Beanstalk creates and manages standard EC2 architecture
  6. Systems Manager
    1. Suite of tools designed to let you view, control and automate both your AWS architecture and on-prem architecture
    2. Features of Systems Manager
      1. Automation Documents [now called Runbooks]
        1. Can be used to control your instances or AWS resources
      2. Run Command
        1. Executes commands on your hosts
      3. Patch Manager
        1. Automates patching of OS and application versions
      4. Parameter Store
        1. Securely store configuration data and secret values
      5. Hybrid Architecture
        1. Control your on-prem architecture
      6. Session Manager
        1. Allows you to connect to and remotely interact with your architecture
    3. All it takes for EC2/on-prem to be managed by Systems Manager is to:
      1. Install the Systems Manager Agent
      2. And give the instance a role/permissions to communicate with Systems Manager (see the sketch below)
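A minimal boto3 sketch of Run Command and Parameter Store, assuming a hypothetical managed instance ID and an existing parameter name:

```python
import boto3

ssm = boto3.client("ssm")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical managed instance (agent installed + role attached)

# Run Command: execute shell commands on the managed instance
cmd = ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",           # AWS-managed command document
    Parameters={"commands": ["uptime", "df -h"]},
)
print(cmd["Command"]["CommandId"])

# Parameter Store: read a secret value (assumes the parameter already exists)
param = ssm.get_parameter(Name="/demo/db/password", WithDecryption=True)
print(param["Parameter"]["Value"])
```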
  7. Caching
    1. Types of Caching
      1. Internal: in front of database to store frequent queries, for example
      2. External: CDN = Content Delivery Network
    2. AWS Caching Options
      1. CloudFront = External
      2. ElastiCache = Internal
      3. DAX = DynamoDB Solution
      4. Global Accelerator = External
  8. Global Caching with CloudFront
    1. CloudFront Overview
      1. Fast Content Delivery Network (CDN) service that securely delivers data, videos, apps, and APIs to customers globally
        1. Helps reduce latency and provide higher transfer speeds using AWS edge locations
      2. First user makes request through CloudFront at an Edge location, CloudFront will go to S3 and grab object, it will hold a copy of that object at the Edge Location
        1. The first user's request is not faster, but all subsequent requests are served from the Edge Location
    2. CloudFront Settings
      1. Security
        1. Defaults to HTTPS connections with the ability to add custom SSL certificates
        2. Can put a secure (HTTPS) connection in front of static S3 websites
      2. Global Distribution
        1. Cannot pick specific countries, just general areas
      3. Endpoint Support - AWS and Non-AWS
        1. Can be used to front AWS endpoints as well as non-AWS applications
      4. Expiring Content
        1. You can force an expiration (invalidation) of content from the cache if you cannot wait for the TTL (see the sketch after this section)
      5. Can restrict access to your content via CloudFront using signed URLs or signed cookies
    3. Exam Tips:
      1. Solution for external customer performance issue
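A minimal sketch of forcing expiration via an invalidation, assuming a hypothetical distribution ID and path:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1EXAMPLE"  # hypothetical distribution ID

# Invalidate (expire) cached objects immediately instead of waiting for the TTL
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```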
  9. Caching your data with ElastiCache and DAX
    1. ElastiCache
      1. Managed version of 2 open source technologies
        1. Memcached
        2. Redis
      2. Neither of these tools are specific to AWS but by using ElastiCache, you can spin up 1 or the other, or both to avoid a lot of common issues
      3. ElastiCache can sit in front of almost any database, but it really excels when placed in front of RDS databases
      4. Memcached vs Redis
        1. Both sit in front of a database and cache common queries that you make (a cache-aside sketch follows this section)
        2. Memcached
          1. Simple db caching solution
          2. Not a db by itself
          3. No failover, no multi-AZ support, no backups
        3. Redis
          1. Supported as a caching solution
          2. But also has the ability to function as a standalone NoSQL db
          3. “Caching Solution” but also can be the answer if you are looking for a NoSQL solution and DynamoDB isn’t present
          4. Has failover, multi-AZ, backup support
    1. DynamoDB Accelerator (DAX)
      1. It is an In-Memory Cache
        1. Reduce DynamoDB response times from milliseconds to microseconds
      2. DAX lives inside the VPC you specify and is highly available
      3. You are in control
        1. You determine the node size and count for the cluster, TTL for the data, and maintenance windows for changes and updates
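A minimal cache-aside sketch for ElastiCache for Redis sitting in front of an RDS MySQL database; the endpoints, credentials, and table are placeholders:

```python
import json
import redis      # pip install redis
import pymysql    # pip install pymysql (stand-in for an RDS MySQL database)

# Hypothetical endpoints - replace with your ElastiCache and RDS endpoints
cache = redis.Redis(host="my-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="mydb.xxxxxx.us-east-1.rds.amazonaws.com",
                     user="app", password="secret", database="shop")

def get_product(product_id: int) -> dict:
    """Cache-aside: try ElastiCache first, fall back to RDS, then populate the cache."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:                       # cache hit - no database round trip
        return json.loads(cached)

    with db.cursor(pymysql.cursors.DictCursor) as cur:   # cache miss - query RDS
        cur.execute("SELECT id, name, price FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()

    cache.setex(key, 300, json.dumps(row, default=str))  # cache for 5 minutes (TTL)
    return row
```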
  1. Fixing IP Caching with Global Accelerator
    1. Global Accelerator
      1. A networking service that sits in front of your apps and sends your users’ traffic through AWS’s global network infrastructure
        1. Can increase performance and help deal with IP Caching
      2. IP Caching Issue
        1. User connecting to ELB, caches that ELB’s IP address (for the period defined by TTL)
        2. If that ELB goes offline and its IP changes, the user will be trying to connect to the wrong IP
        3. Global accelerator solves this problem by sitting in front of ELB
          1. User is served 1 of only 2 IP addresses - that never change
          2. Even if ELB’s IP changes, user connection doesn't change
      3. Top 3 features of Global Accelerator:
        1. Masks Complex Architecture
          1. Global Accelerator IPs never change for users
        2. Speeds things up
          1. Traffic is routed through AWS’s global network infrastructure
        3. Weighted Pools
          1. Create weighted groups behind the IPs to test out new features or handle failure in your environment
      4. Creating Global Accelerator
        1. Listeners = port/port range
        2. Listeners direct traffic to one or more endpoint groups (endpoints = such as load balancers)
          1. Each listener can have multiple endpoint groups
          2. Each endpoint group can only include endpoints that are in one Region
          3. Adjust the weight here
      5. Global Accelerator solves IP caching
  2. AWS Organizations
    1. Managing accounts with AWS Organizations
      1. Free governance tool that allows you to create and manage multiple AWS accounts - can control your accounts from a single location
      2. Applying standards across accounts (ex: Prod, Dev, Beta, etc)
    2. Key features in Organizations
      1. It is vital to create a Logging Account
        1. It is best practice to create a specific account dedicated to logging
        2. Ship all logs to one central location with Organizations
          1. CloudTrail supports log aggregation across accounts
      2. Programmatic Creation: easily create and destroy new AWS accounts with API calls
      3. Reserved Instances: RIs can be shared across all accounts
      4. Consolidated Billing
      5. Service Control Policies (SCPs) can limit users’ permissions (an example SCP is sketched after this section)
        1. SCPs are in JSON format
        2. Once implemented, these policies will be applied to every single resource inside an account
          1. They are the ultimate way to restrict permissions and even apply to the root account
        3. Effectively a global policy and the only way to restrict what the root user can do
        4. SCPs never give permissions, they ONLY TAKE away the possible permissions that can be handed out
          1. Deny rules - deny specific things globally
          2. Allow rules - even more restrictive because it limits all of the permissions we could hand out
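A minimal sketch of creating and attaching an SCP with boto3; the policy content and target OU ID are hypothetical (remember SCPs only take permissions away, they never grant them):

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny leaving the organization and deny use of regions other than us-east-1
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "organizations:LeaveOrganization", "Resource": "*"},
        {
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "support:*"],
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
        },
    ],
}

policy = org.create_policy(
    Name="deny-leaving-org-and-other-regions",
    Description="Example restrictive guardrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach the SCP to an OU (or an account/root) - hypothetical target ID
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-example-12345678")
```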
  3. Resource Access Manager (RAM)
    1. Free service that allows you to share AWS resources with other accounts and within your organization
    2. Allows you to easily share resources rather than having to create duplicate copies
    3. What can be shared?
      1. Transit Gateways
      2. VPC Subnets
      3. License Manager
      4. Route53 Resolver
      5. Dedicated Hosts
      6. Etc
    4. Can set permissions for what actions are allowed to happen on shared resources
    5. RAM vs VPC Peering
      1. Use RAM when sharing resources within the same region
      2. Use VPC Peering when sharing resources across regions
  4. Setting up Cross Account Role Access
    1. Cross-Account Role Access
      1. Allows you to set up temporary access you can easily control
      2. Set up a primary user, and then other users assume roles rather than having to create many duplicate users across accounts
    2. Steps to set up Cross-Account Role Access (sketched below):
      1. Create an IAM Role
      2. Grant access to allow users to temporarily assume role
    3. Exam Tips
      1. It is preferred to create cross-account roles rather than add additional IAM users
      2. Auditing - temporary access, temporary employees
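A minimal sketch of assuming a cross-account role with STS; the role ARN is hypothetical:

```python
import boto3

sts = boto3.client("sts")

# Hypothetical role in another account that trusts this account's users
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/CrossAccountAuditRole",
    RoleSessionName="temporary-audit-session",
    DurationSeconds=3600,   # temporary credentials expire automatically
)
creds = assumed["Credentials"]

# Use the temporary credentials to work inside the other account
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances())
```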
  5. AWS Config
    1. Inventory management and control tool
    2. Allows you to show the history of your infrastructure along with creating rules to make sure it conforms to the best practices you’ve laid out
    3. 3 things it allows us to do:
      1. Allows us to query our architecture
        1. Can easily discover what architecture you have in your account - query by resource type, tag, even see deleted resources
      2. Rules can be created to flag when something is going wrong/out of compliance (a managed-rule sketch follows this section)
        1. Whenever a rule is violated, you can be alerted or even have it auto fixed
      3. Shows history of Environment
        1. When did something change, who made that call, etc
    4. Can open the specific CloudTrail event that is tied to a given configuration change in Config
    5. Can auto remediate issues
      1. Can select remediation actions
        1. For example, can auto kick off an Automation Document that will block public access
    6. Exam Tips
      1. Config = Setting standards
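A minimal sketch of enabling an AWS-managed Config rule that flags publicly readable S3 buckets; auto-remediation (e.g., an Automation runbook) can additionally be attached to a rule like this:

```python
import boto3

config = boto3.client("config")

# AWS-managed rule that flags S3 buckets allowing public read access
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",  # managed rule identifier
        },
    }
)
```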
  6. Directory Service
    1. Fully managed version of Active Directory
    2. Allows you to offload the painful parts of keeping AD online to AWS while still giving you the full control and flexibility AD provides
    3. Available Types:
      1. Managed Microsoft AD
        1. Entire AD suite, easily build out AD in AWS
      2. AD Connector
        1. Creates a tunnel between AWS and your on-prem AD
        2. Want to leave AD in physical data center
        3. Get an endpoint that you can authenticate against in AWS while leaving all of your actual users and data on-prem
      3. Simple AD
        1. Standalone directory powered by a Linux Samba Active Directory-compatible server
        2. Just an authentication service
  7. AWS Cost Explorer
    1. Easy-to-use tool that allows you to visualize your cloud costs
      1. Can generate reports based on a variety of factors, including resource tags
    2. What can it do?
      1. Break down costs on a service-by-service basis
      2. Can break out by time, can estimate next month
      3. Filter and break down data however we want
    3. Exam tips
      1. Tags must be activated as “cost allocation tags” before they can be used in reports
      2. Cost Explorer and Budgets go hand-in-hand
  8. AWS Budgets
    1. Allows organizations to easily plan and set expectations around cloud costs
      1. Easily track your ongoing spend and create alerts to let users know when they are close to exceeding their allotted spend (a budget-with-alert sketch follows this section)
    2. Types of Budgets - you get 2 free each month
      1. Cost Budgets
      2. Usage Budgets
      3. Reservation Budgets - RIs
      4. Savings Plan Budgets
    3. Exam Tips
      1. Can be alerted on current spend or projected spend
      2. Can create a budget using tags as a filter
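A minimal sketch of creating a monthly cost budget with an alert at 80% of actual spend; the account ID and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")
ACCOUNT_ID = "111111111111"  # hypothetical account ID

budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when ACTUAL spend crosses 80% of the limit (FORECASTED also works)
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
        }
    ],
)
```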
  9. AWS Costs and Usage Reports (CUR)
    1. Most comprehensive set of cost and usage data available for AWS spending
    2. Publishes billing reports to AWS S3 for centralized collection
      1. These break down costs by time span (hour, day, month), service and resource, or by tags
      2. Daily updates to reports in S3 in CSV formats
      3. Integrates with other services - Athena, Redshift, QuickSight
    3. Use Cases for AWS CUR:
      1. Within Organizations for entire OU groups or individual accounts
      2. Tracks Savings Plans utilizations, changes, and current allocations
      3. Monitor On-Demand capacity reservations
      4. Break down your AWS data transfer charges: external and inter-Regional
      5. Dive deeper into cost allocation tag resources spending
    4. Exam Tip
      1. Most comprehensive overview of spending
  10. Reducing compute spend using Savings Plans and AWS Compute Optimizer
    1. AWS Compute Optimizer
      1. Analyzes the configurations and utilization metrics of your AWS resources to recommend optimizations
      2. Reports current usage optimizations and potential recommendations
      3. Graphs
      4. Informed decisions
      5. Which resources work with this service?
        1. EC2, Auto Scaling Groups, EBS, Lambda
      6. Supported Account Types
        1. Standalone AWS account without Organizations enabled
        2. Member Account - single member account within an Organization
        3. Management Account
          1. When you enable at the AWS Organization management account, you get recommendations based on entire organization (or lock down to 1 account)
      7. Things to know:
        1. Disabled by default
          1. Must opt in to leverage Compute Optimizer
          2. After opting in, enhance recommendations via activation of recommendation preferences
    2. Savings Plans
      1. Offer flexible pricing models for up to 72% savings on compute
      2. Lower prices for EC2 instances regardless of instance family, size, OS, tenancy, or Region
      3. Savings can also apply to Lambda and Fargate usage
      4. SageMaker plans available for lowering SageMaker instance pricing
      5. Require Commitments
        1. 1 or 3 year options
        2. Pricing Plan options:
          1. Pay all upfront - most reduced
          2. Partial upfront
          3. No upfront
        3. Savings Plan Types:
          1. Compute Savings
            1. Most flexible savings plan
            2. Applies to any EC2 compute, Lambda, or Fargate usage
            3. Up to 66% savings on compute
          2. EC2 Instance Savings
            1. Stricter savings plan
            2. Applies only to EC2 instances of a specific instance family in specific regions
            3. Up to 72% savings
          3. SageMaker Savings
            1. Only SageMaker instances - any region and any component, regardless of family or sizing
            2. Up to 64% savings
      1. Using and applying Savings Plan
        1. View recommendations in AWS billing console
        2. Recommendations are auto calculated to make purchasing easier
        3. Add to cart and purchase directly within account
        4. Apply to usage rates AFTER your RIs are applied and exhausted
          1. RIs have to be used first
        5. If in a consolidated billing family, savings are applied to the account owner first, then can be spread to others if sharing is enabled
  1. Trusted Advisor for Auditing
    1. Trusted Advisor Overview
      1. Fully managed best practice auditing tool
      2. It will scan 5 different parts of your account and look for places where you could improve your adoption of the recommended best practices provided by AWS
    2. 5 Questions Trusted Advisor Asks:
      1. Cost Optimizations: Are you spending money on resources that are not needed?
      2. Performance: Are your services configured properly for your environment?
      3. Security: Is your AWS architecture full of vulnerabilities?
      4. Fault Tolerance: Are you protected when something fails?
      5. Service Limits: Do you have room to scale?
    3. Want to link Trusted Advisor with an automated response to alert users or fix the problem
      1. Use EventBridge (CloudWatch Events) to kick off a Lambda function to fix the problem
    4. To get the most useful checks, you will need a Business or Enterprise support plan
  2. Control Tower to enforce account Governance
    1. Control Tower Overview
      1. Governance: Easy way to set up and govern an AWS multi-account environment
      2. Orchestration: automates account creation and security controls via other AWS Services
      3. Extension: extends AWS Organizations to prevent governance drift, and leverages different guardrails
      4. New AWS Accounts: users can provision new AWS accounts quickly, using central administration-established compliance policies
      5. Simple Terms: quickest way to create and manage a secure, compliant, multi-account environment based on best practices
    2. Features and Terms
      1. Landing Zone
        1. Well-architected, multi-account environment based on compliance and security best practices
        2. Basically a container that holds all of your Organizational Units, the accounts within those OUs, and the users/other resources that you want to enforce compliance on
        3. Can scale to fit whatever size you need
      2. Guardrails
        1. High-level rules providing continuous governance for the AWS Environment
        2. 2 Types
          1. Preventative
          2. Detective
      3. Account Factory
        1. Configurable account template for standardizing pre-approved configurations of new accounts
      4. CloudFormation Stack Set
        1. Automated deployments of templates deploying repeated resources for governance
        2. Management account would deploy a stack set to either an entire organizational unit or the entire organization itself using repeatedly deployed resources
      5. Shared Accounts
        1. Three accounts used by Control Tower
          1. 2 of which are created during landing zone creation: Log Archive and Audit
    3. More on GuardRails
      1. High-level rules written in plain language providing ongoing governance
      2. 2 types:
        1. Preventative:
          1. Ensures accounts maintain governance by disallowing violating actions
          2. Leverages service control policies (SCPs) within Organizations
          3. Statuses of: Enforced or Not Enabled
          4. Supported in all regions
        2. Detective:
          1. Detects and alerts on noncompliant resources within all accounts
          2. Leverages AWS Config Rules
            1. Config Rules - it alerts, does NOT remediate unless you leverage other resources
          3. Statuses: clear, in violation, or not enabled
          4. Only apply to certain regions
            1. Only going to work in regions that are supported by Control Tower, which is currently not every region

    1. Control Tower Diagram
      1. Start with management account - with Organization and Control Tower enabled
      2. Control Tower creates 2 accounts (“shared accounts”)
        1. Log Archive Account
        2. Audit Account
      3. Control Tower places an SCP on every account that exists within our Organization → Preventative Guardrails
      4. Control Tower places AWS Config Rules in each account as well → Detective Guardrails
      5. All of our Config and CloudTrail logs get sent to the Log Archive shared account - to centralize logging
      6. Will also set up notifications for any governance violations that may occur
        1. All governance notifications will be sent to the auditing account - including configuration events, aggregate security notifications, and drift notifications
          1. Go to an SNS topic in the audit account, you can use to alert the correct team
  1. AWS License Manager
    1. Manage Software Licenses
    2. Licenses made easy - simplifies managing software licenses with different vendors
      1. Centralized: helps centrally manage licenses across AWS accounts and on-prem environments
    3. Set usage limits
      1. Control and visibility into usage of licenses and enabling license usage limits
    4. Reduce overages and penalties via inventory tracking and rule-based controls for consumption
    5. Versatile - supports any software based on vCPUs, physical cores, sockets, and number of machines
  2. AWS Personal Health Dashboard (aka AWS Health)
    1. Monitoring Health Events
      1. Visibility of resource performance and availability of AWS services or accounts
      2. View how the health events affect you and your services, resources, and accounts
    2. AWS maintains timely and relevant info within the events
    3. View upcoming maintenance tasks that may affect your accounts and resources
    4. Alerts- near instant delivery of notifications and alerts to speed up troubleshooting or prevention actions
      1. Automates actions based on incoming events using EventBridge (see the sketch after this section)
      2. Health Event → EventBridge → SNS topic, etc
    5. Concepts
      1. AWS Health Event = notifications sent on behalf of AWS services or AWS
      2. Account-specific event: events specific to your AWS account or AWS organization
      3. Public event = events reported on services that are public
      4. AWS Health Dashboard
        1. Dashboard showing account and public events, as well as service health
      5. Event type code = include the affected services and the specific type of event
      6. Event type category = associated category will be attached to every event
      7. Event status = open, closed, or upcoming
      8. Affected Entities
    6. Exam tip
      1. Look out for questions about checking alerts for service health and automating the reboot of EC2 instances for AWS Maintenance
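A minimal sketch of the Health Event → EventBridge → SNS pattern; the topic ARN is a placeholder, and the topic's access policy must also allow EventBridge to publish:

```python
import json
import boto3

events = boto3.client("events")
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:health-alerts"  # hypothetical topic

# Rule that matches all AWS Health events in this account/region
events.put_rule(
    Name="aws-health-to-sns",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)

# Send matching events to an SNS topic so the right team gets notified
events.put_targets(
    Rule="aws-health-to-sns",
    Targets=[{"Id": "health-sns-target", "Arn": SNS_TOPIC_ARN}],
)
```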
  3. AWS Service Catalog and AWS Proton
    1. Standardizing Deployments using AWS Service Catalog
      1. Allows organizations to create and manage catalogs of approved IT services for deployments within AWS
      2. Multipurpose catalogs - list things like AMIs, server software, databases, and other pre-configured components
      3. Centralized management of IT service and maintain compliance
      4. End-User Friendly - easily deploy approved items
      5. CloudFormation
        1. Catalogs are written and listed using CloudFormation templates
      6. Benefits:
        1. Standardize
        2. Self-service deployments
        3. Fine-grained Access Control
        4. Versioning within Catalogs- propagate changes automatically
    2. AWS Proton
      1. Creates and manages infrastructure and deployment tooling for users as well as serverless and container-based apps
      2. How it works:
        1. Automate IaC provisioning and deployments
        2. Define standardized infrastructure for your serverless and container-based apps
        3. Use templates to define and manage app stacks that contain ALL components
        4. Automatically provisions resources, configures CI/CD, and deploys the code
        5. Supports AWS CloudFormation and Terraform IaC providers
      3. Proton is an all-encompassing tool/service for deployments of your applications
      4. Standardization, empower developers
  4. Optimizing Architectures with the AWS Well-Architected Tool
    1. Review 6 Pillars of the Well-Architected Framework:
      1. Operational Excellence
      2. Reliability
      3. Security
      4. Performance Efficiency
      5. Cost Optimization
      6. Sustainability
    2. Well-Architected Tool
      1. Provides a consistent process for measuring cloud architecture
      2. Enables assistance with documenting workloads and architectures
      3. Guides for making workloads reliable, secure, efficient, and cost effective
      4. Measure workloads against years of AWS best practice
      5. Intended for specific audiences: technical teams, CTOs, architecture and operations teams
  5. AWS Snow Family
    1. Ways to move data to AWS
      1. Internet
        1. Could be slow, presents security risks
      2. Direct Connect
        1. Not always practical - not worth provisioning for short-term needs
      3. Physical
        1. Bypass internet entirely, and bundle data to physically move
    2. Snow Family
      1. Set of secure appliances that provide petabyte-scale data collection and processing solutions at the edge and migrate large-scale data into and out of AWS
        1. Offers built-in computing capabilities, enabling customers to run their operations in remote locations that do not have data center access or reliable network connectivity
      2. Members of the Snow Family:
        1. Snowcone
          1. Ex: climbing a wind turbine to collect data
          2. Smallest device
          3. 8TB of storage, 4 GB memory, 2 vCPUs
          4. Easily migrate data to AWS after you’ve processed it
          5. IoT sensor integration
          6. Perfect for edge computing where space and power are constrained
        2. Snowball Edge
          1. Ex: on a boat
          2. Jack of all trades
          3. 48-81TB storage
          4. Comes in Storage, Compute, and GPU flavors, with varying amounts of CPU/RAM
          5. Perfect for off-the-grid computing or migration to AWS
        3. Snowmobile
          1. Literally a semi truck of hard drives
          2. 100PB of storage
          3. Designed for exabyte-scale data center migration
  6. Storage Gateway and Types
    1. Storage Gateway
      1. A hybrid cloud storage service that helps you merge on-prem resources with the cloud
      2. Can help with a one-time migration or a long-term pairing of your architecture with AWS
    2. Types of Storage Gateway:
      1. File Gateway
        1. Caching local files
        2. NFS or SMB Mount - basically a network file share
          1. Mount locally and backs up data into S3
        3. Keep a local copy of recently used files
        4. Versions of File Gateway:
          1. Back up all data into the cloud, with Storage Gateway just acting as the method of doing that - data lives in S3
          2. Or you can keep a local cached copy of most recently used files - so you don’t have to download from S3
          3. Keep data on-prem, backups go to S3
        5. Scenario - extend on-prem storage
        6. Helps with migrations to AWS
      2. Volume Gateway
        1. Backup drives
        2. iSCSI mount
          1. Backing up these disks that the VMs are reading/writing to
        3. Same cached or stored mode as File Gateway - all backed up to S3
        4. Can create EBS snapshots and restore volumes inside AWS
          1. Easy way to migrate on-prem volumes to become EBS volumes inside AWS
        5. Perfect for backups and migration to AWS
      3. Tape Gateway
        1. Ditch the physical tapes and backup to Tape Gateway
        2. Stores inside S3 Glacier, Deep Archive, etc
        3. Directly integrated as a VM on-prem so it doesn’t change the current workflow
        4. It is encrypted
    3. Exam Tips
      1. Storage Gateway = Hybrid Storage
        1. Complement existing architecture
  7. AWS DataSync
    1. Agent-based solution for migrating on-prem storage to AWS
    2. Easily move data between NFS and SMB shares and AWS storage solutions
    3. Migration Tool
    4. Using DataSync:
      1. On-prem
        1. Have on-prem server and install DataSync agent
      2. Configure DataSync service to tell it where data is going to go
        1. Secure transmission with TLS
      3. Supports S3, EFS, and FSx
  8. AWS Transfer Family
    1. Allows you to easily move files in and out of S3 or EFS using SFTP, FTP over SSL (FTPS), or FTP
    2. How does it transfer:
      1. Legacy users/apps already have processes that transfer data over SFTP, FTPS, or FTP; to move that data into S3 or EFS, put Transfer Family where the old endpoint was and have the service deliver the files into S3/EFS
    3. Transfer Family Members
      1. AWS Transfer for SFTP and AWS Transfer for FTPS - transfers from outside of your AWS environment into S3/EFS
      2. AWS Transfer for FTP - only supported within the VPC and not over public internet
    4. Exam Tips
      1. Bringing legacy app storage to cloud
      2. DNS entry (endpoint) stays the same in legacy app
        1. We just swap out the old endpoint to become S3
  9. Migration Hub
    1. Single place to track the progress of your app migration to AWS
    2. Integrates with Server Migration Service (SMS) and Database Migration Service (DMS)
    3. Server Migration Service (SMS)
      1. Schedule movement for VMWare server migration
      2. At the scheduled time, it takes a copy of your underlying vSphere volume and brings that data into S3
      3. It converts that volume in S3 into an EBS snapshot
      4. Creates an AMI from that
      5. Can use AMI to launch EC2 instance
      6. Essentially gives you an easy way to take your VM architecture and convert it to an AMI
    4. Database Migration Service (DMS)
      1. Takes on-prem/EC2/RDS old Oracle (or SQL Server) and runs the AWS Schema Conversion Tool on it
      2. To convert it to an Amazon Aurora DB
        1. Takes an on-prem/EC2/RDS MySQL DB and consolidates it with DMS into Aurora

  1. Migrating Workloads to AWS using AWS Application Discovery Service or AWS Application Migration Service (MGN)
    1. Application Discovery Service
      1. Helps you plan your migrations to the cloud via collection of usage and configuration data from on-prem servers
      2. Integrates with AWS Migration Hub which simplifies migrations and tracking migration statuses
      3. Helps you easily view discovered services, group them by application, and track each application migration
      4. How do we discover our on-prem servers?
        1. 2 Discovery Types
          1. Agentless
            1. Completed via the Agentless Collector
            2. It is an OVA file within the VMWare vCenter
              1. OVA file = deployable file for a new type of VM appliance that you can deploy in vCenter
            3. Once you deploy the OVA, it identifies hosts and VMs in vCenter
            4. Helps track and collect IP addresses and MAC addresses, info on resource allocations (memory and CPUs), and host names
            5. Collects utilization data metrics
          2. Agent-Based
            1. Via an AWS Application Discovery Agent that is deployed
            2. Install this agent on each VM and physical server
            3. There is an installer for Linux and for Windows
            4. Collects more info than the agentless process
              1. Static configuration data, time-series performance info, network connections, and OS processes
    1. Application Migration Service (MGN)
      1. An automated lift-and-shift service for expediting migration of apps to AWS
      2. Used for physical, virtual, or cloud servers to avoid cutover windows or disruptions - flexible
      3. Replicates source servers into AWS and auto converts and launches on AWS to migrate quickly
      4. 2 key features are RTO and RPO:
        1. Recovery Time Objective
          1. Typically just minutes; depending on OS boot time
        2. Recovery Point Objective
          1. Measured in the sub-second range
          2. Can recover at any point after migration
  1. Migrating DBs from On-Prem to AWS using Database Migration Service (DMS)
    1. Migration tool for relational databases, data warehouses, NoSQL databases, and other data stores
      1. Migrate data into the cloud or on-prem: either into or out of AWS
      2. Can be a one-time migration or continuously replicate ongoing changes
    2. Conversion Tool called Schema Conversion Tool (SCT) used to transfer database schemas to new platforms
    3. How does DMS work?
      1. It's basically just a server running replication software
      2. Create source and target connections
      3. Schedule tasks to run on the DMS server to move data
      4. AWS creates the tables and primary keys (if they don't exist on the target)
        1. Optionally create your target tables beforehand
      5. Leverage the SCT for creating some or all of your tables, indexes, and more
      6. Source and target data stores are referred to as endpoints
    4. Important Concepts
      1. Can migrate between source and target endpoints with the same engine types
      2. Can also utilize SCT to migrate between source and target endpoints with different engines
      3. Important to know that at least 1 endpoint must live within an AWS service
    5. AWS Schema Conversion Tool (SCT)
      1. Convert
      2. Supports many engine types
        1. Many types of relational databases including both OLAP and OLTP, even supports data warehouses
        2. Supports many endpoints
          1. Any supported RDS engine type: Aurora, Redshift
        3. Can use the converted schemas with dbs running on EC2 or data stored in S3
          1. So don't have to migrate to a db service, per se, can be EC2 or S3
    6. 3 Migration Types (a task-creation sketch follows this section):
      1. Full Load
        1. All existing data is moved from sources to targets in parallel
        2. Any updates to your tables while this is in progress are cached on your replication server
      2. Full Load and Change Data Capture (CDC)
        1. CDC guarantees transactional integrity of the target db - the only migration type that does
      3. CDC Only
        1. Only replicate the data changes from the source db
    7. Migrating Large Data Stores via AWS Snowball
      1. With terabyte migrations, can run into bandwidth throttle/network throttles on network
      2. Can leverage Snowball Edge
        1. Leverage certain Snowball Edge devices and S3 with DMS to migrate large data sets quickly
      3. Can still leverage SCT to extract data into Snowball devices and then into S3
      4. Load converted data
        1. DMS can still load the extracted data from S3 and migrate to chosen destination
      5. Also CDC compatible
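A minimal sketch of creating and starting a DMS replication task; the endpoint and replication instance ARNs are placeholders created beforehand, and the migration type is one of the three listed above:

```python
import json
import boto3

dms = boto3.client("dms")

task = dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INSTANCE",
    # One of the 3 migration types: "full-load", "cdc", or "full-load-and-cdc"
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```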
  2. Replicating and Tracking Migrations with AWS Migration Hub
    1. Migration Hub
      1. Single place to discover existing servers, plan migration efforts and track migration statuses
      2. Visualize connection and server/db statuses that are a part of your migrations
      3. Options to start migrations immediately or group servers into app groups first
      4. Integrates with App Migration Service or DMS
      5. ONLY discovers and plans migrations and works with the other mentioned services to actually do the migrations
    2. Migration Phases
      1. Discover - find servers and databases to plan your migrations
      2. Migrate - connect tools to Migration Hub, and migrate
      3. Track
    3. Server Migration Service (SMS)
      1. Automates migrating on-prem servers to the cloud
      2. Flexible - covers broad range of supported VMs
      3. Works by incremental replications of server VMs over to AWS AMIs that can be deployed on EC2
      4. Can handle volume replication
      5. Incremental Testing
      6. Minimize downtime
  3. Amplify
    1. For Quickly deploying web apps
    2. Offers tools for front-end web and mobile developers to quickly build full-stack applications on AWS
    3. Offers 2 services:
      1. Amplify Hosting
        1. Support for common single-page application (SPA) frameworks like React, Angular, and Vue
          1. Also supports Gatsby and Hugo static site generators
        2. Allows for separate prod and staging environments for the frontend and backend
        3. Support for Server-Side Rendering (SSR) apps like Next.js
          1. Remember cannot do dynamic websites in S3, so any answer with Server-Side Rendering would be Amplify
      2. Amplify Studio
        1. Easy Authentication and Authorization
        2. Simplified Development
          1. Visual development environment to simplify creation of full-stack web or mobile apps
        3. Ready-to-use components, easy creation of backends and automated connections between the frontend and backend
    4. Exam Tip
      1. Amplify is the answer in scenario based questions like managed server-side rendering in AWS, easy mobile development, and developers running full-stack applications
  4. Device Farm
    1. For testing App Services
    2. Application testing service for testing and interacting with Android, iOS, and web apps on real devices
    3. 2 primary testing methods:
      1. Automated
        1. Upload scripts or use built-in tests for automatic parallel tests on mobile devices
      2. Remote Access
        1. You can swipe, gesture, and interact with the devices in real time via web browser
  5. Amazon Pinpoint
    1. Enables you to engage with customers through a variety of different messaging channels
      1. Generally used by marketers, business users, and developers
    2. Terms:
      1. Projects
        1. Collection of info, segments, campaigns, and journeys
      2. Channels
        1. Platform you intend to engage your audience with
      3. Segments
        1. Dynamic or imported; designates which users receive specific messages
      4. Campaigns
        1. Initiatives engaging specific audience segments using tailored messages
      5. Journeys
        1. Multi-step engagements
      6. Message Templates
        1. Content and settings for easily reusing repeated messages
    3. Leverages machine learning models to predict user patterns
    4. 3 Primary uses:
      1. Marketing
      2. Transactions - order confirmations, shipping notifications
      3. Bulk Communications
  6. Analyzing Text using Comprehend, Kendra, and Textract
    1. Comprehend
      1. Uses Natural Language Processing (NLP) to help you understand the meaning and sentiment in your text
        1. Ex: automate understanding of reviews as positive or negative (see the sentiment sketch after this section)
      2. Automating comprehension at scale
      3. Use Cases:
        1. Analyze call center analytics
        2. Index and Search product reviews
        3. Legal briefs management
        4. Process financial data
    2. Kendra
      1. Allows you to create an intelligent search service powered by machine learning
      2. Enterprise search applications - bridge between different silos of information (S3, file servers, websites), allowing you to have all the data intelligently in one place
      3. Use Cases:
        1. Research and Development Acceleration
        2. Improve Customer Interaction
        3. Minimize Regulatory and Compliance Risks
        4. Increase Employee productivity
        5. Can do research for you
    3. Textract
      1. Uses Machine Learning to automatically extract text, handwriting, and data from scanned documents
      2. Goes beyond OCR (Optical Character Recognition) by adding Machine Learning
      3. Turn text into data
      4. Use Cases:
        1. Convert handwritten/filled forms
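A minimal sketch of sentiment analysis on a product review with Comprehend:

```python
import boto3

comprehend = boto3.client("comprehend")

review = "The checkout process was painless and shipping was fast. Would buy again."
result = comprehend.detect_sentiment(Text=review, LanguageCode="en")

print(result["Sentiment"])        # e.g. POSITIVE / NEGATIVE / NEUTRAL / MIXED
print(result["SentimentScore"])   # confidence scores for each sentiment
```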
  7. AWS Forecast
    1. Time-series forecasting service that uses machine learning and is built to give you important business insights
    2. Can send your data to forecast and it will automatically learn your data, select the right Machine Learning algorithm, and then help you forecast your data
    3. Use cases:
      1. IoT, DevOps, Analytics
  8. AWS Fraud Detector
    1. AWS AI service built to detect fraud in your data
    2. Create a fraud detection machine learning model that is based on your data - can quickly automate this
    3. Use Cases:
      1. Identify suspicious online transactions
      2. Detect new account fraud
      3. Prevent Trial and Loyalty program abuse
      4. Improve account takeover detection
  9. Working with Text and Speech using Polly, Transcribe, and Lex
    1. Transcribe
      1. Speech to text
    2. Lex
      1. Build conversational interfaces in your apps using NLP
    3. Polly
      1. Turns text into lifelike speech (a short sketch follows this section)
    4. Alexa uses: Transcribe → Lex (sends answer/text to) → Polly
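A minimal sketch of turning text into speech with Polly; the voice and output file are illustrative choices:

```python
import boto3

polly = boto3.client("polly")

speech = polly.synthesize_speech(
    Text="Your order has shipped and will arrive on Tuesday.",
    OutputFormat="mp3",
    VoiceId="Joanna",   # one of Polly's built-in voices
)

# The audio comes back as a stream; save it to a local MP3 file
with open("notification.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```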
  10. Rekognition
    1. Computer vision product that automates the recognition of pictures and video using deep learning and neural networks
    2. Use these processes to understand and label images and videos
    3. Main use case is Content Moderation (see the sketch after this section)
      1. Also facial detection and analysis
      2. Celebrity recognition
      3. Streaming video events detection
        1. Ring, ex
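A minimal sketch of content moderation on an image already stored in S3; the bucket and key are placeholders:

```python
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-uploads-bucket", "Name": "user-upload.jpg"}},
    MinConfidence=80,   # only return labels detected with at least 80% confidence
)

for label in response["ModerationLabels"]:
    print(label["Name"], label["Confidence"])
```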
  11. SageMaker
    1. To train learning models
    2. Way to build Machine Learning models in AWS Cloud
    3. 4 Parts
      1. Ground Truth: set up and manage labeling jobs for training datasets using active learning and human labeling
      2. Notebook: managed Jupyter notebook (python)
      3. Training: train and tune models
      4. Inference: Package and deploy Machine Learning models at scale
    4. Deployment Types:
      1. Online Usage - if you need an immediate response
      2. Offline Usage- otherwise
    5. Elastic Inference - used to decrease cost
  12. Translate
    1. Machine learning service that allows you to automate language translation
    2. Uses deep learning and neural networks
  13. Elastic Transcoder
    1. For converting media files
    2. Allows businesses/developers to convert (transcode) media files from original source format into versions that are optimized for various devices
    3. Benefits:
      1. Easy to use - APIs, SDKs, or via management console
      2. Elastically scalable
  14. AWS Kinesis Video Streams
    1. Way of streaming media content from a large number of devices to AWS and then running analytics, machine learning, playback, and other processing
      1. Ex: Ring
      2. Elastically scales
      3. Access data through easy-to-use APIs
      4. Use Cases:
        1. Smart Home- ring
        2. Smart city- CCTV
        3. Industrial Automation

        2. Way to get fine-grained access control - can assign ACLS to individual objects within a bucket
      3. Bucket Policies
        1. Bucket-wide policies that define what actions are allowed or denied on buckets
        2. In JSON format
    11. Data Consistency Model with S3
      1. Strong Read-After-Write Consistency
        1. After a successful write of a new object (PUT) or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object
      2. Strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with all changes reflected
    12. ACLs vs Bucket Policies
      1. ACLs
        1. Work at an individual object level
        2. Ie: public or private object
      2. Bucket Policies
        1. Apply bucket-wide
    13. Storage Classes in S3
      1. S3 standard
        1. Default
        2. Redundantly across greater than or equal to 3 AZs
        3. Frequent access
      2. S3 Standard - Infrequently Accessed (IA)
        1. Infrequently accessed data, but data must be rapidly accessed when needed
        2. Pay to access data - per GB storage price and per-GB retrieval fee
      3. S3 One Zone - IA
        1. Like S3 standard IA but data is stored redundantly within single AZ
        2. Great for long-lived, IA NON critical data
      4. S3 Intelligent Tiering
        1. 2 Tiers:
          1. Frequent Access
          2. Infrequent Access
        2. Optimizes Costs - automatically moves data to most cost-effective tier
      5. Glacier
        1. Way of archiving your data long-term
        2. Pay for each time you access your data
        3. Cheap storage
        4. 3 Glacier options:
          1. Glacier Instant Retrieval

Long-term data archiving with instant retrieval

          1. Glacier Flexible Retrieval

Ideal storage class for archive data that does not require immediate access but needs the flexibility to retrieve large data sets at no cost,m such as backup or DR

Retrieval -minutes to 12 hours

          1. Glacier Deep Archive

Cheapest

Retain data sets for 7-10 years or longer to meet customer needs and regulatory requirements

Retrieval is 12 hours for standard and 48 hours for bulk

    1. Lifecycle mgmt in S3
      1. Automates moving objects between different storage tiers to max cost-effectiveness
      2. Can be used with versioning
    2. S3 Object Lock
      1. Can use object lock to store objects using a Write Once Read Many (WORM) model
        1. Can help prevent objects from being deleted or modified for a fixed amount of time OR indefinitely
      2. 2 modes of S3 Object Lock:
        1. Governance Mode
          1. Users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions
        2. Compliance Mode
          1. A protected object version cannot be overwritten or deleted by any user
          2. When object is locked in compliance mode, its retention cannot be changed/object cannot be overwritten/deleted for duration of period
          3. Retention Period:

Protect an object version for a fixed period of time

          1. Legal Holds:

Enables you to place a lock/hold on an object without an expiration period- remains in effect until removed

    1. Glacier Vault Locks
      1. Easily deploy and enforce compliance controls for individual Glacier vaults with a vault lock policy
      2. Can specify controls, such as WORM, in a vault lock policy and lock the policy from future edits
      3. Once locked, the policy can no longer be changed
    2. S3 Encryption
      1. TYpes of encryption available:
        1. Encryption in transit
          1. HTTPS-SSL/TLS
        2. Encryption at rest: Server Side Encryption
          1. Enabled by default with SSE-S3

This setting applies to all objects within S3 buckets

If the file is to be encrypted at upload time, the x-amz-server-side-encryption parameter will be included in the request header

You can create a bucket policy that denies any s3 PUT (upload) that does not include this parameter in the request header

          1. 3 Types:

SSE-S3

S3-managed keys, using AES 256-bit encryption

Most common

SSE-KMS

AWS key mgmt service-managed keys

If you use SSE-KMS to encrypt you objects in S3, you must keep in mind the KMS region limits

Uploading AND downloading will count towards the limit

SSE-C

Customer-provided keys

        1. Encryption at rest: Client-Side Encryption
          1. You encrypt the files yourself before you upload them to S3
    1. More folders/subfolder you have in S3, the better the performance
    2. S3 Performance:
      1. Uploads
        1. Multipart Uploads
          1. Recommended for files over 100 MB
          2. Required for files over 5GB
          3. = Parallelize uploads to increase efficiency
      2. Downloads
        1. S3 Byte-Range Fetches
          1. Parallelize downloads by specifying byte ranges
          2. If there is a failure in the download, it is only for that specific byte rance
          3. Used to speed up downloads
          4. Can be used to download partial amounts of a file - for ex: header info
    3. S3 Replication
      1. Can replicate objects from one bucket to another
      2. Versioning MUST be enabled on both buckets (source and destination buckets)
      3. Turn on replication, then replication is automatic afterwards
      4. S3 Bach Replication
        1. Allows replication of existing objects to different buckets on demand
      5. Delete markers are NOT replicated by default
        1. Can enable it when creating the replication rule
  1. EC2: Elastic Cloud Compute
    1. Pricing Options
      1. On-Demand
        1. Pay by hour or second, depending on instance
        2. Flexible - low cost without upfront cost or commitment
        3. Use Cases:
          1. Apps with short-term, spikey, or unpredictable workloads that cannot be interrupted
          2. Compliance
          3. Licencing
      2. Reserved
        1. For 1-3 years
        2. Up to 72% discount compared to on-demand
        3. Use Cases:
          1. Predictable usage
          2. Specific capacity requirements
        4. Types of RIs:
          1. Standard RIs
          2. Convertible RIs

Up to 54% off on-demand

You have the option to change to a different class of RI type of equal or greater value

          1. Scheduled RIs

Launch within timeframe you define

Match your capacity reservation to a predictable recurring schedule that only requires a fraction of day/wk/mo

        1. Reserved Instances operate at a REGIONAL level
      1. Spot
        1. Purchase unused capacity at a discount of up to 90%
        2. Prices fluctuate with supply and demand
        3. You set the maximum price you are willing to pay; while the spot price is at or below it you have your instance, and when it rises above your maximum you lose the instance
        4. Use Cases:
          1. Flexible start and end times
          2. Cost sensitive
          3. Urgent need for large amounts of additional capacity
        5. Spot Fleet
          1. A collection of spot instances and (optionally) on-demand instances
          2. Attempts to launch that number of spot instances and on-demand instances to meet the target capacity you specified

It is fulfilled if there is available capacity and the max price you specified in the request exceeds the spot price

Launch pools - different details on when to launch

          1. 4 strategies with spot fleets available:

Capacity optimized

Spot instances come from the pool with optimal capacity for the number of instances launched

Diversified

Spot instances are distributed across all pools

Lowest price

Spot instances come from the pool with the lowest price

This is the default strategy

InstancePoolsToUseCount

Distributed across the number of spot instance pools you specify

This parameter is only valid when used in combo with lowestPrice

      1. Dedicated
        1. Physical EC2 server dedicated for your use
        2. Most expensive
    1. Pricing Calculator
      1. Can use to estimate what your infrastructure will cost in AWS
    2. Bootstrap Scripts
      1. A user data script that runs with root privileges the first time the instance boots
      2. Starts with a shebang: #!/bin/bash
      3. EC2 Metadata
        1. For example, you can use the curl command in a bootstrap script to save instance metadata into a text file (see the sketch below)
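
A sketch of passing such a bootstrap script at launch with boto3; the embedded bash script saves the instance ID from the instance metadata service to a file (the AMI ID and file path are placeholders, and with IMDSv2 enforced the curl call would first need a session token):

import boto3

ec2 = boto3.client("ec2")

# Bash bootstrap script: runs once as root on first boot
user_data = """#!/bin/bash
# Save instance metadata (the instance ID) to a text file
curl -s http://169.254.169.254/latest/meta-data/instance-id > /home/ec2-user/instance-id.txt
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # passed as the instance's bootstrap script
)
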
    3. Networking with EC2
      1. 3 different types of networking cards
        1. Elastic Network Interface (ENI)
          1. For basic, day-to-day networking
          2. Use cases:

Create a management network

Use network and security appliances in VPC

Create dual-homed instances with workloads and roles on distinct subnets

Create a low budget, HA solution

          1. EC2s by default will have ENI attached to it
        1. Enhanced Networking (EN)
          1. Uses single root I/O virtualization (SR-IOV) to provide high performance
          2. For high-performance networking between 10 Gbps and 100 Gbps
          3. Types of EN

Elastic Network Adapter (ENA)

Supports network speeds of up to 100 Gbps for supported instance types

Intel 82599 Virtual Function (VF) Interfaces

Used in Older instances

Always choose ENA over VF

        1. Elastic Fabric Adapter (EFA)
          1. Accelerates High performance Computing (HPC) and ML apps
          2. Can also use OS-Bypass

Enables HPC and ML apps to bypass the OS kernel and communicate directly with the EFA device - Linux only

    1. Optimizing EC2 with Placement Groups
      1. 3 types of placement groups
        1. Cluster Placement Groups
          1. Grouping of instances within a single AZ
          2. Recommended for apps that need low network latency, high network throughput or both
          3. Only certain instance types can be launched into a cluster PG
          4. Cannot span multiple AZs
          5. AWS recommends homogeneous instances within the cluster
        2. Spread Placement Group
          1. Each placed on distinct underlying hardware
          2. Recommended for apps that have small number of critical instances that should be kept separate
          3. Used for individual instances
          4. Can span multiple AZs
        3. Partition Placement Group
          1. Each partition PG has its own set of racks, each rack has its own network and power source
          2. No two partitions within PG share the same racks, allowing you to isolate impact of HW failure
          3. EC2 divides each group into logical segments called partitions (basically = a rack)
          4. Can span multiple AZs
      2. You can't merge PGs
      3. You can move an existing instance into a PG
        1. Must be in stopped state
        2. Has to be done via CLI or SDK
    2. EC2 Hibernation
      1. When you hibernate an EC2 instance, the OS is told to perform suspend-to-disk
        1. Saves the contents from the instance memory (RAM) to your EBS root volume
        2. We persist the instance EBS root volume and any attached EBS data volumes
      2. Instance RAM must be less than 150GB
      3. Instance families include - C, M, and R instance families
      4. Available for Windows, Amazon Linux 2 AMI, and Ubuntu
      5. Instances cannot be hibernated for more than 60 days
      6. Available for on-demand and reserved instances
    3. Deploying vCenter in AWS with VMWare Cloud on AWS
      1. Used by orgs for private cloud deployment
      2. Use Cases - why VMWare on AWS
        1. Hybrid Cloud
        2. Cloud Migration
        3. Disaster Recovery
        4. Leverage AWS Services
    4. AWS Outposts
      1. Brings the AWS data center directly to you, on-prem
      2. Allows you to have AWS services in your data center
      3. Benefits:
        1. Allows for hybrid cloud
        2. Fully managed by AWS
        3. Consistency
      4. Outposts Family members
        1. Outposts Racks - large
        2. Outposts Servers - smaller
  1. Elastic Block Storage
    1. Elastic Block Storage
      1. Virtual disk, storage volume you can attach to EC2 instances
      2. Can use it like any system disk: install applications and OSs, run DBs, store data, and create file systems
      3. Designed for mission critical workloads - HA and auto replicated within single AZ
    2. Different EBS Volume Types
      1. General Purpose SSD
        1. gp2/3
        2. Balance of prices and performance
        3. Good for boot volumes and general apps
      2. Provisioned IOPS SSD (PIOPS)
        1. io1/2
        2. Super fast, high performance, most expensive
        3. IO intensive apps, high durability
      3. Throughput Optimized HDD (ST1)
        1. Low-cost HDD volume
        2. Frequently accessed, throughput-intensive workloads
          1. Throughput = used more for big data, data warehouses, ETL, and log processing
        3. Cost effective way to store mountains of data
        4. CANNOT be a boot volume
      4. Cold HDD (SC1)
        1. Lowest cost option
        2. Good choice for colder data requiring fewer scans per day
        3. Good for apps that need lowest cost and performance is not a factor
        4. CANNOT be a boot volume
          1. Only static images, file system
    3. IOPS vs Throughput
      1. IOPS
        1. Measures the number of read and write Operations/second
        2. Important for quick transactions, low-latency apps, transactional workloads
        3. Choose provisioned IOPS SSD (io1/2)
      2. Throughput
        1. Measures the number of bits read or written per sec (MB/s)
        2. Important metrics for large datasets, large IO sizes, complex queries
        3. Ability to deal with large datasets
        4. Choose throughput optimized HDD (ST1)
    4. Volumes vs Snapshots
      1. Volumes exist on EBS
        1. Must have a minimum of 1 volume per EC2 instance - called root device volume
      2. Snapshots exist on S3
        1. Point in time copy of a volume
        2. Are incremental
        3. For consistent snapshots: stop instance
        4. Can only share snapshots within region they were created, if want to share outside, have to copy to destination region first
    5. Things to know about EBS’s:
      1. Can resize on the fly, just resize the filesystem
      2. Can change volume types on the fly
      3. EBS will always be in the same AZ as EC2
      4. If we stop an instance, data is kept on EBS disk
      5. EBS volumes are NOT encrypted by default
    6. EBS Encryption
      1. Uses KMS customer master keys (CMK) when creating encrypted volumes and snapshots
      2. Data at rest is encrypted in volume
      3. Data inflight between instance and volume is encrypted
      4. All volumes created from the snapshot are encrypted
      5. End-to-end encryption
      6. Important to remember: copying an unencrypted snapshot allows encryption
        1. 4 steps to encrypt an unencrypted volume (the first two are sketched in code below):
          1. Create a snapshot of the unencrypted volume
          2. Create a copy of the snapshot and select the encrypt option
          3. Create an AMI from the encrypted snapshot
          4. Use that AMI to launch new encrypted instances
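
A sketch of the first two steps with boto3 (the volume ID and region are placeholders); registering an AMI from the encrypted snapshot and launching from it would follow:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# 1. Snapshot the unencrypted volume
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Description="Unencrypted source snapshot",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption turned on (uses the default EBS KMS key here)
encrypted = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",
    Encrypted=True,
)
print(encrypted["SnapshotId"])
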
  2. Elastic File System (EFS)
    1. Managed NFS (Network File System) that can be mounted on many EC2 instances
    2. Shared storage
    3. EFS provides NAS storage for EC2 instances, based on Network File System version 4 (NFSv4)
    4. EC2 has a mount target that connects to the EFS
    5. Use Cases:
      1. Web server farms, content mgmt systems, shared db access
    6. Uses NFSv4 protocol
    7. Linux-Based AMIs only (not windows)
    8. Encryption at rest with KMS
    9. Performance
      1. Amazing performance capabilities
      2. 1000s concurrent connections
      3. 10 Gbps throughput
      4. Scales to petabytes
    10. Set the performance characteristics:
      1. General Purpose - web servers, content management
      2. Max I/O - big data, media processing
    11. Storage Tiers for EFS
      1. Standard - frequently accessed files
      2. Infrequently Accessed
    12. FSx For Windows
      1. Provides a fully managed native Microsoft Windows file system so you can easily move your windows-based apps that require file storage to AWS
      2. Built on Windows servers
      3. If see anything regarding:
        1. sharepoint service
        2. shared storage for windows
        3. Active directory migration
      4. Managed Windows Server that runs windows server message block (SMB) - based file services
      5. Supports AD users, ACLs, groups, and security policies, along with Distributed File System (DFS) namespaces and replication
    13. FSx for Lustre:
      1. Managed file system that is optimized for compute-intensive workloads
      2. Use Cases:
        1. High performance computing, AI, ML, Media Data processing workflows, electronic design automation
      3. With a Lustre, you can launch and run a Lustre file system that can process massive datasets at up to 100s of Gbps of throughput, millions of IOPS, and sub-millisec latencies
    14. When To pick EFS vs FSx for Windows vs FSx for Lustre
      1. EFS
        1. Need distributed, highly resilient storage for Linux
      2. FSx for Windows
        1. Central storage for windows (IIS server, AD, SQL Server, Sharepoint)
      3. FSx for Lustre
        1. High speed, high-capacity, AI, ML
        2. IMPORTANT: Can store data directly on S3
  3. Amazon Machine Images: EBS vs Instance Store
    1. An AMI provides the info required to launch an instance
    2. *AMIs are region-specific
    3. 5 things you can base your AMIs on:
      1. Region
      2. OS
      3. Architecture (32 vs 64-bit)
      4. Launch permissions
      5. Storage for the root device (root volume)
    4. All AMIs are categorized as either backed by one of these:
      1. EBS
        1. The root device for an instance launched from the AMI is an EBS volume created from EBS snapshot
        2. CAN be stopped
        3. Will not lose data if instance is stopped
        4. By default, the root volume will be deleted on termination, but you can tell AWS to keep the root device volume (an EBS volume)
        5. PERMANENT storage
      2. Instance Store
        1. Root device for an instance launched from the AMI is an instance store volume created from a template stored in S3
        2. Are ephemeral storage
          1. Meaning they cannot be stopped
          2. If underlying host fails, you will lose your data

CAN reboot the instance without losing your data

          1. If you delete your instance, you will lose the instance store volume
  1. AWS Backup
    1. Allows you to consolidate your backups across multiple AWS Services such as EC2, EBS, EFS, FSx for Lustre, FSx for Windows file server and AWS Storage Gateway
    2. Backups can be used with AWS Organizations to backup multiple AWS accounts in your org
    3. Gives you centralized control across all services, in multiple AWS accounts across the entire AWS org
    4. Benefits
      1. Central management
      2. Automation
      3. Improved Compliance
        1. Policies can be enforced, and encryption
        2. Auditing is easy
  2. Relational Database Service
    1. 6 different RDS engines
      1. SQL Server
      2. Oracle
      3. MySQL
      4. PostgreSQL
      5. MariaDB
      6. Aurora
    2. When to use RDS’s:
      1. Generally used for Online Transaction Processing (OLTP) workloads
        1. OLTP: transaction
          1. Large numbers of small transactions in real-time
        2. Different than OLAP (Online Analytical Processing)
          1. OLAP:

Processes complex queries to analyze historical data

All about data analysis using large amounts of data as well as complex queries that take a long time

RDS’s are NOT suitable for this purpose → use a data warehouse option like Redshift, which is optimized for OLAP

    1. Multi-AZ RDSs
      1. Aurora cannot be single AZ
        1. All others can be configured to be multi-AZ
      2. Creates an exact copy of your prod db in another AZ, automatically
        1. When you write to your prod db, the write is automatically synchronized to the standby db
      3. Unplanned Failure or Maintenance:
        1. Amazon auto detects any issues and will auto failover to the standby db via updating DNS
      4. Multi-AZ is for DISASTER RECOVERY, not for performance
        1. CANNOT connect to standby db when primary db is active
    2. Increase read performance with read replicas
      1. Read replica is a read-only copy of the primary db
      2. You run queries against the read-only copy and not the primary db
      3. Read replicas are for PERFORMANCE boosting
      4. Each read replica has its own unique DNS endpoint
      5. Read replicas can be promoted as their own dbs, but it breaks the replication
        1. For analytics for example
      6. Multiple read replicas are supported = up to 5 to each db instance
      7. Read replicas require auto backups to be turned on
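
For example, a minimal boto3 sketch of adding a read replica (the instance identifiers are placeholders, and automated backups must already be enabled on the source):

import boto3

rds = boto3.client("rds")

# Create a read-only copy of the primary DB instance; it gets its own DNS endpoint
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",       # hypothetical replica name
    SourceDBInstanceIdentifier="app-db-primary",   # hypothetical primary name
    DBInstanceClass="db.t3.medium",
)
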
    3. Aurora
      1. MySQL- and PostgreSQL-compatible relational database engine that combines the speed and availability of high-end commercial dbs with the simplicity and cost-effectiveness of open-source dbs
      2. 2 copies of data in each AZ with minimum of 3 AZs → 6 copies of data
      3. Aurora storage is self-healing
        1. Data blocks and disks are continuously scanned for errors and repaired automatically
      4. 3 types of Aurora Replicas Available:
        1. Aurora Replicas = 15 read replicas
        2. MySQL Replicas = 5 read replicas with Aurora MySQL
        3. PostgreSQL = 5 read replicas with Aurora PostgreSQL
      5. Aurora Serverless
        1. An on-demand, auto-scaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Aurora
        2. An Aurora serverless db cluster automatically starts up, shuts down, and scales capacity up or down based on your app’s needs
        3. Use Cases:
          1. For spiky workloads
          2. Relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
  1. DynamoDB
    1. Proprietary NON-relational DB
    2. Fast and flexible NoSQL db service for all applications that need constant, single-digit millisecond latency at any scale
    3. Fully managed db and supports both document and key-value data modules
    4. Use Cases:
      1. Flexible data model and reliable performance make it great fit for mobile, web, gaming, ad-tech, IoT, etc
    5. 4 facts on DynamoDB:
      1. All stored on SSD Storage
      2. Spread across 3 geographically distinct data centers
      3. Eventually consistent reads by default
        1. This means that consistency across all copies of the data is usually reached within a second. Repeating a read after a short time should return the updated data. Best read performance
      4. Can opt for strongly consistent reads
        1. This means that all copies return a result that reflects all writes that received a successful response prior to that read
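
A minimal boto3 sketch of the two read modes against a hypothetical Orders table:

import boto3

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

# Eventually consistent read (default): fastest, may briefly return stale data
item = table.get_item(Key={"OrderId": "1001"}).get("Item")

# Strongly consistent read: reflects all writes acknowledged before the read
item = table.get_item(Key={"OrderId": "1001"}, ConsistentRead=True).get("Item")
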
    6. DynamoDB Accelerator (DAX)
      1. Fully managed, HA, in-memory cache
      2. 10x performance improvement
      3. Reduces request time from milliseconds to microseconds
      4. Compatible with DynamoDB API calls
      5. Sits in front of DynamoDB
    7. DynamoDB Security
      1. Encryption at rest with KMS
      2. Can connect with site-to-site VPN
      3. Can connect with Direct Connect (DX)
      4. Works with IAM policies and roles
        1. Fine-grained access
      5. Integrates with CloudWatch and CloudTrail
      6. VPC endpoints-compatible
    8. DynamoDB Transactions
      1. ACID Diagram/Methodology
        1. Atomic
          1. All changes to the data must be performed successfully or not at all
        2. Consistent
          1. Data must be in a consistent state before and after the transaction
        3. Isolated
          1. No other process can change the data while the transaction is running
        4. Durable
          1. The changes made by a transaction must persist
      2. ACID basically means if anything fails, it all rolls back
      3. DynamoDB transactions provide developers ACID across 1 or more tables within a single aws acct and region
      4. You can use transactions when building apps that require coordinated inserts, deletes, or updates to multiple items as part of a single logical business operation
      5. DynamoDB transactions have to be enabled in DynamoDB to use ACID
      6. Use Cases:
        1. Financial Transactions, fulfilling orders
      7. 3 options for reads
        1. Eventual consistency
        2. Strong consistency
        3. Transactional
      8. 2 options for writes:
        1. Standard
        2. Transactional
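
A minimal sketch of a transactional write with boto3, assuming hypothetical Orders and Payments tables - both puts succeed together or neither is applied:

import boto3

dynamodb = boto3.client("dynamodb")

# Atomic, all-or-nothing write across two tables in the same account and region
dynamodb.transact_write_items(
    TransactItems=[
        {"Put": {"TableName": "Orders",
                 "Item": {"OrderId": {"S": "1001"}, "Status": {"S": "PLACED"}}}},
        {"Put": {"TableName": "Payments",
                 "Item": {"PaymentId": {"S": "9001"}, "OrderId": {"S": "1001"}}}},
    ]
)
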
    9. DynamoDB Backups
      1. On-Demand Backup and Restore
      2. Point-In-Time Recovery (PITR)
        1. Protects against accidental writes or deletes
        2. Restore to any point in the last 35 days
        3. Incremental backups
        4. NOT enabled by default
        5. Latest restorable: 5 minutes in the past
    10. DynamoDB Streams
      1. Are time-ordered sequence of item-level changes in a table (FIFO)
      2. Data is completely sequenced
      3. These sequences are stored in DynamoDB Streams
        1. Stored for 24 hours
      4. Sequences are broken up into shards
        1. A shard is a bunch of data that has sequential sequence numbers
      5. Every time you make a change to a DynamoDB table, that change is stored sequentially in a stream record, which is broken up into shards
      6. Can combine streams with Lambda functions for functionality like stored procedures
    11. DynamoDB Global Tables
      1. Managed multi-master, multi-region replication
      2. Way of replicating your DynamoDB tables from one region to another
      3. Great for globally distributed apps
      4. This is based on DynamoDB streams
        1. Streams must be turned on to enable Global Tables
      5. Multi-region redundancy for disaster recovery or HA
      6. Natively built into DynamoDB
    12. Mongo-DB-compatible DBs in Amazon DocumentDB
      1. DocumentDB
        1. Allows you to run MongoDB in the AWS cloud
        2. A managed db service that scales with your workloads and safely and durably stores your db info
        3. NoSQL
        4. Direct move for MongoDB
        5. Cannot run Mongo workloads on DynamoDB so MUST use DocumentDB
    13. Amazon Keyspaces
      1. Run Apache Cassandra Workloads with Keyspaces
      2. Cassandra is a distributed (runs on many machines) database that uses NoSQL, primarily for big data solutions
      3. Keyspaces allows you to run cassandra workloads on AWS and is fully managed and serverless, auto-scaled
    14. Amazon Neptune
      1. Implement GraphDBs - stores nodes and relationships instead of tables or documents
    15. Amazon Quantum Ledger DB (QLDB)
      1. For Ledger DB
        1. Are NoSQL dbs that are immutable, transparent, and have a cryptographically verifiable transaction log that is owned by one authority
      2. QLDB is fully managed ledger db
    16. Amazon Timestream
      1. Time-series data are data points that are logged over a series of time, allowing you to track your data
      2. A serverless, fully managed db service for time-series data
      3. Can analyze trillions of events/day up to 1000x faster and at 1/10th the cost of traditional RDSs
  2. Virtual Private Cloud (VPC) Networking
    1. VPC Overview
      1. Virtual data center in the cloud
      2. Logically isolated part of AWS cloud where you can define your own network with complete control of your virtual network
      3. Can additionally create a hardware VPN connection between your corporate data center and your VPC and leverage the AWS cloud as an extension of your corporate data center
      4. Attach a Virtual Private Gateway to our VPC to establish a VPN and connect to our instances from our corporate data center over a private connection
      5. By default we have 1 VPC in each region
    2. What can we do with a VPC
      1. Use route table to configure between subnets
      2. Use Internet Gateway to create secure access to internet
      3. Use Network Access Control Lists (NACLs) to block specific IP addresses
      4. Default VPC
        1. User friendly
        2. All subnets in default VPC have a route out to the internet
        3. Each EC2 instance has a public and private IP address
        4. Has route table and NACL associated with it
      5. Custom VPC
    3. Steps to set up a VPC Connection:
      1. Choose an IPv4 CIDR block
        1. Note: the first 4 IP addresses and the last IP address in each subnet CIDR block are reserved by Amazon
      2. Choose Tenancy
      3. By default, creates:
        1. Security Group
        2. Route Table
        3. NACL
      4. Create subnet associations
      5. Create internet gateway and attach to VPC
      6. Set up route table with route out to internet
      7. Associate subnet with VPC
      8. Create Security group
        1. With inbound, outbound rules
        2. Associate EC2 instance with Security Group
  3. Using NAT Gateways for internet access within private subnet
    1. For example, we need to patch db server
    2. NAT Gateway:
      1. You can use a Network Address Translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection with those instances
    3. How to do this:
      1. Create a NAT Gateway in our public subnet
      2. Allow our EC2 instance (in private subnet) to connect to the NAT Gateway
    4. 5 facts to remember:
      1. Redundant inside the AZ
      2. Starts at 5 Gbps and scales to 45 Gbps
      3. No need to patch
      4. Not associated with any security groups
      5. Automatically assigned a public IP address
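
A sketch of those two steps with boto3 (all IDs are placeholders): create the NAT gateway in the public subnet, then give the private subnet's route table a default route through it:

import boto3

ec2 = boto3.client("ec2")

# NAT gateway lives in the PUBLIC subnet and needs an Elastic IP allocation
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",          # hypothetical public subnet
    AllocationId="eipalloc-0bbb2222",    # hypothetical Elastic IP allocation
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's internet-bound traffic through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0ccc3333",         # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
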
  4. Security Groups
    1. Are virtual firewalls for EC2 instances
      1. By default, everything is blocked
    2. Are stateful - this means that if you send a request from your instance, the response traffic for that request is allowed to flow in, regardless of inbound security group rules
      1. Ie: responses to allowed inbound traffic are allowed to flow out regardless of outbound rules
  5. Network ACLs
    1. Are frontline of defense, optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets
    2. You can add NACL rules that mirror your Security Group rules as an added layer of security
    3. Overview:
      1. Default NACLs
        1. VPC automatically comes with default NACL and by default it allows all inbound and outbound traffic
      2. Custom NACLs
        1. By default block all inbound and outbound traffic until you add rules
      3. Each subnet in your VPC must be associated with a NACL
        1. If you don't explicitly associate a subnet with a NACL, the subnet is auto associated with the default NACL
      4. Can associate a NACL with multiple subnets, but each subnet can only have a single NACL associated with it at a time
      5. Have separate inbound and outbound NACLs
    4. Block IP addresses with NACLs NOT with Security Groups
      1. NACLs contain a numbered list of rules that are evaluated in order, starting with lowest numbered rule
        1. Once a match is found, the rule is applied and the remaining rules are not evaluated
        2. If you want to deny a single IP address, the deny rule must come FIRST (a lower rule number) than the allow-all rule (sketched below)
      2. NACLs are stateless
        1. This means that responses to allowed inbound traffic are subject to the rules for outbound traffic, and vice versa
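
A sketch of that rule ordering with boto3 - the deny entry gets rule number 90 so it is evaluated before the allow-all entry at 100 (the NACL ID and IP address are placeholders):

import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # hypothetical NACL ID

# Rule 90: deny all inbound traffic from one IP address (evaluated before rule 100)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=90,
    Protocol="-1",
    RuleAction="deny",
    Egress=False,
    CidrBlock="203.0.113.25/32",
)

# Rule 100: allow all remaining inbound traffic
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="-1",
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
)
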
  6. VPC Endpoints
    1. Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection
    2. Like a NAT gateway, but it doesn't use the public internet; it uses Amazon's backbone network - traffic stays within the AWS environment
    3. Endpoints are virtual devices
      1. Horizontally scaled, redundant, and HA VPC components that allow communication between instances on your VPC and services without imposing availability risks or bandwidth constraints on your network traffic
    4. Remember that NAT gateways have a 5-45 Gbps restriction - you don't want that restriction if you have an EC2 instance writing to S3, so you may have it go through a VPC endpoint instead
    5. 2 Types of endpoints:
      1. Interface Endpoint
        1. An ENI with a private IP address that serves as an entry point for traffic headed to a supported service
      2. Gateway Endpoint
        1. Similar to NAT gateway
        2. Virtual device that supports connection to S3 and DynamoDB
    6. Use Case: you want to connect to AWS services without leaving the Amazon internal network = VPC endpoint
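
A minimal boto3 sketch of a gateway endpoint for S3; it adds routes to the listed route tables so S3-bound traffic stays on the AWS network (the VPC ID, region, and route table ID are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for the region
    RouteTableIds=["rtb-0ccc3333"],            # hypothetical route table
)
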
  7. VPC Peering
    1. Allows you to connect 1 VPC with another via a direct network route using private IP addresses
      1. Instances behave as if they were on the same private network
      2. Can peer VPCs with other AWS accounts as well as other VPCs in same account
      3. Can peer between regions
    2. Is in a star configuration with one central VPC
      1. No transitive peering
    3. Cannot have overlapping CIDR address ranges between peered VPCs
  8. PrivateLink
    1. Opening up your services in a VPC to another VPC can be done in two ways:
      1. Open VPC up to internet
      2. Use VPC Peering - whole network is accessible to peer
    2. Best way to expose a service VPC to tens, hundreds, thousands of customer VPC is through PrivateLink
      1. Does Not require peering, no route tables, no NAT gateways, no internet gateways, etc
      2. DOES require a Network Load Balancer on the service VPC and an ENI on the customer VPC
  9. CloudHub
    1. Useful if you have multiple sites, each with its own VPN connection, use CH to connect those sites together
    2. Overview:
      1. Hub and spoke model
      2. Low cost and easy to manage
      3. Operates over public internet, but all traffic between customer gateway and CloudHub is encrypted
      4. Essentially aggregating VPN connections to single entry point
  10. Direct Connect (DX)
    1. A cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS
    2. Private connectivity
    3. Can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections
    4. Instead of VPN
    5. Two types of Direct Connect Connections:
      1. Dedicated Connection
        1. Physical ethernet connection associated with a single customer
      2. Hosted Connection
        1. Physical ethernet connection that an AWS Direct Connect Partner (verizon, etc) provisions on behalf of a customer
  11. Transit Gateway
    1. Connects VPCs and on-prem networks through a central hub
    2. Simplifies network by ending complex peering relationships
    3. Acts as a cloud router - each connection is only made once
    4. Connect VPCs to Transit Gateway
      1. Everything connected to TG will be able to talk directly
    5. Facts
      1. Allows you to have transitive peering between thousands of VPCs and on-prem datacenters
      2. Works on regional basis, but can have it across multiple regions
      3. Can use it across multiple AWS account using RAM (Resource Access Manager)
      4. Use route table to limit how VPCs talk to one another
      5. Supports IP Multicast which is not supported by any other AWS Service
  12. Wavelength
    1. Embeds AWS compute and storage service within 5G networks, providing mobile edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications
  13. Route53
    1. Overview
      1. Domain Registrars: are authorities that can assign domain names directly under one or more top-level Domain
    2. Common DNS Record Types
      1. SOA: Start Of Authority Record
        1. Stores info about:
          1. Name of server that supplied the data for the zone
          2. Administrator of the zone
          3. Current version of the data file
          4. The default TTL (in seconds) for resource records
        2. How it works:
          1. Starts with NS (Name Server) records

NS records are used by top-level domain servers to direct traffic to the content DNS server that contains the authoritative DNS records

          1. So browser goes to top level domain first (.com) and will look up ‘ACG’,
          2. TLD will give the browser an NS record where the SOA will be stored
          3. Browser will browse over to the NS records and get SOA
          4. Start of Authority contains all of our DNS records
      1. A Record (or address record)
        1. The fundamental type of DNS record
        2. Used by a computer to translate the name of the domain to an IP address
        3. Most common type of DNS record
        4. TTL = time to live
          1. Length that a DNS record is cached on either the resolving server or the user’s own local PC
          2. The lower the TTL, the faster changes to DNS records take to propagate through the internet
      2. CNAME
        1. Canonical name can be used to resolve one domain name to another
          1. Ex m.acg.com and mobile.acg.com resolve to same
      3. Alias Records
        1. Used to map resource sets in your hosted zone to load balancers, CloudFront distros, or S3 buckets that are configured as websites
        2. Works like a CNAME record in that it can map one dns name to another, but
          1. CNAMES cannot be used for naked domain names/zone apex record
          2. Alias Records CAN be used to map naked domain names/zone apex record
    1. Route53 Overview
      1. Amazon’s DNS service, that allows you to register domain names, create hosted zones, and manage and create DNS records
      2. 7 Routing Policies available with Route53:
        1. Simple Routing Service
          1. Can only have one record with multiple IP addresses
          2. If you specify multiple values in a record, route53 returns all values to the user in a random order
        2. Weighted Routing Policy
          1. Allows you to split your traffic based on assigned weights
          2. Health Checks

Can set health checks on individual record sets/servers

So if a record set/server fails a health check, it will be removed from route53 until it passes the check

While it is down, no traffic will be routed to it, but will resume when it passes

          1. Create a health check for each weighted route that we are going to create to monitor the endpoint, monitor by IP address
        1. Failover Routing Policy
          1. When you want to create an active/passive setup
          2. Route53 will monitor the health of your primary site using health checks and auto-route traffic if primary site fails the check
        2. Geolocation Routing
          1. Lets you choose where your traffic will be sent based on the geographical location of your users
          2. Based on the location from which DNS queries originate; the end location of your user
        3. Geoproximity Routing Policy
          1. Can route traffic flow to build a routing system that uses a combo of:

Geographic location

Latency

Availability to route traffic from your users to your close or on-prem endpoints

          1. Can build from scratch or use templates and customize
        1. Latency Routing Policy
          1. Allows you to route your traffic based on the lowest network latency for your end user
          2. Create a latency resource record set for the EC2 (or ELB) resource in each region that hosts your website

When route53 receives a query for your site, it selects the latency resource record set for the region that gives you the lowest latency

        1. Multivalue Answer Routing Policy
          1. Lets you configure route53 to return multiple values, such as IP addresses for your web server, in response to DNS queries
          2. Basically similar to simple routing, however, it allows you to put health checks on each record set
  1. Elastic Load Balancers (ELBs)
    1. Auto distributes incoming traffic across multiple targets
      1. Can also be done across AZs
    2. 3 types of ELBs
      1. Application Load Balancer
        1. Best suited for balancing of HTTP and HTTPs Traffic
        2. Operates at layer 7
        3. Application-aware load balancer
        4. Intelligent load balancer
      2. Network Load Balancer
        1. Operates at the connection level (Level 4)
        2. Capable of handling millions of requests/sec, low latencies
        3. A performance load balancer
      3. Classic Load Balancer
        1. Legacy load balancer
        2. Can load balance HTTP/HTTPS applications and use Layer 7-specific features such as X-Forwarded-For headers and sticky sessions
        3. For Test/Dev
    3. ELBs can be configured with Health Checks
      1. They periodically send requests to the load balancer’s registered instances to test their states [InService vs OutOfService returns]
    4. Application Load Balancers
      1. Layer 7, App-Aware Load Balancing
        1. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action
      2. Listeners
        1. A listener checks for connection requests from clients when using the protocol and port you configure
        2. You define the rules that determine how the load balancer routes requests to its registered targets
          1. Each rule consists of priority, one or more actions, and one or more conditions
      3. Rules
        1. When conditions of rule are met, actions are performed
        2. Must define a default rule for each listener
      4. Target Group
        1. Each target group routes requests to one or more registered targets using the protocol and port you specify
      5. Path-Based Routing
        1. Enable path patterns to make load balancing decisions based on path
        2. /image → certain EC2 instance
      6. Limitations of App Load Balancers:
        1. Can ONLY support HTTP/HTTPS listeners
      7. Can enable sticky sessions with app load balancers, but traffic will be sent at the target group level, not specific EC2 instance
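
A sketch of path-based routing with boto3 (elbv2): a listener rule that forwards /image/* requests to a dedicated target group (both ARNs are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/example/123",  # placeholder ARN
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/image/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/images/456",  # placeholder ARN
    }],
)
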
    5. Network Load Balancer
      1. Layer 4, Connection layer
      2. Can handle millions of requests/sec
      3. When network load balancer has only unhealthy registered targets, it routes requests to ALL the registered targets - known as fail-open mode
      4. How it works
        1. Connection request received
          1. Load balancer selects a target from the target group for the default rule
          2. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration
        2. Listeners
          1. A listener checks for connection requests from clients using the protocol and port you configure
          2. The listener on a network load balancer then forwards the request to the target group

There are NO rules, unlike the Application load balancers - cannot do intelligent routing at level 4

        1. Target Groups
          1. Each target group routes requests to one or more registered targets
        2. Supported protocols: TCP, TLS, UDP, TCP_UDP
        3. Encryption
          1. You can use a TLS listener to offload the work of encryption and decryption to your load balancer
      1. Use Cases:
        1. Best for load balancing TCP traffic when extreme performance is required
        2. Or if you need to use protocols that aren't supported by app load balancer
    1. Classic Load Balancer
      1. Legacy
      2. Can load balance HTTP/HTTPS apps and use Layer 7-specific features
      3. Can also use strict layer 4 load balancing for apps that rely purely on TCP protocol
      4. X-Forwarded-For Header
        1. When traffic is sent from a load balancer, the server access logs contain the IP address of the proxy or load balancer only
        2. To see the original IP address of the client the x-forwarded-for request header is used
      5. Gateway Timeouts with Classic load balancer
        1. If your application stops responding, the classic load balancer responds with a 504 error
          1. This means that the application is having issues
          2. Means the gateway has timed out
      6. Sticky Sessions
        1. Typically the classic load balancer routes each request independently to the registered EC2 instance with smallest load
        2. But with sticky sessions enabled, user will be sent to the same EC2 instance
        3. Problem could occur if we remove one of our EC2 instances while the user still has a sticky session going
          1. Load balancer will still try to route our user to that EC2 instance and they will get an error
          2. To fix this, we have to disable sticky sessions
    2. Deregistration Delay
      1. Aka Connection Draining with Classic load balancers
      2. Allows load balancers to keep existing connections open if the EC2 instances are deregistered or become unhealthy
      3. Can disable this if you want your load balancer to immediately close connections
  1. CloudWatch
    1. Monitoring and observability platform to give us insight into our AWS architecture
    2. Features
      1. System metrics
        1. The more managed a service is, the more you get out of the box
      2. Application Metrics
        1. By installing CloudWatch agent, you can get info from inside your EC2 instances
      3. Alarms
        1. No default alarms
        2. Can create an alarm to stop, terminate, reboot, or recover EC2 instances
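
For example, a hedged boto3 sketch of such an alarm that recovers an EC2 instance when its system status check fails (the alarm name, instance ID, and region in the action ARN are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="recover-web-server",                       # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # EC2 action that recovers the instance when the alarm fires
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
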
    3. 2 kinds of metrics:
      1. Default
        1. CPU util, network throughput
      2. Custom
        1. Must be provided by the CloudWatch agent installed on the host and reported back to CloudWatch, because AWS cannot see past the hypervisor level for EC2 instances
        2. Ex: EC2 memory util, EBS storage capacity
    4. Standard vs Detailed monitoring
      1. Standard/Basic monitoring for EC2 provides metrics for your instances every 5 minutes
      2. Detailed monitoring provides metrics every 1 minute
    5. A period is the length of time associated with a specific CloudWatch stat - default period is 60 seconds
    6. CloudWatch Logs
      1. Tool that allows you to monitor, store, and access log files from a variety of different sources
      2. Gives you the ability to query your logs to look for potential issues or relevant data
      3. Terms:
        1. Log Event: data point, contains timestamp and data
        2. Log Stream: collection of log events from a single source
        3. Log Group: collection of log streams
          1. Ex may group all Apache web server host logs
      4. Features:
        1. Filter Patterns
        2. CloudWatch Log Insights
          1. Allows you to query all your logs using SQL-like interactive solution
        3. Alarms
      5. What services act as a source for CloudWatch logs?
        1. EC2, Lambda, CloudTrail, RDS
      6. CloudWatch is our go-to log tool, except for if the exam asks for a real-time solution (then it will be kinesis)
  2. Amazon Managed Grafana
    1. Fully managed service that allows us to securely visualize our data for instantly querying, correlating, and visualizing your operational metrics, logs, and traces from different sources
    2. Overview
      1. Grafana made easy
      2. Logical separation with workspaces
        1. Workspaces are logical Grafana servers that allow for separation of data visualizations and querying
      3. Data Sources for Grafana: CloudWatch, Managed Service for Prometheus, OpenSearch Service, Timestream
    3. Use Cases:
      1. Container metrics visualizations
        1. Connect to data sources like Prometheus for visualizing EKS, ECS, or own Kube cluster metrics
        2. IoT
  3. Amazon Managed Service for Prometheus
    1. Serverless, Prometheus-compatible service used for securely monitoring container metrics at scale
    2. Overview
      1. Still use open-source prometheus, but gives you AWS managed scaling and HA
      2. Auto Scaling
      3. HA - replicates data across three AZs in the same region
      4. EKS and self-managed Kubernetes clusters
      5. PromQL: the open source query language for exploring and extracting data
      6. Data Retention:
        1. Data is stored in workspaces for 150 days; after that, it is deleted
  4. VPC Flow Logs
    1. Can configure to send to S3 bucket
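
A minimal boto3 sketch of turning on flow logs for a VPC and delivering them to an S3 bucket (the VPC ID and bucket ARN are placeholders):

import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],                   # hypothetical VPC
    TrafficType="ALL",                                       # ACCEPT, REJECT, or ALL
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",   # hypothetical bucket ARN
)
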
  5. Horizontal and Vertical Scaling
    1. Launch Templates
      1. Specifies all of the needed settings that go into building out an EC2 instance
      2. More than just auto-scaling
      3. More granularity
      4. AWS recommends Launch templates over Configurations
        1. Configurations are only for auto-scaling, are immutable, limited configuration options, don't use them
    2. Create template for Auto Scaling Group
  6. Auto Scaling
    1. Auto Scaling Groups
      1. Contains a collection of EC2 instances that are treated as a collective group for the purposes of scaling and management
      2. What goes into auto scaling group?
        1. Define your template
          1. Pick from available launch templates or launch configurations
        2. Pick your networking and purchasing
          1. Pick networking space and purchasing options
        3. ELB configuration
          1. ELB sits in front of auto scaling group
          2. EC2 instances are registered behind a load balancer
          3. Auto scaling can be set to respect the load balancer health checks
        4. Set Scaling policies
          1. Min/Max/Desired capacity
        5. Notifications
          1. SNS can act as notification tool, alert when a scaling event occurs
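
Pulling several of those pieces together, a minimal boto3 sketch of creating an Auto Scaling group from a launch template, registered behind a target group, with min/max/desired capacity (all names, ARNs, and subnet IDs are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                                  # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",             # hypothetical subnets
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web/123"],  # placeholder
    HealthCheckType="ELB",            # respect the load balancer health checks
    HealthCheckGracePeriod=300,       # warm-up time before health checks count
)
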
    2. Step Scaling Policies
      1. Increase or decrease the current capacity of a scalable target based on scaling adjustments, known as step adjustments
      2. Adjustments vary based on the size of the alarm breach
        1. All alarms that are breached are evaluated by application auto scaling
    3. Instance Warm-Up and Cooldown
      1. Warm-up period - time for EC2s to come up before being placed behind LB
      2. Cooldown - pauses auto scaling for a set amount of time (default is 5 minutes)
      3. Warmup and cooldown help to avoid thrashing
    4. Scaling types
      1. Reactive Scaling
        1. Once the load is there, you measure it and then determine if you need to create more or less resources
        2. Respond to data points in real-time, react
      2. Scheduled Scaling
        1. Predictable workload, create a scaling event to handle
      3. Predictive Scaling
        1. AWS uses ML algorithms to determine
        2. They are reevaluated every 24 hours to create a forecast for the next 48
    5. Steady Scaling
      1. Allows us to create a situation where a legacy codebase or resource that can't be scaled can automatically recover from failure
      2. Set Min/Max/Desired = 1
    6. CloudWatch is your number one tool for alerting auto scaling that you need more or less of something
    7. Scaling Relational DBs
      1. Most scaling options
      2. 4 ways to scale Relational DBs/4 types of scaling we can use to adjust our RD performance
        1. Vertical Scaling
          1. Resizing the db from one size to another, can create greater performance, increase power
        2. Scaling Storage
          1. Storage can be resized up, not down
          2. Except aurora which auto scales
        3. Read Replicas
          1. Way to scale “horizontally” - create read only copies of our data
        4. Aurora Serverless
          1. Can offload scaling to AWS - excels with unpredictable workloads
    8. Scaling Non Relational DBs
      1. DynamoDB
        1. AWS managed - simplified
        2. Provisioned Model
          1. Use case: predictable workload
          2. Need to overview past usage to predict and set limits
          3. Most cost effective
        3. On-Demand
          1. Use case: sporadic workload
          2. Pay more
        4. Can switch from on-demand to provisioned only once per 24 hours per table
      2. Non-Relational DB scaling
        1. Access patterns
        2. Design matters
          1. Avoiding hot keys will lead to better performance
  7. Simple Queue Service (SQS)
    1. Fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless apps
    2. Can sit between frontend and backend and kind of replace the Load Balancer
      1. Web front end dumps messages into the queue and then backend resources can poll that queue looking for that data whenever it is ready
      2. Does not require that active connection that the load balancer requires
    3. Poll-Based Messaging
      1. We have a producer of messages, consumer comes and gets message when ready
      2. Messaging queue that allows asynchronous processing or work
    4. SQS Settings
      1. Delivery Delay - default 0, up to 15 minutes
      2. Message Size - up to 256 KB text in any format
      3. Encryption - encrypted in transit by default, now added encryption at rest default with SSE-SQS
        1. SSE-SQS = Server-Side Encryption using SQS-owned encryption
          1. Encryption at rest using the default SSE-SQS is supported at no charge for both standard and FIFO using HTTPS endpoints
      4. Message Retention
        1. Default retention is 4 days, can be set from 1 minute - 14 days, then purged
      5. Long vs Short Polling
        1. Long polling is not the default, but should be
        2. Short Polling
          1. Connect, checks if work, immediately disconnects if no work
          2. Burns CPU, additional API calls cost money
        3. Long Polling
          1. Connect, check if work, waits a bit
          2. Mostly will be the right answer
      6. Queue Depth
        1. This value can be a trigger for auto scaling
      7. Visibility Timeout
        1. Used to ensure proper handling of the message by backend EC2 instances
        2. Backend polls for the message, sees it, downloads that message from SQS to do work - after backend downloads message, SQS puts a lock on that message called the visibility timeout, where the message remains in the queue but no one else can see it
        3. So if other instances are polling that queue, they will not see the locked message
        4. Default visibility timeout is 30 seconds, but can be changed
        5. If the EC2 instance that downloaded the message fails to process it and never tells SQS to delete/purge it, the message reappears in the queue once the visibility timeout expires (see the sketch below)
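
A sketch of a consumer in boto3 that long-polls the queue, holds the visibility-timeout lock while it works, and deletes the message only after successful processing (the queue URL is a placeholder and process() stands in for your own handler):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder


def process(body):
    # Placeholder for your own message-handling logic
    print("processing:", body)


# Long poll: wait up to 20 seconds for a message instead of returning immediately
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=20,
    VisibilityTimeout=60,  # message stays hidden from other consumers for 60 seconds
)

for message in resp.get("Messages", []):
    process(message["Body"])
    # Delete only after successful processing; otherwise the message reappears later
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
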
      8. Dead-Letter Queues
        1. If a message cannot be processed by our backend and we did not implement a DLQ: the message gets pulled by a backend EC2 instance, the EC2 hits an error processing it, the 30-second visibility timeout expires, the message unlocks, another EC2 instance picks it up, and so on, until the message retention period is hit and the message is deleted
        2. By implementing DLQ: we create another SQS Queue that we can temporarily sideline messages into
        3. How it works:
          1. Set up a new queue and select it as the DLQ when setting up the primary SQS queue
          2. Set a number for retries in the primary queue

Once the message hits that retry limit, it gets moved to the DLQ, where it stays until its retention period expires and it is deleted

        1. Can create SQS DLQ for SNS topics
      1. SQS FIFO
        1. Standard SQS offers best effort ordering and tries not to duplicate, but may - nearly unlimited transactions/sec
        2. SQS FIFO guarantees the order and that no duplication will occur
        3. Limited to 300 messages/sec
        4. How it works:
          1. Message group ID field is a tag that specifies that a message belongs to a specific message group
          2. Message Deduplication ID is a unique token used to ensure no duplicate messages are introduced within the deduplication interval
        5. More expensive than standard SQS
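
A minimal boto3 sketch of sending to a FIFO queue - the group ID preserves ordering within a group and the deduplication ID suppresses duplicates within the deduplication interval (the queue URL is a placeholder):

import boto3

sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",  # placeholder
    MessageBody="order 1001 placed",
    MessageGroupId="customer-42",          # ordering is guaranteed within this group
    MessageDeduplicationId="order-1001",   # identical IDs within the interval are dropped
)
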
  1. Simple Notification Service (SNS)
    1. Used to push out notifications - proactively deliver the notification to an end-point rather than leaving it in a queue
    2. Fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication
    3. Texts and emails to users
    4. Push-Based Messaging
      1. Consumer does not have control to receive when ready, the sender sends it all the way to the consumer
    5. Will proactively deliver messages to the endpoints that are Subscribed to it
    6. SNS topics are subscribed to
    7. SNS Settings
      1. Subscribers
        1. what/who is going to receive the data from the topic
          1. Ex: Kinesis firehose, SQS, Lambda, email, HTTP(S), etc
      2. Message Size
        1. Max size of 256 KB of text in any format
      3. SNS does not retry delivery if a message fails to deliver
        1. Can store in an SQS DLQ to handle
      4. SNS FIFO only supports SQS as a subscriber
      5. Messages are encrypted in transit by default, and you can add encryption at rest
      6. Access Policies
        1. Can control who/what can publish to those SNS topics
        2. A resource policy can be added to a topic, similar to S3
          1. Have to make sure the access policy is set up properly between SNS and SQS so that SNS has access to the SQS queue
    8. CloudWatch uses SNS to deliver alarms
  2. API Gateway
    1. Fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale
    2. “Front-door” to our apps so we can control what users talk to our resources
    3. Key features:
      1. Security
        1. Add security- one of the main reasons for using API Gateway in front of our applications
        2. Allows you to easily protect your endpoints by attaching a WAF; Can front API Gateway with a WAF - security at the edge
      2. Stop Abuse
        1. Set up rate limiting, DDoS protection
      3. Static stuff to S3, basically everything else to API Gateway
    4. The preferred way to get API calls into your application and AWS environment
    5. Avoid hardcoding our access keys/secret keys with API Gateway
      1. Do not have to generate an IAM user to make calls to the backend, just send API call to API Gateway in front
    6. API Gateway supports versioning
  3. AWS Batch
    1. AWS Managed service that allows us to run batch computing workloads within AWS- these workloads run on either EC2 or Fargate/ECS
      1. Capable of provisioning accurately sized compute resources based on number of jobs submitted and optimizes the distribution of workloads
    2. Removes any heavy lifting for configuration and management of infrastructure required for computing
    3. Components:
      1. Jobs = units of work that are submitted to Batch (ie: shell scripts, executables, docker images)
      2. Job Definitions = specify how your jobs are to be run, essentially the blueprint for the resources in the job
      3. Job Queues = jobs get submitted to specific queues and reside there until scheduled to run in a compute environment
      4. Compute Environment = set of managed or unmanaged compute resources used to run your jobs (EC2 or ECS/Fargate)
    4. Fargate or EC2 Compute Environments
      1. Fargate is the recommended way of launching most batch jobs
        1. Scales and matches your needs with less likelihood of over provisioning
      2. EC2 is sometimes the best choice, though:
        1. When you need a custom AMI (can only be run via EC2)
        2. When you have high vCPU requirements
        3. When you have high GiB requirements
        4. Need GPU or Graviton CPU requirement
        5. When you need to use the linuxParameters parameter field
        6. When you have a large number of jobs, best to run on EC2 because jobs are dispatched at a higher rate than Fargate
    5. Batch vs Lambda
      1. Lambda has a 15 minute execution time limit, batch does not
      2. Lambda has limited disk space
      3. Lambda has limited runtimes, batch uses docker so any runtime can be used
  4. Amazon MQ
    1. Managed message broker service allowing easier migration of existing applications to the AWS Cloud
    2. Makes it easy for users to migrate to a message broker in the cloud from an existing application
      1. Can use a variety of programming languages, OS’s, and messaging protocols
    3. MQ Engine types:
      1. Currently supports both Apache ActiveMQ or RabbitMQ engine types
    4. SNS with SQS vs AmazonMQ
      1. Both have topics and queues
        1. Both allow for one-to-one or one-to-many messaging designs
      2. MQ is easy application migration: so if you are migrating an existing application, likely want MQ
      3. If you are starting with new Application - easier and better to use SNS with SQS
      4. AmazonMQ requires that you have private networking like VPC, Direct Connect, or VPN while SNS and SQS are publicly accessible by default
      5. MQ has NO default AWS integrations and does not integrate as easily with other services
    5. Configuring Brokers
      1. Single-Instance Broker
        1. One broker lives within one AZ
        2. RabbitMQ has a network load balancer in front in a single instance broker environment
      2. MQ Brokers
        1. Offers HA architectures to minimize downtime during maintenance
        2. Architecture depends on broker engine type
        3. AmazonMQ for Apache ActiveMQ
          1. With active/standby deployments, one instance will remain available at all times
          2. Configure network of brokers with separate maintenance windows
        4. AmazonMQ for RabbitMQ
          1. Cluster deployments are logical groupings of three broker nodes across multiple AZs sitting behind a Network LB
      3. MQ is good for specific messaging protocols: JMS or messaging protocols like AMQP0-9-1, AMQP 1.0, MQTT, OpenWire, and STOMP
  5. AWS Step Functions
    1. A serverless orchestration service combining different AWS services for business applications
    2. Provides a graphical console for easier application workflow views and flows
    3. Components
      1. State Machine: a particular workflow with different event-driven steps
      2. Tasks: specific states within a workflow (state machine) representing a single unit of work
      3. States: every single step within a workflow = a state
    4. Two different types of workflows with Step Functions:
      1. Standard
        1. Exactly-once execution
        2. Can run for up to 1 year
        3. Useful for long-running workflows that need to have auditable history
        4. Rates up to 2000 executions/sec
        5. Pricing based per state transition
      2. Express
        1. Have an ‘at-least-once’ execution → means possible duplication you have to handle
        2. Only run for up to 5 minutes
        3. Useful for high-event-rate workloads
        4. Use Case: IoT data streaming
        5. Pricing based on number of executions, durations, and memory consumed
    5. States and State Machines
      1. Individual states are flexible
        1. Leverage states to either make decisions based on input, perform certain actions, or pass input
      2. Amazon States Language (ASL)
        1. States and workflows are defined in ASL
      3. States are elements within your state machines
        1. States are referred to by name, which must be unique within the workflow
    6. Integrates with Lambda, Batch, Dynamo, SNS, Fargate, API Gateway, etc
    7. Different States that exist:
      1. Pass - no work
      2. Task - single unit of work performed
      3. Choice - adds branching logic to state machines
      4. Wait - time delay
      5. Succeed - stops executions successfully
      6. Fail - stops executions and mark as failures
      7. Parallel - runs parallel branches of executions within state machines
      8. Map - runs a set of steps based on elements of an input array
  6. Amazon AppFlow
    1. Fully managed service that allows us to securely exchange data between a SaaS App and AWS
      1. Ex: Salesforce migrating data to S3
    2. Entire purpose is to ingest data
      1. Pulls data records from third-party SaaS vendors and stores them in S3
    3. Bi-directional: allows for bi-directional data transfers with some combinations of source and destination
    4. Concepts:
      1. Flow: transfer data between sources and destinations
      2. Data Mapping: determines how your source data is stored within your destination
      3. Filters: criteria set to control which data is transferred
      4. Trigger: determines how the flow is started
        1. Multiple options/supported types:
          1. Run on demand
          2. Run on event
          3. Run on schedule
  7. Redshift Databases
    1. Fully managed, petabyte-scale data warehouse service in the cloud
    2. Very large relational db traditionally used in big data
      1. Because it is relational, you can use standard SQL and BI tools to interact with it
      2. Best use is for BI applications
    3. Can store massive amounts of data - up to 16 PB of data
      1. Means you do not have to split up your large datasets
    4. Not a replacement for a traditional RDS - it would fall apart as the backend of your web app, for example
  8. Elastic Map Reduce (EMR)
    1. ETL tool
    2. Managed big data platform that allows you to process vast amounts of data using open-source tools, such as Spark, Hive, HBase, Flink, Hudi, and Presto
      1. Quickly use open source tools and get them running in our environment
    3. For this exam, EMR will be run on EC2 instances and you pick the open-source tool for AWS to manage on them
    4. Open-source cluster, managed fleet of EC2 instances running open-source tools
  9. Amazon Kinesis
    1. Allows you to ingest, process, and analyze real-time streaming data
    2. 2 forms of Kinesis:
      1. Data Streams
        1. Real-time streaming for ingesting data
        2. You are responsible for creating the consumer and scaling the stream
        3. Process for Kinesis Data Streams:
          1. Producers creating data
          2. Connect producers to Data Stream
          3. Decide how many shards you are going to create
            1. Shards can only handle a certain amount of data
          4. Consumer takes data in, processes it, and puts it into endpoints
            1. You have to create the consumer - you have to use the Kinesis SDK to build the consumer application
            2. Endpoint could be S3, Dynamo, Redshift, EMR, …
            3. Handle scaling with the amount of shards

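A minimal producer sketch (not from the original notes) for the process above, assuming a Data Stream named "clickstream" already exists; the partition key determines which shard a record lands on.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Producer side: write one record into the stream (assumed name: "clickstream")
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps({"user": "abc", "action": "click"}).encode("utf-8"),
    PartitionKey="abc",  # records with the same key land on the same shard
)
```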
      2. Data Firehose
        1. Data transfer tool to get info into S3, Redshift, ElasticSearch, or Splunk
        2. Near-real-time
        3. Plug and play with AWS architecture
        4. Process for Kinesis Data Firehose:
          1. Limited supported endpoints- ElasticSearch service, S3, and Redshift, some 3rd party endpoints supported as well
          2. Place Data Firehose in between input and endpoint
          3. Handles the scaling and the building out of the consumer
    1. Kinesis Data Analytics
      1. Paired with Data Stream/Firehose, it does the analysis using standard SQL
      2. Makes it easy to tie Data Analytics into your pipeline
        1. Data comes in with Streams/Firehose and Data Analytics can transform/sanitize/format data in real-time as it gets pushed through
      3. Serverless, fully managed, auto scaling
    2. Kinesis vs SQS
      1. SQS does NOT provide real-time message delivery
      2. Kinesis DOES provide real-time message delivery
  1. Amazon Athena
    1. An interactive query service that makes it easy to analyze data in S3 using SQL
    2. Allows you to directly query data in your S3 buckets without loading it into a database
    3. “Serverless SQL”
      1. Can use Athena to query logs stored in S3
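A hedged sketch of querying S3 data in place with Athena; the database "my_logs", table "access_logs", and results bucket are assumptions, not from the notes.

```python
import boto3

athena = boto3.client("athena")
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "my_logs"},                      # assumed database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # assumed bucket
)
# Poll get_query_execution / get_query_results with this ID to fetch results
print(resp["QueryExecutionId"])
```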
  2. Amazon Glue
    1. A serverless data ingestion service that makes it easy to discover, prepare, and combine data
    2. Allows you to perform ETL workloads without managing underlying servers
    3. Effectively replaces EMR - and with Glue, you don't have to spin up EC2 instances or use 3rd party tools to ETL
    4. Using Athena and Glue together:
      1. AWS S3 data is unstructured, unformatted – deploy Glue Crawlers to build a catalog/structure for that data
        1. Glue produces Data Catalog
      2. After glue, we have some options:
        1. Can use Amazon Redshift Spectrum - allows us to use Redshift without having to load data into Redshift db
        2. Athena - use to query the data catalog, and can even use Quicksight to visualize data
  3. Amazon QuickSight
    1. Amazon's version of Tableau
    2. Fully managed BI data visualization service, easily create dashboards
  4. AWS Data Pipeline
    1. A managed ETL service for automating the movement and transformation of your data, with automatic retries for data-driven workflows
    2. Data driven web-service that allows you to define data-driven workflows
    3. Steps are dependent on previous tasks completing successfully
    4. Define parameters for data transformations - enforces your chosen logic
    5. Auto retries failed attempts
    6. Configure notifications via SNS
    7. Integrates easily with Dynamo, Redshift, RDS, S3 for data storage, and integrates with EC2 and EMR for compute needs
    8. Components:
      1. Pipeline Definition = specify the logic of your data management
      2. Managed Compute = service will create EC2 instances to perform your activities or leverage existing EC2
      3. Task Runners = (EC2) poll for different tasks and perform them when found
      4. Data Nodes= define the locations and types of data that will be input and output
      5. Activities = pipeline components that define the work to perform
    9. Use Cases:
      1. Processing data in EMR using Hadoop streaming
      2. Importing or exporting DynamoDB data
      3. Copying CSV files or data between S3 buckets
      4. Exporting RDS data to S3
      5. Copying data to Redshift
  5. Amazon Managed Streaming for Apache Kafka (Amazon MSK)
    1. Fully managed service for running data streaming apps that leverage Apache Kafka
    2. Provides control-plane operations; creates, updates, and deletes clusters as required
    3. Can leverage the Kafka data-plane operations for production and consuming streaming data
    4. Good for existing operations; allows support for existing apps, tools, and plugins
    5. Components:
      1. Broker Nodes
        1. Specify the amount of broker nodes per AZ you want at time of cluster creation
      2. Zookeeper Nodes
        1. Created for you
      3. Producers, Consumers, and Topics
        1. Kafka data-plane operations allow creation of topics and ability to produce/consume data
      4. Flexible Cluster Operations
        1. Perform cluster operations with the console, AWS CLI, or APIs within any SDK
    6. Resiliency in AmazonMSK:
      1. Auto Recovery
      2. Detected broker failures result in mitigation or replacement of unhealthy nodes
      3. Tries to reuse storage from the failed broker during recovery to reduce the data needing replication
      4. Impact time is limited to however long it takes MSK to complete detection and recovery
      5. After successful recovery, producer and consumer apps continue to communicate with the same IP as before
    7. Features:
      1. MSK Serverless
        1. Cluster type within AmazonMSK offering serverless cluster management - auto provisioning and scaling
        2. Fully compatible with Apache Kafka - use the same client apps for prod/cons data
      2. MSK Connect
        1. Allows developers to easily stream data to and from Apache Kafka clusters
    8. Security
      1. Integrates with Amazon KMS for SSE requirements
      2. Will always encrypt data at rest by default
      3. TLS1.2 by default in transit between brokers in clusters
    9. Logging
      1. Broker logs can be delivered to services like CloudWatch, S3, Data Firehose
      2. By default, metrics are gathered and sent to CloudWatch
      3. MSK API calls are logged to CloudTrail
  6. Amazon OpenSearch Service
    1. Managed service allowing you to run search and analytics engines for various use cases
    2. It is the successor to Amazon ElasticSearch Service
    3. Features:
      1. Allows you to perform quick analysis - quickly ingest, search, and analyze data in your clusters - commonly a part of an ETL process
      2. Easily scale cluster infrastructure running the OpenSearch services
    4. Security: leverage IAM for access control, VPC security groups, encryption at rest and in transit, and field-level security
    5. Multi-AZ capable service with Master nodes and automated snapshots
    6. Allows for SQL support for BI apps
    7. Integrates with CloudWatch, CloudTrail, S3, Kinesis - can set log streams to OpenSearch Service
      1. Logging solution involving creating visualization of log file analytics or BI tools/imports
  7. Serverless Overview
    1. Benefits
      1. Ease of use: we bring code, AWS handles everything else
      2. Event-Based: can be brought online in response to an event then go back offline
      3. True “pay for what you use” architecture: pay for provisioned resources and the length of runtime
    2. Example serverless services: Lambda, Fargate
  8. Lambda
    1. Serverless compute service that lets you run code without provisioning or managing the underlying server
    2. How to build a lambda function:
      1. Runtime selection: pick from an available run-time or bring your own. This is the environment your code will run in
      2. Set permissions: if your lambda function needs to make an API call in AWS, you need to attach a role
      3. Networking definitions: optionally, you can define the VPC, subnet, and security groups your functions are a part of
      4. Resource definitions: define the amount of available memory; the memory allocation determines how much CPU and RAM your code gets
      5. Define Trigger: select what is going to kick off your lambda function to start
    3. Lambda has built-in logging and monitoring using CloudWatch
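For reference, a minimal Python handler sketch (not part of the original notes): the runtime invokes the named handler with the triggering event and a context object, and anything printed lands in CloudWatch Logs.

```python
def lambda_handler(event, context):
    # "event" carries the trigger payload (S3 notification, API Gateway request, etc.)
    print("Received event:", event)  # appears in CloudWatch Logs automatically
    return {"statusCode": 200, "body": "Hello from Lambda"}
```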
  9. AWS Serverless Application Repository
    1. Serverless App Repository
      1. Service that makes it easy for users to easily find, deploy, or even publish their own serverless apps
      2. Can share privately within organization or publicly
      3. How it works:
        1. You upload your application code and a Manifest File
          1. Manifest File: is known as the Serverless Application Model (SAM) Template
          2. SAM templates are basically CloudFormation templates
      4. Deeply integrated with the AWS lambda service - actually appears in the console
    2. 2 Options in Serverless App Repository:
      1. Publish: define apps with the SAM templates and make them available for others to find and deploy
        1. When you first publish your app, it is set to private by default
      2. Deploy: find and deploy published apps
  10. Container Overview
    1. Container
      1. Standard unit of software that packages up code and all its dependencies
    2. Terms:
      1. Docker File: text document that contains all the commands or instructions that will be used to build an image
      2. Image: Docker files build images, immutable file that contains the code, libraries, dependencies, and configuration files needed to build an app
      3. Registry: like GitHub for images, stores docker images for distribution
      4. Container: a running copy of image
    3. ECS: Elastic Container Service
      1. Management of containers at scale
      2. Integrates natively with ELB
      3. Easy integration with roles to get permissions for containers - containers can have individual roles attached to them
      4. ECS only works in AWS
    4. EKS: Elastic Kubernetes Service
      1. Kubernetes: open source container manager, can be used on-prem and in the cloud
      2. EKS is the AWS-managed version
    5. ECS vs EKS
      1. ECS - simple, easy to integrate, but it does not work on-prem
      2. EKS - flexible, works in cloud and on-prem, but it is more work to configure and integrate with AWS
    6. Fargate
      1. Serverless compute engine for containers that works with both ECS and EKS (requires one of them)
      2. EC2 vs Fargate for container management
        1. If you use EC2:
          1. You are responsible for underlying OS
          2. Can better deal with long-running containers
          3. Multiple containers can share the same host
        2. If you use Fargate:
          1. No OS access - don’t have to manage
          2. Pay based on resources allocated and time run
          3. Better for short-running tasks
          4. Isolated environments
      3. Fargate vs Lambda
        1. Use fargate when you have more consistent workloads, allows for docker use across the organization and a greater level of control for developers
        2. Use lambda when you have unpredictable or inconsistent workloads, use for applications that can be expressed as a single function (lambda function responds to event and shuts down)
  11. EventBridge (formerly known as CloudWatch Events)
    1. Serverless event bus, allows you to pass events from a server to an endpoint
      1. Essentially the glue that holds your serverless apps together
    2. Creating an EventBridge Rule:
      1. Define Pattern: scheduled/invoked/etc?
      2. Select Event Bus: AWS-based event/Custom event/Partner event?
      3. Select your target: what happens when this event kicks off?
      4. Remember to tag it
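A hedged sketch of the rule-creation steps above: a scheduled rule on the default event bus targeting a hypothetical Lambda function (the function ARN is a placeholder, and the Lambda-side invoke permission is omitted).

```python
import boto3

events = boto3.client("events")

events.put_rule(
    Name="five-minute-tick",
    ScheduleExpression="rate(5 minutes)",  # the "pattern" for this rule
    State="ENABLED",
)
events.put_targets(
    Rule="five-minute-tick",
    Targets=[{
        "Id": "tick-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:tick-handler",  # placeholder
    }],
)
```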
    3. Remember:
      1. EventBridge is the glue - it triggers an action based on some event in AWS, holds together a serverless application and Lambda functions
        1. An API call in AWS can alert a variety of endpoints
  12. Amazon Elastic Container Registry (ECR)
    1. AWS-managed container image registry that offers secure, scalable, and reliable infrastructure
    2. Private container image repositories with resource-based permissions via IAM
    3. Supported formats include: Open Container Initiative (OCI) images, docker images, and OCI artifact
    4. Components:
      1. Registry
        1. A private registry provided to each AWS account, regional
        2. Can create one or more repositories in your registry for image storage
      2. Authentication Token
        1. Required for pushing/pulling images to/from registries
      3. Repository
        1. Contains all of your Docker Images, OCI Images, and Artifacts
      4. Repository Policy
        1. Control all access to repository and images
      5. Image
        1. Container images that get pushed to and pulled from your repositories
    5. Amazon ECR Public is a similar service for public image repository
    6. Features:
      1. Lifecycle policies
        1. Helps management of images in your repository
        2. Defines rules for cleaning up unused images
        3. It does give you the ability to test your rules before applying them to repository
      2. Image Scanning
        1. Helps identify software vulnerabilities in your container images
        2. Repositories can be set to scan on push
        3. Retrieve results of scans for each image
      3. Sharing
        1. Cross-Region Support
        2. Cross-Account support
        3. Both are configured per repository and per region > each registry is regional for each account
      4. Cache Rules
        1. Pull through cache rules allow for caching public repositories privately
        2. ECR periodically reaches out to check current caching status
      4. Tag Mutability
        1. Configuring tags as immutable prevents image tags from being overwritten
        2. Configure this setting per repository
    7. Service Integrations:
      1. Bring your own containers - can integrate with your own container infrastructure
      2. ECS - use container images in ECS container definitions
      3. EKS - pull images from EKS clusters
      4. Amazon Linux Containers - can be used locally for your software development
  13. EKS Distro
    1. EKS Distro aka EKS-D
      1. Kubernetes distribution based on and used by Amazon EKS
      2. Same versions and dependencies deployed by EKS
    2. EKS-D is fully your responsibility, fully managed by you, unlike EKS
    3. Can run EKS-D anywhere, on-prem, cloud, etc
  14. EKS Anywhere and ECS Anywhere
    1. EKS Anywhere
      1. An on-prem way to manage Kubernetes clusters with the same practices used in EKS, with these clusters on-prem
      2. Based on EKS Distro; allows for deployment, usage, and management of clusters in your own data centers
      3. Can use lifecycle management of multiple kubernetes clusters and operate independently of AWS Services
      4. Concepts:
        1. Kubernetes control plane management operated completely by customer
        2. Control plane is located within the customer data center
        3. Updates are done entirely via manual CLI or Flux
    2. ECS Anywhere
      1. Feature of ECS allowing the management of container-based apps on-prem
      2. No orchestration needed: no need to install and operate local container orchestration software, meaning more operational efficiency
      3. Completely managed solution enabling standardization of container management across environment
      4. Inbound Traffic
        1. No ELB support - customer managed, on-prem
      5. EXTERNAL = new launch type noted as ‘EXTERNAL’ for creating services or running tasks
      6. Requirements for ECS Anywhere:
        1. On local server, must have the following installed:
          1. SSM Agent
          2. ECS Agent
          3. Docker
        2. Must first register external instances as SSM Managed Instances
          1. Can easily create an installation script within ECS console to run on your instances
          2. Scripts contain SSM activation keys and commands for required software
  15. Auto Scaling DBs on Demand with Aurora Serverless
    1. Aurora Provisioned vs Aurora Serverless
      1. Provisioned is typical Aurora service
      2. Aurora Serverless
        1. On-demand and Auto Scaling configuration for Aurora db service
        2. Automation of monitoring workloads and adjusting capacity for dbs
        3. Based on demand - capacity adjusted
        4. Billed per-second only for resources consumed by db clusters
    2. Concepts for Aurora Serverless
      1. Aurora Capacity Units (ACUs) = a measurement of how your clusters scale
      2. Set minimum and maximum of ACUs for scaling → can be 0
      3. Allocate quickly by AWS managed warm pools
      4. Combo of 2 GiB of memory, matching CPU and networking
      5. Same data resiliency as provisioned - 6 copies of data across 3 AZs
    3. Use Cases:
      1. Variable workloads
      2. Multi-tenant apps - let the service manage db capacity for each individual app
      3. New apps
      4. Dev and Test
      5. Mixed-Use Apps: apps that might serve more than one purpose with different traffic spikes
      6. Capacity planning: easily swap from provisioned to serverless or vice versa
  16. Amazon X-Ray
    1. Application Insights - collects application data for viewing, filtering, and gaining insights about requests and responses
    2. View calls to downstream AWS resources and other microservices/APIs or dbs
    3. Receives traces from your applications for allowing insights
    4. Integrated services can add tracing headers, send trace data, or run the X-Ray daemon
    5. Concepts:
      1. Segments: data containing resource names, request details, etc
      2. Subsegments: segments providing more granular timing info and data
      3. Service graph: graphical representation of interacting services in requests
      4. Traces: trace ID tracks paths of requests and traces collect all segments in a request
      5. Tracing Header: extra HTTP header containing sampling decisions and trace ID
        1. The tracing header containing the added info is named X-Amzn-Trace-Id
    6. X-Ray Daemon
      1. AWS Software application that listens on UDP port 2000. It collects raw segment data and sends it to the X-Ray API
      2. When daemon is running, it works alongside the X-Ray SDK
    7. Integrations:
      1. EC2- installed, running agent
      2. ECS- installed within tasks
      3. Lambda- on/off toggle, built-in/available for functions
      4. Beanstalk- a configuration option
      5. API Gateway- can add to stages as desired
      6. SNS and SQS- view time taken for messages in queues within topics
  17. GraphQL interfaces in AppSync
    1. AppSync
      1. Robust, scalable GraphQL interface for app developers
      2. Combines data from multiple sources
      3. Enables interaction for developers via GraphQL, which is a data language that enables apps to fetch data from servers
      4. Seamless integration with React, ReactNative, iOS, and Android
      5. Especially used for fetching app data, declarative coding, and frontend app data fetching
  18. Layer 4 DDoS Attacks aka SYN flood
    1. Work at the transport layer
    2. How it works:
      1. SYN flood overwhelms the server by sending a large number of SYN packets and then ignoring the SYN-ACKs returned by the server
        1. Causes the server to use up resources waiting for a set amount of time for the ACK
        2. There are only so many concurrent TCP connections that a web app server can have open - so attacker could take all the allowed connections causing the server to not be able to respond to legitimate traffic
  19. Amplification Attacks aka Reflection Attacks
    1. When an attacker sends a third-party server (such as an NTP server) a request using a spoofed IP address. That server then responds with a payload much larger than the initial request (28-54x larger), sent to the spoofed IP
      1. Attackers can coordinate this across multiple NTP servers, flooding the target with legitimate NTP traffic every second
      2. Include things such as NTP, SSDP, DNS, CharGEN, SNMP attacks, etc
  20. Layer 7 Attack
    1. Occurs when a web server receives a flood of GET or POST requests, usually from a botnet or large number of compromised computers
      1. Causes legitimate users to not be able to connect to the web server because it is busy responding to the flood of requests from the botnet
  21. Logging API Calls using CloudTrail
    1. CloudTrail Overview:
      1. Increases visibility into your user and resource activity by recording AWS Management Console actions and API calls
      2. Can identify which users and accounts called AWS, the source IP from which the calls were made and when
      3. Just tracks API calls:
        1. Every call is logged into an S3 bucket by CloudTrail
        2. RDP and SSH traffic is NOT logged
        3. DOES include anything done in the console
    2. What is logged in a CloudTrail logged event?
      1. Metadata around the API calls
      2. Id of the API caller
      3. Time of call
      4. Source IP of API caller
      5. Request parameters
      6. Response elements returned by the service
    3. What CloudTrail allows for:
      1. After-the-fact incident investigation
      2. Near real-time intrusion detection → integrate with Lambda function to create an intrusion detection system that you can customize
      3. Logging for industry and regulatory compliance
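A small sketch (not from the notes) of after-the-fact investigation using the CloudTrail LookupEvents API against the management event history; the event-name filter is just an example.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```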
  22. Amazon Shield
    1. AWS Shield
      1. Free DDoS Protection
      2. Protects all AWS customers on ELBs, CloudFront, and Route53
      3. Protects against SYN/UDP floods, reflection attacks and other Layer 3 and Layer 4 attacks
    2. AWS Shield Advanced
      1. Provides enhanced protections for apps running on ELB, CloudFront, Route53 against larger and more sophisticated attacks
      2. Offers always-on, flow-based monitoring of network traffic and active application monitoring to provide near real-time notifications of DDoS attacks
      3. 24/7 access to the DDoS Response Team (DRT) to help mitigate and manage app-layer DDoS attacks
      4. Protects your AWS bill against higher fees due to ELB, CloudFront, and Route53 usage spikes during a DDoS attack
      5. Costs $3000/month
  23. Web Application Firewall
    1. Web Application Firewall that allows you to monitor the HTTP and HTTPS requests that are forwarded on to CloudFront or Application Load Balancer
    2. Lets you control access to your content
      1. Can configure conditions such as what IP addresses are allowed to make this request or what query string parameters need to be passed for the request to be allowed
      2. The Application Load Balancer or CloudFront will either allow this content to be received or give an HTTP 403 status code
    3. Operates at Layer 7
    4. At the most basic level, WAF allows 3 behaviors:
      1. Allow all requests except the ones you specify
      2. Block all requests except for the ones you specify
      3. Count the requests that match the properties you specify
    5. Can define conditions by using characteristics of web requests such as:
      1. IP addresses that the requests originate from
      2. Country that the requests originate from
      3. Values in requests headers
      4. Presence of a SQL code that is likely to be malicious (ie: SQL Injection)
      5. Presence of a script that is likely to be malicious (ie: cross-site scripting)
      6. Strings that appear in requests - either specific strings or strings that match regex patterns
    6. WAF can:
      1. Can protect against Layer 7 DDoS attacks like cross-site scripting, SQL injections
      2. Can block specific countries or specific IP addresses
  24. GuardDuty
    1. Threat detection service that uses ML to continuously monitor for malicious behavior
      1. Unusual API calls, calls from a known malicious IP
      2. Attempts to disable CloudTrail logging
      3. Unauthorized Deployments
      4. Compromised Instances
      5. Recon by would-be attackers
      6. Port scanning and failed logins
    2. Features
      1. Alerts appear in GuardDuty console and CloudWatch events
      2. Receives feeds from 3rd parties like Proofpoint and CrowdStrike, as well as AWS Security, about known malicious domains and IP addresses
      3. Monitors CloudTrail logs, VPC flow logs and DNS logs
      4. Allows you to centralize threat detection across multiple AWS Accounts
      5. Automated response using CloudWatch Events and Lambda
      6. Gives you ML and anomaly detection
      7. Basically threat detection with AI
    3. Setting up GuardDuty:
      1. 7-14 days to set a baseline = normal behavior
      2. You will only see findings that GuardDuty detects as a threat
    4. Cost
      1. 30 days free
      2. Charges based on:
        1. Quantity of CloudTrail events
        2. Volume of DNS and VPC Flow logs data
  25. Firewall Manager
    1. A security management service in a single pane of glass
    2. Allows you to centrally set up and manage firewall rules across multiple AWS accounts and apps in AWS Organizations
      1. Can create new AWS WAF Rules for your App Load Balancers, API Gateways, and CloudFront distributions
      2. Can also mitigate DDoS attacks using shield Advanced for your App Load Balancers, Elastic IP addresses, CloudFront distributions, and more
    3. Benefits:
      1. Simplifies management of firewall rules across accounts
      2. Ensure compliance of existing and new apps
  26. Monitoring S3 Buckets with Macie
    1. Macie
      1. Automated analysis of data - uses ML and pattern matching to discover sensitive data stored in S3
      2. Uses AI to recognize if your S3 objects contain sensitive data, such as PII, PHI, and financial data
      3. Buckets:
        1. Alerts you to unencrypted buckets
        2. Alerts you about public buckets
        3. Can also alert you about buckets shared with AWS accounts outside of those defined in your AWS Orgs
      4. Great for frameworks like HIPAA
      5. Macie Alerts
        1. You can filter and search Macie alerts in AWS console
        2. Alerts sent to Amazon EventBridge can be integrated with your security incident and event management (SIEM) system
        3. Can be integrated with AWS Security Hub for a broader analysis of your organization's security posture
        4. Can also be integrated with other AWS Services, such as Step Functions, to automatically take remediation actions
  27. Inspector
    1. An automated security assessment service that helps improve the security and compliance of apps deployed on AWS
    2. Auto assesses apps for vulnerabilities or deviations from best practices
    3. Inspects EC2 instances and networks
    4. Assessment findings:
      1. After performing an assessment, Inspector produces a detailed list of security findings by level of security
      2. These findings can be reviewed directly or as part of detailed assessment reports available via the Inspector console or API
      3. 2 types of assessments:
        1. Network Assessments
          1. Network configuration analysis to check for ports reachable from outside the VPC
          2. Inspector agent not required
        2. Host Assessments
          1. Vulnerable software (CVE), host hardening (CIS Benchmarks), and security best practices to review
          2. Inspector agent is required
    5. How does it work:
      1. Create assessment target
      2. Install agents on EC2 instances
        1. AWS will auto install the agent for instances that allow Systems Manager run commands
      3. Create Assessment Templates
      4. Perform Assessment run
      5. Review findings against the rules
  28. Key Management Service (KMS) and CloudHSM
    1. KMS
      1. AWS KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data
      2. KMS Integrations
        1. Integrates with other services (EBS, S3, and RDS, etc) to make it simple to encrypt your data with encryption keys you manage
        2. Controlling your keys
          1. Provides you with centralized control over the lifecycle and permissions of your keys
          2. Can create new keys whenever you wish and you can control who can manage keys separately from who can use them
      3. CMK = Customer Master Key
        1. Logical representation of a master key
        2. CMK includes metadata such as the key ID, creation date, description, and key state
        3. CMK also contains the key material used to encrypt/decrypt data
        4. Getting started with CMK:
          1. You start the service by requesting the creation of a CMK
          2. You control the lifecycle of a CMK as well as who can use or manage it
      4. HSM - Hardware Security Module
        1. A physical computing device that safeguards and manages digital keys and performs encryption and decryption functions
        2. HSM contains one or more secure cryptoprocessor chips
        3. 3 ways to generate a CMK:
          1. AWS creates the CMK for you
            1. Key material for CMK is generated within HSMs managed by AWS KMS
          2. Import key material from your own key management infrastructure and associate it with a CMK
          3. Have the key material generated and used in an AWS CloudHSM cluster as part of the custom key store feature in KMS
      1. Key Rotation:
        1. If KMS HSMs were used to generate your keys, you can have AWS KMS auto rotate CMKs every year
          1. Auto key rotation is not supported for imported keys, asymmetric keys or keys generated in an AWS CloudHSM cluster using KMS custom key store feature
      2. Policies
        1. Primary way to manage access to your KMS CMKs is with policies
          1. Policies are documents that describe who has access to what
        2. Policies attached to an IAM Identity = identity-based policies (IAM policies), policies attached to other kinds of resources are called resource-based policies
        3. Key Policies
          1. In KMS, you must attach resource-based policies to your customer master keys (CMKs) → these are key policies
          2. All CMKs must have a key policy
        4. 3 ways to control permissions:
          1. Use the key policy- controlling access this way means the full scope of access to the CMK is defined in a single document (the key policy)
          2. Use IAM policies in combo with key policy- controlling access this way enables you to manage all the permissions for your IAM identities in IAM
          3. Use grants in combo with key policy- enables you to allow access to CMK in the key policy, as well as allow users to delegate their access to others
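A minimal sketch (not from the notes) of using a KMS key directly, assuming a key alias "alias/my-app-key" exists. Note the Encrypt API is meant for small payloads (up to 4 KB); larger data is normally handled with data keys (GenerateDataKey) and envelope encryption.

```python
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",   # assumed alias
    Plaintext=b"super secret",
)["CiphertextBlob"]

plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"super secret"
```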
    1. CloudHSM
      1. Cloud-based Hardware Security Module that enables you to easily generate and use your own encryption keys on the AWS Cloud
        1. Basically renting physical device from AWS
    2. KMS vs CloudHSM
      1. KMS
        1. Shared tenancy of underlying Hardware
        2. Auto key rotation
        3. Auto key generation
      2. CloudHSM
        1. Dedicated HSM to you
        2. Full control of underlying hardware
        3. Full control of users, groups, keys, etc
        4. No auto key rotation
  1. Secrets Manager
    1. Service that securely stores, encrypts, and rotates your db credentials and other secrets
      1. Encryption in transit and at rest using KMS
      2. Auto rotates credentials
      3. Apply fine-grained access control using IAM policies
      4. Costs money, but highly scalable
    2. What else can it do
      1. Your app makes an API call to Secrets Manager to retrieve the secret programmatically
      2. Reduces the risk of credentials being compromised
    3. What can be stored?
      1. RDS credentials
      2. Credentials for non-RDS dbs
      3. Any other type of secret, provided you can store it as a key-value pair (SSH keys, API keys)
    4. Important: If you enable rotation, Secrets Manager immediately rotates the secret once to test the configuration
      1. You have to ensure that all of your apps that use these creds are updated to retrieve the creds from this secret using Secrets Manager
      2. If your apps are still using embedded creds, do not enable rotation
      3. Recommended to enable rotation if your apps are not already using embedded creds
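A sketch of retrieving credentials at runtime instead of embedding them; "prod/app/db" is a hypothetical secret name storing a JSON key-value pair.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="prod/app/db")   # hypothetical secret name
creds = json.loads(secret["SecretString"])                  # e.g. {"username": ..., "password": ...}
```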
  2. Parameter Store
    1. A capability of AWS Systems Manager that provides secure, hierarchical storage for config data management and secrets management
    2. Can store things like passwords, db strings, AMI IDs, and license credentials as parameter values - can store as plain text or encrypted
    3. Parameter store is free
    4. 2 Big limits to Parameter Store:
      1. Limit to number of parameters you can store (current max is 10k)
      2. No key rotation
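A sketch of reading an encrypted (SecureString) parameter from Parameter Store; the parameter name is an assumption.

```python
import boto3

ssm = boto3.client("ssm")
param = ssm.get_parameter(Name="/my-app/db-password", WithDecryption=True)  # assumed name
db_password = param["Parameter"]["Value"]
```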
  3. Parameter Store vs Secrets Manager
    1. Minimize cost → Parameter Store
    2. Need more than 10k secrets, key rotation, or the ability to generate passwords using CloudFormation → Secrets Manager
  4. Pre Signed URLs or Cookies
    1. All objects in S3 are private by default- only object owner has permissions to access
    2. Pre Signed URLS
      1. Owner can share objects with others by creating a pre-signed URL, using their own credentials, to grant the time-limited permission to download the objects
        1. When you create a pre signed URL for your object, you must provide your security credentials, specify a bucket name and an object key, and indicate the HTTP method (e.g., GET to download the object) as well as the expiration date and time
        2. URLs are only valid for specified duration
      2. To generate a pre signed URL for an object, must do through CLI
        1. > aws s3 presign s3://nameofbucket/objectname --expires-in 3600 (see also the boto3 sketch below)
      3. Way to share an object in a private bucket?
        1. Pre signed URLs
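The same idea via boto3 (a sketch; bucket and key are placeholders): the URL is signed with the caller's credentials and expires after ExpiresIn seconds.

```python
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "nameofbucket", "Key": "objectname"},
    ExpiresIn=3600,  # seconds
)
print(url)  # share this time-limited link
```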
    3. Pre signed Cookies
      1. Useful when you want to provide access to multiple restricted files
      2. The cookie will be saved on the user’s computer and they will be able to browse the entire contents of the restricted content
      3. Use case:
        1. Subscription to download files
  5. IAM Policy Documents
    1. Amazon Resource Names (ARNs)
      1. These uniquely ID a resource within Amazon
      2. All ARNs begin with:
        1. arn:partition:service:region:account_id
          1. Ex: arn:aws:ec2:eu-central-1:123456789012
      3. And end with:
        1. Resource
        2. resource_type/resource
          1. Ex: my_awesome_bucket/image.jpg
        3. resource_type/resource/qualifier
        4. resource_type:resource
        5. Resource_type:resource:qualifier
      4. Note: for global services, for example, we will have no region, so there will be a :: aka omitted value in the arn
    2. IAM Policies
      1. Are JSON docs that define permissions
      2. IAM/Identity Policy = applying policies to users/groups
      3. Resource Policy = apply to S3, CMKs, etc
      4. They are basically a list of statements:
        1. { "Version": "2012-10-17", "Statement": [ {...}, {...} ] }
        2. Each statement matches an AWS API request
        3. Each statement has an effect of Allow or Deny
        4. Matched based on action
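A hedged sketch of a complete statement following the EAR (Effect, Action, Resource) pattern, created as a customer managed policy; the policy name and resource ARNs are illustrative only.

```python
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my_awesome_bucket",
            "arn:aws:s3:::my_awesome_bucket/*",
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReadMyAwesomeBucket",             # illustrative name
    PolicyDocument=json.dumps(policy_document),
)
```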
      1. Permission boundaries
        1. Used to delegate admin to other users
        2. Prevent privilege escalation or unnecessarily broad permissions
        3. Control max permissions an IAM policy can grant
    1. Exam tips:
      1. If permission is not explicitly allowed, it is implicitly denied
      2. An explicit deny > anything else
      3. AWS joins all applicable policies
      4. AWS managed vs customer managed
  1. AWS Certificate Manager
    1. Allows you to create, manage, and deploy public and private SSL certificates for use with other services
      1. Integrates with other services - such as ELB, CloudFront distros, API Gateway - allowing you to easily manage and deploy SSL certs in environment
    2. Benefits:
      1. Do not have to pay for SSL certificates
        1. Provisions both public and private certificates for free
        2. You will still pay for the resources that utilize your certificates - such as ELBs
      2. Automated Renewals and Deployment
        1. Can automate the renewal of your SSL certificate and then auto update the new certificate with ACM-integrated services, such as ELB, CloudFront, API Gateway
      3. Easy to set up
  2. AWS Audit Manager
    1. Continuously audit your AWS usage and make sure you stay compliant with industry standards and regulations
    2. It is an automated service that produces reports specific to auditors for PCI compliance, etc
    3. Use Cases:
      1. Transition from Manual to Automated Evidence Collection
        1. Allows you to produce automated reports for auditors and reduce manual evidence collection
      2. Continuous Auditing and Compliance
        1. Continuous basis, as your environment evolves and adapts, you can produce automated reports to evaluate your environment against industry standards
      3. Internal Risk Assessments
        1. Can create a new framework from the beginning or customize pre built frameworks
        2. Can launch assessments to auto collect evidence, helping you validate if your internal policies are being followed
  3. AWS Artifact
    1. Single source you can visit to get the compliance-related info that matters to you, such as security and compliance reports
    2. What is available?
      1. Huge number of reports available
        1. Service Organization Control (SOC) reports
        2. Payment Card Industry (PCI) reports
        3. As well as other certifications - HIPAA, etc
  4. AWS Cognito
    1. Provides authentication, authorization, and user management for your web and mobile apps in a single service without the need for custom code
      1. Users can sign-in directly with a UN/PW they create or through a third party (FB, Amazon, Google, etc)
      2. ⇒ authorization engine
    2. Provides the following features:
      1. Sign-up and sign-in options for your apps
      2. Access for guest users
      3. Acts as an identity broker between your application and web ID providers, so you don’t have to write any custom code
      4. Synchronizes user data across multiple devices
      5. Recommended for all mobile apps that call AWS Services
    3. Use cases:
      1. Authentication
        1. Users can sign in using a user pool or a 3rd party identity provider, such as FB
      2. 3rd Party Authentication
        1. Users can authenticate using identity pools that require an identity provider (IdP) token
      3. Access Server-Side Resources
        1. A signed-in user is given a token that allows them access to resources that you specify
      4. Access AWS AppSync Resources
        1. Users can be given access to AppSync resources with tokens received from a user or identity pool in Cognito
    4. User Pools and Identity Pools
      1. Two main components of Cognito
        1. User Pools
          1. Directories of users that provide sign-up and sign-in options for your application users
        2. Identity Pools
          1. Allows you to give your users access to other AWS Services
          2. You can use identity pools and user pools together or separately
    5. How it works - broadly
      1. When you use the basic authflow, your app first presents an ID token from an authorized Amazon Cognito user pool or third-party identity provider in a GetID request.
      2. The app exchanges the token for an identity ID in your identity pool.
      3. The identity ID is then used with the same identity provider token in a GetOpenIdToken request.
      4. GetOpenIdToken returns a new OAuth 2.0 token that is issued by your identity pool.
      5. You can then use the new token in an AssumeRoleWithWebIdentity request to retrieve AWS API credentials.
      6. The basic workflow gives you more granular control over the credentials that you distribute to your users.
        1. The GetCredentialsForIdentity request of the enhanced authflow requests a role based on the contents of an access token.
        2. The AssumeRoleWithWebIdentity request in the classic workflow grants your app a greater ability to request credentials for any AWS Identity and Access Management role that you have configured with a sufficient trust policy.
        3. You can also request a custom role session duration.
    6. Cognito Sequence
      1. Device/App connects to a User Pool in Cognito - You are authenticating and getting tokens
      2. Once you've got that token, your device is going to exchange that token with an identity pool, and then the identity pool will hand over some AWS credentials
      3. Then you can use those credentials to access your AWS Services
      4. Basic Cognito Sequence:
        1. Request to user pool, authenticates and gets token
        2. Exchanges token and get AWS creds
        3. Use AWS creds to access AWS services
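A hedged sketch of the basic (classic) authflow above using boto3; the identity pool ID, role ARN, and provider token are all placeholders.

```python
import boto3

id_token = "<JWT from a user pool or third-party IdP sign-in>"   # placeholder
logins = {"accounts.google.com": id_token}                       # provider name -> token

cognito_identity = boto3.client("cognito-identity")

# Step 1: exchange the provider token for an identity ID in the identity pool
identity = cognito_identity.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
    Logins=logins,
)

# Step 2: get an OpenID token issued by the identity pool
open_id = cognito_identity.get_open_id_token(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)

# Step 3: trade the token for temporary AWS credentials
creds = boto3.client("sts").assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/CognitoAuthRole",  # placeholder
    RoleSessionName="app-session",
    WebIdentityToken=open_id["Token"],
)["Credentials"]  # AccessKeyId / SecretAccessKey / SessionToken
```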
  1. Amazon Detective
    1. You can analyze, investigate and quickly identify the root cause of potential security issues or suspicious activities
    2. Detective pulls data from your AWS resources and uses ML, statistical analysis, and graph theory to build a linked set of data that enables you to quickly figure out the root cause of your security issues
      1. To auto create an overview of users, resources, and the interactions between them over time
    3. Sources for Detective:
      1. VPC flow logs, CloudTrail logs, EKS audit logs, and GuardDuty findings
    4. Use Cases:
      1. Triage Security Findings - generates visualizations
      2. Threat Hunting
    5. Exam Tips:
      1. Operates across multiple services and analyzes root cause of an event
      2. If you see “root cause” or “graph theory”, think Detective
      3. Don’t confuse with Inspector
        1. Inspector = Automated vulnerability management service that continually scans EC2 and container workloads for software vulnerabilities and unintentioned network exposure
  2. AWS Network Firewall
    1. Physical firewall protection - a managed service that makes it easy to deploy physical firewall protection across your VPCs
      1. Managed infrastructure
    2. Includes a firewall rules engine that gives you complete control over your network traffic
      1. Allowing you to do things such as block outbound Server Message Block (SMB) requests to stop the spread of malicious activity
    3. Benefits:
      1. Physical infrastructure in the AWS datacenter that is managed by AWS
      2. Network Firewall works with Firewall Manager
        1. FW Manager with Network Firewall added: Allows you to centrally manage security policies across existing and newly created accounts and VPC
      3. Also provides an intrusion prevention system (IPS) that gives you active traffic flow inspection
        1. See IPS, think Network Firewall
    4. Use Cases:
      1. Filter Internet Traffic
        1. Use methods like ACL rules, stateful inspection, protocol detection, and intrusion prevention to filter your internet traffic
      2. Filter Outbound Traffic
        1. Provide the URL/domain name, IP address, and content-based outbound traffic filtering
        2. Help you stop possible data loss and block known malicious communicators
      3. Inspect VPC-to-VPC Traffic
        1. Auto inspect traffic moving from one VPC to another as well as across multiple accounts
    5. Exam Tips:
      1. Scenario about filtering your network traffic before it reaches your internet gateway
      2. Or if you require IPS or any hardware firewall requirements
  3. AWS Security Hub
    1. Single place to view all of your security alerts from services like GuardDuty, Inspector, Macie, and Firewall Manager
      1. Works across multiple accounts
    2. Use Cases:
      1. Conduct Cloud Security Posture Management (CSPM)- use automated checks that comply with common frameworks (for ex: Center for Internet Security (CIS) or PCI DSS) to help reduce your risk
      2. Correlate security findings to discover new insights
        1. Aggregate all your security findings in one place, allowing security staff to more easily identify threats and alerts
  4. CloudFormation
    1. Overview:
      1. Written in a declarative language; supports either JSON or YAML formatting
      2. Creates immutable architecture - easily create/destroy architecture
      3. Creates the same API calls that you would make manually
    2. Steps in CloudFormation:
      1. Step 1: write code
      2. Step 2: Deploy your template
        1. When you upload your template, CloudFormation engine will go through the process of making the needed AWS API calls on your behalf
    3. Create CloudFormation Stack
      1. Set parameters that are defined in your template and allow you to input custom values when you create or update a stack
        1. Parameters come from the code in the template
    4. 3 sections of CloudFormation template:
      1. Parameters
      2. Mappings = values that fill themselves in during formation
      3. Resource Section
    5. If CloudFormation finds an error, it rolls back to the last known good state
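A minimal sketch (not from the notes): a tiny JSON template (one S3 bucket with a parameter) deployed as a stack via boto3; the stack and bucket names are assumptions.

```python
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {"BucketName": {"Type": "String"}},
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": {"Ref": "BucketName"}},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-logs-bucket",  # assumed name
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "BucketName",
                 "ParameterValue": "my-unique-logs-bucket-123"}],  # must be globally unique
)
```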
  5. Elastic Beanstalk
    1. The Amazon PaaS tool - one stop for everything AWS
    2. Automation
      1. Automates all of your deployments
      2. You can templatize what you would like your environment to look like
      3. Deployments handled for us- upload code, test your code in a staging environment, then deploy to production
      4. Handles building out the insides of your EC2 instances for you
    3. Configuring Elastic Beanstalk
      1. Pick your platform
        1. Pick language - supports docker which means we can run all sorts of languages/environments inside a container on Elastic Beanstalk
      2. Additional Configurations
        1. Basically bundles all the wizards from across AWS services and gives you a place to configure all these in Beanstalk
    4. Exam Tips:
      1. Bring your code and that is all
      2. Elastic Beanstalk = PaaS tool
        1. It builds the platform and stacks your application on top
      3. Not serverless - Beanstalk creates and manages standard EC2 architecture
  6. Systems Manager
    1. Suite of tools designed to let you view, control and automate both your AWS architecture and on-prem architecture
    2. Features of Systems Manager
      1. Automation Documents [now called Runbooks]
        1. Can be used to control your instances or AWS resources
      2. Run Command
        1. Executes commands on your hosts
      3. Patch Manager
        1. Manage app versions
      4. Parameter Store
        1. Secret Values
      5. Hybrid Architecture
        1. Control your on-prem architecture
      6. Session Manager
        1. Allows you to connect and remotely interact with your architecture
    3. All it takes for EC2/on-prem to be managed by Systems Manager is to:
      1. Install Systems Manager Agent
      2. And give the instance a role/permissions to communicate to the Systems Manager
  7. Caching
    1. Types of Caching
      1. Internal: in front of database to store frequent queries, for example
      2. External: CDN = Content Delivery Network
    2. AWS Caching Options
      1. CloudFront = External
      2. ElastiCache = Internal
      3. DAX = DynamoDB Solution
      4. Global Accelerator = External
  8. Global Caching with CloudFront
    1. CloudFront Overview
      1. Fast Content Delivery Network (CDN) service that securely delivers data, videos, apps, and APIs to customers globally
        1. Helps reduce latency and provide higher transfer speeds using AWS edge locations
      2. First user makes request through CloudFront at an Edge location, CloudFront will go to S3 and grab object, it will hold a copy of that object at the Edge Location
        1. First user’s request is not faster, but all after are pulling from Edge location
    2. CloudFront Settings
      1. Security
        1. Defaults to HTTPS connections with the ability to add custom SSL certifications
        2. Can be used to add secure (HTTPS) connections in front of static S3 websites
      2. Global Distribution
        1. Cannot pick specific countries, just general areas
      3. Endpoint Support - AWS and Non-AWS
        1. Can be used to front AWS endpoints as well as non-AWS applications
      4. Expiring Content
        1. You can force an expiration of content from the cache if you cannot wait for the TTL
      5. Can restrict access to your content via CloudFront using signed URLs or signed cookies
    3. Exam Tips:
      1. Solution for external customer performance issue
  9. Caching your data with ElastiCache and DAX
    1. ElastiCache
      1. Managed version of 2 open source technologies
        1. Memcached
        2. Redis
      2. Neither of these tools are specific to AWS but by using ElastiCache, you can spin up 1 or the other, or both to avoid a lot of common issues
      3. ElastiCache can sit in front of almost any database, but it really excels being placed in front of RDS’s
      4. Memcached vs Redis
        1. Both sit in front of database and cache common queries that you make
        2. Memcached
          1. Simple db caching solution
          2. Not a db by itself
          3. No failover, no multi-AZ support, no backups
        3. Redis
          1. Supported as a caching solution
          2. But also has the ability to function as a standalone NoSQL db
            1. “Caching Solution” but also can be the answer if you are looking for a NoSQL solution and DynamoDB isn’t present
          3. Has failover, multi-AZ, backup support
    2. DynamoDB Accelerator (DAX)
      1. It is an In-Memory Cache
        1. Reduce DynamoDB response times from milliseconds to microseconds
      2. DAX lives inside a VPC, you specify which - is highly available
      3. You are in control
        1. You determine the node size and count for the cluster, TTL for the data, and maintenance windows for changes and updates
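A sketch of the cache-aside pattern these caching services enable, using the redis client against a hypothetical ElastiCache for Redis endpoint; get_user_from_db() stands in for a real database query.

```python
import json
import redis

# Hypothetical ElastiCache for Redis endpoint
cache = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

def get_user_from_db(user_id):
    # Stand-in for a real RDS/DynamoDB query
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                      # cache hit - skip the database
        return json.loads(cached)
    user = get_user_from_db(user_id)            # cache miss - query the database
    cache.setex(key, 300, json.dumps(user))     # store with a 5-minute TTL
    return user
```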
  1. Fixing IP Caching with Global Accelerator
    1. Global Accelerator
      1. A networking service that sits in front of your apps and sends your users’ traffic through AWS’s global network infrastructure
        1. Can increase performance and help deal with IP Caching
      2. IP Caching Issue
        1. User connecting to ELB, caches that ELB’s IP address (for the period defined by TTL)
        2. If that ELB goes offline and its IP changes, the user will be trying to connect with wrong IP
        3. Global accelerator solves this problem by sitting in front of ELB
          1. User is served 1 of only 2 IP addresses - that never change
          2. Even if ELB’s IP changes, user connection doesn't change
      3. Top 3 features of Global Accelerator:
        1. Masks Complex Architecture
          1. Global Accelerator IPs never change for users
        2. Speeds things up
          1. Traffic is routed through AWS’s global network infrastructure
        3. Weighted Pools
          1. Create weighted groups behind the IPs to test out new features or handle failure in your environment
      4. Creating Global Accelerator
        1. Listeners = port/port range
        2. Listeners direct traffic to one or more endpoint groups (endpoints = such as load balancers)
          1. Each listener can have multiple endpoint groups
          2. Each endpoint group can only include endpoints that are in one Region
          3. Adjust the weight here
      5. Global Accelerator solves IP caching
  2. AWS Organizations
    1. Managing accounts with AWS Organizations
      1. Free governance tool that allows you to create and manage multiple AWS accounts - can control your accounts from a single location
      2. Applying standards across accounts (ex: Prod, Dev, Beta, etc)
    2. Key features in Organizations
      1. It is vital to create a Logging Account
        1. It is best practice to create a specific account dedicated to logging
        2. Ship all logs to one central location with Organizations
          1. CloudTrail supports log aggregation
      2. Programmatic Creation: easily create and destroy new AWS accounts with API calls
      3. Reserved Instances: RIs can be shared across all accounts
      4. Consolidated Billing
      5. Service Control Policies (SCPs) can limit users’ permissions
        1. SCPs are in JSON format
        2. Once implemented, these policies will be applied to every single resource inside an account
          1. They are the ultimate way to restrict permissions and even apply to the root account
        3. Effectively a global policy and the only way to restrict what the root user can do
        4. SCPs never give permissions, they ONLY TAKE away the possible permissions that can be handed out
          1. Deny rules - deny specific things globally
          2. Allow rules - even more restrictive because they limit the set of permissions that can ever be handed out
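A hedged sketch of a deny-style SCP (guardrail) created through the Organizations API; the actions listed and the policy name are illustrative only.

```python
import json
import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
        "Resource": "*",
    }],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="DenyRiskyActions",            # illustrative
    Description="Example guardrail SCP",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# The policy still has to be attached to a root, OU, or account to take effect
```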
  3. Resource Access Manager (RAM)
    1. Free service that allows you to share AWS resources with other accounts and within your organization
    2. Allows you to easily share resources rather than having to create duplicate copies
    3. What can be shared?
      1. Transit Gateways
      2. VPC Subnets
      3. License Manager
      4. Route53 Resolver
      5. Dedicated Hosts
      6. Etc
    4. Can set permissions for what actions are allowed to happen on shared resources
    5. RAM vs VPC Peering
      1. Use RAM when sharing resources within the same region
      2. Use VPC Peering when sharing resources across regions
  4. Setting up Cross Account Role Access
    1. Cross-Account Role Access
      1. Allows you to set up temporary access you can easily control
      2. Set up primary user and then other users assume roles rather than having to create many many accounts
    2. Steps to set up Cross-Account Role Access:
      1. Create an IAM Role
      2. Grant access to allow users to temporarily assume role
    3. Exam Tips
      1. It is preferred to create cross-account roles rather than add additional IAM users
      2. Auditing - temporary access, temporary employees
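A sketch of assuming a cross-account role for temporary access; the role ARN in the second account is a placeholder.

```python
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/ReadOnlyAuditRole",  # placeholder
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # temporary credentials
)
creds = resp["Credentials"]

# Use the temporary credentials to call services in the other account
s3_other_account = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```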
  5. AWS Config
    1. Inventory management and control tool
    2. Allows you to show the history of your infrastructure along with creating rules to make sure it conforms to the best practices you’ve laid out
    3. 3 things it allows us to do:
      1. Allows us to query our architecture
        1. Can easily discover what architecture you have in your account - query by resource type, tag, even see deleted resources
      2. Rules can be created to flag when something is going wrong/out of compliance
        1. Whenever a rule is violated, you can be alerted or even have it auto fixed
      3. Shows history of Environment
        1. When did something change, who made that call, etc
    4. Can open up specific CloudTrail event that is tied to that event in Config
    5. Can auto remediate issues
      1. Can select remediation actions
        1. For example, can auto kick off an Automation Document that will block public access
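    A minimal sketch (Python/boto3) of creating an AWS-managed Config rule and reading its compliance state; the rule name is a placeholder assumption, and auto remediation via an Automation Document is omitted:

      import boto3

      config = boto3.client("config", region_name="us-east-1")

      # Flag any S3 bucket that allows public reads, using an AWS managed rule
      config.put_config_rule(
          ConfigRule={
              "ConfigRuleName": "s3-no-public-read",
              "Source": {
                  "Owner": "AWS",
                  "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
              },
          }
      )

      # Check compliance once the rule has evaluated
      resp = config.describe_compliance_by_config_rule(ConfigRuleNames=["s3-no-public-read"])
      for rule in resp["ComplianceByConfigRules"]:
          print(rule["ConfigRuleName"], rule["Compliance"]["ComplianceType"])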
    6. Exam Tips
      1. Config = Setting standards
  6. Directory Service
    1. Fully managed version of Active Directory
    2. Allows you to offload the painful parts of keeping AD online to AWS while still giving you the full control and flexibility AD provides
    3. Available Types:
      1. Managed Microsoft AD
        1. Entire AD suite, easily build out AD in AWS
      2. AD Connector
        1. Creates a tunnel between AWS and your on-prem AD
        2. Want to leave AD in physical data center
        3. Get an endpoint that you can authenticate against in AWS while leaving all of your actual users and data on-prem
      3. Simple AD
        1. Standalone directory powered by Linux Samba AD-Compatible server
        2. Just an authentication service
  7. AWS Cost Explorer
    1. Easy-to-use tool that allows you to visualize your cloud costs
      1. Can generate reports based on a variety of factors, including resource tags
    2. What can it do?
      1. Break down costs on a service-by-service basis
      2. Can break out by time, can estimate next month
      3. Filter and breakdown data however we want
    3. Exam tips
      1. Tags must be activated as cost allocation tags before Cost Explorer can report on them
      2. Cost Explorer and Budgets go hand-in-hand
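    A minimal sketch (Python/boto3) of a service-by-service cost breakdown via the Cost Explorer API; the dates are placeholder assumptions:

      import boto3

      ce = boto3.client("ce", region_name="us-east-1")   # the Cost Explorer API lives in us-east-1

      resp = ce.get_cost_and_usage(
          TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
          Granularity="MONTHLY",
          Metrics=["UnblendedCost"],
          GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],   # break down costs per service
      )

      for group in resp["ResultsByTime"][0]["Groups"]:
          print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])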
  8. AWS Budgets
    1. Allows organizations to easily plan and set expectations around cloud costs
      1. Easily track your ongoing spending and create alerts to let users know when they are close to exceeding their allotted spend
    2. Types of Budgets - you get 2 free each month
      1. Cost Budgets
      2. Usage Budgets
      3. Reservation Budgets - RIs
      4. Savings Plan Budgets
    3. Exam Tips
      1. Can be alerted on current spend or projected spend
      2. Can create a budget using tags as a filter
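    A minimal sketch (Python/boto3) of a cost budget that alerts at 80% of forecasted (projected) spend; the account ID, amount, and email are placeholder assumptions:

      import boto3

      budgets = boto3.client("budgets", region_name="us-east-1")

      budgets.create_budget(
          AccountId="111122223333",
          Budget={
              "BudgetName": "monthly-cost-budget",
              "BudgetLimit": {"Amount": "500", "Unit": "USD"},
              "TimeUnit": "MONTHLY",
              "BudgetType": "COST",
          },
          NotificationsWithSubscribers=[{
              "Notification": {
                  "NotificationType": "FORECASTED",      # projected spend, not just actual
                  "ComparisonOperator": "GREATER_THAN",
                  "Threshold": 80.0,
                  "ThresholdType": "PERCENTAGE",
              },
              "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
          }],
      )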
  9. AWS Costs and Usage Reports (CUR)
    1. Most comprehensive set of cost and usage data available for AWS spending
    2. Publishes billing reports to AWS S3 for centralized collection
      1. These break down costs by time span (hour, day, month), by service and resource, or by tags
      2. Daily updates to reports in S3, in CSV format
      3. Integrates with other services - Athena, Redshift, QuickSight
    3. Use Cases for AWS CUR:
      1. Within Organizations for entire OU groups or individual accounts
      2. Tracks Savings Plans utilizations, changes, and current allocations
      3. Monitor On-Demand capacity reservations
      4. Break down your AWS data transfer charges: external and inter-Regional
      5. Dive deeper into cost allocation tag resources spending
    4. Exam Tip
      1. Most comprehensive overview of spending
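    A minimal sketch (Python/boto3) of defining a Cost and Usage Report delivered to S3; the bucket name and prefix are placeholder assumptions, and the bucket policy that allows billing to write to the bucket is omitted:

      import boto3

      cur = boto3.client("cur", region_name="us-east-1")   # the CUR API lives in us-east-1

      cur.put_report_definition(
          ReportDefinition={
              "ReportName": "org-cur",
              "TimeUnit": "HOURLY",
              "Format": "textORcsv",                        # CSV delivery
              "Compression": "GZIP",
              "AdditionalSchemaElements": ["RESOURCES"],    # include resource IDs
              "S3Bucket": "example-cur-bucket",
              "S3Prefix": "cur/",
              "S3Region": "us-east-1",
              "RefreshClosedReports": True,
              "ReportVersioning": "OVERWRITE_REPORT",
          }
      )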
  10. Reducing compute spend using Savings Plans and AWS Compute Optimizer
    1. AWS Compute Optimizer
      1. Analyzes the configurations and utilization metrics of your AWS resources and recommends optimizations
      2. Reports current usage optimizations and potential recommendations
      3. Graphs
      4. Informed decisions
      5. Which resources work with this service?
        1. EC2, Auto Scaling Groups, EBS, Lambda
      6. Supported Account Types
        1. Standalone AWS account without Organizations enabled
        2. Member Account - single member account within an Organization
        3. Management Account
          1. When you enable at the AWS Organization management account, you get recommendations based on entire organization (or lock down to 1 account)
      7. Things to know:
        1. Disabled by default
          1. Must opt in to leverage Compute Optimizer
          2. After opting in, enhance recommendations via activation of recommendation preferences
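      A minimal sketch (Python/boto3) of opting in and pulling EC2 right-sizing recommendations; recommendations only appear some time after enrollment:

        import boto3

        co = boto3.client("compute-optimizer", region_name="us-east-1")

        # Compute Optimizer is disabled by default - opt the account in first
        co.update_enrollment_status(status="Active")

        # Pull right-sizing recommendations for EC2 instances
        resp = co.get_ec2_instance_recommendations()
        for rec in resp["instanceRecommendations"]:
            print(rec["instanceArn"], rec["finding"],
                  [opt["instanceType"] for opt in rec["recommendationOptions"]])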
    2. Savings Plans
      1. Offer flexible pricing models for up to 72% savings on compute
      2. Lower prices for EC2 instances regardless of instance family, size, OS, tenancy, or Region
      3. Savings can also apply to Lambda and Fargate usage
      4. SageMaker plans available for lowering SageMaker instance pricing
      5. Require Commitments
        1. 1 or 3 year options
        2. Pricing Plan options:
          1. Pay all upfront - most reduced
          2. Partial upfront
          3. No upfront
        3. Savings Plan Types:
          1. Compute Savings Plans
            1. Most flexible savings plan
            2. Applies to any EC2 compute, Lambda, or Fargate usage
            3. Up to 66% savings on compute
          2. EC2 Instance Savings Plans
            1. Stricter savings plan
            2. Applies only to EC2 instances of a specific instance family in specific Regions
            3. Up to 72% savings
          3. SageMaker Savings Plans
            1. Only SageMaker instances - any Region and any component, regardless of family or sizing
            2. Up to 64% savings
      6. Using and applying Savings Plans
        1. View recommendations in the AWS Billing console
        2. Recommendations are auto-calculated to make purchasing easier
        3. Add to cart and purchase directly within your account
        4. Applied to usage rates AFTER your RIs are applied and exhausted
          1. RIs have to be used first
        5. If in a consolidated billing family, savings apply to the account owner first, then can be spread to others if sharing is enabled
  1. Trusted Advisor for Auditing
    1. Trusted Advisor Overview
      1. Fully managed best practice auditing tool
      2. It will scan 5 different parts of your account and look for places where you could improve your adoption of the recommended best practices provided by AWS
    2. 5 Questions Trusted Advisor Asks:
      1. Cost Optimizations: Are you spending money on resources that are not needed?
      2. Performance: Are your services configured properly for your environment?
      3. Security: Is your AWS architecture full of vulnerabilities?
      4. Fault Tolerance: Are you protected when something fails?
      5. Service Limits: Do you have room to scale?
    3. Want to link Trusted Advisor with an automated response to alert users or fix the problem
      1. Use EventBridge (CloudWatch Events) to kick off a Lambda function to fix the problem
    4. To get the most useful checks, you will need a Business or Enterprise support plan
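    A minimal sketch (Python/boto3) of pulling Trusted Advisor check results through the Support API (which is what requires the Business or Enterprise plan); the filtering logic is just an illustration:

      import boto3

      support = boto3.client("support", region_name="us-east-1")   # the Support API lives in us-east-1

      checks = support.describe_trusted_advisor_checks(language="en")["checks"]
      for check in checks:
          result = support.describe_trusted_advisor_check_result(checkId=check["id"], language="en")
          status = result["result"]["status"]        # ok | warning | error | not_available
          if status in ("warning", "error"):
              print(check["category"], check["name"], status)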
  2. Control Tower to enforce account Governance
    1. Control Tower Overview
      1. Governance: Easy way to set up and govern an AWS multi-account environment
      2. Orchestration: automates account creation and security controls via other AWS Services
      3. Extension: extends AWS Organizations to prevent governance drift, and leverages different guardrails
      4. New AWS Accounts: users can provision new AWS accounts quickly, using central administration-established compliance policies
      5. Simple Terms: the quickest way to create and manage a secure, compliant, multi-account environment based on best practices
    2. Features and Terms
      1. Landing Zone
        1. Well-architected, multi-account environment based on compliance and security best practices
        2. Basically a container that holds all of your Organizational Units, the accounts within those OUs, and the users/other resources that you want to enforce compliance on
        3. Can scale to fit whatever size you need
      2. Guardrails
        1. High-level rules providing continuous governance for the AWS Environment
        2. 2 Types
          1. Preventative
          2. Detective
      3. Account Factory
        1. Configurable account template for standardizing pre-approved configurations of new accounts
      4. CloudFormation Stack Set
        1. Automated deployments of templates deploying repeated resources for governance
        2. The management account deploys a stack set to either an entire organizational unit or the entire organization, rolling out the repeated resources
      5. Shared Accounts
        1. Three accounts used by Control Tower
          1. 2 of which are created during landing zone creation: Log Archive and Audit
    3. More on GuardRails
      1. High-level rules written in plain language providing ongoing governance
      2. 2 types:
        1. Preventative:
          1. Ensures accounts maintain governance by disallowing violating actions
          2. Leverages service control policies (SCPs) within Organizations
          3. Statuses of: Enforced or Not Enabled
          4. Supported in all regions
        2. Detective:
          1. Detects and alerts on noncompliant resources within all accounts
          2. Leverages AWS Config Rules
            1. Config Rules alert - they do NOT remediate unless you leverage other resources
          3. Statuses: clear, in violation, or not enabled
          4. Only apply to certain Regions
            1. Only works in Regions supported by Control Tower, which is currently not every Region
    4. Control Tower Diagram
      1. Start with management account - with Organization and Control Tower enabled
      2. Control Tower creates 2 accounts (“shared accounts”)
        1. Log Archive Account
        2. Audit Account
      3. Control Tower places an SCP on every account that exists within our Organization → Preventative Guardrails
      4. Control Tower places AWS Config Rules in each account as well → Detective Guardrails
      5. All of our Config and CloudTrail logs get sent to the Log Archive shared account - to centralize logging
      6. Will also set up notifications for any governance violations that may occur
        1. All governance notifications are sent to the audit account - including configuration events, aggregated security notifications, and drift notifications
          1. They go to an SNS topic in the audit account, which you can use to alert the correct team
  1. AWS License Manager
    1. Manage Software Licenses
    2. Licenses made easy - simplifies managing software licenses with different vendors
      1. Centralized: helps centrally manage licenses across AWS accounts and on-prem environments
    3. Set usage limits
      1. Control and visibility into usage of licenses and enabling license usage limits
    4. Reduce overages and penalties via inventory tracking and rule-based controls for consumption
    5. Versatile - supports any software licensed based on vCPUs, physical cores, sockets, or number of machines
  2. AWS Personal Health Dashboard (aka AWS Health)
    1. Monitoring Health Events
      1. Visibility of resource performance and availability of AWS services or accounts
      2. View how the health events affect you and your services, resources, and accounts
    2. AWS maintains the timeliness and relevance of the info within the events
    3. View upcoming maintenance tasks that may affect your accounts and resources
    4. Alerts- near instant delivery of notifications and alerts to speed up troubleshooting or prevention actions
      1. Automates actions based on incoming events using EventBridge
      2. Health Event → EventBridge → SNS topic, etc
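      A minimal sketch (Python/boto3) of the Health Event → EventBridge → SNS pipeline above; the names are placeholder assumptions, and the SNS topic policy that lets EventBridge publish is omitted:

        import json
        import boto3

        events = boto3.client("events", region_name="us-east-1")
        sns = boto3.client("sns", region_name="us-east-1")

        topic_arn = sns.create_topic(Name="aws-health-alerts")["TopicArn"]

        # Match every AWS Health event delivered to this account
        events.put_rule(
            Name="aws-health-to-sns",
            EventPattern=json.dumps({"source": ["aws.health"]}),
            State="ENABLED",
        )

        # Send matching events to the SNS topic (which could notify a team or trigger automation)
        events.put_targets(Rule="aws-health-to-sns", Targets=[{"Id": "health-sns", "Arn": topic_arn}])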
    5. Concepts
      1. AWS Health Event = notifications sent on behalf of AWS services or AWS
      2. Account-specific event: events specific to your AWS account or AWS organization
      3. Public event = events reported on services that are public
      4. AWS Health Dashboard
        1. Dashboard showing account and public events, as well as service health
      5. Event type code = include the affected services and the specific type of event
      6. Event type category = associated category will be attached to every event
      7. Event status = open, closed, or upcoming
      8. Affected Entities
    6. Exam tip
      1. Look out for questions about checking alerts for service health and automating the reboot of EC2 instances for AWS Maintenance
  3. AWS Service Catalog and AWS Proton
    1. Standardizing Deployments using AWS Service Catalog
      1. Allows organizations to create and manage catalogs of approved IT services for deployments within AWS
      2. Multipurpose catalogs - list things like AMIs, server software, databases, and other pre-configured components
      3. Centralized management of IT services, helping maintain compliance
      4. End-User Friendly - easily deploy approved items
      5. CloudFormation
        1. Catalogs are written and listed using CloudFormation templates
      6. Benefits:
        1. Standardize
        2. Self-service deployments
        3. Fine-grained Access Control
        4. Versioning within Catalogs- propagate changes automatically
    2. AWS Proton
      1. Creates and manages infrastructure and deployment tooling for users as well as serverless and container-based apps
      2. How it works:
        1. Automate IaC provisioning and deployments
        2. Define standardized infrastructure for your serverless and container-based apps
        3. Use templates to define and manage app stacks that contain ALL components
        4. Automatically provisions resources, configures CI/CD, and deploys the code
        5. Supports AWS CloudFormation and Terraform IaC providers
      3. Proton is an all encompassing tool/service for deployments of your applications
      4. Standardization, empower developers
  4. Optimizing Architectures with the AWS Well-Architected Tool
    1. Review 6 Pillars of the Well-Architected Framework:
      1. Operational Excellence
      2. Reliability
      3. Security
      4. Performance Efficiency
      5. Cost Optimization
      6. Sustainability
    2. Well-Architected Tool
      1. Provides a consistent process for measuring cloud architecture
      2. Enables assistance with documenting workloads and architectures
      3. Guides for making workloads reliable, secure, efficient, and cost effective
      4. Measures workloads against years of AWS best practices
      5. Intended for specific audiences: technical teams, CTOs, architecture and operations teams
  5. AWS Snow Family
    1. Ways to move data to AWS
      1. Internet
        1. Could be slow, presents security risks
      2. Direct Connect
        1. Not always practical - not for short periods of time
      3. Physical
        1. Bypass internet entirely, and bundle data to physically move
    2. Snow Family
      1. Set of secure appliances that provide petabyte-scale data collection and processing solutions at the edge and migrate large-scale data into and out of AWS
        1. Offers built-in computing capabilities, enabling customers to run their operations in remote locations that do not have data center access or reliable network connectivity
      2. Members of the Snow Family:
        1. Snowcone
          1. Ex: climbing a wind turbine to collect data
          2. Smallest device
          3. 8TB of storage, 4 GB memory, 2 vCPUs
          4. Easily migrate data to AWS after you’ve processed it
          5. IoT sensor integration
          6. Perfect for edge computing where space and power are constrained
        2. Snowball Edge
          1. Ex: on a boat
          2. Jack of all trades
          3. 48-81TB storage
          4. Comes in Storage Optimized, Compute Optimized, and GPU flavors, with varying amounts of CPU/RAM
          5. Perfect for off-the-grid computing or migration to AWS
        3. Snowmobile
          1. Literally a semi truck of hard drives
          2. 100PB of storage
          3. Designed for exabyte-scale data center migration
  6. Storage Gateway and Types
    1. Storage Gateway
      1. A hybrid cloud storage service that helps you merge on-prem resources with the cloud
      2. Can help with a one-time migration or a long-term pairing of your architecture with AWS
    2. Types of Storage Gateway:
      1. File Gateway
        1. Caching local files
        2. NFS or SMB Mount - basically a network file share
          1. Mount locally and backs up data into S3
        3. Keep a local copy of recently used files
        4. Versions of File Gateway:
          1. Back up all data into the cloud, with Storage Gateway just acting as the method of doing that - data lives in S3
          2. Or you can keep a local cached copy of most recently used files - so you don’t have to download from S3
          3. Keep data on-prem, backups go to S3
        5. Scenario - extend on-prem storage
        6. Helps with migrations to AWS
      2. Volume Gateway
        1. Backup drives
        2. iSCSI mount
          1. Backing up these disks that the VMs are reading/writing to
        3. Same cached or stored mode as File Gateway - all backed up to S3
        4. Can create EBS snapshots and restore volumes inside AWS
          1. Easy way to migrate on-prem volumes to become EBS volumes inside AWS
        5. Perfect for backups and migration to AWS
      3. Tape Gateway
        1. Ditch the physical tapes and backup to Tape Gateway
        2. Stores inside S3 Glacier, Deep Archive, etc
        3. Directly integrated as a VM on-prem so it doesn’t change the current workflow
        4. It is encrypted
    3. Exam Tips
      1. Storage Gateway = Hybrid Storage
        1. Complement existing architecture
  7. AWS DataSync
    1. Agent-based solution for migrating on-prem storage to AWS
    2. Easily move data between NFS and SMB shares and AWS storage solution
    3. Migration Tool
    4. Using DataSync:
      1. On-prem
        1. Have on-prem server and install DataSync agent
      2. Configure DataSync service to tell it where data is going to go
        1. Secure transmission with TLS
      3. Supports S3, EFS, and FSx
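    A minimal sketch (Python/boto3) of the agent → locations → task flow above; the hostname, agent ARN, bucket, and role ARN are placeholder assumptions:

      import boto3

      datasync = boto3.client("datasync", region_name="us-east-1")

      # Source: an on-prem NFS share reached through the deployed DataSync agent
      src = datasync.create_location_nfs(
          ServerHostname="fileserver.corp.example.com",
          Subdirectory="/exports/data",
          OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
      )

      # Destination: an S3 bucket, accessed through a role DataSync can assume
      dst = datasync.create_location_s3(
          S3BucketArn="arn:aws:s3:::example-migration-bucket",
          S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Access"},
      )

      # A task ties source and destination together; starting it begins the TLS-encrypted copy
      task = datasync.create_task(SourceLocationArn=src["LocationArn"], DestinationLocationArn=dst["LocationArn"])
      datasync.start_task_execution(TaskArn=task["TaskArn"])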
  8. AWS Transfer Family
    1. Allows you to easily move files in and out of S3 or EFS using SFTP, FTP over SSL (FTPS), or FTP
    2. How does it transfer:
      1. Legacy users/apps already have processes to transfer data; if you now want that data in S3 or EFS, you can put Transfer Family (SFTP, FTPS, FTP) where the old endpoint was and have the service deliver the files into S3/EFS
    3. Transfer Family Members
      1. AWS Transfer for SFTP and AWS Transfer for FTPS - transfers from outside of your AWS environment into S3/EFS
      2. AWS Transfer for FTP - only supported within the VPC and not over public internet
    4. Exam Tips
      1. Bringing legacy app storage to cloud
      2. DNS entry (endpoint) stays the same in legacy app
        1. We just swap out the old endpoint to become S3
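    A minimal sketch (Python/boto3) of a service-managed SFTP endpoint in front of S3; the bucket and role ARN are placeholder assumptions, and registering the user's SSH public key is omitted:

      import boto3

      transfer = boto3.client("transfer", region_name="us-east-1")

      # An SFTP endpoint whose uploaded files land in S3
      server = transfer.create_server(
          Protocols=["SFTP"],
          IdentityProviderType="SERVICE_MANAGED",
          Domain="S3",
      )

      # Map an SFTP user to a bucket prefix via an IAM role
      transfer.create_user(
          ServerId=server["ServerId"],
          UserName="legacy-app",
          Role="arn:aws:iam::111122223333:role/TransferS3Access",
          HomeDirectory="/example-ingest-bucket/legacy-app",
      )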
  9. Migration Hub
    1. Single place to track the progress of your app migration to AWS
    2. Integrates with Server Migration Service (SMS) and Database Migration Service (DMS)
    3. Server Migration Service (SMS)
      1. Schedule movement for VMWare server migration
      2. At the scheduled time, it takes a copy of your underlying vSphere volume and brings that data into S3
      3. It converts that volume in S3 into an EBS snapshot
      4. Creates an AMI from that
      5. Can use AMI to launch EC2 instance
      6. Essentially gives you an easy way to take your VM architecture and convert it to an AMI
    4. Database Migration Service (DMS)
      1. Takes an on-prem/EC2/RDS Oracle (or SQL Server) database and runs the AWS Schema Conversion Tool on it
      2. To convert it to an Amazon Aurora DB
        1. Can also take on-prem/EC2/RDS MySQL DBs and consolidate them with DMS into Aurora
  1. Migrating Workloads to AWS using AWS Application Discovery Service or AWS Application Migration Service (MGN)
    1. Application Discovery Service
      1. Helps you plan your migrations to the cloud via collection of usage and configuration data from on-prem servers
      2. Integrates with AWS Migration Hub which simplifies migrations and tracking migration statuses
      3. Helps you easily view discovered servers, group them by application, and track each application's migration
      4. How do we discover our on-prem servers?
        1. 2 Discovery Types
          1. Agentless
            1. Completed via the Agentless Collector
            2. The collector is an OVA file deployed within VMware vCenter
              1. OVA file = a deployable file for a new VM appliance that you can deploy in vCenter
            3. Once you deploy the OVA, it identifies hosts and VMs in vCenter
            4. Helps track and collect IP addresses, MAC addresses, info on resource allocations (memory and CPUs), and host names
            5. Collects utilization data metrics
          2. Agent-Based
            1. Via an AWS Application Discovery Agent that is deployed
            2. Install the agent on each VM and physical server
            3. There is an installer for Linux and one for Windows
            4. Collects more info than the agentless process
              1. Static configuration data, time-series performance info, network connections, and OS processes
    2. Application Migration Service (MGN)
      1. An automated lift-and-shift service for expediting migration of apps to AWS
      2. Used for physical, virtual, or cloud servers to avoid cutover windows or disruptions - flexible
      3. Replicates source servers into AWS and auto converts and launches on AWS to migrate quickly
      4. 2 key metrics are RTO and RPO:
        1. Recovery Time Objective
          1. Typically just minutes; depending on OS boot time
        2. Recovery Point Objective
          1. Measured in the sub-second range
          2. Can recover at any point after migration
  1. Migrating DBs from On-Prem to AWS with Database Migration Service (DMS)
    1. Migration tool for relational databases, data warehouses, NoSQL databases, and other data stores
      1. Migrate data into the cloud or on-prem: either into or out of AWS
      2. Can be a one-time migration or continuously replicate ongoing changes
    2. Conversion Tool called Schema Conversion Tool (SCT) used to transfer database schemas to new platforms
    3. How does DMS work?
      1. It's basically just a server running replication software
      2. Create source and target connections
      3. Schedule tasks to run on the DMS server to move data
      4. AWS creates the tables and primary keys (if they don't exist on the target)
        1. Optionally, create your target tables beforehand
      5. Leverage the SCT for creating some or all of your tables, indexes, and more
      6. Source and target data stores are referred to as endpoints
    4. Important Concepts
      1. Can migrate between source and target endpoints with the same engine types
      2. Can also utilize SCT to migrate between source and target endpoints with different engines
      3. Important to know that at least 1 endpoint must live within an AWS service
    5. AWS Schema Conversion Tool (SCT)
      1. Converts database schemas from one engine type to another
      2. Supports many engine types
        1. Many types of relational databases including both OLAP and OLTP, even supports data warehouses
        2. Supports many endpoints
          1. Any supported RDS engine type: Aurora, Redshift
        3. Can use the converted schemas with dbs running on EC2 or data stored in S3
          1. So don't have to migrate to a db service, per se, can be EC2 or S3
    6. 3 Migration Types:
      1. Full Load
        1. All existing data is moved from sources to targets in parallel
        2. Any updates to your tables while this is in progress are cached on your replication server
      2. Full Load and Change Data Capture (CDC)
        1. The only migration type that guarantees transactional integrity of the target DB
      3. CDC Only
        1. Only replicate the data changes from the source db
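    A minimal sketch (Python/boto3) of a full-load-and-CDC replication task; the endpoint and replication instance ARNs are placeholder assumptions:

      import json
      import boto3

      dms = boto3.client("dms", region_name="us-east-1")

      # Select every table in every schema
      table_mappings = {
          "rules": [{
              "rule-type": "selection",
              "rule-id": "1",
              "rule-name": "include-all",
              "object-locator": {"schema-name": "%", "table-name": "%"},
              "rule-action": "include",
          }]
      }

      dms.create_replication_task(
          ReplicationTaskIdentifier="onprem-mysql-to-aurora",
          SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
          TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
          ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
          MigrationType="full-load-and-cdc",   # the other options are "full-load" and "cdc"
          TableMappings=json.dumps(table_mappings),
      )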
    7. Migrating Large Data Stores via AWS Snowball
      1. With terabyte-scale migrations, you can run into bandwidth/network throttling limits
      2. Can leverage Snowball Edge
        1. Leverage certain Snowball Edge devices and S3 with DMS to migrate large data sets quickly
      3. Can still leverage SCT to extract data into Snowball devices and then into S3
      4. Load converted data
        1. DMS can still load the extracted data from S3 and migrate to chosen destination
      5. Also CDC compatible
  2. Replicating and Tracking Migrations with AWS Migration Hub
    1. Migration Hub
      1. Single place to discover existing servers, plan migration efforts and track migration statuses
      2. Visualize connection and server/db statuses that are a part of your migrations
      3. Options to start migrations immediately or group servers into app groups first
      4. Integrates with App Migration Service or DMS
      5. ONLY discovers and plans migrations and works with the other mentioned services to actually do the migrations
    2. Migration Phases
      1. Discover - find servers and databases to plan your migrations
      2. Migrate - connect tools to Migration Hub, and migrate
      3. Track
    3. Server Migration Service (SMS)
      1. Automates migrating on-prem servers to the cloud
      2. Flexible - covers broad range of supported VMs
      3. Works by incremental replications of server VMs over to AWS AMIs that can be deployed on EC2
      4. Can handle volume replication
      5. Incremental Testing
      6. Minimize downtime
  3. Amplify
    1. For Quickly deploying web apps
    2. Offers tools for front-end web and mobile developers to quickly build full-stack applications on AWS
    3. Offers 2 services:
      1. Amplify Hosting
        1. Support for common single-page application (SPA) frameworks like React, Angular, and Vue
          1. Also supports Gatsby and Hugo static site generators
        2. Allows for separate prod and staging environments for the frontend and backend
        3. Support for Server-Side Rendering (SSR) apps like Next.js
          1. Remember cannot do dynamic websites in S3, so any answer with Server-Side Rendering would be Amplify
      2. Amplify Studio
        1. Easy Authentication and Authorization
        2. Simplified Development
          1. Visual development environment to simplify creation of full-stack web or mobile apps
        3. Ready-to-use components, easy creation of backends and automated connections between the frontend and backend
    4. Exam Tip
      1. Amplify is the answer in scenario based questions like managed server-side rendering in AWS, easy mobile development, and developers running full-stack applications
  4. Device Farm
    1. For testing App Services
    2. Application testing service for testing and interacting with Android, iOS, and web apps on real devices
    3. 2 primary testing methods:
      1. Automated
        1. Upload scripts or use built-in tests for automatic parallel tests on mobile devices
      2. Remote Access
        1. You can swipe, gesture, and interact with the devices in real time via web browser
  5. Amazon Pinpoint
    1. Enables you to engage with customers through a variety of different messaging channels
      1. Generally used by marketers, business users, and developers
    2. Terms:
      1. Projects
        1. Collection of info, segments, campaigns, and journeys
      2. Channels
        1. Platform you intend to engage your audience with
      3. Segments
        1. Dynamic or imported; designates which users receive specific messages
      4. Campaigns
        1. Initiatives engaging specific audience segments using tailored messages
      5. Journeys
        1. Multi-step engagements
      6. Message Templates
        1. Content and settings for easily reusing repeated messages
    3. Leverage Machine Learning modules to predict user patterns
    4. 3 Primary uses:
      1. Marketing
      2. Transactions - order confirmations, shipping notifications
      3. Bulk Communications
  6. Analyzing Text using Comprehend, Kendra, and Textract
    1. Comprehend
      1. Uses Natural Language Processing (NLP) to help you understand the meaning and sentiment in your text
        1. Ex: automate understanding reviews as positive or negative
      2. Automating comprehension at scale
      3. Use Cases:
        1. Analyze call center analytics
        2. Index and Search product reviews
        3. Legal briefs management
        4. Process financial data
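      A minimal sketch (Python/boto3) of the review-sentiment example above; the review text is just a placeholder:

        import boto3

        comprehend = boto3.client("comprehend", region_name="us-east-1")

        review = "The checkout flow was painless and the support team replied within minutes."

        resp = comprehend.detect_sentiment(Text=review, LanguageCode="en")
        print(resp["Sentiment"])          # POSITIVE / NEGATIVE / NEUTRAL / MIXED
        print(resp["SentimentScore"])     # confidence scores for each sentiment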
    2. Kendra
      1. Allows you to create an intelligent search service powered by machine learning
      2. Enterprise search applications - bridge between different silos of information (S3, file servers, websites), allowing you to have all the data intelligently in one place
      3. Use Cases:
        1. Research and Development Acceleration
        2. Improve Customer Interaction
        3. Minimize Regulatory and Compliance Risks
        4. Increase Employee productivity
        5. Can do research for you
    3. Textract
      1. Uses Machine Learning to automatically extract text, handwriting, and data from scanned documents
      2. Goes beyond OCR (Optical Character Recognition) by adding Machine Learning
      3. Turn text into data
      4. Use Cases:
        1. Convert handwritten/filled forms
  7. AWS Forecast
    1. Time-series forecasting service that uses Machine Learning and is built to give you important business insights
    2. Can send your data to forecast and it will automatically learn your data, select the right Machine Learning algorithm, and then help you forecast your data
    3. Use cases:
      1. IoT, DevOps, Analytics
  8. AWS Fraud Detector
    1. AWS AI service built to detect fraud in your data
    2. Create a fraud detection machine learning model that is based on your data - can quickly automate this
    3. Use Cases:
      1. Identify suspicious online transactions
      2. Detect new account fraud
      3. Prevent Trial and Loyalty program abuse
      4. Improve account takeover detection
  9. Working with Text and Speech using Polly, Transcribe, and Lex
    1. Transcribe
      1. Speech to text
    2. Lex
      1. Build conversational interfaces in your apps using NLP
    3. Polly
      1. Turns text into lifelike speech
    4. Alexa-style flow: Transcribe (speech to text) → Lex (interprets it and generates the response) → Polly (response text to speech)
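    A minimal sketch (Python/boto3) of the Polly and Transcribe halves of that flow; the S3 URI, voice, and file names are placeholder assumptions (Transcribe jobs run asynchronously):

      import boto3

      # Text → lifelike speech with Polly
      polly = boto3.client("polly", region_name="us-east-1")
      speech = polly.synthesize_speech(Text="Your order has shipped.", OutputFormat="mp3", VoiceId="Joanna")
      with open("reply.mp3", "wb") as f:
          f.write(speech["AudioStream"].read())

      # Speech → text with Transcribe
      transcribe = boto3.client("transcribe", region_name="us-east-1")
      transcribe.start_transcription_job(
          TranscriptionJobName="support-call-001",
          Media={"MediaFileUri": "s3://example-audio-bucket/call.mp3"},
          MediaFormat="mp3",
          LanguageCode="en-US",
      )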
  10. Rekognition
    1. Computer vision product that automates the recognition of pictures and video using deep learning and neural networks
    2. Use these processes to understand and label images and videos
    3. Main use case is Content Moderation
      1. Also facial detection and analysis
      2. Celebrity recognition
      3. Streaming video events detection
        1. Ring, ex
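    A minimal sketch (Python/boto3) of the content-moderation use case; the bucket and object key are placeholder assumptions:

      import boto3

      rekognition = boto3.client("rekognition", region_name="us-east-1")

      resp = rekognition.detect_moderation_labels(
          Image={"S3Object": {"Bucket": "example-media-bucket", "Name": "uploads/photo.jpg"}},
          MinConfidence=80,
      )
      for label in resp["ModerationLabels"]:
          print(label["Name"], label["Confidence"])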
  11. SageMaker
    1. To train Machine Learning models
    2. Way to build Machine Learning models in AWS Cloud
    3. 4 Parts
      1. Ground Truth: set up and manage labeling jobs for training datasets using active learning and human labeling
      2. Notebook: managed Jupyter notebook (python)
      3. Training: train and tune models
      4. Inference: Package and deploy Machine Learning models at scale
    4. Deployment Types:
      1. Online Usage- if need immediate response
      2. Offline Usage- otherwise
    5. Elastic Inference - used to decrease cost
  12. Translate
    1. Machine learning service that allows you to automate language translation
    2. Uses deep learning and neural networks
  13. Elastic Transcoder
    1. For converting media files
    2. Allows businesses/developers to convert (transcode) media files from original source format into versions that are optimized for various devices
    3. Benefits:
      1. Easy to use - APIs, SDKs, or via management console
      2. Elastically scalable
  14. AWS Kinesis Video Streams
    1. Way of streaming media content from a large number of devices to AWS and then running analytics, Machine Learning, playback, and other processing
      1. Ex: Ring
      2. Elastically scales
      3. Access data through easy-to-use APIs
      4. Use Cases:
        1. Smart Home- ring
        2. Smart city- CCTV
        3. Industrial Automation