Virtualization and Containerization
Linux
ChromeOS
- An operating system by Google based on the Linux kernel.
Windows
- An operating system by Microsoft.
OS X
- An operating system by Apple, now known as macOS.
Client Router
- A router used in a client network to manage network traffic.
Client Users
- End-users who access services or applications on a network.
Server
- A computer or system that provides resources, data, services, or programs to other computers, known as clients, over a network.
Academic Use
- The material is intended for educational purposes.
Case Study: Software Development
- A software development company faces challenges in testing products on different operating systems.
- Testing on Windows, Linux, and macOS requires multiple computers, leading to wasted time.
- Each project has heavy hardware requirements, increasing the need for more machines.
- Developers typically code on a single platform but need to test on different platforms to ensure consistent output.
- The absence of cross-platform testing can lead to user dissatisfaction.
Local Testing vs. Server Deployment
- Local testing uses URLs like http://localhost/myapp.
- Deployment to a test server uses URLs like https://test-server1.acme.org/myapp/.
- The client-server model involves clients (consumers) and servers (providers) of web content.
What are Servers?
- Servers are dedicated, scaled-up computers with multiple CPUs, RAM, and storage.
- They are designed to handle multiple concurrent requests.
- Servers are located in data centers and feature redundant hardware (e.g., dual power supply) for 24x7 operation.
PCs/Servers Before Virtualization
- Single OS per machine creates a 1:1 relationship between hardware, OS, and application.
- Software and hardware are tightly coupled.
- Multiple applications often conflict, making maintenance difficult.
- Resources are underutilized, with typical server CPU utilization around 5%.
- Sizing servers to cater for spikes in demand is difficult.
- Efficiency means maximizing benefit from consumed resources.
What is Virtualization?
- Definition: Creating virtual versions of hardware platforms, storage, and network resources.
- Virtualization allows multiple virtual instances of operating systems, applications, or resources to run on a single physical machine.
- It enables efficient resource utilization, flexibility, and scalability.
- The relationship changes from 1:1 (OS to hardware) to multiple virtual instances on a single hardware platform.
- Example: Using a Virtual Machine (VM) within a Windows operating system.
Advantages of Virtualization
- Maximize resources (Resource efficiency) – Virtualization can reduce the number of physical systems needed.
- Virtualization allows maximum use of the hardware investment, which increases the efficiency of data center servers because multiple virtual machines can be hosted on one server.
- Ability to run multiple systems – With virtualization, you can also run multiple types of applications and even run different operating systems for those applications on the same physical hardware.
- Energy Savings - Using virtual machines results in more compute power from a single server. When fewer servers are used, it results in less energy consumption. The long-term outcome of virtualization is less space and power used in data centers. This also reduces the carbon footprint.
- Cost savings – the reduced hardware footprint above leads to lower hardware costs, energy consumption, and IT management expenses.
- Scalable Resource Usage – more resources (RAM/CPU/disk) can be allocated or deallocated on the virtual machines where needed.
- Ease of Management – unused virtual machines (VMs) can be kept powered off indefinitely until required, consuming no resources other than disk space.
- Isolation & security – data is isolated between VMs, as applications can be kept on separate VMs, and separate VMs do not pose a risk to each other. We can run untrusted applications and tests inside VMs – this is known as a sandbox environment, used to innovate, test, and securely analyze code without affecting the rest of the system, by containing potential issues like bugs and errors and helping to discover and reduce security threats. Sandboxes replicate production setups but remain isolated, ensuring that any issues or errors do not impact real-world operations.
Drawbacks of Virtualization
- Virtualization introduces complexity with an additional management layer.
- Potential performance overhead due to the additional layer requiring resources.
- Potential security vulnerabilities.
- Increased management complexity and compatibility issues.
What is a Virtual Machine (VM)?
- A virtual machine (VM) is a software-based emulation of a physical computer, created and managed by a hypervisor.
- It runs its own operating system (guest OS) and apps independently, using its own virtual hardware (CPU, memory, storage, network interface).
- Multiple VMs can run on a single physical machine.
- Provides isolation, flexibility, and easy recovery through snapshots.
The Hypervisor
- The hypervisor is the enabler behind virtualization.
- A hypervisor (also called a virtual machine monitor or VMM) is software, firmware, or hardware that creates and manages virtual machines (VMs) by abstracting the physical hardware.
- It allows multiple operating systems to run on the same physical machine simultaneously by allocating resources such as CPU, memory, and storage to each VM.
- Abstraction (Recap again!) is a key concept in computing, which involves simplifying complex systems by breaking them down into more manageable parts and focusing on the essential features while hiding the unnecessary details.
- The hypervisor abstracts the different underlying hardware, so every virtualized instance of an operating system will have a standard set of hardware, making VM migrations across different hosts easy.
Role of a Hypervisor
- Resource Allocation – Manages CPU, RAM, storage, and networking for multiple VMs.
- VM Isolation – Ensures that VMs run independently without interfering with each other.
- Security – Provides separation between VMs to prevent unauthorized access and system failures.
- Migration – Allows live migration of VMs between physical servers for load balancing and fault tolerance, where VMs can be easily backed up, restored, and migrated in case of hardware failures.
- Scalability – Enables efficient utilization of hardware resources, supporting cloud computing and virtualization.
Type 1 Hypervisors (Bare-metal Hypervisor)
- Bare-metal Hypervisor – also called native, it is a layer of software installed directly on top of a physical server. There is no host OS or any other software layer in between, hence the name bare-metal.
- Virtualization – the host machine running this hypervisor serves virtualization purposes only and nothing else.
- Physical Resources – allows overprovisioning of physical resources to virtual machines, i.e. dynamic disks that grow as each VM consumes more storage. Overcommitment without careful monitoring may lead to an out-of-physical-disk-space situation.
- Mostly used in enterprise environments with large data centres.
- Common type 1 hypervisors: VMware ESXi, Microsoft Hyper-V, Citrix XenServer, KVM (Kernel-based Virtual Machine).
Type 2 Hypervisors (Hosted Hypervisor)
- Hosted Hypervisor – runs inside an operating system of a physical host machine and acts as management console to the guest OSes.
- Host Machine and Host OS – the physical machine, on which an operating system is installed directly on the hardware.
- Examples: Windows, Linux, macOS
- Guest Virtual Machine – created by the hypervisor, it is an instance of a virtual machine with specific hardware capabilities (e.g. 2 vCPUs, 16 GB RAM, 30 GB storage).
- Guest OS – an independent instance of an operating system; it does not need to be the same as the host OS.
- Examples: Windows, Linux, macOS
- They are frequently used for desktop virtualization, software development and testing, and running multiple operating systems on a single machine.
- Common type 2 hypervisors: VMware Workstation, Oracle VirtualBox, Parallels Desktop for Mac
When do we use which type?
- Type 1 hypervisor
- Also known as: bare-metal hypervisor.
- Runs on: the underlying physical host machine hardware.
- Best suited for: large, resource-intensive, or fixed-use workloads.
- Can it negotiate dedicated resources? Yes.
- Knowledge required: system administrator-level knowledge.
- Ideal deployment environment: production/enterprise.
- Examples: VMware ESXi, Microsoft Hyper-V, KVM.
- Type 2 hypervisor
- Also known as: hosted hypervisor.
- Runs on: the underlying operating system (host OS).
- Best suited for: desktop and development environments.
- Can it negotiate dedicated resources? No.
- Knowledge required: basic user knowledge.
- Ideal deployment environment: development/education.
- Examples: Oracle VM VirtualBox, VMware Workstation.
Virtualization vs Emulation : VMs
- The Problem: you have a Surface Pro laptop (ARM).
- You want to install a VM built on x86-64 architecture.
- Architecture Mismatch: x86-64 and ARM are fundamentally different CPU architectures, meaning their instruction sets and underlying hardware are incompatible.
- Virtualization, even with hypervisors, relies on the host CPU's ability to interpret and execute the instructions of the guest OS.
- Emulation/Translation: x86-64 hypervisors running on x86-64 hardware, like those found on x86-64 Windows or Linux systems, cannot natively run ARM operating system VMs, as they are designed to virtualize the x86-64 architecture, not ARM. Conversely, ARM-based hypervisors cannot run x86-64 OS VMs.
- To run an OS built for one architecture on hardware of the other (e.g. an ARM OS on x86-64 hardware), you need software emulation, which translates the guest's instructions into the host's instructions, but this is significantly slower than running native code.
- QEMU (Quick Emulator) is a type 2 hypervisor and emulator which is able to interpret guest instructions (e.g. ARM) and translate them into host instructions (e.g. x86-64), with a performance overhead.
- Example of ARM laptops :
* Apple hardware – MacBooks (M1 and later) running Apple silicon (ARM CPUs)
* Microsoft Surface Laptop (7th generation and later)
Virtualization vs Emulation : Application experience with a Surface Pro (ARM)
- Windows on ARM devices utilizes emulation technology to allow existing, unmodified x86 (32-bit) and x64 (64-bit) Windows apps to run on ARM processors, with performance penalties.
- Windows 11 on ARM extends the emulation to support both x86 and x64 apps, while Windows 10 on ARM only supports x86 apps
- Performance: While emulation allows x86/64 apps to run on ARM, performance may be slightly lower than if the apps were natively built for ARM. Emulation works as a software simulator, just-in-time compiling blocks of x86 instructions into Arm64 instructions with optimizations to improve performance of the emitted Arm64 code.
- Support: peripherals and devices only work if the drivers they depend on are built into Windows, or if the hardware developer has released ARM64 drivers for the device; otherwise the device will not work.
- Microsoft store apps offer native app versions via the Universal Windows Platform (UWP). These apps are designed to run on any Windows device. (both ARM and x64 build available- run as native!)
Virtualization and Emulation
- For MacBooks using M1-M4 CPUs, UTM is a free, open-source virtualization and emulation software for macOS and Apple platforms, allowing Windows, Linux, and other operating systems to run in virtual machines on the Mac.
- UTM uses Apple's Hypervisor framework to run ARM64 operating systems on Apple Silicon at near-native speeds, and can also virtualize x86/x64 operating systems on Intel Macs using QEMU.
- Virtualization: Allows for running multiple operating systems simultaneously on Mac.
- Emulation: Can emulate various processor architectures, including x86, ARM, and more.
- Specialized x86-64 VM running within an ARM-based macOS.
TLDR
- Native Virtualization provides direct access to the hardware resources to give much greater performance than software emulation
Other types of virtualization
- The type of virtualization that we have been using since week 3 is classified as server virtualization. Besides this commonly discussed server virtualization, there are other kinds of virtualization in use.
- Different types of virtualization exist because they cater to various needs and offer specific benefits for different scenarios, such as resource optimization, cost reduction, enhanced flexibility, security, and improved data management.
Different types of virtualization
- Application Virtualization - allows users to access and use applications from a separate computer than the one on which the application is installed, essentially streaming the application to the user's device.
- Server Virtualization – allows multiple virtual servers (each with its own operating system and applications) to run on a single physical server, improving resource utilization, reducing costs and enhancing flexibility and scalability.
- Desktop Virtualization – technology that separates a user's desktop environment (OS/app/data) from the physical device they use to access it, allowing users to access their virtual desktops remotely. We will be looking at this in detail as it supports end-user computing.
- Storage Virtualization – pools multiple different physical storage devices into a single, virtualized pool, presenting a unified view to applications and simplifying storage management.
- Network Virtualization – process of combining hardware and software network resources and network functionality into a single, software-based administrative entity known as a virtual network.
- Data Virtualization - provides a single, unified view of data from multiple sources, enabling users to access and manipulate data without needing to know its physical location or format.
Virtualization and Cloud Computing
- Virtualization is the key technology that powers cloud computing (*covered in week 7 lesson), making it more flexible, scalable, and efficient. Without virtualization, cloud computing providers would be unable to scale the hardware.
- Virtualization allows for the following goals to be met for cloud computing companies.
* Scalability & Flexibility – Cloud providers can quickly allocate or scale resources based on demand. Virtualization enables dynamic provisioning, so businesses can easily increase or decrease computing power without needing additional physical hardware.
* Isolation & Security – Each virtual resource operates independently, meaning one workload doesn’t interfere with another. This isolation enhances security and improves stability by preventing system failures from affecting multiple users.
* Disaster Recovery & Backup – Virtualized environments make it easier to create backups and restore systems in case of failures. Cloud computing relies on virtualized snapshots to quickly recover lost data or systems.
* Multi-Tenancy & Cost Efficiency – Cloud providers use virtualization to serve multiple customers on shared infrastructure, ensuring cost-effectiveness while maintaining separate environments for each user.
End User Computing - Desktop virtualization
- Desktop virtualization is a technology that separates a user's desktop environment (including operating system, applications, and data) from the physical device they use, allowing access from any device, anywhere.
- It comes in three different formats, and the benefits it provides are as follows.
* a) Remote Desktop Services
* b) Virtual Desktop Infrastructure (VDI)
* c) Desktop as a Service (DaaS)
* Enhanced Security: Centralized storage making it easier to implement security measures and prevents data loss from device theft.
* Cost Savings: Reduces IT maintenance by sharing resources centrally, leading to better resource utilization and reduced hardware costs.
* Flexibility: Users can access their virtual desktops and applications from any device, as long as they have network access.
* Scalability: IT administrators can add/update/delete virtual desktops from a central location, simplifying administration and reducing costs.
3 Types of Desktop Virtualizations – Remote Desktop Services (RDS)
- Remote Desktop Services (RDS): formerly known as Windows Terminal Services (since the late 1990s), RDS allows users to access Windows desktops and applications remotely. It uses Microsoft Windows Server and is simpler to implement than the other two implementations.
- In its simplest form, RDS allows multiple users to share a single server instance for their desktop sessions. The Windows Terminal Server contains session-based sharing capabilities that allow multiple users to access desktops and applications simultaneously on a single instance of the Windows Terminal Server.
- TLDR: a single powerful (many CPUs + large RAM + large storage) compute host (could be a physical or virtual host) provides the resources for multiple users, each using a session on the host terminal server.
3 Types of Desktop Virtualizations – Virtual Desktop Infrastructure (VDI)
- Virtual Desktop Infrastructure (VDI): an implementation using a Type 1 Hypervisor. VDI hosts individual desktop environments on a central server, allowing users to access them remotely from various devices. This offers greater control and flexibility but requires managing the servers.
- Virtual desktops can share resources on a central server, leading to better resource utilization and reduced hardware costs. Every user has his/her own VM, independent from other users. The VMs may still be hosted on the same host as other users.
- TLDR : 1 or more much more powerful server(s) now providing VM access to end users.
3 Types of Desktop Virtualizations – Desktop as a Service (DaaS)
- Desktop as a Service (DaaS): VDI on cloud. DaaS is a cloud-based service where a third party provides and manages virtual desktops and applications, allowing users to access them remotely. This eliminates the need for on-premises infrastructure, and relies on the third-party provider.
- Users may log in to their virtual desktop from anywhere, via any device, and their desktop will look exactly the same as when they last visited from a different location. All they need is an internet connection.
- Since the data lives in a centralized, remote location, it can be constantly backed up – no need for users to manage back-ups on their own or worry about data existing on a computer at the office but not at home.
- TLDR : – DaaS is VDI on cloud.
Differences between VDI vs RDS vs DaaS
- Virtual Desktop Infrastructure (VDI)
- HOSTING METHODS Desktops and applications hosted on physical servers by an organization, in-house
- REQUIREMENTS Requires organizations to build out their own virtualization hardware and servers
- LICENSING Vendors offer different license models based on persistent or nonpersistent desktops on a per-user or per-machine basis
- RESOURCES High amount of labor and resources required for setup
- Remote Desktop Services (RDS)
- HOSTING METHODS Desktop and application sessions hosted on a shared desktop on Windows Server
- REQUIREMENTS Requires a Windows Server virtual desktop environment
- LICENSING Requires a client access license for each unique user that establishes a connection with the RDS host
- RESOURCES Medium amount of labor and resources to set up Windows Server and RDS
- Desktop as a Service (DaaS)
- HOSTING METHODS Desktops and applications hosted in the cloud by a third-party vendor
- REQUIREMENTS Requires no server or data center investment
- LICENSING With public cloud-hosted DaaS, the vendor takes care of the licensing as part of the fees. However, private cloud-hosted DaaS can be very complicated.
- RESOURCES Low amount of resources and labor required for setup
Recap : Understanding Key Concepts behind the Virtual Machine
- From week 3 – You have been using the virtual machine in Oracle Virtualbox.
- A simple example of server virtualization using Oracle Virtualbox involves running multiple virtual machines (VMs) on a single physical computer. This means you can have different operating systems, like Windows 11 and Linux, each running within its own isolated VM, all on the same physical server.
- Physical Server: Your personal computer or a dedicated server acts as the physical host.
- Oracle VirtualBox: This software acts as a hypervisor, creating and managing the virtual machines.
- Virtual Machines (VMs): Your imported VM simulates a complete computer system, including an operating system, applications, and storage.
- Resource Sharing: The hypervisor allocates physical resources (CPU, memory, storage, etc.) to the VMs, allowing them to run concurrently.
Recap : Steps in installing Virtual Machine
- Prerequisites – before a host can run virtual machines, virtualization (VT-x or AMD-V) must be enabled in the host machine's BIOS/UEFI.
- 1) Create new VM – a new VM is defined by allocating virtual CPUs, RAM, disk, and network. Resources cannot exceed the host's capacity (e.g. if the host has 4 logical processors, vCPUs cannot exceed 4).
- 2) Install Operating System – the VM can be imported (as you did in week 3) or installed via a bootable thumb drive / ISO file (e.g. the Windows OS ISO can be downloaded from the Microsoft website). The ISO file is attached to the VM as a virtual DVD drive and works as bootable media. VMs can also be cloned from existing VMs, which creates a copy while leaving the original intact (useful when you need to test many VMs).
- Post installation, install the VirtualBox Guest Additions for driver support!
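As an illustration only, the GUI steps above can also be scripted with VirtualBox's VBoxManage CLI; the VM name, resource sizes, and ISO filename below are hypothetical:

```shell
# Hypothetical sketch of VM creation with VBoxManage (requires VirtualBox).
VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register

# 1) Allocate virtual hardware -- must not exceed the host's capacity.
VBoxManage modifyvm "testvm" --cpus 2 --memory 4096 --nic1 nat

# Create a 30 GB dynamic disk (grows as the VM consumes storage) and attach it.
VBoxManage createmedium disk --filename testvm.vdi --size 30720
VBoxManage storagectl "testvm" --name "SATA" --add sata
VBoxManage storageattach "testvm" --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium testvm.vdi

# 2) Attach an OS installer ISO as a virtual DVD drive (bootable media).
VBoxManage storageattach "testvm" --storagectl "SATA" --port 1 --device 0 \
  --type dvddrive --medium ubuntu.iso

VBoxManage startvm "testvm"
```

These commands mirror what the VirtualBox GUI wizard does behind the scenes.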
Recap : Features of Virtual Machines
- Snapshots - Recall you had created snapshots after you had imported the VM? This allows you to ‘save’ the state of the VM, so you are able to restore the state of the VM to when you took the snapshot (Think photograph!)
- State of VM disks / Contents of VM memory / VM settings are saved.
- This is probably the most useful feature of VMs.
- Shared folders: Using shared folders, hosts and guest VMs can share files.
- Operational Hint – avoid snapshots of a VM in flight, or long chains of sequential snapshots – these increase the risk of data corruption!
- The above are examples of the extra layer of complexity when managing VMs compared to non-virtualized machines.
- Ability to create multiple different snapshots – which will diverge in different paths/OS timelines
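A hedged sketch of the snapshot workflow using VBoxManage (the VM and snapshot names are hypothetical; requires VirtualBox):

```shell
# Save the VM state right after a clean install.
VBoxManage snapshot "testvm" take "clean-install" --description "Right after OS install"

# ...experiment inside the VM, break things...

# Power off, then roll back to the saved state (like restoring a photograph).
VBoxManage controlvm "testvm" poweroff
VBoxManage snapshot "testvm" restore "clean-install"

# List the snapshot tree -- multiple snapshots can diverge into different timelines.
VBoxManage snapshot "testvm" list
```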
Features of Virtual Machines
- Connectivity options – to connect the VM to a network, we can use the methods below.
- Bridged Network: the host and the VM are connected to the same network, and the VM gets its own IP address on that network, appearing as a separate machine.
- NAT (Network Address Translation): VMs use an IP translated from the host’s IP (using NAT device) and communicate on a private network set up on the host computer
- Host-only Network: VMs use a private network but do not have translated IP addresses to connect to an external network, therefore can only communicate to other VMs on the isolated host network
- Bridged mode offers more network freedom, while NAT provides a layer of security and simplifies network configuration.
- More complex network setups can be simulated using vSwitches , but this is beyond the scope of this module.
- (*Note : Bridged mode won’t work in RP due to WIFI security policy but will work at your home WIFI )
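As a sketch, the three modes above map to one VBoxManage setting per network adapter (VM name and adapter names are hypothetical; pick ONE mode):

```shell
# Bridged: VM gets its own IP on the LAN (more network freedom).
VBoxManage modifyvm "testvm" --nic1 bridged --bridgeadapter1 eth0

# NAT: host translates the VM's traffic (simpler, adds a security layer).
VBoxManage modifyvm "testvm" --nic1 nat

# Host-only: VM can only reach other VMs on the isolated host network.
VBoxManage modifyvm "testvm" --nic1 hostonly --hostonlyadapter1 vboxnet0
```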
NAT Mode
- Note: in NAT mode, VMs have no direct connection to the outside world; they require the host to translate network requests for them.
Case study : Software Development
- Continuing with the software development case study: as a developer, the ever-increasing number of platforms to support, and their differing versions, were a nightmare to resolve.
- E.g. the developer's version of the database was different from the server's version, which led to unexplainable issues when the application was run.
Applications and Dependencies
- "Dependency hell" is a loose term that refers to the frustration and difficulty of managing seemingly never-ending software dependencies. This happens when software packages require specific versions of other packages to function correctly.
- In practice, often only one application is deployed per VM to ensure stability.
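A toy illustration (with hypothetical package names) of why one shared install leads to dependency hell: two apps on the same machine pin different versions of a shared library, but only one version can be installed system-wide.

```python
# Toy sketch of "dependency hell" -- package names and versions are made up.
def unsatisfied(required: dict, installed: dict) -> list:
    """Return names of requirements the installed set cannot satisfy."""
    return sorted(name for name, version in required.items()
                  if installed.get(name) != version)

host = {"libfoo": "1.2"}    # the one version installed system-wide
app_a = {"libfoo": "1.2"}   # app A's requirement is met
app_b = {"libfoo": "2.0"}   # app B conflicts -> "dependency hell"

print(unsatisfied(app_a, host))  # []
print(unsatisfied(app_b, host))  # ['libfoo']
```

Containers sidestep this by letting each app ship its own copy of `libfoo` inside its image.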
Challenges of Traditional Deployments
- Configuration conflicts – e.g. pointing to different databases in different environments, different library versions, or Windows vs Linux file path differences (C:\data vs /data).
- Environment mismatches –"It works on my machine" problem due to differing OS, libraries, or configurations.
- Apps often depend on specific versions of languages, frameworks, or services (e.g., Java 8 vs Java 11).
- Resource overhead - Virtual Machines are considered bloated, as each instance contains its own copy of the operating system and libraries and then applications. E.g. a Windows Server 2022 Operating System needs 32GB of disk space allocated just for the OS files alone.
- Maintenance Overhead - Additionally, the operating systems also require maintenance (security patches, etc) .
Key Features of Containers
- Containerization is a lightweight form of virtualization that packages applications and their dependencies into isolated units called containers.
- Lightweight, fast, consistent across environments
- Just like shipping containers, they allow portability of code since it just works, and it can be run everywhere!
- Containers share the host operating system kernel via the container runtime, unlike VMs that each run a full OS stack.
- Containers are:
- Fast to start (seconds)
- Lightweight - Use less memory and disk (MBs vs GBs)
- Easily portable across environments (Windows/Linux)
- Scales efficiently in cloud or orchestration platforms (like Kubernetes)
- Facilitates continuous integration and deployment (CI/CD)
- Containers use a container engine which puts together all the components needed to create a container image (code, runtime library, tools and settings). It is responsible for managing the container lifecycle, including building, running, and distributing containers.
- Docker™ is a popular container engine.
- The container runtime runs on the host OS and is the lower-level component responsible for actually running the container.
- Instead of an operating system, containers share the kernel of the host machine’s OS.
- Like an OS, the kernel includes dependencies and allows an application to run.
- Without the need for a complete OS, the container uses less resources.
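As a sketch of how dependencies get packaged, a minimal Dockerfile for a hypothetical Python web app (filenames and port are illustrative, not from the source):

```dockerfile
# Base image supplies the language runtime; the host kernel is shared, not copied.
FROM python:3.12-slim
WORKDIR /app

# Install declared dependencies first (cached between builds).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and declare how it runs.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The resulting image bundles code, runtime, libraries, and settings into one portable unit, which is exactly what makes "it just works everywhere" possible.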
Use Case for Containers
- Because of their design, containers simplify application deployment, enhancing scalability and portability, and facilitating microservices architecture.
- This makes them suitable for scenarios involving microservices for web applications, cloud deployments, CI/CD pipelines, and legacy application modernization.
- CI/CD (Continuous Integration / Continuous Delivery or Deployment) is a software development practice that aims to automate and streamline the process of building, testing, and releasing software.
- Microservices are a collection of independently deployable and loosely coupled services/applications that are tailored to specific business needs and deployed using container technology. (It is a software architecture design which is beyond the scope of this module)
Deploying Containers
- Containers can be deployed within VMs since they are less resource intensive. VMs and containers can exist together for maximum portability.
- Containers enable DevOps (Development and IT Operations), which aims to simplify the development process for an application.
- Standardized Environments: Containers provide a consistent environment for applications, regardless of the underlying infrastructure.
- Same container can be deployed everywhere !
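A hedged sketch of the "build once, deploy everywhere" workflow with the Docker CLI (image name, port, and registry URL are hypothetical):

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it anywhere a container engine is installed, mapping host port 8000.
docker run -d -p 8000:8000 myapp:1.0

# List running containers.
docker ps

# Push to a registry so the SAME image runs on laptop, test server, or cloud.
docker push registry.example.com/myapp:1.0
```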
Containers enable cloud native apps.
- Containers enable app modernization; the process of updating application technology to be compatible in cloud environments.
- DevOps uses containers to quickly develop and update applications.
- App modernization also includes transforming traditional apps into cloud native applications.
- Cloud native apps are built and deployed using cloud technologies from beginning to end of the process (cloud native development).
Comparing VMs and Containers
- Virtual Machine
- Resource Usage: uses multiple hardware components of the server.
- Memory Efficiency: requires loading an entire OS before starting, hence less efficient.
- Deployment Size: a single server can support multiple VMs.
- Isolation: applications on VMs are strongly isolated from each other, with minimal interference due to efficient isolation mechanisms.
- Deployment architecture: sandbox environment that isolates VMs from system issues.
- Deployment time: deployment is lengthy, as separate instances are needed for execution.
- Boot Time: faster than a physical machine, but slower than a container (minutes).
- Runs on: a hypervisor (e.g. Hyper-V / VMware ESXi).
- When to use: better for running multiple operating systems, providing strong isolation, and managing legacy applications.
- Container
- Resource Usage: lightweight / uses fewer resources than a VM.
- Memory Efficiency: no full OS to load, hence uses less memory.
- Deployment Size: a single server can support more containers than VMs.
- Isolation: containers share the host OS kernel, so isolation is weaker than with VMs; a kernel-level issue can affect all containers on the host.
- Deployment architecture: container images can be deployed more efficiently than VMs.
- Deployment time: easy to deploy, with a single containerized image usable across all platforms.
- Boot Time: faster than a physical machine or a VM (seconds).
- Runs on: a container engine (e.g. Docker / Podman / runc).
- When to use: better for building cloud-native applications, microservices, and rapid deployment cycles, where portability and efficiency are crucial.
Case Study : Containers at scale
- Spotify's total number of monthly active users is over 675 million as of the fourth quarter of 2024. To meet the demand of such a global service they use containers, but managing thousands of containers manually is impractical.
- Running containers at scale means managing hundreds or thousands of containers across multiple servers (hosts), while ensuring availability, performance, and automation. Each container serves 100s of users efficiently
- Kubernetes is used on top of containers as a Container Orchestration Platform.
Containers at scale : Features
- Kubernetes
- Orchestration: automates container deployment, scaling, and health checks.
- Service Discovery: routes traffic to healthy containers.
- Auto-scaling: adds/removes containers based on CPU, memory, or custom metrics.
- Rolling Updates: updates containers without downtime, with rollback if needed.
- Self-healing: automatically restarts or replaces failed containers.
- Multi-node Support: spreads containers across a cluster of servers.
- Security & Isolation: namespaces, role-based access control, secrets management.
- Analogy – Docker is just a (shipping) container; Kubernetes is like PSA, the port operator, managing containers at scale.
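As a sketch of how those features are expressed, a minimal Kubernetes Deployment manifest (image name, labels, and port are hypothetical): declaring `replicas: 3` tells Kubernetes to keep three healthy copies running (self-healing) and to replace them gradually on image updates (rolling updates).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # Kubernetes keeps 3 copies running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0        # hypothetical container image
          ports:
            - containerPort: 8000
```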
Summary
- By the end of this lesson, you should be able to:
- Explain the benefits of using virtualization technology using virtual machines and the different hypervisors.
- Install and configure a virtual machine on hardware meeting specifications.
- Describe containerization technology (e.g. Docker) and discuss how it differs from traditional virtualization
- Explain virtualization as a key enabler of cloud computing.