Chapter 14 - Virtual Machines


167 Terms

1
New cards

Traditionally, a PC or server hosted a single...

OS for running applications

2
New cards

Virtualization technology allows a PC or server to simultaneously run...

more than one OS or more than one session of the same OS

3
New cards

In this case, the system is said to host a number of

virtual machines

4
New cards

Virtual Machine Concept

see diagram

5
New cards

Virtualization was used during the 1970s in

IBM's mainframe systems

6
New cards

Virtualization became mainstream in the early

2000s, when it became commercially available on x86

7
New cards

One application, one server

easier to support and administer

8
New cards

As hardware improved, servers became

underutilized, and each one required power, cooling, and maintenance

9
New cards

VMs relieved the stress of

underutilized servers

10
New cards

The software for virtualization is called a

virtual machine monitor, or VMM, or hypervisor

11
New cards

It acts as a layer between the hardware and the VMs to

act as a resource broker

12
New cards

Hypervisor allows multiple VMs to safely...

coexist on a single physical host

13
New cards

Each VM has its own OS, which can be...

same or different from host OS

14
New cards

Consolidation Ratio

Number of VMs that can run on a host

15
New cards

Today there are more virtual servers than

physical servers

16
New cards

Original hypervisors provided ratios from

4:1 up to 12:1
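
As a worked illustration (the capacity figures below are hypothetical, not from the chapter), a consolidation ratio can be estimated as the minimum of the per-resource quotients between host capacity and per-VM demand:

```python
# Hypothetical host/VM sizes, purely for illustration.
def consolidation_ratio(host_cores, host_ram_gb, vm_cores, vm_ram_gb):
    # The host holds only as many VMs as its scarcest resource allows.
    return min(host_cores // vm_cores, host_ram_gb // vm_ram_gb)

print(f"{consolidation_ratio(32, 256, 2, 8)}:1")  # prints "16:1"
```

On this example host, memory would allow 32 VMs but CPU only 16, so CPU is the limiting resource.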

17
New cards

Reasons for Virtualization

•Legacy hardware: run old application on modern hardware
•Rapid deployment: physical server may take weeks, VM may take minutes
•Versatility: run many kinds of applications on one server
•Consolidation: replace many physical servers with one
•Aggregating: combine multiple resources into one virtual resource, such as storage
•Dynamics: new VM can easily be allocated, such as for load-balancing
•Ease of management: easy to deploy new VM for testing software
•Increased availability: VMs on a failed host can quickly be restarted on a new host

18
New cards

Consolidation

replace many physical servers with one

19
New cards

Legacy hardware

run old application on modern hardware

20
New cards

rapid deployment

physical server may take weeks, VM may take minutes

21
New cards

versatility

run many kinds of applications on one server

22
New cards

aggregating

combine multiple resources into one virtual resource, such as storage

23
New cards

dynamics

new VM can easily be allocated, such as for load-balancing

24
New cards

ease of management

easy to deploy new VM for testing software

25
New cards

increased availability

VMs on a failed host can quickly be restarted on a new host

26
New cards

Virtualization is a form of

abstraction

27
New cards

Just as an OS abstracts disk I/O commands from the user, virtualization abstracts...

physical hardware from VMs it supports

28
New cards

The virtual machine monitor or

hypervisor provides abstraction

29
New cards

The VM Monitor acts as a broker, or...

traffic cop, acting as a proxy for the VMs as they request resources from the host

30
New cards

A VM is configured with

some number of processors, some amount of RAM, storage resources, and network connectivity

31
New cards

It can then be powered on like a physical server...

loaded with an OS and utilized like a physical server

32
New cards

It is limited to seeing only the

resources it has been configured to see

33
New cards

One physical host may support

many VMs

34
New cards

The hypervisor facilitates I/O from the...

VM to the host and back again to the correct VM

35
New cards

Privileged instructions must be caught and handled by

the hypervisor

36
New cards

Performance loss can occur with

hypervisors

37
New cards

A VM instance is defined

in files

38
New cards

A configuration file defines the number of

virtual processors (vCPUs), amount of memory, I/O device access, and network connectivity
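
A hedged sketch of what such a configuration file can look like, loosely modeled on VMware's .vmx format (key names vary by hypervisor and version; the values here are illustrative):

```ini
; illustrative VMware-style .vmx fragment -- not a complete file
displayName = "web-server-01"
numvcpus = "2"                           ; virtual processors (vCPUs)
memsize = "4096"                         ; memory, in MB
scsi0:0.fileName = "web-server-01.vmdk"  ; virtual disk backing file
ethernet0.present = "TRUE"               ; network connectivity
```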

39
New cards

The storage the VM sees may just be files in the

physical file system

40
New cards

When the VM is booted

additional files for logging, paging, and other functions are created

41
New cards

These files may be copied to

back up the VM or migrate it to a new host

42
New cards

A new VM may also quickly be created from

a template that defines hardware and software settings for a specific case

43
New cards

execution management of VMs

scheduling, memory management, context switching, etc

44
New cards

device emulation and access control

emulating devices required by VMs, mediating access to host devices

45
New cards

execution of privileged operations

performed by the hypervisor rather than run directly on the host hardware

46
New cards

management of VMs (lifecycle management)

configuration of VMs and controlling VM states (start, pause, stop)

47
New cards

administration

hypervisor platform and software admin activities

48
New cards

Hypervisor functions

- Execution management of VMs
- Device emulation and access control
- Execution of privileged operations by hypervisor for guest VMs
- Management of VMs (also called VM lifecycle management)
- Administration of hypervisor platform and hypervisor software.

49
New cards

Type 1 Hypervisor

1. runs directly on the host hardware, much like an OS
2. directly controls host resources
ex: VMware ESXi, Microsoft Hyper-V, Xen

50
New cards

Type 2 Hypervisor

1. hypervisor runs on the host's OS
2. relies on the host OS for hardware interactions
ex: VMware Workstation, Oracle VM VirtualBox

51
New cards

Hypervisor Type 1 and 2

see diagram

52
New cards

Type 1 performs

better than Type 2

53
New cards

Type 1 is more ______ than Type 2

secure

54
New cards

Type 2 can run on

a system being used for other things, like a user's workstation

55
New cards

Paravirtualization is a software-assisted

virtualization technique

56
New cards

The OS is modified so that calls to the hardware are

replaced with calls to the hypervisor

57
New cards

This is faster with less overhead

but requires a modified OS

58
New cards

Paravirtualization support has been offered in

Linux since 2008

59
New cards

Paravirtualization diagram

see diagram

60
New cards

Both AMD and Intel processors provide support for

hypervisors

61
New cards

AMD-V and Intel VT-x provide hardware-assisted

virtualization extensions for the hypervisor to use

62
New cards

Intel processors offer extra instructions called

VMX (Virtual Machine Extensions)

63
New cards

Hypervisors can use these instructions rather than

performing these functions in software; the guest OS does not require modification in this case

64
New cards

A virtual appliance consists of applications and an OS distributed as

a virtual machine image

65
New cards

A virtual appliance is independent of hypervisor or

processor architecture

66
New cards

A virtual appliance can run on either a

type 1 or type 2 hypervisor

67
New cards

deploying a virtual appliance is easier than

installing an OS, installing the apps, configuring, and setting it up

68
New cards

besides application use, a security virtual appliance (SVA) is a

security tool that monitors and protects the other VMs

69
New cards

SVAs can monitor the state of the VM, including

registers, memory, and I/O devices as well as network traffic, through a special API of the hypervisor.

70
New cards

Another approach to virtualization is

container virtualization

71
New cards

Software running on top of the host OS

kernel provides an isolated execution environment

72
New cards

Unlike hypervisor VMs, containers do not aim

to emulate physical servers

73
New cards

Instead, all containerized applications on a host share a

common OS kernel

74
New cards

This eliminates the need for

each VM to run its own OS and greatly reduces overhead.

75
New cards

Much of container technology

was developed for Linux

76
New cards

In 2007, the Linux process API was extended to permit the

containerization of the user environment

77
New cards

Originally called process containers

the name later became control groups (cgroups)

78
New cards

normally all processes are descendants of the

init process, forming a single process hierarchy

79
New cards

control groups allow for multiple process

hierarchies in a single OS

80
New cards

the hierarchy is associated with

system resources at configuration time

81
New cards

Control groups provide

resource limiting, prioritization, accounting, control

82
New cards

resource limiting

limit how much memory is usable

83
New cards

prioritization

some groups can get a larger share of CPU or disk I/O

84
New cards

accounting

can be used for billing purposes

85
New cards

control

groups of processes can be frozen or stopped and restarted
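
These four features map onto real cgroup v2 control files (memory.max, cpu.weight, memory.current, cgroup.freeze). The helper below is an illustrative sketch of driving that sysfs interface from Python; on a real system the root path is /sys/fs/cgroup and writing there requires root on a cgroup-v2 kernel:

```python
from pathlib import Path

def limit_group(root: Path, name: str, mem_bytes: int, cpu_weight: int) -> Path:
    """Create a control group and apply limits by writing its control files."""
    cg = root / name
    cg.mkdir(parents=True, exist_ok=True)
    (cg / "memory.max").write_text(str(mem_bytes))   # resource limiting
    (cg / "cpu.weight").write_text(str(cpu_weight))  # prioritization
    return cg

# On a real system: cg = limit_group(Path("/sys/fs/cgroup"), "demo", 256 * 2**20, 50)
# Accounting: read (cg / "memory.current"); control: write "1" to (cg / "cgroup.freeze").
```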

86
New cards

For containers, only a small container

engine is needed

87
New cards

it sets up each container as an

isolated instance by requesting resources from the OS

88
New cards

each container application then

directly uses the resources of the host OS

89
New cards

Container lifecyle

setup, configuration, management

90
New cards

setup

enabling Linux kernel container features, installing the tools and utilities needed to create the container environment

91
New cards

Configuration

specify IP address, root file system, and allowed devices

92
New cards

Management

startup, shutdown, migration
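
As an illustrative sketch (LXC-style key names, which vary by LXC version), such configuration can be expressed in a plain config file:

```ini
# illustrative LXC-style container configuration
lxc.uts.name = demo                               # container hostname
lxc.rootfs.path = dir:/var/lib/lxc/demo/rootfs    # root file system
lxc.net.0.type = veth
lxc.net.0.ipv4.address = 10.0.3.10/24             # IP address
lxc.cgroup2.devices.deny = a                      # deny all devices by default
lxc.cgroup2.devices.allow = c 1:3 rwm             # allow /dev/null (char 1:3)
```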

93
New cards

In a VM environment, a process executes inside

a guest virtual machine

94
New cards

An I/O request is sent through the guest OS to an

emulated device the guest OS sees

95
New cards

The hypervisor sends it through to the host OS

which sends it to the physical device

96
New cards

By contrast, an I/O request in a container environment is routed through

kernel control group indirection to the physical device.

97
New cards

Data Flow for I/O operation via Hypervisor and Container

see diagram

98
New cards

Container Advantages

1. by sharing the OS kernel, a system may run many containers, compared to the limited number of VMs and guest OSs in a hypervisor environment
2. app performance is close to native system performance

99
New cards

Container Disadvantages

-Container applications are only portable across systems with the same OS kernel and virtualization support features.
-An app for a different OS than the host is not supported.
-May be less secure if there are vulnerabilities in the host OS.

100
New cards

Container File System

Each Container sees its own isolated file system