Windows Server 2019 Module 4 Section 3 Implementing Storage Spaces in Windows Server


55 Terms

1
New cards

A storage pool is a collection of

one or more physical disks

2
New cards

Physical disks are disks such as

SATA or Serial-Attached SCSI (SAS) disks

3
New cards

What are the requirements for a storage pool

One physical disk is required to create a storage pool, and a minimum of two physical disks are required to create a resilient mirror virtual disk.

A minimum of three physical disks are required to create a virtual disk with resiliency through parity.

Three-way mirroring requires at least five physical disks.

Disks must be blank and unformatted.

No volume can exist on the disks.
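As a sketch of these requirements in practice, the following PowerShell first lists the blank, unformatted disks that are eligible for pooling and then creates a pool from them. The pool name is illustrative; the subsystem wildcard matches the local Windows Storage subsystem.

```powershell
# List blank, unformatted disks that are eligible for pooling
Get-PhysicalDisk -CanPool $true

# Create a storage pool named "Pool1" from all poolable disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
```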

4
New cards

Types of ways to attach a disk

SCSI

SAS

SATA

NVM Express (NVMe)

5
New cards

If you want failover clustering with storage pools, you can’t use

SATA, USB, or SCSI disks

6
New cards

Storage Spaces are

virtual disks created from free space in a storage pool.

7
New cards

Storage spaces have attributes such as

resiliency level

storage tiers

fixed provisioning

precise administrative control

8
New cards

The primary advantage of Storage Spaces is that you

no longer need to manage individual disks; instead, you manage them as one unit

9
New cards

Virtual disks are the equivalent of

a logical unit number (LUN) on a storage area network (SAN).

10
New cards

Storage Spaces can be managed by

Windows Storage Management application programming interface (API)

Windows Management Instrumentation (WMI)

PowerShell

File and Storage Services role in Server Manager
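For example, a few of the PowerShell cmdlets from the built-in storage module can inventory everything in one place (output depends on the server):

```powershell
Get-StorageSubSystem   # show the storage subsystem exposed via the Storage Management API
Get-StoragePool        # list storage pools
Get-PhysicalDisk       # list physical disks and their health status
Get-VirtualDisk        # list virtual disks (Storage Spaces)
```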

11
New cards

Virtual disks are like a

physical disk from the perspective of users and applications, but are more flexible

12
New cards

Virtual disks allow for

both thick and thin provisioning and just-in-time (JIT) allocation. They also have built-in mirroring and parity, similar to RAID.

13
New cards

When creating a virtual disk, you configure

Disk Drive

Tiered Storage Spaces

Write-back Caching

14
New cards

Disk drives can be made available from the Windows OS by using a

drive letter

15
New cards

A virtual disk can use any format for storage, but for Data Deduplication or File Server Resource Manager (FSRM) to work, it needs to be

NTFS or ReFS

16
New cards

The Tiered Storage Spaces feature allows you to use a

combination of disks in a storage space

17
New cards

The purpose of write-back caching is to

optimize writing data to the disks in a storage space

18
New cards

Write-back caching is limited to

1GB

19
New cards

PMem is used as a cache to

accelerate the active working set, or as capacity to guarantee consistent low latency on the order of microseconds.

20
New cards

Types of storage layouts

Simple

Two-way and three-way mirrors

parity

21
New cards

Simple storage space

has data striping but no redundancy

Requires at least two disks

22
New cards

Two-way and three-way mirror storage spaces

contain multiple copies of the data

Require at least two disks (two-way) or five disks (three-way)

Use striping

23
New cards

Parity storage spaces

resembles a simple space because data writes across multiple disks. However, parity information also writes across the disks when using parity storage. Parity information can be used to calculate data if a disk is lost. Parity enables Storage Spaces to continue to perform read-and-write requests even when a drive has failed. The parity information always rotates across available disks to enable input/output (I/O) optimization.

Require at least three physical disks
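The three layouts map to the -ResiliencySettingName parameter of the New-VirtualDisk cmdlet. A sketch, assuming a pool named "Pool1" with enough disks for each layout (friendly names and sizes are illustrative):

```powershell
# Simple: striping across disks, no redundancy
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "FastScratch" `
    -ResiliencySettingName Simple -Size 200GB

# Mirror: two data copies by default; pass -NumberOfDataCopies 3 for a three-way mirror
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorData" `
    -ResiliencySettingName Mirror -Size 200GB

# Parity: data plus rotating parity written across at least three disks
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParityArchive" `
    -ResiliencySettingName Parity -Size 200GB
```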

24
New cards

Virtual disks can be created from

storage pools

25
New cards

What do you need to consider when creating a virtual disk

Sector size, drive allocation, and your provisioning scheme

26
New cards

A storage pool’s sector size is set the

moment it is created

27
New cards

default sector sizes

If the list of drives being used contains only 512 and 512e drives, the pool sector size is set to 512e. A 512 disk uses 512-byte sectors.

A 512e drive is a hard disk with 4,096-byte sectors that emulates 512-byte sectors.

If the list contains at least one 4-kilobyte (KB) drive, the pool sector size is set to 4 KB.

28
New cards

requirements for a pool to support failover clustering

all drives in the pool must support Serial Attached SCSI (SAS)

29
New cards

Types of Drive Allocations

Automatic Allocation

Manual Allocation

Hot Spare

30
New cards

Automatic Allocation

This is the default allocation when you add any drive to a pool. Storage Spaces can automatically select available capacity on data-store drives for both storage-space creation and just-in-time (JIT) allocation.

31
New cards

Manual Allocation

A storage pool won’t use this type of allocation by default; it must be selected. This property makes it possible for administrators to specify that only certain Storage Spaces can use particular types of drives.

32
New cards

Hot Spare

Reserve drives that the storage space won’t use when creating a storage space. However, if a failure occurs on a drive that is hosting columns of a storage space, a hot spare replaces the failed drive.
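A hot spare is configured by setting a pooled disk's Usage property. A sketch, assuming a pool named "Pool1" and a spare disk whose friendly name is "PhysicalDisk5" (both illustrative):

```powershell
# Add the disk to the pool, then reserve it as a hot spare
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk5")
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare
```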

33
New cards

The two provisioning schemes

Thin

Fixed

34
New cards

Thin Provisioning Schemes

enables storage to be allocated readily on a just-enough and JIT basis. Storage capacity in the pool is organized into provisioning slabs that aren't allocated until datasets require the storage.

35
New cards

Fixed Provisioning Scheme

This Scheme also uses flexible provisioning slabs. The difference is that it allocates the storage capacity upfront when you create the space.
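Both schemes are selected with the -ProvisioningType parameter when creating the virtual disk. A sketch with illustrative names and sizes:

```powershell
# Thin: capacity is allocated just-in-time as datasets require it
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinDisk" `
    -ResiliencySettingName Mirror -Size 1TB -ProvisioningType Thin

# Fixed: the full capacity is allocated upfront when the space is created
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "FixedDisk" `
    -ResiliencySettingName Mirror -Size 200GB -ProvisioningType Fixed
```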

36
New cards

Performance can be improved by using

SSDs and by using parity

37
New cards

Some uses for storage spaces

Implement and easily manage scalable, reliable, and inexpensive storage.

Aggregate individual drives into storage pools, which are managed as a single entity.

Use inexpensive storage with or without external storage.

Use different types of storage in the same pool (for example, SATA, SAS, USB, and SCSI).

Delegate administration by pool.

Use the existing tools for backup and restore, and use Volume Shadow Copy Service (VSS) for snapshots.

Manage either locally or remotely, by using Microsoft Management Console (MMC) or Windows PowerShell.

Utilize Storage Spaces with Failover Clusters.

38
New cards

Improvements to Storage Spaces Direct in Windows Server 2019

Deduplication and compression for ReFS

Native support for Persistent Memory

Nested Resiliency for two-node hyper-converged infrastructure at the edge

Two-server clusters using a USB flash drive as a witness

Windows Admin Center

Performance history

Scale up to 4 petabytes (PB) per cluster

Storage-class memory support for VMs

Manually delimit the allocation of volumes to increase fault tolerance

Drive latency outlier detection

39
New cards

Deduplication and Compression for Resilient File System

Allows you to store up to ten times more data on the same volume.

40
New cards

Native support for Persistent memory

allow you to use memory as cache to accelerate the active working set, or as capacity to guarantee consistent, low latency on the order of microseconds.

41
New cards

Nested Resiliency for two-node hyper-converged infrastructure at the edge

a two-node Storage Spaces Direct cluster can provide continuously accessible storage for apps and VMs even if one server node stops working and a drive fails in the other server node.

42
New cards

Two-server clusters using a USB flash drive as a witness

a low-cost solution that you plug into your router to function as a witness in two-server clusters. If a server ceases operation and then comes back up, the USB drive cluster knows which server has the most up-to-date data.

43
New cards

Windows Admin Center

Create, open, expand, or delete volumes with just a few selects. Monitor performance such as input/output (I/O) and I/O operations per second (IOPS) latency from the overall cluster down to the individual solid-state drive (SSD) or hard disk drive (HDD).

44
New cards

Performance history

Get effortless visibility into resource utilization and performance with built-in history.

45
New cards

How much storage can you have per cluster

4PB per cluster

46
New cards

Drive latency outlier detection

Easily identify drives with abnormal latency with proactive monitoring and built-in outlier detection

47
New cards

Manually delimit the allocation of volumes to increase fault tolerance

This enables administrators to manually delimit the allocation of volumes in Storage Spaces Direct. Doing so can significantly increase fault tolerance under certain conditions, but it also imposes some added management considerations and complexity.

48
New cards

Storage-class memory support for VMs

This enables NTFS-formatted direct access volumes to be created on non-volatile dual in-line memory modules (DIMMs) and exposed to Microsoft Hyper-V VMs. This enables Hyper-V VMs to take advantage of the low-latency performance benefits of storage-class memory devices.

49
New cards

Storage Spaces Direct components

Network, internal drives, two servers, software storage bus, storage pools, Storage Spaces, Cluster Shared Volumes, and Scale-Out File Server
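On a cluster built from these components, Storage Spaces Direct is enabled with a single cmdlet, after which a pool spanning the nodes' internal disks is created automatically. A sketch, assuming an existing failover cluster (volume name and size are illustrative):

```powershell
# Enable Storage Spaces Direct on the existing failover cluster
Enable-ClusterStorageSpacesDirect

# Create a resilient ReFS volume on the auto-created S2D pool,
# surfaced as a Cluster Shared Volume
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 1TB
```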

50
New cards

Network component

Used for the hosts to communicate. Best practice is to have network adapters that are capable of Remote Direct Memory Access (RDMA), or to have two network adapters, to ensure performance and minimize latency.

51
New cards

Internal disks component

Each server or storage node has internal disks or a JBOD that it connects to

52
New cards

Maximum number of servers a Storage Spaces Direct cluster can contain

16

53
New cards

what combines the storage for each node

Software storage bus component

54
New cards

Cluster Shared Volumes (CSVs)

consolidate all volumes into a single namespace that's accessible through the file system on any cluster node

55
New cards

Scale-Out File Server

Provides access to the storage system by using SMB 3.0. It's only needed in disaggregated configurations, in which the Storage Spaces Direct feature only provides storage; it isn't implemented in hyper-converged configurations, in which Hyper-V runs on the same cluster as the Storage Spaces Direct feature.