A storage pool is a collection of
one or more physical disks
Physical disks are disks such as
SATA or Serial Attached SCSI (SAS) disks
What are the requirements for a storage pool
One physical disk is required to create a storage pool, and a minimum of two physical disks are required to create a resilient mirror virtual disk.
A minimum of three physical disks are required to create a virtual disk with resiliency through parity.
Three-way mirroring requires at least five physical disks.
Disks must be blank and unformatted.
No volume can exist on the disks.
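The requirements above can be checked and a pool created with PowerShell. A minimal sketch, assuming a Windows Server with blank, unformatted disks attached; the pool name "Pool1" is a placeholder:

```powershell
# List disks that meet the requirements (blank, unformatted, no volumes)
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks; one physical disk is the minimum
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Windows Storage*").FriendlyName `
    -PhysicalDisks $disks
```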
Types of ways to attach a disk
SCSI
SAS
SATA
NVM Express (NVMe)
If you want failover clustering with storage pools, you can't use
SATA, USB, or SCSI disks
Storage Spaces are
virtual disks created from free space in a storage pool.
Storage spaces have attributes such as
resiliency level
storage tiers
fixed provisioning
precise administrative control
The primary advantage of Storage Spaces is that you
no longer need to manage single disks; instead, you manage them as one unit
Virtual disks are the equivalent of
a logical unit number (LUN) on a storage area network (SAN).
Storage Spaces can be managed by
Windows Storage Management application programming interface (API)
Windows Management Instrumentation (WMI)
PowerShell
File and Storage Services role in Server Manager
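Of the management options above, PowerShell exposes the Storage module cmdlets. A few illustrative queries (read-only, so safe to run on any Windows Server):

```powershell
Get-StoragePool      # existing storage pools
Get-VirtualDisk      # storage spaces (virtual disks) and their resiliency
Get-PhysicalDisk     # physical disks, including pooling eligibility
```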
Virtual disks are like a
physical disk from the perspective of users and applications, but they are more flexible
Virtual disks allow for
both thick and thin provisioning and just-in-time (JIT) allocation. They also have built-in mirroring and parity, similar to RAID.
When creating a virtual disk, you can configure
Disk Drive
Tiered Storage Spaces
Write-back Caching
Disk drives can be made available from the Windows OS by using a
drive letter
A virtual disk can use any file system, but for Data Deduplication or File Server Resource Manager (FSRM) to work, it must be
NTFS or ReFS
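A sketch of creating and formatting a volume so Data Deduplication or FSRM can be used; the pool name, volume name, size, and drive letter are placeholders:

```powershell
# Create a volume on the pool and format it with ReFS (NTFS also works)
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" `
    -FileSystem ReFS -Size 500GB -DriveLetter D
```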
The Tiered Storage Spaces feature allows you to use a
combination of faster (for example, SSD) and slower (for example, HDD) disks in a storage space
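For example, tiers for a mix of SSDs and HDDs might be defined like this (a sketch; names and sizes are placeholders, and the pool must actually contain both media types):

```powershell
# Define a fast tier (SSD) and a capacity tier (HDD) in the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "FastTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "CapacityTier" -MediaType HDD

# Create a tiered virtual disk: 100 GB of SSD plus 1 TB of HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 1TB `
    -ResiliencySettingName Mirror
```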
The purpose of write-back caching is to
optimize writing data to the disks in a storage space
Write-back caching is limited to
1 GB
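The cache size can be requested when the virtual disk is created; a sketch with placeholder names and sizes:

```powershell
# Request the maximum 1 GB write-back cache for a new virtual disk
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CachedSpace" `
    -ResiliencySettingName Mirror -Size 200GB -WriteCacheSize 1GB
```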
PMem is used as a cache to
accelerate the active working set, or as capacity to guarantee consistent low latency on the order of microseconds.
Types of storage layouts
Simple
two-way and three-way mirrors
parity
Simple storage space
has data striping but no redundancy
Requires at least two disks
Two-way and three-way mirror storage spaces
contain multiple copies of the data
require at least two disks (two-way) or five disks (three-way)
include striping
Parity storage spaces
resemble a simple space because data is striped across multiple disks. However, parity information is also written across the disks. The parity information can be used to recalculate data if a disk is lost, which enables Storage Spaces to continue to service read and write requests even when a drive has failed. The parity information always rotates across the available disks to enable input/output (I/O) optimization.
require at least three physical disks
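The three layouts map to the -ResiliencySettingName parameter of New-VirtualDisk. A sketch (placeholder names and sizes; each layout needs the minimum number of disks noted above):

```powershell
# Simple: striping, no redundancy
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SimpleSpace" `
    -ResiliencySettingName Simple -Size 100GB

# Two-way mirror (add -NumberOfDataCopies 3 for a three-way mirror)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" `
    -ResiliencySettingName Mirror -Size 100GB

# Parity: rotating parity, needs at least three physical disks
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParitySpace" `
    -ResiliencySettingName Parity -Size 100GB
```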
Virtual disks can be created from
storage pools
What things do you need to consider when creating a virtual disk
Sector size, drive allocation, and your provisioning scheme
A storage pool's sector size is set the
moment it is created
default sector sizes
If the list of drives being used contains only 512 and 512e drives, the pool sector size is set to 512e. A 512 disk uses 512-byte sectors.
A 512e drive is a hard disk with 4,096-byte sectors that emulates 512-byte sectors.
If the list contains at least one 4-kilobyte (KB) drive, the pool sector size is set to 4 KB.
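You can inspect each drive's sector sizes before pooling to predict the pool sector size (a 512e drive reports a 512-byte logical and 4,096-byte physical sector):

```powershell
# Logical vs. physical sector size reveals 512, 512e, and 4K native drives
Get-PhysicalDisk | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize
```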
requirements for a pool to support failover clustering
all drives in the pool must support Serial Attached SCSI (SAS)
Types of Drive Allocations
Automatic Allocation
Manual Allocation
Hot Spare
Automatic Allocation
This is the default allocation when you add any drive to a pool. Storage Spaces can automatically select available capacity on data-store drives for both storage-space creation and just-in-time (JIT) allocation.
Manual Allocation
Drives added with manual allocation aren't used automatically; they must be specifically selected when you create a storage space. This property makes it possible for administrators to specify that only certain Storage Spaces can use particular types of drives.
Hot Spare
Reserve drives that Storage Spaces won't use when creating a storage space. However, if a failure occurs on a drive that is hosting columns of a storage space, a hot spare replaces the failed drive.
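Allocation types are set per drive with the -Usage parameter; a sketch with placeholder disk and pool names:

```powershell
# Add a drive to the pool as a hot spare
$spare = Get-PhysicalDisk -FriendlyName "PhysicalDisk5"   # placeholder name
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks $spare -Usage HotSpare

# Or change an existing pool drive's allocation (AutoSelect, ManualSelect, HotSpare)
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare
```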
The two provisioning schemes
Thin
Fixed
Thin provisioning scheme
enables storage to be allocated readily on a just-enough and JIT basis. Storage capacity in the pool is organized into provisioning slabs that aren't allocated until datasets require the storage.
Fixed provisioning scheme
This scheme also uses flexible provisioning slabs. The difference is that it allocates the storage capacity upfront, when you create the space.
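Both schemes are selected with -ProvisioningType when creating the virtual disk (a sketch; names and sizes are placeholders):

```powershell
# Thin: 1 TB is promised, but slabs are allocated only as data arrives
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinSpace" `
    -ResiliencySettingName Mirror -Size 1TB -ProvisioningType Thin

# Fixed: the full 200 GB is allocated upfront
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "FixedSpace" `
    -ResiliencySettingName Mirror -Size 200GB -ProvisioningType Fixed
```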
Performance can be improved by using
SSDs and by using parity
Some uses for storage spaces
Implement and easily manage scalable, reliable, and inexpensive storage.
Aggregate individual drives into storage pools, which are managed as a single entity.
Use inexpensive storage with or without external storage.
Use different types of storage in the same pool (for example, SATA, SAS, USB, and SCSI).
Delegate administration by pool.
Use the existing tools for backup and restore, and use Volume Shadow Copy Service (VSS) for snapshots.
Manage either locally or remotely, by using Microsoft Management Console (MMC) or Windows PowerShell.
Utilize Storage Spaces with Failover Clusters.
Improvements to Storage Spaces Direct in Windows Server 2019
Deduplication and compression for ReFS
Native support for Persistent Memory
Nested Resiliency for two-node hyper-converged infrastructure at the edge
Two-server clusters using a USB flash drive as a witness
Windows Admin Center
Performance history
Scale up to 4 petabytes (PB) per cluster
Storage-class memory support for VMs
Manually delimit the allocation of volumes to increase fault tolerance
Drive latency outlier detection
Deduplication and Compression for Resilient File System
Allows you to store up to ten times more data on the same volume.
Native support for Persistent memory
Allows you to use persistent memory as a cache to accelerate the active working set, or as capacity to guarantee consistent, low latency on the order of microseconds.
Nested Resiliency for two-node hyper-converged infrastructure at the edge
a two-node Storage Spaces Direct cluster can provide continuously accessible storage for apps and VMs even if one server node stops working and, at the same time, a drive fails in the other server node.
Two-server clusters using a USB flash drive as a witness
a low-cost solution: a USB flash drive that you plug into your router to function as a witness in two-server clusters. If a server ceases operation and then comes back up, the cluster knows which server has the most up-to-date data.
Windows Admin Center
Create, open, expand, or delete volumes with just a few selects. Monitor performance such as input/output (I/O) and I/O operations per second (IOPS) latency from the overall cluster down to the individual solid-state drive (SSD) or hard disk drive (HDD).
Performance history
Get effortless visibility into resource utilization and performance with built-in history.
How much storage can you have per cluster
4PB per cluster
Drive latency outlier detection
Easily identify drives with abnormal latency with proactive monitoring and built-in outlier detection
Manually delimit the allocation of volumes to increase fault tolerance
This enables administrators to manually delimit the allocation of volumes in Storage Spaces Direct. Doing so can significantly increase fault tolerance under certain conditions, but it also imposes some added management considerations and complexity.
Storage-class memory support for VMs
This enables NTFS-formatted direct access volumes to be created on non-volatile dual inline memory modules (DIMMs) and exposed to Microsoft Hyper-V VMs. This enables Hyper-V VMs to take advantage of the low-latency performance benefits of storage-class memory devices.
Storage Spaces Direct components
Network, internal drives, two or more servers, software storage bus, storage pools, Storage Spaces, Cluster Shared Volumes, and Scale-Out File Server
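Once the servers are clustered, the software storage bus, pool, and related components are set up by enabling the feature on the failover cluster (a sketch; requires an existing cluster with eligible local drives):

```powershell
# Run on one node of the failover cluster; claims eligible local drives,
# creates the software storage bus, and builds the storage pool
Enable-ClusterStorageSpacesDirect
```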
Network component
Used for the hosts to communicate. Best practice is to have network adapters that are capable of Remote Direct Memory Access (RDMA), or to have two network adapters, to ensure performance and minimize latency.
Internal disks component
Each server or storage node has internal disks or a JBOD that it connects to
Maximum number of servers in a Storage Spaces Direct cluster
16
What combines the storage from each node
Software storage bus component
Cluster Shared Volumes (CSVs)
consolidate all volumes into a single namespace that's accessible through the file system on any cluster node
Scale-Out File Server
Provides access to the storage system by using SMB 3.0. It's only needed in disaggregated configurations, in which Storage Spaces Direct only provides storage. It isn't needed in hyper-converged configurations, in which Hyper-V runs on the same cluster as Storage Spaces Direct.