Cloud and Data Center

Cloud and the Datacenter.

It is time for us to put our heads in the clouds.

Well, not really.

What I mean to say is that in this section of the course,

we're going to discover cloud computing,

virtualization and the datacenter.

We're going to be talking about virtual network devices,

those things that make it possible for us to move

into the cloud in the first place,

as well as the basics of cloud computing,

some key cloud computing concepts

and the fundamentals of virtualization.

Then we're going to dive into how a data center's architecture

is going to be set up because more and more

network technicians are finding themselves working

at large datacenters these days,

instead of small businesses.

And so we have to be able to be ready to support

all these large cloud services within these datacenters.

In this section of the course,

we're going to be covering things from domain one,

networking fundamentals and domain two,

network implementations.

This includes objectives 1.2, 1.7, 1.8 and 2.1.

Objective 1.2 requires that you explain the characteristics

of network topologies and network types.

Objective 1.7 is focused on our ability to explain

basic corporate and datacenter network architectures.

Objective 1.8 wants you to be able to summarize

cloud concepts and connectivity options,

and objective 2.1 wants you to compare and contrast

various devices, their features

and their appropriate placement on the network.

So let's get started talking all about the cloud

and the datacenter.

Virtual network devices

represent a major shift in the way

data centers are being designed,

fielded, and operated.

We have started virtualizing everything out there.

We started with virtual servers

and now, we're in the virtual routers

and switches and firewalls.

We even have virtual desktops

where you don't have to have

a physical computer anymore

and instead, you operate everything

through a web browser.

We also have VoIP,

which is virtualizing voice and telephones.

And we have software-defined networking

and cloud computing.

Now, we're going to talk about all of these things

in this section of the course.

Let's talk about virtualization first.

Virtual servers have the ability

to allow multiple virtual instances

to exist within a single physical server.

Now, if I have one hardware server,

as shown here on my screen,

I can end up having six or seven

or eight virtual servers

residing inside of it.

In this example, I have a file server,

a network server, a mail server,

a database server, a web server.

You're going to start to get the idea here.

Now, all of those servers

can be running the same or different

operating systems, as well.

They might be running Windows or Mac or Linux,

all of those simultaneously

on this one piece of hardware.

This gives us a lot of cost savings

for an IT budget

because you only have to buy

one physical computer or server,

and this might cost me $10,000 or $20,000,

instead of having five or 10 of these

$10,000 or $20,000 servers.

This is going to allow us to consolidate

all of our physical servers,

saving us power, space, and cooling costs as well

inside of our data centers.
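Using the ballpark figures above, the consolidation savings are easy to sketch. This is illustrative arithmetic only; the dollar amounts are the rough numbers from this lesson, not real pricing:

```python
# Illustrative consolidation math using the ballpark figures above.
server_cost = 15_000          # midpoint of the $10,000-$20,000 range
virtual_servers_needed = 8    # e.g. file, mail, database, web servers...

without_virtualization = virtual_servers_needed * server_cost
with_virtualization = 1 * server_cost   # one physical host runs all eight

print(without_virtualization)  # 120000
print(with_virtualization)     # 15000
print(without_virtualization - with_virtualization)  # 105000 saved on hardware
```

And that is before counting the power, space, and cooling savings mentioned above.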

Now, the physical hardware can actually use

multiple network interface cards

and bind them all together

using something known as link aggregation.

And this can be used with other techniques as well

to increase our available bandwidth

because if I have one server

that's running six or seven

or eight virtual servers,

one of my biggest limitations

is going to be network connectivity.
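As a rough illustration of why bonding NICs helps, you can think of link aggregation like this (a sketch with illustrative arithmetic, not any vendor's actual tooling):

```python
# Illustrative sketch: link aggregation (NIC bonding/teaming) combines
# several physical NICs into one logical link, so the aggregate bandwidth
# available to the host is roughly the sum of the member links.

def aggregate_bandwidth_gbps(member_links_gbps):
    """Total theoretical bandwidth of a link aggregation group (LAG)."""
    return sum(member_links_gbps)

# One server hosting eight VMs over a single 1 Gbps NIC leaves each VM
# ~0.125 Gbps under full contention; bonding four 1 Gbps NICs quadruples that.
single_nic = aggregate_bandwidth_gbps([1])
four_nic_lag = aggregate_bandwidth_gbps([1, 1, 1, 1])

print(single_nic / 8)    # per-VM share with one NIC: 0.125 Gbps
print(four_nic_lag / 8)  # per-VM share with a 4-link LAG: 0.5 Gbps
```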

The machine and the operating system

that we're going to use

on this physical piece of hardware

is what's known as our hypervisor.

This is a specialized software

that enables virtualization to occur

on the physical machine.

Now, the hypervisor is going to be

the software that emulates our physical hardware

for each of those six virtual machines inside of it.

They are each going to think

that they have their own physical hardware.

But that physical hardware that they're seeing

is actually just software

that the hypervisor is presenting to them.

Now, this is what's known

as a virtual machine monitor or a VMM.

Now, what are some examples

of popular hypervisors or VMMs?

Well, one of the most popular ones

is VMware's ESXi,

which is a great freeware program

that anyone can use.

There is also Microsoft's Hyper-V.

There is VirtualBox, which again

is another free open source product.

And of course, VMware Workstation

if we're doing it in a desktop environment.

Now, there are two different types

of hypervisors out there

called type one and type two.

Type one hypervisors

are going to be where the hypervisor

sits directly on top of the hardware,

and the operating systems you're hosting

sit on top of that hypervisor.

Now, when you're dealing with type two,

this is where you have a hosted environment.

For example, I have a Macintosh desktop

and I run VirtualBox on it.

So, on my Mac hardware,

I'm running a Mac operating system.

And then on top of the Mac operating system,

I'm running a hypervisor

known as VirtualBox,

and inside that hypervisor,

I'm running Windows.

Now, this is going to take up

a lot more processing power

but it works really well

when dealing with a desktop environment.

Now, for servers, on the other hand,

it's much better to go and use a type one

because I'm removing that extra layer

of operating system,

in my case, that Macintosh operating system,

and using what's known

as a bare-metal hypervisor

that just runs the hypervisor

as the operating system.

And then, I can run the other operating systems

that I want to host,

like Windows Server, Mac, or Linux,

inside the hypervisor.

By removing that extra layer of software

inside of the type two by moving to a type one,

I am going to get better performance.
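A simple way to picture the difference is as two software stacks; the hypervisor and OS names here are just the examples from this lesson:

```python
# The two hypervisor stacks from this lesson, bottom layer first.
# Type 1 (bare-metal): the hypervisor IS the operating system.
type_1 = ["physical hardware", "hypervisor (e.g. ESXi)",
          "guest OS (Windows/Linux)"]

# Type 2 (hosted): a full OS sits between the hardware and the hypervisor.
type_2 = ["physical hardware", "host OS (e.g. macOS)",
          "hypervisor (e.g. VirtualBox)", "guest OS (e.g. Windows)"]

# The extra layer in type 2 is exactly where the performance cost comes from.
print(len(type_2) - len(type_1))  # 1 additional layer of software
```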

Now, next thing we need to talk about

is virtual storage solutions.

With all these virtual servers,

I have to have a place to store all of their data,

and that can be really hard

if you only have one physical server.

So, we might look at things

like a network attached storage device

or a storage area network.

Network attached storage, or a NAS,

is disk storage that's delivered as a service

over your TCP/IP network.

For example, here in my offices,

we have two NAS servers.

These each have a bunch of hard drives,

sit on our network, and essentially allow

anyone in the office to access them

and use them for file storage

over our TCP/IP network.

Now, the second way of doing things

is using a SAN, or a storage area network,

and this is becoming very common

if you're using virtualized servers,

like a server farm at Amazon Web Services

or Google Cloud or Microsoft Azure

or something like that.

When you have a storage area network,

this is a very specialized type

of local area network

that's designed for data transfer and storage.

They use fiber optic cables because of their high speed

and they transfer data at a block level

with a very specialized protocol

instead of relying on a TCP/IP protocol

like you use with a NAS.

This allows it to be much, much faster

than a standard NAS, and that's why

storage area networks are used

when you're doing a lot of heavy server activities

and you need a big area of fast storage.

Now, there is a third way

of doing storage over a network,

and this is known as Fibre Channel or FC.

Now, Fibre Channel also comes in the form

of Fibre Channel over Ethernet, or FCoE,

and there's a related technology known as iSCSI.

Now, Fibre Channel is special-purpose hardware

that provides 1 to 16 gigabits per second

of storage area network speed.

So, it's basically a specialized type

of storage area network.

Fibre Channel over Ethernet

removes the need for specialized hardware

and instead runs your Fibre Channel traffic

over a regular Ethernet network.

This allows you to converge your storage traffic

onto a standard Cat5 or Cat6 Ethernet network,

which helps you reduce the cost

and makes the solution easier

to implement in your network.

Then, we have this thing known as iSCSI,

and iSCSI is the Internet Small Computer System Interface.

It comes at a very low cost

because it's built using Ethernet switches,

but that also gives it a limitation

of less than 10 gigabits per second

because again, our fastest copper Ethernet,

Cat6a or Cat7, is going to be limited

to 10 gigabits per second.

So, when you're dealing with iSCSI,

it is slower but it is going to save you

a bunch of money because you don't have

to deal with the Fibre Channel stuff

and the dedicated hardware.

This relies on a configuration

that allows jumbo frames

to go over your network

but again, it's slower,

and it's not necessarily good

for large network solutions

that need fast high-quality storage.

Now, I just mentioned jumbo frames.

I think this is probably the first time

we've talked about that.

Now, by default, when you send a frame

over your Ethernet network,

there is a maximum transmission unit size, or MTU,

and the default is 1500 bytes.

Now, with a jumbo frame, you're sending frames

that are bigger than that

1500-byte default MTU.

That's all a jumbo frame means.

But you have to configure a network

to support that, because otherwise, by default,

it's going to drop those jumbo frames.
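To see why jumbo frames are worth configuring, here's a quick back-of-the-envelope calculation. This is a sketch; the per-frame overhead figures are the standard Ethernet constants (header, FCS, preamble, and inter-frame gap):

```python
# Rough efficiency calculation for standard vs. jumbo Ethernet frames.
# Each frame carries fixed overhead on the wire regardless of payload size:
#   14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap = 38 bytes.
WIRE_OVERHEAD = 38

def payload_efficiency(mtu_bytes):
    """Fraction of on-the-wire bytes that are actual payload."""
    return mtu_bytes / (mtu_bytes + WIRE_OVERHEAD)

print(round(payload_efficiency(1500), 4))  # default MTU:  ~0.9753
print(round(payload_efficiency(9000), 4))  # jumbo frames: ~0.9958
```

Fewer, larger frames also means less per-frame processing work, which is why storage protocols like iSCSI benefit from them.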

Now, one of the new types of storage

that's covered in this version

of the Network+ is known as InfiniBand,

and this is a newer virtualized storage technology.

It is basically a switched fabric topology

for high-performance computing.

The throughput that we're talking about here

is greater than 600 gigabits per second

with very, very low latency

of half a microsecond.

It is extremely fast.

It is extremely dedicated.

And it is extremely expensive.

Unless you're working at something

like a cloud computing center

or a really high-speed data center,

you're probably not going to be dealing

with InfiniBand any time soon.

Now, this is a direct or switched connection

that goes between the servers

and the storage systems

with these specialized plugs

that you can see here on the screen.

Now, again, where are you going to find these?

High-performance computing centers.

If you're running a very large

high-capacity processing,

high-capacity storage solution,

you may want to look into InfiniBand.

But for most of us who are working

at small offices, home offices,

medium-sized businesses,

or even Fortune 500 companies,

you're likely never going

to run into an InfiniBand solution.

This is an enterprise class solution

for really big data centers

that focus on big data.

Next, we're going to talk about virtual firewalls and routers.

So, going back to our first image,

I have this virtual server

with six or seven machines on it.

Now, how am I going to connect

all these virtual servers together?

Because I don't have a router or a switch

and enough plugs to plug everything

into physical devices.

Well, I can do it by using physical devices

if I create enough network cards,

or a better way is to virtualize those, too.

And to do that, we can fully virtualize our network

and therefore, we're going to need

virtual switches, virtual routers,

and virtual firewalls.

Now, different manufacturers are offering

virtualized versions of their most popular devices.

So, if you're a Cisco fan

and you like their PIX firewall,

you can go ahead and buy a virtual PIX firewall.

If you like Cisco routers,

you can buy a virtual one

and they'll send you the software to install

into your virtual environment.

Virtualized routers and firewalls

are going to provide you

with all of the same features

as their physical counterpart,

without all that pesky wiring.

Now, how do we designate a virtual firewall

or a router on the diagram?

We're going to use the exact same symbol

we would for a firewall or a router

and instead, we're going to put

these little dashes around it.

So, you can see here

that it is a virtual router

instead of a physical router

because of these dashed lines

instead of solid lines.

So, in addition to these virtual routers

and virtual firewalls,

we also have virtual switches

because again, we have to overcome that problem

of all of these virtual servers

being on one broadcast domain.

If we have virtual switches,

we can actually use layer two VLANs and trunking,

like we learned about in the last section of the course,

through these virtual switches.

This also provides us with quality of service

and security, and as you can see here,

inside that blue box,

that is going to be my virtual server.

In there, I have three virtual servers

on one physical piece of hardware,

and I have a virtual switch

with three different VLANs.

Then that's connecting over a single network interface

to a real switch in my real data center,

and this is going to connect to a real router,

and then connect out to the real Internet.
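Under the hood, that trunk link between the virtual switch and the real switch carries 802.1Q VLAN tags. As a sketch, here's how one of those tags is packed into a frame; the field layout follows the IEEE 802.1Q standard, and the VLAN IDs are made up for illustration:

```python
import struct

# Sketch of an IEEE 802.1Q VLAN tag: the 4 bytes a switch (virtual or
# physical) inserts into an Ethernet frame on a trunk link.
TPID = 0x8100  # Tag Protocol Identifier marking a VLAN-tagged frame

def vlan_tag(vlan_id, priority=0, dei=0):
    """Pack a 4-byte 802.1Q tag: 16-bit TPID + 3-bit PCP, 1-bit DEI, 12-bit VID."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 0-4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

# Three VLANs on the virtual switch, like the example on screen:
for vid in (10, 20, 30):
    print(vlan_tag(vid).hex())  # e.g. VLAN 10 -> "8100000a"
```

The 3-bit priority field is also where the quality of service mentioned above comes from.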

Next, we want to talk about virtual desktops,

and this is where you can run a desktop

inside a web browser.

You can use it from the web, a laptop,

a tablet, a smartphone,

and it's really great for people on the go.

The thing I really love about virtual desktops,

which we refer to as VDI,

or virtual desktop infrastructure,

is that they're going to be really easy to secure

and really easy to upgrade because essentially,

you get a brand-new desktop

every time you log in.

Because all that you're doing

is creating a virtual file on a server.

You're not having to actually build a new computer.

So, the machine on your desk now

just becomes a dumb device built to access

this VDI environment.

All of the important stuff

is actually kept in my server room.

And you can see here on the screen

that I'm a user on the go with my tablet,

and I'm sitting at Starbucks,

and down here, I can then reach

through the Internet to the router,

to the switch, reach into my desktop,

back up to my server farm,

and get the data.

And just as I was sitting at home

or in the office, or at Starbucks,

it all looks the same.

It doesn't matter where I'm sitting.

I can do all the same features.

Virtual desktops are really starting

to take over and they're really useful.

Now, what are virtual desktops bad for?

Well, if you have high-performance computing requirements

like video editing, gaming, or desktop publishing,

or any of those things that require

a lot of graphic and computing power behind it,

virtual desktops are probably not for you.

But for the average user

who is surfing the Internet,

doing PowerPoint and Word and Excel,

virtual desktops are phenomenal,

and they are really, really a great way

to increase the security

and lower your total cost of ownership over time.

Now, that is the textbook answer that I just gave you,

that virtual desktops only work well

for low performance requirements.

But I will tell you here in 2020,

that is not true anymore.

In fact, there are actual

gaming virtual desktops that you can get into.

There are editing and video editing ones, as well.

For instance, in my company,

we actually have a subscription to shadow.tech,

which is called Shadow PC.

You can log in through the cloud

to this virtual desktop on their servers

and it provides you full gaming PCs,

and we actually use them for our video editing

because they're really powerful.

And so, for 25 or 30 bucks a month,

we get access to a really powerful computer

in the cloud and no matter

where my video editors are in the world,

they can log in, access all the information,

and do what they need.

But for the exam, when you hear about VDI,

you want to think about low power desktops

for your standard Word and PowerPoint and Internet,

not something like a high-end gaming machine.

I just wanted to go ahead and clarify that now

because sometimes, I get students who say,

but what about Shadow PCs, that's a VDI?

And yeah, it is, and it's a great VDI

and it works really, really well.

But for an enterprise,

generally you're going to use VDI

for low performance requirements.

All right, let's go back

to your regularly scheduled lesson.

And lastly, here we want to talk

about software-defined networking or SDN.

This is going to provide the administrator

with an easy-to-use front end

to configure physical and virtual devices

throughout your network.

All of your configurations

can be automatically done

by using software-defined networking.

It is really great and as an administrator,

you get an overview of your entire network.

In the old days, before we had SDN,

if I had three switches like I do all here on the left,

and I wanted to configure them,

I would need to go to each

and every one of those switches,

either locally or by remoting

in using Secure Shell,

and make my changes.

Now, with software-defined networking,

I just go to my software-defined networking controller,

I make the change once there,

and it pushes that configuration change

across the entire network

and all of my devices.

It'll reconfigure the access list.

It'll reconfigure the MAC filtering.

And all those different things I need

as part of software-defined networking.

It changes the routers and their routing tables

and all of that stuff,

all from one centralized console.
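Conceptually, that centralized push looks something like this. This is a hypothetical sketch of the idea, not a real controller's API; the class, device names, and ACL rule are all invented for illustration (real controllers use southbound protocols like OpenFlow or NETCONF):

```python
# Hypothetical sketch of the SDN idea: define a policy once at the
# controller and push it to every managed device, instead of logging
# into each switch one by one. All names here are invented.

class SdnController:
    def __init__(self, devices):
        self.devices = devices                    # switches/routers under management
        self.configs = {d: [] for d in devices}   # per-device config state

    def push_acl(self, rule):
        """Apply one ACL rule to every device from the central console."""
        for device in self.devices:
            self.configs[device].append(rule)
        return len(self.devices)                  # how many devices were updated

controller = SdnController(["switch1", "switch2", "switch3"])
updated = controller.push_acl("deny tcp any any eq 23")  # block Telnet everywhere
print(updated)  # 3 -- one change, pushed network-wide
```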

It is really great and it is a great way

to virtualize your network

when you're using virtual switches.

And doing this can really consolidate things

with your real-world equipment as well

because SDN doesn't only have to work virtually,

it can configure those physical devices too.

And so, it is something

you really need to take a look at

as you get up into the networking world.

VoIP or Voice Over IP.

Now, Voice Over IP is a system

that digitizes your voice traffic

so it can be treated like any other data on the network.

You can do this by connecting it

to what's known as an ATA device,

which essentially is going to convert your analog voice

to something digital that you can use

and push over the network.

So if you've ever used Vonage

and you had that physical device

that you plugged your phone into

and then connected that to your network,

that is an ATA device.

Now, you can use a fully digital environment as well.

Something like Skype or Cisco phone

that's connected over RJ45 or Cat5 cables

and they use Power over Ethernet to go back

to a call manager to make your phone calls.

Either way, we're going to talk about both

of these architectures a little bit more in this lesson.

Now, when you're dealing with VoIP,

it uses a protocol known as SIP,

the Session Initiation Protocol.

This is used to set up, maintain,

and tear down each one of those calls

and it gives you that dial tone and maintains the connection.

Now, VoIP is really popular because it can save your company

a lot of money and provide enhanced services

over a traditional phone system.

When you talk about a traditional phone system,

you'll usually hear this referred to as a PBX,

which is a Private Branch Exchange,

or you may even hear it called POTS,

which is the Plain Old Telephone Service.

Now, the reason you get caller ID automatically,

can change display names,

and can even do video over it,

it's all because you're using VoIP,

which is a digital service.

There are all sorts of things that you can do

with a great VoIP solution that you just can't do

on a private branch exchange or PBX.

So, how does VoIP actually work?

Well, you can run it inside a desktop computer

in your browser, like using Google Voice,

or you can use it with a handset,

or you can see an IP phone that is dedicated

like you see here on the screen.

These are all different ways that you can use VoIP.

Now, if I'm trying to talk to you

from a digital service like Google Voice or Skype,

and I want to talk to your regular analog phone.

Well, we're going to have to make some kind

of transition between my digital world

and your analog phone, right?

So, how can we do that?

Well, when I pick up the phone, an IP phone,

it sends a Session Initiation Protocol request

down to what's called a Call Agent.

That call agent then sets up the call through a router

and it goes up and connects to another router or a gateway.

That router or gateway is going to be tied to a PBX,

a private branch exchange

and this is going to initiate the call and create

the Session Initiation Protocol session that we're using.

And once that protocol is connected,

we can then use RTP, the Real-time Transport Protocol,

to actually pass the voice traffic during that session.

Now, when we go from the IP phone, through the router,

to the other router, and then to the PBX to make the call,

that then goes into the analog system

of the plain old telephone system.

Now, if all of that is really confusing,

don't worry too much about it because for the exam,

you don't need to know how this topology works,

except to know that VoIP is used to set up,

maintain and terminate the call using SIP,

the session initiation protocol.

And once that call is set up and being maintained,

the actual voice traffic goes through

the Real-time Transport Protocol, or RTP.
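To make SIP a little more concrete, here's what a minimal SIP INVITE, the message that sets up the call, looks like. The addresses and Call-ID are made-up examples, but the request line and header fields follow RFC 3261:

```python
# Sketch of a minimal SIP INVITE request -- the "set up the call" step.
# The URIs and Call-ID below are invented examples; the header fields
# themselves (Via, From, To, Call-ID, CSeq) come from RFC 3261.
def build_invite(caller, callee, call_id):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP clientpc.example.com",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

invite = build_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
print(invite.splitlines()[0])  # INVITE sip:bob@example.org SIP/2.0

# Once the call is established, the voice itself flows over RTP,
# and SIP later sends a BYE to tear the session down.
```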

Now, next we have those PBXs, and we can also create

a virtual private branch exchange.

Now, how do these relate to VoIP?

Well, this allows you to have the ability

to outsource your telephone system,

because everything now is just ones and zeros.

There's no reason that I can't just route this over to India

and have them handle my PBX for me

or some other cloud provider.

Now, this is going to use VoIP to send

all of your data to the provider,

and they're going to connect it to whatever telephone system

you need to use, whether that's US-based or overseas.

The benefit of this is that you can have this virtual PBX

that provides your voicemail, your caller ID,

your messaging systems, and everything else for you.

All you need to do is get and pay

for that standard dial tone and then you're in business

and you can start making phone calls all day long

at a really inexpensive rate.

Cloud Computing.

The last piece of virtualization technology

that we want to cover is cloud computing.

Now that's because most cloud computing

is just a mixture of a bunch of virtualization technologies.

So in this lesson, we're going to talk about how that works.

When you're dealing with cloud computing,

there really are four major ways that it's going to be done.

There's a private cloud, a public cloud,

a hybrid cloud, and a community cloud.

Now, when you're dealing with a private cloud,

this is where your systems and your users

only have access to other devices

within the same private cloud or system.

And this is going to add to the security of that cloud.

When you're dealing with a public cloud,

the systems and the users can interact with devices

on public networks,

such as the internet and other cloud providers.

Now, when you deal with the hybrid cloud,

this is going to be a combination of both public

and private clouds.

Now, what would this look like?

Well, let's take the example of a private cloud.

Something like the GovCloud,

which is made by the US government.

This is hosted on Amazon and AWS servers,

as well as Microsoft servers,

but only government agencies can touch it.

And only government data is going to be stored within it.

This is considered a private cloud.

Now on the other hand, there's a public cloud

and this might be something like Google Drive,

because anybody can sign up for Google Drive and use it.

My data is in Google Drive,

your data's on Google Drive.

And we can even commingle our data inside of Google Drive.

This is what makes it a public cloud.

Now in a hybrid solution,

we can mix a little bit of both.

For example, we might have parts of it

that are going to be private

and parts of it that are going to be public.

Maybe our accounting data is very special

and we want to make sure it's in a private cloud,

but our human resource data might be in a public cloud.

And we can mix those two things together

by connecting them.

It really depends on how you want to implement this

inside your organization.

But when you're dealing with a hybrid cloud,

you have the ability to have both parts of public

and parts of private working together.

For example, in the US government,

they have a private government cloud

that we already talked about called the GovCloud,

but they've also used some public cloud services

like Microsoft Teams as part of their CVR environment,

which was used during their COVID-19 pandemic response

back in 2020.

Now, CVR is just an acronym

that stood for commercial virtual remote environment.

And basically it was just a rebranded version

of Microsoft Office 365 and Microsoft Teams

that was issued to government and military employees to use

during the pandemic.

The fourth type of cloud is known as a community cloud.

Now a community cloud inside of cloud computing

is a collaborative effort in which infrastructure is shared

between several organizations from a specific community

with common concerns.

And these can be managed internally

or by third parties,

and then they can be hosted internally or externally.

For example, there could be a cloud service

that's created specifically for banks

by a group of banks.

Each of these banks may provide either funding,

expertise or personnel to create this cloud.

And then all the members could benefit from its use.

To make this work effectively,

the community cloud solution is usually going to be designed

as a multi-tenant platform.

This way it can be accessed

by each of the contributing members,

but they each have their own portion of it

and their own data.

This is why we call it a community cloud,

because it's built

and used by a specific community of users.

Now, in addition to these four types of clouds,

there's also going to be five models of cloud computing.

The five models of cloud computing are NaaS,

IaaS, SaaS, PaaS and DaaS.

This stands for Network as a Service,

Infrastructure as a Service,

Software as a Service,

Platform as a Service,

and Desktop as a Service.

Let's take a look at each one of these.

First, we have Network as a Service or NaaS.

This is going to allow for the outsourcing of your network

to a service provider.

This is where all those virtual routers

and switches and firewalls are going to come into play.

All of this can be hosted offsite

at the provider's location,

and they put it into their data center.

And you as the customer are going to be charged for usage

based on the number of hours

or the amount of bandwidth that's being used.

Essentially your network capabilities are going to be provided

as a common utility.

A great example of this is going to be Route 53

or Amazon's VPC, Virtual Private Cloud offerings.

Both of these are Network as a Service options.

Now, in addition to Network as a Service,

there's Infrastructure as a Service or IaaS.

Now Infrastructure as a Service

is going to allow outsourcing of the infrastructure

of your servers and desktops to the service provider,

in addition to outsourcing that network.

Now, if I want to have virtual servers hosted by Amazon,

I can use AWS, or if I want to use Microsoft,

I'm going to use Azure.

All of that would be Infrastructure as a Service.

All of this virtualized equipment

is being hosted offsite at that service provider's location.

And again, I'm going to be charged for usage

based on the hours used,

processing power used,

bandwidth, the amount of memory,

and any other factors like that, that they want to use.

Essentially, it's being charged like a utility

where you get a monthly bill every month

for the amount you've used.

For example, my website, diontraining.com,

we use Infrastructure as a Service to host it

because we've outsourced the servers

and that hosting to this third party cloud company.

And they charge us

based on the amount of bandwidth that we use

and the number of requests we get each month from students.

Next, we have Software as a Service or SaaS.

This is where the user

is going to interact with a web-based application.

And the details of how that application works

are actually going to be hidden from the user,

because honestly, our users don't really care

about the details anyway,

they just want the end product

and the end result.

Some great examples of this are things like Office 365

or Google Docs or Google Sheets.

All of these provide the end user with an application

that is web-based and we're paying for it as a utility.

Now, again, as an end-user,

all I care about is,

can I do what I need to do with the software?

For example, if I need to make a spreadsheet,

I can fire up my web browser,

go to sheets.google.com

and then start creating that spreadsheet.

If I can do this, then the SaaS product is doing its job,

and I am a satisfied customer.

Now I don't have to install any software

or configure it.

All I have to do is log into the website

and use that to create spreadsheets.

So it meets my needs.

Now, another great thing about SaaS products

is they are really easy, really fast,

and as an administrator,

I don't have to worry

about all the backend configuration and details.

Instead, my company just pays an annual service fee

or a monthly service fee,

and we get access to that piece of software in the cloud.

These days, more and more software

is sold under the Software as a Service model.

In my company,

we use things like Google Sheets and Google Docs,

but we also use QuickBooks Online.

And we also use Adobe.

In fact, if you're using Adobe Photoshop or Adobe Premiere,

those are still considered Software as a Service

because there's a monthly fee associated with it

that you have to keep paying

in order to use that software.

Now, this software is still going to be installed

locally on your machine,

and it's not truly web-based,

but that's only because of the processing needs

of these tools because you're dealing with video editing

and photo editing.

These tools had to be installed locally

to take advantage of your computer,

and its more intensive processing power.

But every time you log into that software,

it's going to reach out to Adobe

and check to see if your software license

is still up to date,

because you have to pay that monthly fee

because it's Software as a Service.

Now, one of the key features of Software as a Service

is that they are constantly being updated

and upgraded on your behalf.

In the old days, when you bought Adobe,

it was a traditional software.

You bought the license for one version of that software,

and that was it.

But with Software as a Service,

you're paying this monthly fee time and time again,

and in trade for that,

you're getting a constantly updated piece of software

with the latest and greatest version,

all the security bugs are being worked out,

and you're getting new features all the time.

Next, we have Platform as a Service or PaaS.

Now Platform as a Service

is going to give you a development platform

if you're a company that's going to be developing applications

and you don't want to maintain your own infrastructure.

When I build my courses in my lab environments,

I use Platform as a Service,

because I don't want to have to worry

about building all the virtual labs myself

and all the underlying code to host them.

Instead, I want to build the virtual machine

and give the experience of troubleshooting

and maintaining that particular equipment

within a particular lab.

This way I don't have to deal with all the hosting needs

and the networking needs

and all the other stuff that goes into providing these labs.

Instead, I can get what I need right out of it

by using Platform as a Service.

This is why I outsource

under this Platform as a Service model.

It makes my life much easier.

Now, some great examples of Platform as a Service

are things like Pivotal, OpenShift, and Apprenda,

as well as many other solutions out there.

If you work as a coder or a web programmer,

or you work in a software development firm,

there are going to be a lot of Platform as a Service offerings

that are going to be available to you

and that you're going to have to be familiar with.

Finally, we have Desktop as a Service or DaaS.

Now Desktop as a Service

is going to provide you with a desktop environment

that is accessible from the internet

in the form of a cloud desktop,

or a Virtual Desktop environment.

A Virtual Desktop is often provided under the term VDI

or Virtual Desktop Infrastructure.

Now, when you purchase a Desktop as a Service product,

you're going to receive a working desktop environment

and any applications that you may need to use

within that desktop.

For example, Amazon has a VDI product

known as Amazon Workspaces.

With Amazon Workspaces,

your organization can purchase a fully managed,

persistent desktop virtualization service

that allows your users to access their data,

applications and resources from anywhere,

anytime from any supported device.

These desktops can be either Windows or Linux-based desktops,

and they can be deployed to thousands of users

in just a couple of minutes.

In my company,

we actually use a specialized Desktop as a Service platform

known as Shadow PC.

Now, unlike most virtual desktop services

that focus on providing low power desktops

for office workers,

Shadow PC focuses on providing users

with a very powerful gaming PC,

so they can access it from anywhere in the world,

even from something like a Chromebook

or an Android device.

Now, why do I use a Shadow PC?

Are the folks on my team big gamers?

Well, they may be,

but that really isn't our use case for it.

Instead, we do a lot of video editing

to bring courses like this one to you.

And instead of having to constantly supply my team

with new hardware and software

to do their job as video editors

and shipping those PCs all over the world,

I instead opt to provide a Shadow PC.

This allows them to log in with any laptop,

desktop, Chromebook, tablet or mobile phone,

and they can access a very powerful computer

that has all the Adobe software they need

to do their editing, as well as all their other jobs,

because this gaming PC, by design,

has a very powerful graphics card

that can support rendering our videos

in a short amount of time.

For one monthly fee, I can give my team

a constantly updated piece of superior hardware

without the large upfront cost

of buying all those high-end PCs

and shipping them all over the world.

Remember, all of these different cloud models

have their own benefits,

and it's important to realize what each one does.

These can be NaaS, IaaS,

SaaS, PaaS and DaaS,

and each of those five could be provided as a public,

private, hybrid or community cloud solution.

It just depends on your use case.

Now, one of the previous organizations I worked for

operated a very large private cloud variant

of Desktop as a Service for their model.

They had over 25,000 users,

and they did this because it provided higher security,

since they could quickly patch

and update all of these VDI-based solutions

whenever they needed to.

So it's really important for you

to consider all of your options based on these things,

as you're picking out a cloud solution to implement

within your organization,

based on your specific needs.

Cloud Concepts.

In this lesson,

we're going to discuss some of the different

cloud computing concepts that you need to be aware of.

These include things like elasticity,

scalability, multitenancy

as well as the different security implications

that you need to consider when using a cloud-based solution.

First, let's talk about the concepts

of elasticity and scalability because many people

get these two concepts completely confused.

Now, when we're referring to Cloud elasticity,

we're attempting to match the resources allocated

with the actual amount of resources needed

at any given point in time.

With elasticity, our Cloud-based servers and networks

can grow or shrink dynamically

as they need to in order to adapt to a changing workload.

And all this happens automatically

using automated workflows and orchestration.

Now each different Cloud provider

is going to define elasticity

just a little bit differently.

For example, Amazon Web Services

defines elasticity as the ability to acquire resources

as you need them and release resources

when you no longer need them.

Microsoft Azure on the other hand

is going to define elasticity as the ability

to quickly expand or decrease computer processing,

memory and storage resources to meet changing demands

without worrying about capacity planning

and engineering for peak usage.

When it comes to elasticity,

I want you to think of a rubber band.

Now let's say I have a single rubber band

and you handed me some envelopes.

I could take my rubber band

and I could hold those envelopes together with them.

Let's pretend you handed me 10 envelopes; no problem.

I put my rubber band around them

and they're all bundled together now.

Now, you may decide that you want to hand me

another five envelopes, okay.

I put those five with the other 10.

And so now I have 15 total envelopes

and my rubber band will stretch out

to hold all of them without breaking most of the time.

Now, if you ask me for the first 10 envelopes back,

I can remove those and give them to you.

And now the rubber band will shrink down

and still hold the remaining five envelopes

in a nice little bundle.

This is the idea of elasticity at work.

It's going to quickly stretch to meet the larger needs

or contract down into the smaller needs,

depending on what number of envelopes

I actually need inside my bundle.

Well, elasticity in the Cloud works exactly the same way.

Elasticity is going to be focused on meeting

the sudden increases and decreases

in your workloads that are going to be experienced

for a short period of time.

Elasticity is very dynamic and it can quickly add

or remove cloud resources to meet your needs.

Now, elasticity is often used in public Cloud services,

especially under a pay as you go model.

For example, with AWS or Amazon Web Services,

you can quickly add or remove additional bandwidth,

storage or resources to your cloud based systems.

And they'll just add or remove the associated costs

on your monthly bill.

Now, on the other hand, we also have scalability

when we talk about cloud services.

Now scalability is designed to be more

of a static or long-term solution.

Scalability is going to be used

to handle the growing workload

that's required to maintain good performance and efficiency

for a given software or application.

Commonly, you're going to find scalability used

when the persistent or long-term deployment of resources

is going to be necessary to handle the workload

in a more static manner.

For example, the underlying learning management software

that I use allows me to scale up

or scale down based on how big

my student base actually grows.

So I can sign up for an annual contract

based on a given number of students

that I expect to have this year.

If I expect to have between 1 and 5,000 students,

I'll pay one amount.

If I expect 5,000 to 10,000, I pay a little bit more.

If I expect 10,000 to 15,000,

I pay even more, and so on.

With this cloud-based solution,

I don't actually pay for what I use,

but instead, I'm paying for a certain capacity

based on the total amount of people

that I plan to provide services to.

Now, this is different than elasticity.

If this was an elastic plan,

you might see it set up that at the end of each month,

they count the number of users I had

and then charge me one cent per day, per user,

for each one that was signed up.

This way, I could increase or decrease

the number of users each and every day.

And therefore the workload being processed

by their cloud service would go up or down

each and every day.

And then I only would pay for what I need

each and every day.
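To make that difference concrete, here's a small Python sketch comparing the two billing styles just described. The tier prices and the one-cent-per-user-per-day rate are invented for illustration; real providers will price differently.

```python
# Scalability vs. elasticity billing, as described in the lesson.
# All prices here are illustrative, not real vendor pricing.

def tiered_annual_cost(expected_students):
    """Scalability: pay for a fixed capacity tier, once per year."""
    tiers = [
        (5_000, 1_000),   # up to 5,000 students  -> $1,000/year
        (10_000, 1_800),  # up to 10,000 students -> $1,800/year
        (15_000, 2_500),  # up to 15,000 students -> $2,500/year
    ]
    for capacity, price in tiers:
        if expected_students <= capacity:
            return price
    return 3_500          # top tier for anything larger

def elastic_monthly_cost(daily_user_counts, cents_per_user_day=1):
    """Elasticity: pay one cent per user, per day, only for actual usage."""
    total_cents = sum(users * cents_per_user_day for users in daily_user_counts)
    return total_cents / 100  # convert cents to dollars

# A month where usage spikes for one week, then drops back down:
usage = [1_000] * 23 + [8_000] * 7
print(tiered_annual_cost(8_000))     # 1800
print(elastic_monthly_cost(usage))   # 790.0
```

The elastic bill tracks the spike day by day, while the tiered plan charges for peak capacity whether it's used or not.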

Now, in the case of this learning management system,

we can only change our plan one time per year.

So once we pick it, we're stuck with it

for a good amount of time.

Therefore, when we're dealing with scalability,

we're dealing with more of a long-term solution,

as opposed to a more elastic approach

that can change every day or every hour

or even every minute.

Now, in both cases, whether we're going to be using elasticity

or scalability, we need to have the ability

to add or remove resources in our networks.

This is done using either vertical or horizontal scaling.

Now don't get confused here because this type of scaling

is used in both elasticity and scalability.

So just because you see the word scaling

doesn't mean it's tied just to scalability.

Now remember, the big difference here

is that with elasticity, we're looking at a short term

addition or subtraction of resources, but with scalability,

we're going to be focused

on more long-term planning and adoption.

So let's go back to the idea of vertical

and horizontal scaling here for a minute.

Vertical scaling, also known as scaling up,

is used to increase the power of our existing resources

in the working environment.

For example, let's say you have a laptop

that has four gigabytes of RAM in it,

and it begins to slow down over time

because you're doing a lot of work on it.

Well, if you wanted to vertically scale up that laptop,

you could add more RAM to it.

And this would speed up that device.

In the cloud environment,

we can often scale up our compute resources.

Now, if you're using Amazon LightSail, for example,

to host your brand new blog,

you could start out with a minimal setup

that costs about $3.50 a month.

This will give you 512 megabytes of RAM,

a one-core processor and 20 gigabytes of storage space.

Now, as your new blog gets more popular

and more people start reading it,

you may start to notice that your server is slowing down.

Well, you can simply scale up using vertical scaling

by selecting the next higher plan,

which would charge you $5 per month.

And this will double the amount of memory

and storage that you're going to get.

A few weeks go by and your blog

is getting more and more fans.

And again, it starts to slow down.

So you can again scale up by moving to the next plan,

which is $10 per month.

And again, you're going to double

the amount of memory you have.

You can keep doing this

every time your site begins to slow down,

but eventually, you're going to reach the highest level plan.

And in the case of AWS's LightSail product,

this is a virtual server with 32 gigabytes of RAM,

an eight-core processor and 640 gigabytes of storage

for about $160 per month.

Now, the other option we have is to use horizontal scaling,

which is known as scaling out.

With scaling out, you can add additional resources

to help handle the extra load that's being experienced.

Essentially, instead of having one server to host your blog,

we would now have two servers to host your blog.

And as you gain more readers,

we're going to load balance between them.

If we get more readers,

we're going to add a third and a fourth,

and we'll keep adding more servers

and doing more load balancing

between all those different instances

as you have more and more demand.

This is the idea of scaling out.

So, which type should we use?

Well, vertical scaling or scaling up

is really easy to use because you simply add faster

and better components to your existing single server.

This makes it easier to use,

and it works well for long-term scalability.

Normally when you're dealing with scalability,

you're going to be dealing with vertical scaling.

Now, on the other hand, if you want to use horizontal

scaling or scaling out, you're going to need to ensure

that your system is designed to support it.

With scaling out, you need to have a method

of breaking up a sequential piece of logic,

into smaller pieces that each can be executed in parallel

across multiple machines.

Essentially, we're going to have a lot of different horses

all running in the same direction,

so we need to make sure our workload can be split up that way.

When you're dealing with elasticity,

normally you're going to be seeing scaling out

or horizontal scaling being used,

instead of vertical scaling or scaling up.

Another benefit of using horizontal scaling

over vertical scaling is that as you're adding

more and more machines to the pool,

we're not relying on a single machine anymore.

For this reason, scaling out will provide more redundancy

and it will result in less downtime.
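The scaling-out idea above can be sketched in a few lines of Python: a pool of identical server instances and a simple round-robin load balancer that rotates incoming requests across them. The server names here are made up for illustration.

```python
from itertools import cycle

class WebTier:
    """A toy pool of identical web servers behind a round-robin balancer."""

    def __init__(self):
        self.servers = []
        self._rotation = None

    def scale_out(self, name):
        """Horizontal scaling: add another identical instance to the pool."""
        self.servers.append(name)
        self._rotation = cycle(self.servers)  # rebuild the rotation

    def route(self, request_id):
        """Hand the next request to the next server in the rotation."""
        return (request_id, next(self._rotation))

tier = WebTier()
tier.scale_out("blog-1")
tier.scale_out("blog-2")

# Four incoming requests alternate between the two instances:
print([tier.route(i) for i in range(4)])
# [(0, 'blog-1'), (1, 'blog-2'), (2, 'blog-1'), (3, 'blog-2')]
```

Adding a third instance with another `scale_out` call immediately brings it into the rotation, which is exactly the appeal of this model: no single machine ever has to get bigger.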

The next cloud computing concept that we need to cover

is known as multitenancy.

Now multitenancy means that a cloud computing architecture

will allow customers to share computing resources

in a public or private cloud.

In multitenancy, each of the tenant's data

is going to be isolated and remains invisible

to all of the other tenants.

Now, when I talk about a tenant,

I'm really talking about a customer,

a business, or an organization.

Now in traditional, on-premise server environments,

there's only going to be one tenant using your server: you.

This is because your server

is going to be sitting in your data center,

on premise, inside your organization.

This is a lot like having a single family home

out in the suburbs.

Everybody who lives inside of that house

is part of the organization.

In that case, your family.

Now here, you're going to have a single-tenancy solution

or dedicated solution by having one house

with one family living inside of it.

But if you lived in a big city like New York,

you might instead choose to live

in a large apartment building.

In this large building,

there might be 50 different apartments.

Each family that lives in this building

is assigned their own apartment.

And they're only going to have one tenant

inside that apartment,

but there are 50 tenants inside this building.

This allows that one tenant or family

to be able to live in that one particular apartment.

Now, when you go home from work,

you're going to go into your apartment.

You're going to shut the door and now none of your neighbors

can see what you're doing.

You're invisible to them and you have your privacy,

even though you're in a multi-tenant environment.

Well, a multi-tenancy server

works much the same way, with the physical server

being divided up into individual portions

that different tenants can use,

while keeping all the other data

from the other tenants invisible

to the other tenants inside that same physical server.
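That isolation can be sketched as a toy Python data store: every tenant's data lives on the same shared system, but each lookup is scoped to one tenant's own partition, so the other tenants' data stays invisible. The tenant names and values here are invented for illustration.

```python
class MultiTenantStore:
    """A toy shared data store where each tenant sees only its own rows."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        """Write a value into this tenant's own partition."""
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        """Read a value, scoped to this tenant's partition only."""
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()
store.put("acme", "invoice-1", "$500")
store.put("globex", "invoice-1", "$900")

print(store.get("acme", "invoice-1"))    # $500 -- acme sees only its own row
print(store.get("globex", "invoice-1"))  # $900 -- same key, different tenant
```

Real multi-tenant platforms enforce this boundary with hypervisors, access controls, and per-tenant encryption rather than a dictionary, but the principle is the same: shared hardware, partitioned data.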

Now a multi-tenant solution will be able to provide

increased storage and access

compared to a single tenancy solution,

because it uses a larger pool of shared resources

that becomes available to everybody inside the group.

Now multi-tenant solutions also provide

a better use of resources and a lower overall cost

for each individual tenant or customer,

because we're all sharing the costs

over a larger pool of customers.

So for example,

let's say you wanted to host your own website.

You could go out and spend $10,000

on a physical, single tenant server.

And you would know that you have 100%

of the computing power, memory

and storage at all times available to you

because you're not going to share it with anybody,

but if you instead wanted to simply use

a shared hosting solution using multitenancy,

you might pay only a few dollars per month

for that same capacity.

Now, multitenancy isn't without its drawbacks, though.

When you use a multitenancy solution,

there are two major concerns.

First, we have the noisy neighbor effect.

If you've ever lived in an apartment building,

you've probably dealt with this in real life.

You go out and you rent a really nice apartment

in a multi-tenant building.

Everything is going great, and everything is fine

until a noisy neighbor moves in next door.

Now the noisy neighbor may be playing loud music at 2:00 AM

and waking you up or it might be a bunch of college students

who decided it was a great deal to save money

by putting five of them into a one-bedroom studio apartment.

Either way, their moving in has started to cause

all sorts of problems for you.

Now, the same thing can happen in a multitenancy solution.

Let's say you're running your company's email automations

on a multi-tenancy solution,

something like MailChimp or ActiveCampaign.

Well, those are multi-tenancy solutions.

There are a lot of different companies here

that are using those services.

And usually, those companies are going to have between 50

and 100 customers on a single email server.

Now, the problem is that if your company

gets assigned to a shared server

with somebody else who is sending out a lot of spam,

everyone on that shared server

is going to see their email delivery rates drop

because you're all using the same server

and the same IP address.

And that IP address's reputation

is being hurt by that noisy neighbor.

In this case, the spammer

who is part of our multitenancy solution.

The same thing can happen with shared web hosts.

Maybe there's 20 sites on a single server

and one of them starts getting really popular.

Well, that's actually a bad thing for the rest of you

because that popular site is now using an unfair amount

of resources on that shared server.

And this can reduce the performance

for all the other tenants.

After all, once we begin to rely on virtualization

and cloud computing for our deployments,

it becomes important to recognize

that our data might be hosted on the same physical server

as another organization's data,

if we're using a multi-tenancy solution.

So we have to be aware of these risks.

Now, by choosing one of these solutions,

we are introducing some vulnerabilities

into the security of our systems.

First, if the physical server crashes

due to something one organization does,

it can affect the other organizations

hosted on this same physical server.

Again, this is the concept of the noisy neighbor

that we just discussed.

Similarly, if one organization

has not maintained the security

of their virtual environment,

that's being hosted on that server,

there is a possibility that an attacker

could utilize that as a jumping off point

and use that to the detriment of all the other organizations

hosted on that same shared server.

Just as there are concerns when you interconnect

your networks with somebody else's,

you also have to be concerned

when hosting multiple organizations' data

on the same physical server

that's being run by a given cloud provider.

Therefore it's important for us to properly configure,

manage and audit user access to the virtual servers

that are being hosted here.

Also, you need to ensure that your cloud-based servers

have the latest patches, anti-virus,

anti-malware and access controls in place

if you're going to be using Infrastructure as a Service

as part of your cloud service model.

Now, to minimize the risk

of having a single physical server's resources

being overwhelmed, it's a good idea

to set up your virtual servers in the cloud

with proper fail-over, redundancy and elasticity.

By monitoring the network's performance

and the physical server's resources,

you should be able to balance the load

across several physical machines

instead of relying on just one single machine.

After all, elasticity and scalability

are some of the main benefits

of our moving to the cloud in the first place.

So we might as well take advantage of them.

Most cloud security is going to rely

on the same security practices that you would perform

for other servers and networks

in your regular organization.

Things like ensuring complex passwords are used,

strong authentication mechanisms have to be in place

and strong encryption being used to protect

your data at rest, in transit or in process.

Now your cloud environment

should have strong policies in place to ensure

that it is clear what things a user can do

and what they can't do with that given cloud service.

Remember, that data that you're hosting in the cloud

is on somebody else's physical servers.

If you're using a public cloud model,

you also need to be concerned about data remnants

that could be left behind when a cloud server

is de-provisioned after the demand for the service

is reduced using our principles of elasticity.

This occurs because when a service is scaled out

using horizontal scaling,

a new virtual instance is going to be created

on a physical server.

This new instance is going to take up some hard drive space

on that physical server to represent the virtual hard disk,

the operating system and all the associated configuration

and data files for this new virtual instance.

When this virtual server is no longer needed

because the load has gone down,

the virtual machine can be de-provisioned

as we scale back in, and this means we're going to shut it down

and the files are going to be deleted.

Now, when this occurs,

those confidential data files from the virtual machine

are still left on the physical server's storage system.

And these deleted files become known as data remnants.

These data remnants could be recovered by an attacker

and therefore it could breach

the confidentiality of your data.

For this reason, cloud infrastructures

that rely on virtualization can introduce

a data remnant vulnerability to your company

because these physical servers are not being controlled

by your organization and are instead controlled

by the cloud provider.

Now, our final security concern

that you need to think about is a virtual machine escape

because after all, these cloud-based servers

all rely on virtualization.

So what is a virtual machine escape?

Well, a virtual machine escape or a VM escape

is going to occur when an attacker

is able to break out of one of these

normally isolated virtual machines.

And then they begin to interact directly

with the underlying hypervisor.

Now from this underlying hypervisor,

the attacker can then migrate themselves

out to one of the tenant servers

and into another tenant server that is contained

within another virtual machine

hosted on the same physical server.

This allows them to jump from tenant to tenant.

Now the good news is that VM escapes

are extremely difficult to conduct

because they rely on exploiting the physical resources

that are shared

between the different hosted virtual machines.

But it is still a vulnerability

that you need to be aware of.

To mitigate this vulnerability,

virtual servers should only be hosted on the same physical server

as other virtual machines

in the same network or network segment

based upon their classification level.

This way, if someone's able to escape

out of the virtual machine,

they can only access a similar type

or classification of data.

This works well if you're running your own private cloud,

but if you're running on top of a public cloud,

you really don't get to control which physical servers

your virtual machines and cloud-based instances

are being run on.

For this reason, your organization

needs to consider carefully what data it's going to allow

to be stored in the cloud

and what data they want to maintain full control over

by using an on-premise solution.

So in this section of the course,

we've talked a lot about virtualization in the cloud.

And you may be wondering,

why do I really care here in Network+?

Because as a network technician,

we're not really going to be doing

a lot of cloud or virtualization, right?

Well, that's true, but there are some things

you just want to be comfortable with

because we keep moving more and more into virtualization

and more and more into the cloud.

So, in this lesson, I want to show you a tool called GNS3.

Now, this tool is a virtualized networking environment

and it allows you to connect different virtual machines

or even physical machines together using this tool.

Now, you don't need to be an expert on this tool.

I'm just showing you that the tool exists

and some of the things you can do with the tool.

For the Network+ Exam, no one is going to ask you about GNS3.

It's not on the objectives.

This is just for you to get a little bit of experience

to see how it works,

and if it interests you, you can go to gns3.com,

download the tool for free,

and start playing with it and using their tutorials.

So, let's jump onto the computer and take a look at GNS3.

All right, so here we are inside of GNS3.

Now, you'll notice at the center of my screen is a router.

This is actually a VyOS router,

and I can go in and configure this router

as if it was a physical device.

Everything you see here on my screen right now

are virtual devices.

Over on the left side of the screen,

I have Firefox, Kali Linux, and a Network Defense Appliance.

All of those are virtual machines that exist within GNS3.

They're just a piece of software.

Then, I have these switches,

the Network Outside and the DMZ Switch.

Both of those are virtual devices

that are emulating a real piece of hardware

that would normally be something that would cost you

hundreds or thousands of dollars.

And here in GNS3, you get to use them for free.

The same thing with the router here in the center.

It's connecting my two networks,

my Outside Network and my DMZ.

And then in my DMZ, I have a Snort machine,

a LAMP, which is a web server,

Metasploitable2, which has a web server, an SSH server.

It has an FTP server, a Sendmail server,

and all sorts of other stuff,

and then DVWA, which, again,

is another vulnerable web application.

This particular lab is one I use in my PenTest+ course.

But I just want to pull it up here to show you how it works.

So, what I'm going to do is show you

that we can actually go into this router

and we can run configurations on it,

just as if we were doing it on a real piece of hardware.

The great thing about this

is if you really want to get in depth

and practice your on-the-job skills,

you can download GNS3, load up some routers,

connect them together, and make your own virtual network.

To access the router,

you just right-click on it and go to Console.

This will launch the Console,

which is a terminal for you to configure the router.

So, here we are at the prompt,

and it's going to say Welcome to VyOS,

which is the operating system that runs this virtual router.

For me to be able to use it, I need to log in,

and the default is vyos, and the password is vyos.

Now, once I'm logged in,

I can run any command that this router supports.

The first one I want to run

is actually called show configuration.

And if I do show configuration,

you're going to be able to see

the configuration of this router.

Currently, there are three interfaces, or ports

where Ethernet cords would be plugged into this router,

and those are eth0, eth1, and eth2.

eth0 is my Outside Network,

so this has the address of 10.1.1.1.

So, if I go back to my diagram,

you can see my Outside Network is here.

So, that address, 10.1.1.1,

is this Ethernet, Ethernet Zero,

that's connected on this side of the router.

Now, if I want to figure out what's on this side

of the router, I'm looking for the DMZ Switch.

And you can see that is right here.

That is 10.10.10.1.

That's its IP address.

And then, we have eth2, which will be its connection

to the outside world, which is currently off,

but it would be 172.16.10.1,

and that would be another connection

that would come out here,

connecting us to a modem or a switch

or something that would give us connectivity

out to the Internet.

Right now, I have this set up,

so this all works in a Local Area Network with two networks,

my left-side network and my right-side network,

which is my Outside and my DMZ.

Now, we used nmap earlier with Kali

so I'm going to use that again,

and we're going to go ahead and take a look

at this particular machine called the LAMP Server.

The IP address for this is 10.10.10.10,

and that comes off of this switch,

which was the 10.10.10.1.

And so, when I go over to Kali,

I'm going to be able to log into that Kali machine,

and we're just going to use nmap 10.10.10.10.

Now, I assume this is a web server

because it was called a LAMP Server,

which stands for Linux, Apache, MySQL, and PHP,

which are four core technologies to run a web server.

When I run that nmap scan,

it comes back with three ports that are open.

We have Port 22, which is ssh,

which allows me to have a connection

through the Console like you see here on Linux.

You have Port 80, which is http,

which is the web browser that we'd be loading that website.

And we have Port 8080, which is a web proxy.

Again, these are ports that should look familiar to you.
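Under the hood, a basic TCP connect scan like the one nmap just performed boils down to attempting a connection on each port and seeing which ones accept. Here's a rough Python sketch of that idea using sockets; it's nowhere near a replacement for nmap, and the lab address shown is just an example.

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

def scan(host, ports):
    """Report open (True) or closed (False) for each port, like a tiny nmap."""
    return {port: check_port(host, port) for port in ports}

# Against the lab's LAMP server you would run something like:
# scan("10.10.10.10", [22, 80, 8080])
```

A real scanner like nmap adds service detection, timing controls, and many other scan types on top of this basic connect test.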

Now, what this proves

is that I have all of these virtual machines

and they are networked together properly.

Because I was able to send a command from Kali

through this switch, through this router,

through this switch, and all the way up to the LAMP Server

to say what ports are you running,

and then bringing that all the way back

over to the Kali machine.

We can do the same thing

over to this Metasploitable2 machine,

which is 10.10.10.11.

So, I'm just going to do nmap 10.10.10.11,

and you'll see the exact same thing happen.

We went out, we polled all of those ports and asked

which ones are open, tell me about them,

and then they gave that information back to Kali.

So, this is just a very simple demonstration

of what you can do inside the GNS3.

The point of this lesson

isn't to make you an expert in GNS3.

In fact, I didn't teach you how to install it

or even how to operate it, really.

I just want to show you the fact

that you can have machines made up entirely of software.

All of this was running inside of my own workstation.

There were seven different machines,

two different switches, and a router,

and all of them were just a series of ones and zeros.

There was no cost to set up

because there's no physical hardware.

It's all virtualized.

We could take this same lab and I could put it up on AWS.

And if I did that, it would be in the cloud

and I can operate it from there.

This is the idea with virtualization.

It's bringing the costs down

and increasing our capabilities.

I hope this got you a little bit interested in GNS3.

If you have some time,

I recommend downloading it and playing with it.

It's really interesting,

the things you can do inside of GNS3.

Infrastructure as Code.

In this lesson, we're going to talk about

infrastructure as code,

which is used for automation and orchestration.

So let's talk about this IAC or infrastructure as code.

Essentially infrastructure as code is the ability

to manage and provision the infrastructure through code

instead of through manual processes.

The term infrastructure here is also rather generic.

It can refer to virtual machines

that contain servers or clients

or virtual devices like switches, routers, firewalls,

and other security appliances.

To use infrastructure as code effectively,

we need to also use scripted automation and orchestration.

Now scripted automation and orchestration

are used in cloud computing all of the time.

This allows our development, security, and operations teams,

or the DevSecOps team,

to rapidly deploy things like a new router, switch

or even an entire network,

complete with servers and security devices.

The best part of all of this is that it is less error prone,

and it's a lot faster than having our network technicians

or system administrators building out these things manually.

The great thing here is that if we use scripted automation,

we're relying on a computer script

to do most of the hard work.

And once you have a well-written script,

it can be reused over and over again,

and it will never make a mistake.

So this allows us

to get a lot of our deployments done faster

and in a much more secure way.

Now, when we're talking about infrastructure as code or IaC,

it really comes down to three key areas

when you're doing your implementation.

These are scripting, security templates, and policies.

Now scripting will let you perform a series of actions

in a particular order or sequence,

and it can even include some basic logic

to ensure the right things are being deployed

based on the current conditions.

Security templates and policies

are then going to be deployed,

and these contain a series of configuration files

that are applied to the different devices

being deployed in your environment.

These might include network settings,

access control lists, group policies, or permissions.
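As a rough sketch of what this looks like in practice, the snippet below combines a scripted action with a template and some basic logic. Everything here — the device names, the template fields, and the apply_template helper — is invented for illustration, not any real product's API:

```python
# Hypothetical IaC-style deployment script: a base security template
# plus basic logic that varies the config based on current conditions.

BASE_TEMPLATE = {
    "ssh_enabled": True,
    "telnet_enabled": False,   # security policy: no cleartext management
    "snmp_community": "REDACTED",
}

def apply_template(device_type, site):
    """Build a config for one device from the standard template."""
    config = dict(BASE_TEMPLATE)
    config["hostname"] = f"{site}-{device_type}"
    # Basic logic: only routers outside headquarters get a WAN-facing ACL
    if device_type == "router" and site != "hq":
        config["acl"] = "BRANCH-WAN-IN"
    return config

branch_router = apply_template("router", "branch1")
print(branch_router["hostname"])   # branch1-router
print("acl" in branch_router)      # True
```

Because the template and logic live in code, rerunning the same script on a hundred devices produces the same result every time, which is the repeatability benefit described above.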

Now automation is great,

but where infrastructure as code really excels

is through the use of orchestration.

Orchestration is the process of arranging or coordinating

the installation and configuration of multiple systems.

In most implementations,

it really comes down to running the same task

on a bunch of different servers or devices

all at the same time,

but not always on every single server or device.

This is where machine learning and logic

are going to come into play.

If you're using some robust orchestration,

that's been properly configured and tested,

you can lower your overall IT costs,

speed up your deployments and increase your security.

So it really becomes a win-win-win for our organizations.
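To make the orchestration idea concrete — the same task run across many devices at once, but not necessarily on every device — here is a minimal sketch. The device inventory and the configure() task are made-up placeholders standing in for whatever a real orchestration tool would push over SSH or an API:

```python
# Minimal orchestration sketch: run one task on many devices in parallel,
# with logic selecting which devices are targeted.
from concurrent.futures import ThreadPoolExecutor

devices = [f"leaf-sw{i}" for i in range(1, 6)] + ["core-rtr1"]

def configure(device):
    # A real tool would push configuration here; we just record a result.
    return (device, "ok")

# Same task, many devices, at the same time -- but only the leaf switches:
targets = [d for d in devices if d.startswith("leaf")]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(configure, targets))

print(len(results))            # 5
print("core-rtr1" in results)  # False -- logic excluded the core router
```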

Now, as an aspiring network technician,

you might be worried that infrastructure as code

might put you out of a job someday,

but really it's just being used to automate

the most boring and tedious portions of your job.

It is designed to ease your burden

and allow you to focus on higher-level tasks

instead of just installing

a hundred more virtual switches or routers

using some boring checklist.

So don't worry, have no fear,

there is nothing but goodness here

when it comes to infrastructure as code.

Also infrastructure as code is the basis of everything we do

in horizontal scaling or scaling out

within our cloud environments

when we need to use elasticity.

So it is really important to embrace it.

Now, one of the things you have to be careful of, though,

when you're using infrastructure as code,

is people in your organization who believe they have

some kind of a special project.

I like to call these the special snowflakes.

Remember with infrastructure as code,

we're trying to embrace standardization,

templates and scripts.

So when you have people

who think they have a special snowflake,

this can lead to trouble.

After all, if they have a special snowflake project,

they believe they have to be able to go

and create their own infrastructure

to support their project

instead of relying on the standard infrastructure

that you provide to everyone else through IAC.

These people don't really care about your standardization

and all of your scripting and all the efficiencies

that you've already gained by embracing

infrastructure as code using orchestration.

Instead, they want to create something as a one-off system.

And when that happens,

you end up with this special snowflake

and these special snowflake systems are any system

that is different from the standard configuration template

that's used within your organization's

infrastructure as code architecture.

Now, the problem with this is that it adds risk

to your overall security posture.

And it also adds a lot of configuration problems

and long-term supportability problems for you

because it's a one-off system.

The lack of consistency that you're going to find

in a special snowflake system

is going to lead to a lot of issues for you down the road,

especially in terms of security

and your ability to support it

after it's moved into production.

This is because you have a one-off system

and it is by definition unique,

and it doesn't look or act like

every other system that you support.

Think about it this way.

Pretend you're in a large environment,

that's operating in the cloud

and you have thousands upon thousands of virtual machines.

Now, out of all those virtual machines,

we have just one that's different.

When somebody calls up and says

something isn't working properly,

you now have to figure out,

is it something with that special machine

that's causing the problem,

or is this a bigger problem across your entire cloud?

This is now a really big support issue

for you and your team

and it can lead to a lot of security headaches

in the long run.

For this reason, I always want to eliminate

these special snowflakes

because we want everything to be consistent.

By keeping things consistent

and using carefully developed and tested scripts

we can end up using orchestration

extremely efficiently and securely,

which maintains a good, solid baseline for our networks.
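One way to picture enforcing that baseline is a compliance check that diffs every system against the standard template and flags anything that deviates. This is only a sketch — the hostnames and configuration fields are invented for illustration:

```python
# Sketch: flag "special snowflake" systems by comparing each machine's
# configuration against the organization's standard template.

STANDARD = {"os": "ubuntu-22.04", "agent": "v2", "firewall": "enabled"}

fleet = {
    "web-01": {"os": "ubuntu-22.04", "agent": "v2", "firewall": "enabled"},
    "web-02": {"os": "ubuntu-22.04", "agent": "v2", "firewall": "enabled"},
    "legacy-app": {"os": "centos-6", "agent": "v1", "firewall": "disabled"},
}

def snowflakes(fleet, standard):
    """Return hosts whose config deviates from the standard template."""
    return sorted(
        host for host, cfg in fleet.items()
        if any(cfg.get(key) != value for key, value in standard.items())
    )

print(snowflakes(fleet, STANDARD))   # ['legacy-app']
```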

Connectivity options.

In this lesson,

we're going to talk about the different connectivity options

that are available when connecting to cloud-based solutions,

including virtual private networks or VPNs,

and a private-direct connection to your cloud provider.

Now, as we go through these options,

I want to point out that we're not necessarily talking

about your ability to use Software as a Service

as part of a cloud technology in this lesson.

Instead, we're more focused

on connecting our enterprise networks

to our public cloud service providers known as CSPs.

Now, for example,

let's pretend your organization decided to offload

all of its on-premise servers over to the cloud,

including your internal servers, like your file servers,

your proxy servers, your mail servers, and others.

Well, how are your network clients

going to access those resources?

We need to ensure that when Susan in accounting

or Bob in human resources logs onto the network,

they can actually reach that domain controller

and be authenticated and then access the share drive

just like they could when the server was down the hall

in our data center,

even though the server may now be across the country,

sitting in one of Amazon's or Microsoft's data centers.

So we need to talk about connectivity options.

The first type of connectivity we need to cover

is known as a virtual private network or VPN.

By using a virtual private network solution,

you can establish a secure connection

between your on-premise network, your remote offices,

your client devices,

and the cloud provider's global network.

This type of connection will usually be created

as a site-to-site VPN between your edge router

and the cloud service provider's network.

When using a VPN solution like this,

usually you're going to rely on a traditional IPsec VPN

to create an encrypted connection

between your cloud provider's network

and your own enterprise network,

all over the public internet,

using this encrypted VPN tunnel.

This allows you to extend your network

using a highly available, managed

and Elastic Cloud VPN solution

to protect your network traffic

instead of letting it traverse the internet directly.

While a VPN works well in most cases,

if you're running a large enterprise network

and you need higher speeds and redundancy,

you may instead choose to use a private-direct connection

to your cloud provider.

These are sold under different names,

depending on the cloud provider you're using.

If you're with Amazon Web Services or AWS,

they call this a Direct Connect Gateway.

If you're with Microsoft Azure,

they call this an Azure Express Route.

So what is a private-direct connection?

Well, it's going to allow you to extend your preexisting,

on-premise data center or office network

into the cloud provider's network

so that you can directly connect

to your virtual private cloud

inside that cloud provider's network.

Now, by using a private-direct connection,

you can bypass the internet directly

and instead establish a secure and dedicated connection

from your infrastructure

to the cloud provider's infrastructure

using a dedicated leased line

or a similar type of WAN connection.

So what's the difference between using a VPN

and a private-direct connection?

Well, in general, a private-direct connection

will support faster speeds and better performance.

For example, if using an AWS-managed VPN service,

you can only achieve a maximum speed

of four gigabits per second

when you're connecting your enterprise network

to your virtual private cloud that's hosted by Amazon.

Now on the other hand,

if you have a private-direct connection,

which they call AWS Direct Connect,

you can get speeds up to 40 gigabits per second.

Additionally, private-direct connections can support

multiple connections into multiple VPCs

that are hosted in the cloud.

And this provides us with redundancy.

Whereas with a VPN solution,

we can only support one VPN connection to one VPC at a time.

But with everything in the cloud,

there's always going to be trade-offs.

Yes, a private-direct connection has better performance

and better redundancy,

but it's also a more expensive connection

than a VPN connection.

So with AWS, for example, if you're using a VPN solution,

this costs you about 9 cents per gigabyte

of data transferred.

But if you're using a private-direct solution,

this costs between 20 and 30 cents per gigabyte

of data transferred,

making it two to three times more expensive.
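Working that cost comparison through with a hypothetical monthly volume (the 10 TB figure below is an assumption for illustration; the per-gigabyte rates are the ones quoted above):

```python
# Worked example of the per-gigabyte transfer costs quoted in the lesson:
# roughly 9 cents/GB for the VPN vs. 20-30 cents/GB for a direct connection.

VPN_RATE = 0.09             # dollars per GB transferred
DIRECT_RATE = (0.20, 0.30)  # low/high range per GB

monthly_gb = 10_000         # hypothetical 10 TB of monthly transfer

vpn_cost = monthly_gb * VPN_RATE
direct_low = monthly_gb * DIRECT_RATE[0]
direct_high = monthly_gb * DIRECT_RATE[1]

print(f"VPN:    ${vpn_cost:,.2f}")                         # VPN:    $900.00
print(f"Direct: ${direct_low:,.2f}-${direct_high:,.2f}")   # $2,000.00-$3,000.00
print(direct_low / vpn_cost, direct_high / vpn_cost)       # roughly 2.2x to 3.3x
```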

So when it comes to connecting your enterprise networks

to your virtual private clouds,

that are hosted by Amazon or your Azure Virtual Networks,

remember, you can either use a VPN

or a private-direct connection.

It just depends on the level of performance you need

and the amount of costs that you're willing to accept.

Data center architecture.

In this lesson, we're going to discuss the architecture

within our data centers.

When we're talking about a data center,

this is any facility that's composed of network,

computers and storage that businesses

and other organizations are going to use

to organize, process, store,

and disseminate large amounts of data.

Now that's a pretty generic definition,

but this is because a data center

is used to describe a lot of different things these days

from the very small to the massively large.

For example, one of the smaller organizations

I worked for had a small data center

that was roughly 150 square feet in size.

Within it, we had a single rack of networking equipment

and about five racks that contained various servers.

Now, on the other hand,

one of the largest data centers in the world

is located in Bluffdale, Utah.

This data center is known as the Utah Data Center

or its official title,

the Intelligence Community Comprehensive National

Cybersecurity Initiative Data Center,

and it is 1.5 million square feet in size

spread across 20 buildings.

This data center is massive

and it uses around 65 megawatts of electricity

just to run it each day.

Now this data center costs the US government

about $1.5 billion to build

and they spend over $40 million each year

just on the electricity bills.

This is a massive data center.

Now, personally, I haven't worked at that data center,

but I've worked at some really large data centers.

But nothing at that size or scale.

Now, Amazon, for example,

uses data centers that range in size

between 150,000 to 215,000 square feet.

And they host about 50,000 to 80,000 servers

in each of their data centers.

As a network technician,

you may very well end up working

at one of these data centers one day.

So let's explore their architecture just a little bit.

In this lesson, we're going to talk about five key areas.

The three-tiered hierarchy used by data centers,

software-defined networking that allows our data centers

to operate effectively,

the spine and leaf architecture that's used by data centers,

the traffic flows that are used by our data centers,

and the on-premise versus hosted data centers

and what the differences are between them.

So first let's look at the three-tiered hierarchy

which consists of the core,

the distribution or aggregation layer,

and then the access or edge layer.

The core layer is going to consist of the biggest,

fastest, and most expensive routers

that you're ever going to end up working with.

The core layer is going to be considered the backbone

of our network

and it's used to merge geographically separated networks

into one logical and cohesive unit.

In general, you're going to have at least two routers

at the core level operating in a redundant configuration.

After all, if you only had one core router

and it went offline,

the entire network would grind to a screeching halt.

So we don't want to do that.

Next, we have the distribution or aggregation layer.

This layer is located under the core layer

and it's going to provide boundary definition

by implementing access control lists and filters.

Now here at the distribution or aggregation layer,

we're going to be defining policies

for the network at large.

Normally, you're going to see layer three switches

here being used because this distribution layer

is going to ensure packets are being properly routed

between different subnets and VLANs

within your enterprise network.

Finally, we're going to get down to the access or edge layer.

This layer is located beneath the distribution

or aggregation layer,

and it's going to be used to connect

to all of your endpoint devices like your computers,

your laptops, your servers, your printers,

your wireless access points, and everything else.

These access or edge devices

are going to usually be regular switches,

and they're going to be used to ensure packets

are being converted to frames

and delivered to the correct end point devices when needed.

Now you may be wondering

why do I need to use this three-tiered hierarchy?

Well, by using this type of hierarchy,

we can get better performance, management, scalability,

and redundancy from our networks.

It also is going to give us a better way

to troubleshoot our network

because normally if we find an issue,

we're going to find it and isolate it down to a single access

or edge layer device.

Now, we can work on fixing that

while the rest of the network continues to operate

with no issues at all

and then we can get our network all the way back up

and running 100%.

Normally inside your data center,

you're going to find your core layer devices

as well as your distribution or aggregation layer devices

for the local network in that building.

If you have remote branch offices or other locations,

each one of those is going to have its own distribution

or aggregation layer switch inside

its main distribution frame too.

In both cases,

your network will then branch out

to the intermediate distribution frames

where you're going to find your access or edge layer devices.

Now, when we talk about this three-tiered model,

this is a traditional network

you're going to find in most enterprises.
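A toy model can make the tree shape of this hierarchy concrete. Every device points to its parent in the layer above, so traffic between endpoints on different access switches climbs through distribution toward the core. The device names below are invented for illustration:

```python
# Toy model of the three-tiered hierarchy: access/edge devices hang off
# distribution/aggregation switches, which hang off the core.

parent = {
    "access-1": "dist-1", "access-2": "dist-1",
    "access-3": "dist-2",
    "dist-1": "core", "dist-2": "core",
}

def path_to_core(device):
    """Walk up the hierarchy from an access/edge device to the core."""
    path = [device]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

# An issue found on access-1 is isolated to that one branch of the tree,
# while everything under dist-2 keeps operating normally:
print(path_to_core("access-1"))   # ['access-1', 'dist-1', 'core']
```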

Next, let's move into software-defined networking or SDN.

Software-defined networking

is a network architecture approach that enables the network

to intelligently and centrally be controlled

or programmed using software applications.

This helps operators manage

the entire network consistently and holistically,

regardless of the underlying network technology.

Essentially, we're going to take our physical networks

and we can completely virtualize them

or create a layer of abstraction

between the physical devices and the logical architecture

that they're going to represent.

Now with software-defined networking,

we can create complex networks very quickly and easily,

taking advantage of increased network size, expanded scope,

and the ability to rapidly change.

In fact, one of the great things

about software-defined networks

is that they can be changed automatically

by the network itself using automation and orchestration.

On the other hand,

all of this rapid change does make it harder for us

as humans to keep up with it

and fully understand the data flows on our network

at any given time.

Now, when it comes to software-defined networking,

there are several pieces that we need to consider

including the application layer, the control layer,

the infrastructure layer, and the management plane.

These three layers are going to allow the network

to be decoupled from the underlying hardware itself.

The application layer is going to focus

on the communication resource requests

or information about the network as a whole.

The control layer is then going to use that information

from the applications and decide how to route a data packet

on that network.

It also makes decisions about how traffic

should be prioritized,

how it should be secured,

and where it should be forwarded to.

The infrastructure layer is going to contain

the actual networking devices that receive the information

from the control layer about where to move the data,

and then it's going to perform those movements.

Now with software-defined networking,

these underlying infrastructure devices can be physical

or virtual devices depending on your network configuration

or a combination or hybrid of the two.

Remember, the whole concept with software-defined networking

is providing this layer of abstraction

between the real underlying devices

and the control and data flow

that's going to happen on that network.

Software-defined networking is also critical for use

in our successful cloud applications

because it gives us the scalability and elasticity

and agility that we need inside those networks.

Now, the fourth part of software-defined networks

is our management plane.

Now the management plane is going to be used

to monitor traffic conditions and the status of the network.

Basically, the management plane is going to allow us

to oversee the network and gain insight into its operations.

This will also allow us to make configuration changes,

to set things up the way we want,

and make sure they're working the way we need them to.

So for the exam,

I want you to remember that software-defined networking

or SDN is broken down into four parts.

The application layer, the control layer,

the infrastructure layer, and the management plane.
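The separation of concerns between those layers can be sketched in a few lines of code. To be clear, the class and method names below are purely illustrative — this is not any real SDN controller's API, just the shape of the idea: the control layer decides, and the infrastructure layer only forwards:

```python
# Hedged sketch of SDN's layered design: control decides, infrastructure acts.

class ControlLayer:
    """Decides how traffic should be forwarded, based on application requests."""
    def __init__(self):
        self.flow_table = {}
    def install_flow(self, destination, out_port):
        self.flow_table[destination] = out_port

class InfrastructureLayer:
    """Physical or virtual switch: forwards per the control layer's decisions."""
    def __init__(self, control):
        self.control = control
    def forward(self, destination):
        return self.control.flow_table.get(destination, "drop")

control = ControlLayer()
switch = InfrastructureLayer(control)

# The application layer requests reachability; the controller programs it:
control.install_flow("10.0.0.5", "port-2")

print(switch.forward("10.0.0.5"))   # port-2
print(switch.forward("10.9.9.9"))   # drop -- no flow installed
```

The management plane would sit alongside this, reading the flow table and switch status to give operators insight into what the network is doing.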

Next, let's discuss the spine and leaf architecture.

Now, the spine and leaf architecture

is an alternative type of network architecture

that's used specifically within the data center.

Now, when we use our three-tiered hierarchy

that we covered earlier,

we talked about the fact that we connected

the core layer down to the distribution layer

and then down to the access layer or edge layer

with all of our end point devices.

With a spine and leaf architecture,

instead we're going to be focused only on communication

within the data center itself, specifically

the server farm portions of it.

The spine and leaf architecture consists

of two switching layers known as a spine and a leaf.

Now the leaf layer

is going to consist of all the access switches

that aggregate traffic from the different servers

and then connect directly into the spine layer

or the network's core.

This spine contains the switches

that will interconnect all the leaf layer switches

into a full mesh topology.

This leads to increased performance and redundancy

for all the servers that are connected to the leaf layer

and in turn to the spine layer.
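The full-mesh wiring rule is simple enough to state as arithmetic: every leaf switch has one uplink to every spine switch, so any two servers are at most two switch hops apart (leaf, spine, leaf). A quick sketch, with the switch counts chosen arbitrarily for illustration:

```python
# Spine-and-leaf wiring rule: each leaf connects to every spine,
# forming a full mesh between the two switching layers.

def spine_leaf_links(num_spines, num_leaves):
    """Total uplinks needed: one from each leaf to each spine."""
    return num_spines * num_leaves

# A small fabric with 2 spines and 4 leaves needs 8 uplinks:
print(spine_leaf_links(2, 4))    # 8
# Any server-to-server path is leaf -> spine -> leaf, no matter
# which racks the two servers sit in.
```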

Now, as you may be wondering,

why did I move from the three-tiered networks

to software-defined networks before starting to discuss

the spine and leaf architecture?

Well, this is because many of our spine

and leaf architectures rely on software-defined networks

to operate.

By using a spine and leaf architecture,

we can actually get faster speeds and lower latency

than the traditional three-tiered hierarchy as well.

By using a spine and leaf architecture,

we can actually take shortcuts

in getting data from place to place,

and this all happens best

when we're using software-defined networks

in combination with a spine and leaf design.

If you're installing a spine and leaf architecture,

normally you're going to install two switches

into each server rack.

This is known as top of rack switching

because the switches are physically installed

at the very top of the server rack,

and each server inside that rack

will have a connection to each of the two switches

that are residing inside that rack.

Now, these switches are essentially going to be

the leafs inside our spine and leaf architecture.

And they're going to connect back to the spine

which is going to serve as the backbone

of our data center network.

This spine and leaf architecture can also be combined

with our standard three-tier hierarchy.

Now, when we do this,

all the servers in the data center will connect

to the leaf layers,

and the leaf layers will connect to the spine.

But the spine will then connect directly

to the core layer of the three-tiered model

whereas all the other non-data center devices

will connect to the access layer

and then up to the distribution or aggregation layers

before connecting into the core layer.

Next, we need to discuss traffic flows

in relation to our data centers.

Now there's two main types.

We have North-South and East-West.

These two terms are used to describe

the direction of traffic flow into or out of a data center.

Now, when we have North-South traffic,

this is going to refer to communication traffic

that enters or leaves the data center

from a system physically residing outside of the data center.

So when we talk specifically about North traffic,

this is traffic that is exiting your data center.

Southbound traffic on the other hand

is referring to traffic that is entering your data center.

In both cases, this data is exiting

or entering the data center going through a firewall

or other network infrastructure boundary device

such as a router.

Conversely, we also have East-West traffic.

Now, East-West traffic refers to data flow

within a data center.

For example, if we're using a spine and leaf architecture,

any data flow between various servers in the data center,

even if it goes between different leafs

would be considered East-West traffic

because that data is not leaving our data center.

Now due to the increased use of software-defined networking,

virtualization, private cloud, and converged networks,

more and more traffic that we're using

is being classified as East-West traffic,

because it's still virtually part of your data center.

So in summary,

if the data is entering the data center,

it's considered Southbound traffic.

If it's leaving the data center,

it's considered Northbound traffic.

If it's moving within the data center,

it's considered East-West traffic.
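That summary maps neatly onto a small classifier: check whether each end of a flow sits inside the data center's address space. The 10.10.0.0/16 range below is an assumed example, not a standard:

```python
# Classify flows as North, South, or East-West based on whether the source
# and destination addresses fall inside the data center's network.
import ipaddress

DC_NET = ipaddress.ip_network("10.10.0.0/16")   # assumed data center range

def classify(src, dst):
    src_in = ipaddress.ip_address(src) in DC_NET
    dst_in = ipaddress.ip_address(dst) in DC_NET
    if src_in and dst_in:
        return "East-West"       # traffic stays inside the data center
    if src_in:
        return "Northbound"      # traffic exiting the data center
    if dst_in:
        return "Southbound"      # traffic entering the data center
    return "transit"             # neither end is inside our data center

print(classify("10.10.1.5", "10.10.2.9"))   # East-West
print(classify("10.10.1.5", "8.8.8.8"))     # Northbound
print(classify("8.8.8.8", "10.10.1.5"))     # Southbound
```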

Finally, we need to talk about on-premise

versus hosted data centers.

This discussion is really going to come down

to where are you going to store your data?

Now, if you're using an on-premise data center,

you're using a traditional private data center infrastructure

where your organization has its own data center

that houses all of its servers and networking equipment

that it's going to use to be able to support its operations.

We call this on-premise because it's usually located

in the same building as your main office.

Sometimes though, you're going to have multiple offices

spread across a large geographic region or across the globe.

Normally, one of your offices is going to be considered

your headquarters and will host your on-premise data center

in this type of organization.

And then all the other offices around the globe,

they'll be called branch offices.

Now these branch offices

usually will not host their own servers,

but instead they will host them

in your on-premise data center at your headquarters.

If you have a fast enough connection

between each branch office and the headquarters,

you can host everything at your main data center.

But if you have slower connections,

you may need to host some services locally

inside the branch office too.

For example, when I was an IT director

for an organization spread across multiple countries,

we had our own on-premise data center at our headquarters

where we hosted most of the services

including our domain controllers, email, proxy servers,

and other services,

but we still maintained a local file server

in each branch office for them to use for their share drives

because otherwise they would be transferring gigabytes

and gigabytes of data to and from our data centers

over a small connection.

And this would really slow down the entire network.

Now, this solution for us worked really well

because the branch office locations

only needed to access their own files on their share drive

not any files from the main office.

Now, if they had instead needed a shared file server

with us at the head office,

we would have opted for something else

like a content engine or a caching engine

instead of using a local file server

to meet their needs.

Now, the other option though,

is to host your data using a co-located data center.

Now in a co-located data center arrangement,

your organization places their servers

and networking equipment in the data center environment

that's owned by another company.

Essentially, you're going to rent space in their data center

instead of having to build your own.

When I started my very first company back in 1999,

I was doing website design, website hosting,

and building network architectures

for small and medium sized businesses.

Because I was just starting out,

I couldn't afford the millions of dollars it would cost

to build my own data center.

So, since we wanted to host our web servers,

we decided to lease two racks

in a larger company's data center.

For a fixed price each month,

I was able to put whatever equipment I wanted

that would fit into those two racks

and they would provide me with a Cat5 connection

with a hundred megabits per second of bandwidth,

power, and backup batteries, and generators.

Beyond that it was up to me to drive to their facility,

install and maintain all my own servers

and networking equipment

and run all the configurations I needed.

Now, the last option we have

is to move everything into a cloud-based platform

like Amazon Web Services or Microsoft Azure.

In those cases we can't use co-location

because they're not going to allow us to put our own servers

into their facilities.

Instead we would have to migrate all of our data

out of our servers and our data centers

and put them into their servers and their data centers.

All right, as you can see there are lots of different ways

to architect your data centers and your networks.

Everything from a three-tiered traditional model

to a more modern software-defined network

that implements spine and leafs to moving completely

to the cloud.

It really depends on your business case

and your organization's needs.