Network Security

Network security.

In this section of the course

we're going to talk about network security.

So up to this point, we've really only focused

on making our networks functional and working

in order to support our business needs.

But we really haven't focused a lot on security just yet.

And now we really need to understand the basics of security,

the threat against our networks and how to manage the risk

so that we can effectively use our networks

to meet the operational needs of our business and its users.

So in this section of the course,

we're going to focus on the fundamentals of security,

including the CIA triad, threats,

vulnerabilities and exploits, risk management,

the principles of security, defense in depth

and our different methods of authentication.

We're going to be talking all about

domain 4 network security in this section of the course

and covering three different objectives,

namely 4.1, 4.3 and 4.5.

Objective 4.1 states that you must explain

common security concepts.

4.3 states that given a scenario,

you must apply network hardening techniques,

and 4.5 states that you must explain the importance

of physical security.

So let's get started learning all about network security

and how to start protecting our networks.

The CIA Triad.

In this lesson, we're going to talk about the CIA Triad.

Now this is important because by default,

our networks are fundamentally not secure.

When they were developed and all these different

networking standards were created many years ago,

security was not in the discussion.

Instead, over the years,

we've tried to bolt on and add on security as we go,

to make these networking protocols more secure.

But to begin with, a network is a very insecure place.

To make matters worse, networks are increasingly

becoming more connected with other networks.

If my company begins to do a partnership with your company,

we may decide to tie our networks together during that time.

And this introduces all sorts of risks and vulnerabilities

from your network into my network and vice versa.

So we have to be aware of this.

These risks don't just exist between business partners

or between different people on the internet,

but they also exist within our own organizations.

Your organization may have numerous subnetworks,

and when you tie them all together,

that's going to start bringing other risks into your networks.

We have to be careful in order to minimize and eliminate

these risks over time, and that's where network security

is going to come into play.

If we can understand the various threats that are facing our

networks, then we're going to be better able

to defend our networks against the onslaught

of cyber attacks that we are facing on a daily basis.

The way we look at security in our networks

is based on something called the CIA Triad.

Now this stands for Confidentiality,

Integrity and Availability.

Those are the three tenets that make up this Triad

that give us security.

Now when I can provide all three of these things,

I can secure the data inside the center of this triangle.

Now this sounds really, really easy, but in reality,

it's really, really hard.

We're going to talk more about these three components

of the CIA Triad in this lesson.

Our first one is C, for Confidentiality.

Now confidentiality is concerned

with keeping your data safe and private.

We want to use things like encryption and authentication

to verify that somebody has the need to know

and that they should be allowed to see that data.

By using encryption, we can ensure that that data

can only be read or decoded by the intended recipient,

and that person is going to have a secret encryption

or decryption key to be able to read it.

Now to do this, we can use either symmetric encryption

or asymmetric encryption.

Now if you're not familiar with those concepts,

we're going to cover them here for you

to bring you up to speed.

Symmetric encryption is the basis

of confidentiality.

Both the sender and the receiver are going to use

the exact same key,

which is why we call it symmetric encryption

or symmetric key cryptography.

Now we can go from plain text to ciphertext using one key,

and then the other person who wants to read it,

will use that same key to decrypt it from ciphertext

back into plain text,

so they can actually read it in normal language.
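
To make this idea concrete, here's a tiny Python sketch of symmetric encryption, using a simple XOR operation in place of a real cipher like DES or AES. This is a toy for illustration only, but it shows the key property: the exact same key both encrypts and decrypts.

```python
import secrets

# Toy illustration only: XOR with a random key is NOT a real cipher,
# but the core idea matches symmetric encryption: one shared key
# is used both to encrypt and to decrypt.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the matching key byte.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"hello world"
key = secrets.token_bytes(len(plaintext))  # the shared secret key

ciphertext = xor_cipher(plaintext, key)  # sender encrypts with the key
recovered = xor_cipher(ciphertext, key)  # receiver decrypts with the SAME key

assert recovered == plaintext
```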

Now there are several different types

of symmetric encryptions out there,

but in the case of the Network+ exam,

we only really need to focus on three of them.

This is DES, Triple DES and AES.

The first of these is DES.

Now DES is the Data Encryption Standard,

and it uses a 56-bit encryption key to secure its data,

using symmetric encryption.

DES was actually developed all the way back

in the mid 1970s.

Now that's about 40 years ago, so as you can imagine,

it's not really that secure,

but yet we still use it today in things like SNMP,

the Simple Network Management Protocol version 3.

DES is considered a weak encryption algorithm today,

but it is still better than having nothing,

which is what we had in earlier versions of SNMP.

Next, we have Triple DES,

which was created because DES was becoming insecure

due to its 56-bit key.

It was starting to become very easy to crack

as computers got smarter and faster.

So, as these computers got stronger,

they decided we need to have better encryption.

So they decided to take something, encrypt it with one key,

decrypt it, and then re-encrypt it again.

And each of those portions would use a different 56-bit key.

So we used DES to encrypt it,

then we decrypt it with a different key,

scrambling it further,

and then re-encrypt it a third time with a different key,

scrambling it even more.

This means that we get a total key strength

of about 168-bits,

because we're doing this encrypt-decrypt-encrypt cycle

using three different keys, and that makes our data totally

unreadable or unintelligible to anyone

who doesn't have all of those three keys.

Finally, we have AES, or the Advanced Encryption Standard.

This is the modern encryption system

that we use on pretty much everything these days.

This is the preferred symmetric key encryption

that we're going to use in any network today,

and it's also used by default in WPA2

for securing our wireless networks,

as well as BitLocker for encrypting our hard drives

to protect our data at rest.

AES, or the Advanced Encryption Standard,

is going to use three different sizes of encryption keys.

It can use a 128-bit key, a 192-bit key or a 256-bit key,

making it very secure.

Now when we look at symmetric encryption,

the sender and the receiver are both using the same key

to encrypt and decrypt it, which is great,

and it makes it extremely fast.

In fact, symmetric encryption is almost 1000 times faster

than using asymmetric encryption,

where we use two different keys.

Now we're going to cover asymmetric encryption

in just a second,

but first we need to talk about some problems

with symmetric encryption.

Now there is one large problem

with symmetric encryption though.

And this is that we both have to have the same key

to encrypt and decrypt that data.

Now if you and I have never met before,

how are we going to make sure we both have the same shared key?

Well, if I'm doing this on a large scale,

let's say I encrypted a folder on my Google Drive

that I wanted to share with all of my students,

I would have 300,000 people who need to access

that Google Drive,

and I have to get each of them a copy of that key.

That would be a really hard task to do securely.

Now, let's imagine that one of those students

shouldn't have access anymore.

Now I have to go and change that key and give it

to the other 299,999 people who still need access.

And I have to have a secure way to redistribute that new key

to all of those people who still need to access it.

You see, this is the biggest problem

we have with symmetric encryption.

It's key management.

Even though symmetric encryption is fast,

and even though it's secure,

we still have to figure out a way to get a secure

shared secret key to all the users who need to use it.

So, how are we going to solve that problem?

Well, enter the world of asymmetric encryption.

Now asymmetric encryption is used to give confidentiality

as well, but it does this by using two different keys,

one for the sender and one for the receiver.

Now, RSA is by far the most popular implementation of this,

and it uses what we call Public Key Infrastructure, or PKI.

Now PKI is where we encrypt the data between an email sender

and an email receiver,

or when you're going to an E-commerce site like Amazon,

you're going to be using PKI to do a key exchange.

This way, you can get a secure email exchange

or secure web browsing,

and it solves the problem of having to distribute those keys

ahead of time because we're using public keys.

So how does asymmetric encryption work?

Well, it works on the concept of having a Key Pair.

This key pair is made up of a Public and a Private key.

The Public Key, anybody can know,

and we can share with everyone in the entire world,

but the Private Key is something that only I should know

and nobody else should see it.

Let's see how this works in the real world.

Well, when we look at this, there's a sender and receiver

and they're both going to use different keys to encrypt

and decrypt the message.

In this case, if I'm the sender,

and I want to send something to the receiver,

I'm going to use the receiver's public key,

which everyone in the world can know because it's public.

Now, once I've encrypted that data using their public key,

the only key in the entire world that can open up that

message and decrypt it is going to be their private key.

And the only person with that private key is that receiver.

So we know it has confidentiality

'cause only they can decrypt this message.

This guarantees that we're going to have confidentiality

of the data because nobody can read it except them,

and once I encrypt that data, even I can't read it

because I don't have the receiver's private key.
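
Here's a minimal "textbook RSA" sketch in Python that shows this public/private key relationship. The tiny primes 61 and 53 are the classic classroom example and are trivially breakable; real RSA keys are 2048 bits or longer.

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent: public key is (e, n)
d = 2753                     # private exponent: private key is (d, n)
assert (e * d) % phi == 1    # d is the modular inverse of e mod phi

message = 65                 # must be a number smaller than n

# The sender encrypts with the receiver's PUBLIC key...
ciphertext = pow(message, e, n)

# ...and only the matching PRIVATE key can decrypt it.
decrypted = pow(ciphertext, d, n)

assert decrypted == message
```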

So how does this work if we're going to use

E-commerce for instance?

Well, I said before that we can use asymmetric keys

as a way to do a key exchange.

And we're going to be able to share a symmetric key

through that by creating an encrypted tunnel.

So, what we're going to do here in E-commerce

is use asymmetric encryption first and then turn over to symmetric.

In this case, if I wanted to be the client

and I wanted to go to Amazon to buy something,

I would do it this way.

First, I'm going to request the website by going to amazon.com,

and I'm going to use the secure version of the website

by going to https://amazon.com.

Now when I go to that server, the server is going to tell me

that it has a public key available.

That public key is going to have what we like to call

a Digital Certificate.

Now when you buy a VeriSign Certificate,

or some other trusted certificate for your server,

that trusted third party is going to hold a copy of your public key

for you, and any client who wants it can go to that

trusted third party and get a copy of your public key.

So, my web client is going to go to VeriSign,

it's going to grab Amazon's public key,

and then I'm going to create a random number

of my own choosing,

and I'm going to encrypt that random number using

the public key that Amazon has.

Now, I'm going to send that back over to the Amazon Server

because Amazon will be the only person who can unlock

that message and decrypt it using their private key,

which is stored securely on their server.

Now they can open that message and see that random number.

So, now that they have the random number I chose,

I know it because I chose it,

and they know it because they decrypted the message.

So, we've used asymmetric encryption to be able

to pass this random number,

which will now act as our symmetric key.

Now, we can both create a tunnel and that tunnel can be

secured by that symmetric key we just chose,

and we can use that for the entire session.

This becomes known as a session key,

which is simply the random number that I chose

and sent over to Amazon.

Now we can communicate securely for the rest of the session,

creating a nice secure encrypted tunnel

between me and the Amazon Server.
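
That handshake can be sketched roughly like this, reusing toy textbook-RSA numbers as a stand-in for the server's real certificate and key pair. All values here are illustrative assumptions; a real TLS handshake is far more involved.

```python
import secrets

# Server's toy RSA key pair (illustrative stand-in for a real certificate).
n, e, d = 3233, 17, 2753     # public key: (e, n); private key: (d, n)

# Client: pick a random number to act as the session key, then
# encrypt ("wrap") it with the server's public key.
session_key = secrets.randbelow(n - 2) + 2
wrapped_key = pow(session_key, e, n)

# Server: only the private key can unwrap the session key.
server_session_key = pow(wrapped_key, d, n)

# Both sides now hold the same symmetric session key, which was
# never sent across the network in the clear.
assert server_session_key == session_key
```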

Now, why would I do it this way?

Why wouldn't I just use an asymmetric key the entire time

and send data back and forth?

Well, the problem is, asymmetric is pretty slow.

In fact, symmetric key is a thousand times faster

than asymmetric.

So, we want to use symmetric to the maximum extent possible,

but there's some things that symmetric doesn't do well

like a key exchange.

So, for that reason, we're going to use asymmetric

to do the handshake and exchange a key,

and then we're going to switch over to symmetric

using that key we just exchanged to get those faster speeds

for all the rest of our data transfer.

All right, the next thing we need to talk about

is the I in the CIA Triad.

This is Integrity.

Now Integrity is all about making sure that the data you

have was not modified in storage or in transit.

This verifies that the traffic originated

from the source you thought it came from.

We're not going to be subject to an on-path

or man in the middle attack here,

because we want to make sure that that data has integrity.

Also, this will help us prevent forms of spoofing

like IP spoofing, ARP spoofing or MAC spoofing.

Integrity violations can also happen if there's a defacement

of your corporate webpage for example,

because somebody is changing the data on your server

and you didn't authorize it.

All these are examples of integrity violations.

Now another example of this would be

if somebody went to your E-commerce site,

and they'd like to buy a product that's supposed to sell

for 100 dollars, but they actually changed that

to 10 dollars by removing a zero.

That will be an integrity breach because they also modified

the electronically stored financial records on your server.

So, what if I decided to add a couple of zeros

to my bank account balance?

Guess what, that's also an integrity breach

because I'm changing the balance

and I'm not authorized to do that.

All these things are things that we don't want to happen

inside of our network.

So, how do we ensure that we have integrity?

Well, we're going to use Hashing.

Now Hashing is a process that runs a string of data

through an algorithm,

which then creates a hash, or hash digest.

This serves as a unique individual fingerprint for a file

or a data set.

All right, if you see here on the screen,

I have the word password written in three different ways.

I have it written as password,

I have it written as password with a capital P,

and I have it written as password with a capital P

and a period at the end.

Notice, those three hashes are vastly different,

even though I changed very little,

just adding a period or changing a letter

from lowercase to uppercase.

In this example, I'm using an MD5 hash for each one of these

and this algorithm ensures drastic changes to the output,

when a slight change is made to the input.

Now, by just adding that period or making a capital letter

instead of a lowercase letter,

we have this huge amount of change to the hash digest.

That's how we're using them as individual fingerprints.
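
You can reproduce this for yourself with Python's standard hashlib module. The point to notice is how completely the three digests differ, even though the inputs barely do:

```python
import hashlib

# Hash three nearly identical strings and compare their fingerprints.
def md5_hex(text: str) -> str:
    return hashlib.md5(text.encode()).hexdigest()

h1 = md5_hex("password")
h2 = md5_hex("Password")
h3 = md5_hex("Password.")

# Every MD5 digest is 128 bits, which prints as 32 hex characters...
assert len(h1) == len(h2) == len(h3) == 32
# ...but these three digests are all completely different.
assert len({h1, h2, h3}) == 3
```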

Once I run this data through the algorithm,

I get this hash, and the data and the hash

are then going to be sent over to the receiver.

Now when the receiver gets the data,

they're going to run it through the same hash on their own side

and compare that hash they get to the hash that I sent them.

If the two match,

that means there was integrity in the transmission.

If they don't match,

the receiver is going to reject that transmission and ask for it to be

sent again, because it assumes it was bad,

or there was some kind of an integrity breach.
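
Here's a rough sketch of that receive-side check using SHA-256 from Python's hashlib (the data values are made up for the example):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Sender side: compute the digest, then send the data AND the digest.
data = b"transfer 100 dollars to account 12345"
sent_digest = sha256_hex(data)

# Receiver side: recompute the digest and compare.
intact_ok = sha256_hex(data) == sent_digest          # matches: accept it

tampered = b"transfer 900 dollars to account 12345"
tampered_ok = sha256_hex(tampered) == sent_digest    # differs: reject, resend

assert intact_ok and not tampered_ok
```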

Now, there are lots of different

hashing algorithms out there.

The first one is MD5,

which is the one I used in my examples.

This is by far the oldest one; MD5 is a 128-bit hash.

And it works very well for the most part,

and you'll still find it in use today.

Now the biggest problem we have with MD5

is that the key space is rather small with only 128-bits,

which means, there's only so many combinations

and those would have to be reused over time

because we have an infinite number of words or phrases

that we could have.

For example, when we have the word Jason here,

and its MD5 hash shown, there are other words

that might also give us that same MD5 hash.

So, if I could find another word or phrase

that gives me the same hash,

this is known as a collision,

because two things share the same hash value.

We want to be able to minimize the collisions.

And the best way to do that is by increasing our key space

from 128-bits to something larger.

So, computer scientists came up with a new algorithm

known as SHA-1.

Now SHA-1 is the Secure Hash Algorithm version one,

and it uses a 160-bit hash, instead of that

128-bit hash digest that we use with MD5.

This way we would have less collisions.

Now over time though,

we found that collisions were still occurring.

There are fewer than with MD5,

but still a good amount of collisions.

So, they increased the key size again,

and they came up with SHA-256.

Now SHA-256 is a 256-bit hash digest,

and it gives us a lot more options and choices

and less overlap, and therefore fewer collisions.

SHA-256 has a much longer hash digest than an MD5 hash.

In fact, it's double the length because it's 256-bits

instead of 128-bits.

Now this doesn't mean there's double the amount

of possible combinations.

Instead, we have exponentially more combinations

because we're going from 2 to the 128th power

to 2 to the 256th power,

which is a much, much bigger number.
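
Python's arbitrary-precision integers make this easy to verify for yourself:

```python
# Doubling the digest length from 128 bits to 256 bits does not
# double the number of possible digests; it squares it.
md5_space = 2 ** 128
sha256_space = 2 ** 256

assert sha256_space == md5_space ** 2            # squared, not doubled
assert sha256_space // md5_space == 2 ** 128     # 2^128 times as many values
```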

The final algorithm we need to talk about

when we talk about hashes is CRAM-MD5.

This is the Challenge-Response Authentication Mechanism

using Message Digest 5 (MD5).

And this is a common variant of MD5,

and it's usually used in email systems for authentication.

All right, let's move to our third component

of the CIA Triad.

This is the A, for Availability.

Now Availability is going to measure the accessibility

of that data.

Can I get to the data when I want to and where I want to?

That's what we're asking here.

This is increased by designing redundant networks,

by having multiple components doing the same functions.

We're going to talk a lot more about redundant networks

and talk about high availability and redundancy

in a separate lesson, as we start digging more

into how we can create good availability

within our networks.

But for now,

let's talk about how availability could be compromised.

Well, there's lots of different things that you can do

to hurt your availability.

You could crash a router or switch

by sending improperly formatted data to it,

like the old Ping of Death attack.

And that would actually turn off your router or switch

and make the entire network go down.

Therefore, your availability would fail as well.

Now you can also flood a network with so much traffic,

even if it's all legitimate requests, that they simply

can't be processed in time.

This is known as a Denial of Service,

or Distributed Denial of Service attack,

and this can make your network fail as well.

This can also happen when you have a good problem.

For instance, if my site became wildly popular overnight,

and I had a million people trying to access it all

at the same time, that could crush my website

because I became too popular, too fast.

This would also be considered a Denial of Service,

even though it was more of a self-imposed one

by becoming too popular.

Now, you could also have a power outage in your area,

and that could cause your network to fail,

or maybe there's a flood,

and your server room is now underwater.

And guess what?

That's going to take a hit to your availability as well.

All these are things that can really hurt you.

Maybe you have some really old routers and switches

that are out of warranty, and one of them dies from old age.

Well, that's going to hurt the availability of your network

because the network is going to go down

when that core switch goes down.

I think you get the idea, but we will dive deeper into this

later on as we cover availability more in depth.

Threats and vulnerabilities.

In this lesson, we're going to talk

about threats and vulnerabilities.

After all, where threats and vulnerabilities intersect

is where risk exists within our enterprise networks.

So if we can understand the different threats

and vulnerabilities we may have in our networks,

we can then add protection mechanisms

to help mitigate that risk.

Often, I hear people use the words threats

and vulnerability interchangeably,

but they're technically not the same thing.

A threat is a person or an event that has the potential

for impacting a valuable resource in a negative manner.

So a hacker will be a threat

since they want to steal your data, but so is a hurricane,

because it could cause a power outage

that takes down your valuable network.

Now, a vulnerability, on the other hand,

is a quality or characteristic within a given resource

or its environment that might allow the threat

to be realized.

Now essentially, a vulnerability is any weakness

in the system design, implementation,

source code or lack of preventive mechanisms

that could allow a threat to be realized.

So for example, if you're not running the latest version

of Microsoft Windows on your servers,

that is considered a vulnerability.

If you have a battery backup for your network

that only lasts 15 minutes

and you don't have any backup generators,

that again, is a vulnerability,

because if you lose power,

after 15 minutes, the entire network is going to crash on you.

Now, it's only going to be when we combine a threat

with a vulnerability that we actually get risk

that's being realized and now something bad will occur.

This is an important distinction,

because if I have a device, like a switch or a router,

and it has a vulnerability, but there's no threats

that would ever go after that vulnerability,

then guess what?

There's not really a risk there.

On the other hand, if I have an asset

that doesn't have any vulnerabilities,

then it doesn't matter if I have a dedicated threat

trying to attack me, because they won't ever be able

to cause any harm to my network,

because there are absolutely no vulnerabilities

for them to exploit.

Now, in the real world,

there's almost always some kind of threat

and some kind of vulnerability out there

that we're going to be facing,

so we almost always have risk.

The amount of risk though,

is really going to be determined by how big of a threat

or how major of a vulnerability we actually have.
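
That relationship can be boiled down to a one-line sketch. This is a deliberate oversimplification of real risk assessment, which also weighs likelihood and impact, but it captures the intersection described above:

```python
# Risk exists only where a threat and a vulnerability intersect.
def risk_exists(has_threat: bool, has_vulnerability: bool) -> bool:
    return has_threat and has_vulnerability

assert risk_exists(True, True)           # threat meets vulnerability: risk
assert not risk_exists(True, False)      # no vulnerability to exploit
assert not risk_exists(False, True)      # no threat to do the exploiting
```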

So let's dive into threats and vulnerabilities

a bit deeper here.

First, let's cover threats.

Threats come in two basic varieties.

We have internal threats and external threats.

Now, an internal threat is any threat

that originates from within the organization itself.

Normally, these are conducted by a current

or former employee, a contractor

or a business partner who's going to cause damage

to your systems or steal your data.

When we're dealing with internal threats,

these can be caused by people

who intend to do us harm or those who do it accidentally.

For example, if you have an insider threat,

this could be a person who uses their authorized access

to get onto your network and harm your organization.

Maybe you had an employee who just got passed over

for a promotion and now they're pretty upset.

So they decide to download

your entire client contact database

to their thumb drive and take it home with them,

that way, if they decide to quit next week,

they can start calling up all of your clients

and bring them over to a new company

that will end up hiring them.

On the other hand, you may also have an end user

who unknowingly causes damage to your systems,

for example, we may have a sales person

who opens up an email that contains malware

and this starts infecting your systems and your servers.

Now, that person, they weren't being malicious,

they didn't do this on purpose.

They just opened an email and they didn't think

that it would cause any problems.

This is an unwitting or unknowing internal threat.

Now, in addition to internal threats,

we also have external threats.

External threats could be people, like a hacker,

or it can be an event or environmental condition,

for example, if I was going to have a wildfire near my office,

that would be an environmental threat against my facility

and the network that it contains,

or maybe I'm working as the IT director

for a large oil company

and I have a lot of angry hacktivists

who want to take down my network,

because they don't agree with our company's policies

concerning drilling within the Alaskan wilderness.

This would also be considered an external threat,

because it's something external to my organization,

these hacktivists or hackers who are trying to break in.

Now next, we need to talk a little more

about the types of vulnerabilities

that we can have in our organizations and its networks.

Remember, a vulnerability is any weakness

in the system design, implementation,

software code or a lack of preventive measures

in your systems.

Now, these can take the form of environmental,

physical, operational or technical vulnerabilities.

When we talk about environmental vulnerabilities,

these are focused on undesirable conditions or weaknesses

that are in the general area surrounding your building

where you're going to operate your networks.

So for example, my company is based out of Puerto Rico,

so we have the ever-present threat of hurricanes

and earthquakes that could exploit a vulnerability

in how we provide services to our office,

including our power, water, heating and air-conditioning.

So to mitigate this, we actually have four sources of power

at our facility, including solar power,

a full building battery back-up,

a diesel generator and of course,

our local power grid from the electric company.

Physical vulnerabilities are focused

on undesirable conditions or weaknesses

in the buildings where you operate your networks.

Now, some examples of physical vulnerabilities,

might be things like unlocked doors,

unmonitored hallways, misconfigured sprinkler systems

or cables that are running across the floor.

These things could lead to a threat actor

being able to get into your building

and stealing all your data or a fire could break out

and cause massive amounts of damage,

or maybe somebody will trip over a misplaced cable

and that will cause damage to themselves or to your network.

Operational vulnerabilities are focused on how the network

and its systems are being run

from a policy and procedure perspective.

These vulnerabilities or weaknesses usually result

from either poorly worded or unenforceable policies

within your organization.

This can allow a threat actor to exploit weaknesses

in these policies to their own advantage.

Now, technical vulnerabilities

are system-specific conditions

that create a weakness in our security.

This includes misconfigurations,

outdated hardware, malicious software

and other technical weaknesses

in the implementation or operation of our networks

and its devices.

When it comes to technical vulnerabilities

that focus on network or system exploitation,

we normally are going to classify these as a CVE

or a zero-day vulnerability.

Now, CVE, or the Common Vulnerabilities and Exposures,

is going to be a list of publicly disclosed

computer security weaknesses or flaws.

Basically, it's an official list

of all the known technical vulnerabilities

for each and every piece of software

that's publicly available.

When you look up a CVE, for example,

you might see something like CVE-2017-0144

and then you can look at that

and read all about that vulnerability,

what it is, what software it affects

and a list of references, so you can learn more about it.

Now, in the case of CVE-2017-0144,

this was the vulnerability assigned sequence number 0144

in the year 2017.
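
The identifier format itself is easy to pull apart. This small hypothetical helper just splits a CVE ID into its year and sequence number:

```python
# Split a "CVE-<year>-<sequence>" identifier into its parts.
def parse_cve(cve_id: str) -> tuple:
    prefix, year, number = cve_id.split("-")
    assert prefix == "CVE"
    return int(year), int(number)

year, number = parse_cve("CVE-2017-0144")
assert (year, number) == (2017, 144)
```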

This particular CVE was actually a really serious one

and it was one that was exploited by the WannaCry ransomware

that spread rapidly across the globe.

Due to its widespread exploitation,

it also received a codename

and became known as EternalBlue.

Now, EternalBlue affected Windows Vista,

Windows 7, Windows 8, Windows 10

on desktop and laptop computers

and then also, Windows Server 2008,

2012 and 2016 on servers.

This vulnerability allowed an attacker

to be able to remotely execute arbitrary code

via specially crafted packets sent over the network,

making it a remote code execution vulnerability.

Now, for the Network+ exam,

you don't need to know specifics of the EternalBlue exploit,

but you should be aware of what a CVE is

and the kind of information it can give you

as a network administrator.

So while CVEs provide us with a list

of all the known vulnerabilities,

there's also a lot of unknown vulnerabilities

that may be out there too.

These are known as zero-day vulnerabilities.

Now, a zero-day vulnerability is any weakness

in the system design, implementation,

software code or a lack of preventive mechanisms

within a given system or network

that is unknown at the time of publication.

Now, basically a zero-day vulnerability

is a new vulnerability that the public is not yet aware of.

Once cybersecurity professionals become aware

of this vulnerability, they're going to report it

and a CVE will be created for that zero-day,

making it no longer a zero-day

and now it's going to be called a CVE.

So remember, CVEs are a list of known vulnerabilities,

while a zero-day is a brand new vulnerability

that no one else has discovered or reported yet.

Finally, let's talk about how a vulnerability is attacked.

After all, a vulnerability is just a weakness,

but until it's attacked or exploited,

as we like to call it in the cybersecurity world,

it's just sitting there

and it's really not hurting anyone.

When you take advantage of a vulnerability

as a threat actor, this is called

exploiting the vulnerability.

We do this using an exploit.

Now, an exploit is a piece of software code

that takes advantage of a security flaw

or vulnerability within a system or network.

Because CVEs are known vulnerabilities,

most of them have a matching exploit.

This is because when a new vulnerability

is discovered and reported,

a patch is created by the software's creators.

For example, if a new zero-day vulnerability was discovered

in Windows 10, then Microsoft will create a software patch

to fix this vulnerability and they will release

that software patch and a CVE to the public

so we can all know about it.

Now at the same time, attackers are going to reverse engineer

that software patch and research the CVE

to determine what vulnerability

that patch is trying to solve.

Then they can create some code

that will take advantage of that vulnerability

if a system isn't properly patched and updated.

Since many people don't patch their systems right away

using the latest security patches,

this means, there's a period of time

where a lot of Windows systems may still be vulnerable

and so, attackers can actually use this exploit

to attack those vulnerable systems

for weeks or months after the release of a patch.

So now the attackers have a working exploit

for this vulnerability and they can find

any unpatched Windows 10 machines out there

and exploit them by running their new software code

against those unpatched systems.

This exploit code is often incorporated

into malware and this allows it to propagate

and run intricate scripts against vulnerable computers,

therefore, increasing the damage

that these attackers can do with this exploit.

To prevent this, you need to ensure

your systems remain up to date

and patched with the latest security releases

and ensure that your systems

have an up-to-date anti-malware

or anti-virus software installed to protect them

from these known vulnerabilities.
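
This patch-and-exploit race can be illustrated with a toy version check. This is only a sketch, not a real vulnerability scanner: the host inventory, version numbers, and the single "patched-in" version are all invented for the example.

```python
# Toy illustration: flag hosts whose installed software version is
# older than the version that fixed a known (hypothetical) CVE.

def parse_version(v):
    """Turn a dotted version string like '10.0.19041' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, patched_in):
    """A host is vulnerable if it runs a version older than the fix."""
    return parse_version(installed) < parse_version(patched_in)

# Hypothetical CVE fixed in version 10.0.19043
PATCHED_IN = "10.0.19043"

inventory = {
    "workstation-01": "10.0.19041",  # not yet updated -> still exploitable
    "workstation-02": "10.0.19044",  # patched
}

vulnerable = [h for h, v in inventory.items() if is_vulnerable(v, PATCHED_IN)]
print(vulnerable)  # ['workstation-01']
```

The unpatched host here is exactly the kind of machine an attacker's reverse-engineered exploit would target in the weeks after a patch is released.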

Risk Management.

In this lesson we're going to talk about risk management.

In every network we have threats and vulnerabilities

and when these two are combined,

this is where risk is going to exist within our networks.

Let's consider a simple example

you probably deal with every day in your own life.

When you're ready to go to bed at night

do you lock the doors to your house?

Well this little question each and every evening

is going to be answered by your actions

after you conduct a quick risk assessment

as part of your ability to manage the risk to your home,

its contents and your family.

First you consider the threats.

There might be a burglar who wants to get inside

and steal all your valuables,

or maybe you live in an area that's pretty windy,

and this could result in the door being pushed open at night

and the elements like the wind and the rain getting inside

and ruining all your stuff.

Next, you're going to consider the vulnerabilities

that could exist.

In this case, maybe you have a front door and a back door

and a garage door that leads into your home.

Now each of these represents a vulnerability

if you don't lock it before you go to bed at night.

Now should you lock these doors?

Well, that depends on your assessment of the situation.

If I was worried about a burglar,

I probably would lock all three of these doors.

But on the other hand

if I'm trying to mitigate the threat

of wind opening my door

I may only need to lock the front door and the back door

because the door leading into my garage

is already protected from the wind

because I have a large garage door there as well

that's already in the closed position

and it blocks the wind from entering my home.

Now I know this is a pretty silly example but at its core,

this is the basics of risk management.

Risk management is the identification,

evaluation, and prioritization of risks

followed by the allocation of resources

to minimize, monitor and control the probability

or impact of a vulnerability being exploited by a threat.

In order to conduct risk management

we often conduct risk assessments.

Now a risk assessment is a process

to identify potential hazards

and analyze what could happen if a hazard occurs.

Simply put, a risk assessment is used

to determine possible incidents,

their likelihood and consequences,

and your organization's tolerance for such events occurring.
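
One common way to prioritize during a risk assessment is to score each risk as likelihood times impact and rank the results. This is a minimal sketch using the locked-doors example; the risks and the 1-to-5 scales are invented for illustration.

```python
# Score each risk as likelihood x impact (both on a 1-5 scale),
# then rank highest first so resources go to the biggest risks.

risks = [
    {"name": "burglar forces unlocked door", "likelihood": 2, "impact": 5},
    {"name": "wind blows door open",         "likelihood": 4, "impact": 2},
    {"name": "garage door left open",        "likelihood": 1, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
print([(r["name"], r["score"]) for r in prioritized])
```

The ranking, not the raw numbers, is the point: it tells you which doors are worth locking first.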

To conduct risk management within our organizations

we usually use two different types of risk assessments;

these are known as security risk assessments

and business risk assessments.

Now a security risk assessment is used to identify,

assess, and implement key security controls

within an application system or network.

Security risk assessments may be conducted

as a threat assessment, a vulnerability assessment,

a penetration test, or a posture assessment.

Now in a threat assessment

we're going to focus on the identification

of the different threats that may wish to attack

or cause harm to our systems or networks.

A common tool that we use to do this is known

as the MITRE ATT&CK framework.

Now the MITRE ATT&CK framework

is a globally accessible knowledge base

of adversary tactics and techniques

based on real-world observations from the field

and it lets an administrator or analyst

walk through the typical methodologies

that are used by different threats to harm your networks

and identify where you should focus your resources

to better protect yourself.

Now a vulnerability assessment on the other hand

is focused on identifying, quantifying,

and prioritizing the risks and vulnerabilities

in a system or network.

To conduct a vulnerability assessment

a technician will normally use a vulnerability scanner,

something like Nessus or QualysGuard or OpenVAS

to enumerate each system or machine on that network

and identify the versions of every piece of hardware

and software that's being used.

And then it can create a summarized report of which systems

have open vulnerabilities

and which ones need to be remediated.

Now the big difference between these two

is whether you're looking at the target network

through the eyes of the attacker or the eyes of a defender.

Remember, a threat is controlled by the attacker or an outside event.

They get to determine how and when it occurs.

Now a vulnerability on the other hand

is usually going to be within your control.

After all if you have an unpatched network router,

that's a vulnerability,

but you could remove that vulnerability

by patching the system or replacing that router.

Sure there are some vulnerabilities

that you can't remove completely

like the vulnerability of the network losing power,

but you can add additional controls

to help mitigate it and reduce that risk.

You can do this by adding battery backups

or diesel generators, for example,

to provide secondary and tertiary backup power

to your systems and your networks.

In some security risk assessments

they'll also combine a threat assessment

and a vulnerability assessment into one single threat

and vulnerability assessment

to provide a more holistic perspective of your network

and its security.

The third kind of security risk assessment we have

is a penetration test.

Now a penetration test is an attempt

to evaluate the security of an IT infrastructure

by safely trying to exploit vulnerabilities

within the system or networks.

Penetration tests are also useful

in validating the effectiveness

of your defensive mechanisms,

as well as the adherence of your security policies

by your end users.

Now a penetration test is a technical assessment

where ethical hackers within your organization

have permission to attempt to break into the network

to validate your security controls

and identify where improvements could be made.

Now the fourth type of security risk assessment we have

is known as a posture assessment.

A posture assessment is used to assess

your organization's attack surface

in order for you to better understand

your cyber risk posture and exposure to threats

that are caused by misconfigurations and patching delays.

A posture assessment will often include four main steps:

First define your mission-critical components,

second, identify strengths, weaknesses,

and security issues.

Third, strengthen your position

and fourth stay in control.

By conducting a posture assessment,

you will ensure you're always up to date

on the status of your system security

and always understand the health of your systems.

Often you'll combine this posture assessment

with a threat and vulnerability assessment as well.

Now, in addition to conducting security risk assessments

your organization may also conduct

business risk assessments.

Now a business risk assessment

is the process of identifying,

understanding and evaluating potential hazards

in the workplace concerning the day-to-day

running of your company.

Now there are two main types of business risk assessments,

process assessments, and vendor assessments.

A process assessment

is the disciplined examination of the processes

used by your organization against a set of criteria

to determine the capability of these processes

to perform within the quality, cost, and schedule goals.

Basically that's a lot of words to say

this method is used to determine

if you're doing the right things

and if you're doing those things the correct way.

Now maybe you have a process in your organization

for the creation of a new user account on the network.

This process may have eight steps to creating the account

and then the entire process

should take less than one work day to complete

from the time the submitted request is received.

This is the basis of your process.

So during your process assessment

the auditor might watch you perform this process

and they'll see all the steps you do

to make sure they make sense

and to make sure you're doing them properly

and within the proper timeframes

to ensure it's all being done

within the requirements you've set.

Now after the assessment

there may be some recommendations

on how you could speed up the process

or take some steps out, or refine it

to get a higher level of quality

within some part of the process.

All of these are things that can come out

of a process assessment.

Now the second type of business risk assessment

is known as a vendor assessment.

A vendor assessment is defined

as the assessment or evaluation of a prospective vendor

to determine if they can effectively meet the obligations

and the needs of the business regarding the product.

Now by conducting a vendor assessment

we can assess our suppliers' and contractors' ability

to ensure they're implementing and maintaining

the appropriate security controls.

This is also used to mitigate the threat

of a supply chain vulnerability.

For example, a few years ago,

there was a big issue with counterfeit Cisco devices,

these were routers and switches

that were being sold to other businesses.

Now these devices were being sold by third-party vendors,

not Cisco directly.

And these vendors themselves didn't even realize

that they were selling counterfeit devices.

The problem is that this introduced new vulnerabilities

into the business networks all over the world

because these counterfeit Cisco devices

had malware installed in their firmware,

effectively giving the threat actors a backdoor

into various business networks all over the globe.

For reasons such as this,

it is really important to vet your vendors

and your suppliers to make sure they understand

what their supply chain looks like

and this way you can minimize your risk

of supply chain issues.

Also, you want to make sure that they won't fail

to deliver on their contractual obligations

and doing a vendor assessment can help with that too.

Security principles.

In this lesson, we're going to discuss

some of the various security principles

that are crucial to securing our networks and systems,

including least privilege, access controls, and zero-trust.

The first foundational security principle

we need to discuss is least privilege.

Now, the principle of least privilege

is pretty straightforward.

It states that whenever the user's performing a job function

or an administrative task, they should do that

while using the lowest level of permissions or privileges

needed in order for them to complete their job.

So, as a network or system administrator,

if you can perform a function

as a regular user, then you should.

Now, if you need to do something as an administrator,

then you need to log in as an administrator.

Whenever I log into my computer to check my email,

for example, I'm going to do that using my user account,

which has no administrative permissions

but if I need to install a piece of software

or change some kind of configuration setting,

then I'm going to log in as the root or administrative user,

so that I have all the accesses and permissions that I need

to make those changes.

This principle of least privilege

extends past the different types of accounts

that we give our users.

It also applies to things

like designing our systems and networks

because we need to design them

with the concept of least privilege in mind as well.

For example, if you're installing

some new Internet of Things devices,

like LED lights that are going to connect to your network

and allow them to be remotely controlled through automation,

you also need to make sure you're using the principle

of least privilege here as well.

Should these IoT devices have access to every system

or service on your network?

Of course not.

Instead, these devices likely

only need to have one or two ports open,

that way they can communicate

and they may need to have access to the internet

to receive firmware updates.

And that's okay too

but they shouldn't have access to any of your file servers,

your web servers, or your printers.

So, using the principle of least privilege,

we can isolate these devices

into their own screened subnet or VLAN

and then we can tightly control access

into and out of that VLAN

to ensure that only those users and applications

that have an absolute need

to communicate with these IoT devices, can.
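
Least privilege for that IoT segment can be sketched as a default-deny allowlist. This is not real firewall syntax; the zone names, hosts, and ports are hypothetical, and the idea is simply that anything not explicitly permitted is denied.

```python
# Default-deny allowlist: the IoT VLAN may only reach the update
# server for firmware, and only the automation controller may reach
# the IoT VLAN. Every other flow is implicitly denied.

ALLOWED = {
    ("iot-vlan", "update-server", 443),    # firmware updates over HTTPS
    ("automation-app", "iot-vlan", 8883),  # control traffic (MQTT over TLS)
}

def permitted(src, dst, port):
    """Only explicitly allowed (src, dst, port) flows pass."""
    return (src, dst, port) in ALLOWED

print(permitted("iot-vlan", "update-server", 443))  # True
print(permitted("iot-vlan", "file-server", 445))    # False: not on the list
```

Adding a new device type means adding one explicit rule, which keeps the privilege set as small as the job requires.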

Now, the second foundational security principle

we need to cover is known as role-based access.

There are several methods

of conducting access control in the network,

such as DAC, MAC, and RBAC.

DAC, or discretionary access control,

is an access control method

where access is determined by the owner of that resource.

This is discretionary.

So for every file or folder on your ShareDrive,

the owner who created it will assign the permission levels

to other users on the system.

The owner is going to be the one who decides who can read,

write, and run these different types of files

on that file server.

Discretionary access control is commonly used

because you have very granular control

to be able to decide who has access to things

that a user has created.

And because the person who created it makes those decisions,

they are the most knowledgeable on this area.

There are two big challenges though, when you use DAC.

The first is that every object on the system

has to have an owner, because if there's no owner,

then nobody would know who has the right permissions to it

because the owner is the one who sets those permissions.

The second problem is that you need to make sure

that each owner determines the access rights and permissions

for each of those objects.

So, if I'm the owner of a file

and I never set permissions on it,

this means that nobody's going to have access

to be able to read that file.

And if I set those permissions too tightly,

then I would be keeping people out

who may need to have access

or if I set them up too loosely,

everyone can now access it

and read the contents of that file,

eliminating my confidentiality and security.

So, the owner here really has a lot of control.

In corporate or enterprise systems,

this can be really dangerous and you have to think about it

if you really want to be using a discretionary model

or one of our other choices.
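
The owner-controlled nature of DAC can be shown in a few lines. This is a simplified sketch, not how any particular operating system implements it; the file and user names are made up.

```python
# Discretionary access control in miniature: every object has an
# owner, and only that owner may grant permissions on it.

class File:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {}  # user -> set of permissions, set by the owner

    def grant(self, granting_user, user, perm):
        if granting_user != self.owner:
            raise PermissionError("only the owner may set permissions")
        self.acl.setdefault(user, set()).add(perm)

    def can(self, user, perm):
        # The owner always has access; everyone else needs a grant.
        return user == self.owner or perm in self.acl.get(user, set())

report = File(owner="alice")         # alice created it, so she owns it
report.grant("alice", "bob", "read")

print(report.can("bob", "read"))     # True
print(report.can("bob", "write"))    # False: alice never granted write
```

Notice both DAC problems from above: if alice never calls `grant`, nobody else can read the file, and if she grants too broadly, confidentiality is gone.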

Now, our second model is what's known as MAC

or mandatory access control.

MAC goes to the other extreme.

With MAC, or mandatory access control,

we have an access control policy where the computer system

gets to decide who has access to what objects.

So, how does the computer do this?

Well, with discretionary access control,

you, the owner, were able to choose who got permissions

but in MAC, the computer's going to do that for you

and it does this through data labels.

In MAC, data labels are going to create this trust level

for all subjects and all objects,

so that every person out there gets a label

with their associated trust level.

So a person might have a high trust level, a medium trust level,

or a low trust level assigned to them,

and each data object gets a label as well

as either high, medium, or low trust.

And then we just compare the labels

to determine if somebody should be granted or denied access

to a particular object.

So, how does this really work in the real world?

Well, the most common use of mandatory access control

is in the military

and they'd use this with their high security systems.

So, if you've ever seen a war movie

at any time in your life,

you've probably seen the words "Top Secret"

on some kind of document.

Well, there's really four levels of documentation

inside the military context.

They have unclassified, confidential,

secret, and top secret levels.

Now, each person in the military

is also assigned a clearance level

that tells them what they're allowed to see.

So, maybe the private

only gets to see confidential information

but the Colonel gets to see top secret information

and the captain only gets to see secret information.

And so on.

Now, each person here has a label associated with them.

This is their clearance

and this also gets associated with their account.

Now, all of the documents are also going to be labeled

with whatever they're classified as.

So they're either unclassified, confidential,

secret, or top secret.

Now, when a person wants to read a document,

the label of their user account

is going to be checked against the document's label.

If your label

is at or above that document's classification level,

you're going to be able to read it.

If not, you're going to be denied access.

Now, this makes a lot of sense

because if you have a top secret clearance,

you should be able to read top secret documents

but you should also be able to read secret, confidential,

and unclassified documents

because top secret is a higher level than these other three.

But if you have a confidential clearance,

you would be denied access to secret and top secret data

because those are higher classifications

than the clearance you hold.

Now in a MAC system, they're going to add

another piece of information as well.

This is that if you want to access something,

you need to not just meet the minimum levels for it

but you also need to have what's known as a need-to-know.

So, for example, let's say we have two military members.

We have an Army person and a Navy person,

and they both have a top secret clearance.

Now, I have this particular document

that's about a Navy operation.

In this case, the Army guy

doesn't need to know about what's going on there

because he's not in the Navy

and doesn't need to have access to this information.

He doesn't have a need-to-know.

So even though he has the clearance level of top secret,

he doesn't have a need-to-know

and therefore, he shouldn't have access.

Now, with MAC, these labels can be very in-depth

and they can get very, very complicated.

This is why MAC is not used in most enterprise networks

and is reserved only for highly classified information

within military systems.
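
The label comparison plus need-to-know check can be sketched like this. It's a toy model: the levels mirror the military example above, but the compartment names are invented.

```python
# MAC sketch: the system, not the owner, decides. Access requires a
# clearance at or above the document's classification AND membership
# in the document's compartment (the need-to-know check).

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows(user_clearance, user_compartments, doc_level, doc_compartment):
    high_enough = LEVELS[user_clearance] >= LEVELS[doc_level]
    need_to_know = doc_compartment in user_compartments
    return high_enough and need_to_know

# Army member with top secret clearance, but no Navy need-to-know:
print(mac_allows("top secret", {"army-ops"}, "secret", "navy-ops"))  # False
# Navy member with the same clearance and the right compartment:
print(mac_allows("top secret", {"navy-ops"}, "secret", "navy-ops"))  # True
```

The first call fails even with a top secret clearance, which is exactly the Army/Navy example: clearance alone is never enough under MAC.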

Now the third type of access we have

is known as RBAC or role-based access control.

Now, role-based access control is an access model

that's controlled by the system like MAC

but instead of using labels,

it's going to focus on a set of permissions

instead of an individual's permissions.

Now, we don't have to actually label

each individual person on every single file,

instead, we're going to assign roles to these files

and then we're going to assign roles to these people.

The way I like to think about this

is that we create a role for each job function,

and then we assign people to those roles,

which gives them their permissions to each object.

For example, let's say you go into your company

and there's a sales department, a human resource department,

and an IT department.

Now, we have these three departments sitting here.

Do the salespeople need to have access

to the human resources people's files?

Probably not.

Now, does the human resource people

need to have access to the salespeople's files?

Probably not again, right?

Does the IT person need to have access to everybody's file?

Probably, if they're going to be doing all the data backups

and maintenance and things like that, right?

So, essentially, what we're going to do

is create these different groups

and then these groups are going to get a set of permissions

and those are going to be applied

to the different files and folders.

When we do this, we add or remove people into the roles,

instead of onto those particular files.

By using role-based access controls,

we are going to be using a best practice

inside of the cybersecurity industry.

Now, if I have a file on the ShareDrive

and you see that Jason was added to it individually,

you would flag this as a bad practice

because we're not using role-based access

and instead, we're using discretionary access control.

Now in role-based access control,

we're going to have an owners group

instead of an individual person.

We're also going to have an admin group

and we're going to have an IT group,

and we're going to have a sales group.

And we're going to put all the people

who have the same type of job and the same functions

into the same group.

This makes it much easier to control our permissions

based on the concept of least privilege

because we're relating it to the permissions required

to actually do your job.

Always ask yourself, what is the role of the person

that is going to be using this file?

Based on that, assign them to the right group

with the right permissions

needed to perform those job functions.

Let me give you a great example of this.

There is a role-based group called power users

inside of a Windows system.

Now, power users are people who aren't a normal user

but they're also not a normal administrator either,

they're somewhere in the middle.

For example, an administrator might have full access

to do whatever they want on a system,

whereas a user might only be able to operate

the programs that currently exist

but they can't make configuration changes,

like changing the time or adding a printer.

Well, a power user has a little bit more permissions

than a regular user

and they can do things like changing the time

or adding a printer to the network

but they don't have full administrative rights,

like an administrator would.

So, we could put different users

into that power users group

and they'd be able to inherit those permissions

and be able to do just those functions that are necessary

to add things like printers

or make minor system changes like the time.
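
The role-based model above can be sketched in a few lines: permissions attach to roles, users attach to roles, and nothing is ever granted to an individual directly. The role names and permission strings here are invented for the example.

```python
# RBAC sketch: to change what someone can do, you move them between
# roles; you never touch the files themselves.

ROLE_PERMS = {
    "sales": {"read:sales-share"},
    "hr":    {"read:hr-share", "write:hr-share"},
    "it":    {"read:sales-share", "read:hr-share", "backup:all"},
}

USER_ROLES = {
    "jason": {"it"},
    "maria": {"sales"},
}

def has_permission(user, perm):
    """A user holds a permission only through one of their roles."""
    return any(perm in ROLE_PERMS[role] for role in USER_ROLES.get(user, set()))

print(has_permission("maria", "read:hr-share"))  # False: sales can't see HR
print(has_permission("jason", "backup:all"))     # True: IT does the backups
```

Onboarding a new salesperson is one line in `USER_ROLES`, which is why auditors flag individually assigned permissions as a bad practice.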

Now our third foundational security principle

we have to talk about is known as zero-trust.

Zero-trust is a security framework that requires all users,

whether in or outside of the organization,

to be authenticated, authorized, and continuously validated

for security configuration and posture

before being granted or keeping access

to applications and data.

As we continue to try to combat threats to our networks

and our systems

from the de-perimeterization of our networks,

zero trust is becoming more widespread

and adopted by a ton of organizations.

Unlike our traditional networks

where we used to have very clearly defined edges,

like the border router or the firewall,

these days, those edges have become very blurred

by the on-premise and cloud-based hybrid architectures

and the increased adoption of bring your own devices

for mobile device connectivity on the go.

Due to this, zero-trust is going to assume

that there is no traditional network edge

and that workers could be accessing the network

from anywhere, at any time, using any device.

And therefore, no trust can exist.

To apply zero-trust to your networks and systems,

you need to follow four key principles.

First, re-examine all default access controls.

This means that there is no such thing

as a trusted device or source

because anyone in the network could be a threat

and therefore, they have to be validated

and re-validated continually.

Second, employ a variety of prevention techniques

and defense in depth.

This includes things like multi-factor authentication,

data loss prevention, micro-segmentation,

and least access privilege assignment.

Third, enable real-time monitoring and controls

to identify and stop malicious activity quickly.

Since incidents can occur at any time,

it's important to ensure that your organization

is actively monitoring its security

through the use of SIEMs and real-time devices.

And fourth, ensure your network's zero-trust architecture

aligns to your broader security strategy.

Your company needs to be continually retiring

older technologies that could leave you vulnerable

and instead, increase its resilience and reliance

on endpoint monitoring, detection and response

to quickly identify and respond to future incidents.
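
The "no trust by location" idea can be sketched as a fail-closed check applied to every request. This is purely illustrative; the check names are hypothetical, and a real zero-trust deployment would involve identity providers, device posture agents, and policy engines.

```python
# Zero-trust sketch: every request must pass every check, even when
# it comes from "inside" the network. Any missing check denies access.

def authorize(request):
    checks = [
        request.get("token_valid", False),     # authenticated, token unexpired
        request.get("device_patched", False),  # endpoint posture is current
        request.get("mfa_passed", False),      # multi-factor completed
    ]
    return all(checks)  # fail closed

inside_user = {"token_valid": True, "device_patched": False, "mfa_passed": True}
print(authorize(inside_user))  # False: being on the LAN earns no trust
```

Contrast this with a perimeter model, where `inside_user` would have been trusted simply for having an internal IP address.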

Defense in depth.

In this lesson, we're going to discuss defense in depth

which includes network segmentation enforcement,

screened subnets and DMZs,

the separation of duties and honeypots.

So what is defense in depth?

Well, defense in depth is an approach to cybersecurity

in which a series of defensive mechanisms

are layered on top of each other

in order to protect valuable data and information.

Defense in depth is truly the foundation

of a good network security architecture.

With defense in-depth,

we're not going to rely on just a single defensive boundary

or measure of protection in place.

Instead, we're going to be able to layer them

on top of each other, because if we don't,

an attacker can find a vulnerability

and exploit a single boundary

and get access to our entire network.

So, instead, we're always going to be layering our defenses

and this way, we can ensure

that none of these vulnerabilities line up

across all the layers,

because if they do, then an attacker would still be able

to compromise our network.

Our goal is to make sure

that the vulnerabilities don't all line up

because we want to make sure we can stop attackers

from being successful.

To achieve this, we're going to use a mixture

of different physical, logical and administrative controls

to help secure our networks.

Let's consider an example and look at it from the inside

and work our way out.

When we talk about layer defense,

we're first going to start on the inside with our data.

We need to protect our data

by doing things like data integrity checks

or encrypting the data.

Next, we want to protect the applications

that manipulate our data.

This way, we can ensure that all of the security patches

are up to date for applications like Microsoft Office,

Google Chrome or Adobe Acrobat.

By making sure these applications are protected,

we're going to make sure the data doesn't get modified

by mistake.

Then, we can look at how we secure our host itself.

So, we're going to add some endpoint security.

This can be things like antivirus, anti-malware,

host-based intrusion detection systems,

Windows security patching,

or implementing some other hardening measures

and configurations.

Next, we're going to go into the network

and we're going to start talking about things

like network intrusion detection systems,

and IPS and access control lists,

and VLANs and unified threat management systems

and things like that.

Next, we're going to move out to our perimeter

and here, we're going to consider our border routers,

our firewalls, our VPN connections

and any cloud-based connections we may have

going into or out of our network.

By considering each of these different layers,

we can hopefully ensure there's no straight line of attack

where all the different vulnerabilities line up

and allow an attacker to exploit our network

and our hosts.

So, now that we've talked about the concept

of layering our defenses

let's consider how we might apply these layers

as we move laterally across our network as well.

To help provide protections between different portions

of our network, we also want to create network segmentation.

By creating network segmentation,

we can separate a single larger network

into different levels of security or levels of protection

and we can keep data from moving between these areas

without first being inspected against our requirements.

Now to enforce network segmentation,

we can check data as it tries to enter or leave

a different part of the network.

We do this by creating choke points in our network,

by adding subnets or VLANs into our networks.

Each of these subnets or VLANs

would then require their data to be passed to a router

before entering a different subnet or VLAN.

As it passes through that router,

the network traffic can be inspected

and compared against the access control lists on that router

and its interfaces.

Based on those ACL rules, the traffic can be blocked

in order to isolate the subnet or VLAN.

It can be filtered to allow only specific traffic to enter

or leave that subnet or VLAN

or it can be allowed to freely pass

between the subnets or VLANs

if we're going from one trusted zone

to another trusted zone.
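
The choke-point behavior described above can be sketched as an ordered ACL where the first matching rule wins and anything unmatched is dropped. The subnets, ports, and rules here are hypothetical, and this is not any vendor's ACL syntax.

```python
# Segmentation enforcement sketch: traffic crossing between subnets
# is checked against an ordered ACL at the router. First match wins;
# every ACL ends with an implicit deny.

ACL = [
    # (action, src_subnet, dst_subnet, dst_port) -- None matches any port
    ("permit", "intranet", "dmz",      443),   # internal users -> DMZ web
    ("deny",   "dmz",      "intranet", None),  # DMZ never initiates inward
]

def evaluate(src, dst, port):
    for action, r_src, r_dst, r_port in ACL:
        if r_src == src and r_dst == dst and (r_port is None or r_port == port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("intranet", "dmz", 443))      # permit
print(evaluate("dmz", "intranet", 22))       # deny
print(evaluate("internet", "intranet", 80))  # deny: no rule matched
```

Rule order matters: a broad permit placed above a narrow deny would silently defeat the deny, which is a common real-world ACL mistake.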

Most networks have at a minimum three security zones.

These three segments or zones are the intranet

or the internal network,

the screened subnet or DMZ, the demilitarized zone,

and the internet or your external network.

For example, here you can see the intranet,

the screened subnet or DMZ, and the internet.

These are my three security zones, but I didn't stop there.

I also have a data center and it has its own segment

that is connected to the intranet, the DMZ, and the internet.

Now, you can see

how this segmentation is going to be used

to create a more complex network

while still providing these choke points

where we can screen or inspect the traffic

as it moves between different parts of this network.

This is one of the reasons that the name DMZ has changed

to screened subnet,

because a DMZ is just one type of screened subnet,

but there are many others that we can implement

based on our particular business use cases.

For clarification, whenever you hear the term DMZ,

I want you to remember that refers to a perimeter network

that protects an organization's internal local area network

from untrusted traffic.

Normally, this is just going to be a subnet

where you're going to place all your public-facing servers,

like your email servers, web servers

and file servers that you need to have access to

from the internet for particular use cases.

Since you don't want those users entering your intranet

to directly touch your stuff for security reasons,

we place them in this DMZ or screened subnet.

Now a screened subnet, on the other hand,

is any subnet in the network architecture

that uses a single firewall with three interfaces

to connect three dissimilar networks, one public,

one private or internal

and one that is a semi-trusted zone between the two.

Yes, your DMZ is a screened subnet,

but it is just one particular type of screened subnet,

there are many others out there.

Now, when we have a firewall in this type of configuration,

it creates this screened subnet,

and we call this a triple-homed firewall

because it's touching three different areas.

There are three homes to it.

Now another way to add defense in depth

is to be able to use administrative policies

like separation of duties.

Now, separation of duties

is a preventative type of administrative control

and it's one that should be considered

when you're drafting up your organizational authentication

and authorization policies.

Separation of duties is designed to prevent fraud and abuse

by distributing various tasks and approval authorities

across a number of different users.

For example, let's pretend

you work in the accounting department

and you have to be able to request checks

that are sent out to employees on payday.

Well, you might be able to request a check,

but you can't also approve that same request.

Instead, you would request it

and then a supervisor could approve it.

This creates a clear separation of duties

and prevents fraud,

because now it would take two users

working together

to steal money from the organization,

as in the case of my check approval example.
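In software terms, that rule can be sketched as a simple check. The function and field names below are made-up illustrations, not from any real accounting system.

```python
# Hypothetical sketch of a separation-of-duties check; names are
# illustrative only, not from a real accounting system.

def can_approve(request, approver):
    """An approver may never approve their own request."""
    return approver != request["requested_by"]

check_request = {"requested_by": "alice", "amount": 500}

assert can_approve(check_request, "bob") is True     # a second person approves
assert can_approve(check_request, "alice") is False  # self-approval is blocked
```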

Now in the cybersecurity world,

a great example of this

is when one administrator is given the rights

to create the backups of the server,

but a different administrator is given the rights

to be able to do the restoration of those files

from the backup.

This separation of duties occurs

because the backup and the restore functions

are being performed by two different administrative users.

Anytime you have a function in your organization

that you consider to be high risk,

you should use a proper separation of duties.

For example, if you've ever watched a war movie

like Crimson Tide, and they want to launch a nuclear missile,

what do they do?

Well, they have two people

each take out a different physical key.

They then insert it into the machine

and they turn at the same time.

This is a separation of duties

because we don't want somebody to go off

and launch a nuclear missile on their own

just because they decided they wanted to.

Instead, we want to make sure we built that into the system.

So, the military, in the case of nuclear weapons

has built this into their system.

This is a technical control that requires two separate keys

and two different people have to turn them at the same time

because those key holes are too far apart

for a single person to be able to turn both keys

at the same time.

Now, this is a specific type of separation of duties,

which is known as dual control

because both people have to be present

at the same time to do it.

Another type of separation of duties

is known as split knowledge.

Now, split knowledge occurs

when two people each have half of the knowledge required

to do some function.

For example, let's imagine that I have a safe in my house

and it's going to hold my super secret family recipe

for the best macaroni and cheese that you've ever tasted.

Well, I want to make sure nobody gets this recipe.

So I'm going to lock it up in my safe,

and I'm going to use two different locks.

Now there's two locks on this.

One is a combination lock that I know the combination to

and the other is a physical lock

that my wife has the key to.

Now, she doesn't know the combination to the first lock

and I don't know where the key is for the second lock.

This way, neither of us can open the safe by ourselves

because each of us only has half of the knowledge.

This is split knowledge.

Since I know the combination

and she knows where the padlock key is,

we can only open the safe together

and we can take out the recipe

if we both are working together to do it.

This is why it is considered split knowledge.

Now in the cybersecurity world,

we can implement split knowledge by using data encryption

where a key can be broken up into two pieces

and one half is given to each of the two administrators.

Therefore, the data cannot be decrypted

without both administrators providing their half of the key.
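As a rough sketch of that idea, one common way to split a key into two shares is XOR secret sharing, where neither share alone reveals anything about the original key. This is an illustration only, not a production key-management scheme.

```python
import os

# Sketch of split knowledge via XOR secret sharing: each administrator
# holds one share, and neither share alone reveals the original key.

def split_key(key: bytes):
    share1 = os.urandom(len(key))                       # random pad for admin 1
    share2 = bytes(a ^ b for a, b in zip(key, share1))  # XOR share for admin 2
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(16)            # the real data-encryption key
s1, s2 = split_key(key)
assert combine(s1, s2) == key   # both shares together recover the key
```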

Now our final defense in depth strategy I want to cover

is the use of honeypots.

Honeypots and their larger cousins known as honeynets

are used to attract and trap potential attackers

to counteract any attempts at unauthorized access

into your organization's network.

When we're dealing with a honeypot,

it's usually going to be a single computer or a server,

but it can also be a file, a group of files

or an area of unused IP address space

that could be considered attractive to a would-be attacker.

A honeynet, on the other hand, is made up of one or more computers,

servers, or an area of your network.

Often a honeynet is going to be used

when a single honeypot is not deemed sufficient

for your business needs.

Now, why would we use honeynets

and honeypots in our networks?

Well, these are usually going to be used as a form of research,

where you're trying to learn more about an attacker

and their techniques.

For example, the Honeynet Project at honeynet.org

is a well-known honeynet that's in use today.

It's used to learn the tools, tactics

and motivations that are involved in computer

and network attacks.

As they collect data on attackers

and their different methods,

the Honeynet Project then shares what they learn

with all the different groups

inside the cybersecurity industry.

Normally, your organization

isn't going to put up a honeypot on its own

unless you're part of a security operations center

for a large company

that's trying to develop better countermeasures on its own.

For example, if you work as a security researcher

at a place like Microsoft, Google, Apple, CrowdStrike

or FireEye, you might run a honeypot or a honeynet

to be better prepared to defend those systems

and better understand the bad guys, their techniques

and their tactics.

If you're going to be putting up a honeypot or a honeynet,

it is usually going to be located in a screened subnet

to ensure an attacker cannot breach

the rest of your network

if and when they attack that honeypot or honeynet.

Remember, when it comes to defense in depth,

you need to think vertically through the layers

as well as horizontally or laterally across the network

when using those screened subnets to protect your network.


Multi-factor authentication.

In this lesson, we're going to talk about the importance

of using multi-factor authentication.

So what exactly is multi-factor authentication?

Well, multi-factor authentication

means that you're authenticating or proving your identity

using more than one method.

For it to be multi-factor,

you have to have at least two methods or more,

you can have two, three, four or five.

Now, when you talk about different methods,

we're talking about different categories.

We're talking about something you know,

something you have, something you are,

something you do, or somewhere you are.

We're going to talk about each of these in this lesson.

Now our first one is something you know,

and this is the most common factor

and it's known as a knowledge factor.

This is because you have to know something.

If it's something you know,

this would be something like a username, a password, a PIN,

or answers to personal questions.

All of these are considered knowledge factors.

If you know it and I can try to figure it out,

then I would know it and now I have that knowledge too

and I can authenticate as you.

Now, one of the common questions I see

inside the network plus exam

is asking you

what would be considered two factor authentication.

Now, one of the answer choices they'll give you

will be something like a username and a password.

Now a lot of students will see this and they'll think,

Hey, this is a username and a password,

that must be two factors, right?

Cause I have two things, but that's not true.

A username and password both come from this factor,

which is known as the knowledge factor

or something we know,

therefore it's still considered a single factor.

Now this is not going to give you the best security.

So instead, we want to add a second factor

to get us 2FA, two-factor authentication,

or multi-factor authentication.
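The exam logic here boils down to counting distinct categories, not credentials. A tiny sketch makes that concrete; the category mapping below is an assumption for the example, not an official taxonomy.

```python
# Illustrative sketch: multi-factor means two or more *categories*,
# not two credentials. The mapping below is an assumed example.

FACTOR_CATEGORY = {
    "username": "knowledge",
    "password": "knowledge",
    "pin": "knowledge",
    "smart_card": "possession",
    "key_fob": "possession",
    "fingerprint": "inherence",
}

def is_multifactor(credentials):
    return len({FACTOR_CATEGORY[c] for c in credentials}) >= 2

assert is_multifactor(["username", "password"]) is False  # one category: knowledge
assert is_multifactor(["password", "key_fob"]) is True    # knowledge + possession
```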

So what are some weaknesses with passwords?

Well, the most common weakness

is that people don't change default credentials.

You might have a default password on a brand new system,

like a wireless router or access point

and the password is just "password"

and nobody ever bothers to change it.

I've seen this time and time again in my penetration tests,

don't do that.

Whenever you get a new device,

you need to go in and change those default credentials

because default credentials are really easy to guess.

Also people will use common passwords

and that becomes a big issue.

Now, when you talk about common passwords,

that means using the same password across multiple devices

or using a common phrase or word as their password.

Things like love and password and secret.

Those things are just way too common.

Now, every year there's a dictionary that comes out

called the Attackers Dictionary

and it shows all the commonly used passwords.

We can use those passwords in that list

as part of a dictionary attack

to be able to find your password pretty darn quickly

in most cases.

Now another issue we have

is that people use weak or short passwords.

If you're going to use something like dog or puppy or cupcake

or dog123, these are all short and weak passwords.

Anything that is a standard dictionary word

is completely bad and you should not use it.

Anything less than eight characters is also pretty bad.

You really want to make sure you have a nice, long, strong,

secure password.

And to do that, you need uppercase letters,

lowercase letters, numbers and special characters

all mixed together.

This will help increase the security of your password

and you want to make sure it's a long password.

Now, if you're only going to use a single factor authentication

like a username and a password,

at least make sure you're using a long, strong password.

The reason for this is that attackers know

that they can break our passwords over time.

There's lots of different attacks we can use

like a dictionary attack, a brute force attack,

or a hybrid attack.

Now a dictionary attack occurs

when the attacker tries to guess the password

by checking every single word

or phrase contained within a word list,

which we call a dictionary.

Now, an attacker's dictionary

isn't like the dictionary you used in high school.

It doesn't contain just real words.

Many attacker dictionaries

contain things like the word password,

but they'll also sub it out

and have the A becoming an at symbol

or the S becoming a dollar sign.

And they'll have lots of different combinations of these

in this single dictionary.

When the attacker tries to crack your password

using a list, we consider it a dictionary attack

whenever there's a list involved,

even if they're not real words.

So the best defense against a dictionary attack

is to not use anything that looks like a regular word.

Even if you start substituting in symbols for letters

that still looks like a regular word,

and it's probably in an attacker's dictionary.
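A minimal sketch of that attack might look like this; the word list, hash choice, and substitution table are tiny illustrative examples, and real tools use far larger lists and rule engines.

```python
import hashlib

# Sketch of a dictionary attack against a stolen password hash,
# including simple symbol substitutions ("p@$$w0rd"-style variants).

SUBS = str.maketrans({"a": "@", "s": "$", "o": "0"})

def expand(words):
    for word in words:
        yield word
        yield word.translate(SUBS)   # substituted variant of the same word

def dictionary_attack(target_hash, words):
    for candidate in expand(words):
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

stolen = hashlib.sha256(b"p@$$w0rd").hexdigest()
assert dictionary_attack(stolen, ["love", "secret", "password"]) == "p@$$w0rd"
```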

On the other hand, if a dictionary attack isn't successful,

the attacker may move on and try to do a brute force attack.

Now with a brute force attack,

they're going to try every possible combination

until they figure out your password.

For example, let's say your password was a PIN

and it's four digits long.

Well, the attacker could start out with 0 0 0 0,

then try 0 0 0 1, then 0 0 0 2.

And they keep going up

until they finally get your four digit code,

which may be something like 5 2 4 6,

or whatever it was.
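That counting process can be sketched in a few lines. The check function below stands in for whatever system validates the PIN; it's an assumed interface for illustration.

```python
from itertools import product

# Minimal brute-force sketch: try every 4-digit combination, 0000-9999.

def brute_force_pin(check):
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if check(guess):
            return guess
    return None

secret_pin = "5246"
assert brute_force_pin(lambda guess: guess == secret_pin) == "5246"
```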

The thing is that with a brute force attack,

they will always be successful eventually,

it's just a matter of time.

And so the key here is to prevent them

by creating longer and more complicated passwords,

because the longer and more complicated the password is,

the longer it's going to take an attacker

to guess it using brute force

and going through all the possible combinations.

For example, if you have an eight character password,

even if it has uppercase, lowercase,

numbers and symbols in it,

it will take less than a few days to crack that password

using a decent graphics card.

Now, if I raise it up to nine characters,

it will take me about five days to crack it.

With 10 characters, it becomes four months,

11 characters about 10 years, and 12 characters about 200 years.

You can see there's this exponential curve here,

but remember computers are always getting faster

and better at cracking every single day.

So all these numbers I just gave you,

by next year you can cut them in half

and the year after that, cut it in half again,

and it'll keep going down and down and down.

So we have to get longer and stronger passwords.

As a good rule of thumb,

you want your password to be at least 12 characters minimum

for good security.
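The back-of-the-envelope math behind that growth is easy to check. The charset size of 95 approximates printable ASCII, and the guess rate below is an arbitrary assumption; real cracking speeds vary enormously by hardware and hash type.

```python
# Sketch of keyspace growth: why each extra character multiplies
# the attacker's work. All rates here are assumed, for illustration.

CHARSET = 95                # roughly the printable ASCII characters
GUESSES_PER_SECOND = 1e11   # assumed cracking-rig speed

for length in (8, 10, 12):
    keyspace = CHARSET ** length
    years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{length} chars: {keyspace:.2e} combinations, ~{years:,.0f} years")

# Each extra character multiplies the work by the charset size:
assert CHARSET ** 9 == CHARSET ** 8 * CHARSET
```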

Now the final method a hacker can use

is what's known as a hybrid technique.

This is a mixture of a dictionary and a brute force method.

Now, basically the attacker tries to gather keywords

that would relate to your life

and then make their own custom dictionary lists.

For example, I might go on Facebook

and try to find out your spouse's name or your dog's name

or your favorite sports team.

And then I put all the words related to those things

into a custom dictionary list.

Then my password cracking program would take that list

and add different things to it as a form of brute force.

For example, let's say your favorite sports team

was the Ravens up in Baltimore.

I might use two words as part of my dictionary list,

Baltimore and Ravens.

Then the program will try things like Baltimore123,

or Ravens911 or whatever.

And they'll substitute in symbols and numbers

and add things to it and make different combinations,

trying to figure out what your password is.

This is essentially a modified version of brute force,

but it does speed up the time it takes to crack a password

if I use the right keywords,

because I'm giving it some sort of starting point

instead of picking everything out at random.

But again, this isn't 100% effective

because if I choose something like Ravens

and you didn't use that as part of your password stem,

I'm not going to ever get there,

because it's not a traditional brute force attack

where I try every single possible combination.
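The candidate-generation step of a hybrid attack can be sketched like this; the seed words and suffixes are illustrative examples only.

```python
from itertools import product

# Sketch of hybrid-attack candidate generation: seed keywords gathered
# about the target, combined with brute-forced add-ons.

def hybrid_candidates(seeds, suffixes):
    for word, suffix in product(seeds, suffixes):
        yield word + suffix

candidates = list(hybrid_candidates(
    ["Baltimore", "Ravens"],          # keywords from the target's life
    ["", "123", "911", "!", "2024"],  # common add-ons to brute force
))
assert "Baltimore123" in candidates
assert "Ravens911" in candidates
```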

The next factor of authentication we have

is known as something you have,

also known as a possession factor.

Now this can be something like a smart card,

which stores a digital certificate on a card,

you have to insert into your computer

and then unlock it using a PIN.

Now this means you have something you have, the card,

and something you know, the PIN.

So we have two factors of authentication.

Now, another thing you might use

is something like an RSA key fob.

Now an RSA key fob is going to change a number

on a little device that's in your pocket,

every 30 to 60 seconds.

Now, when you go to log into your machine,

it's going to ask you for your username and password

and a rotating PIN,

because that PIN is provided by that key fob.

So this means you have something you know,

the knowledge factor from your username and password,

but you also have something you have,

the key fob, proven by typing in that rotating PIN.

By combining those two things together,

I now have two factor authentication.
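Here's a hedged sketch of how such a rotating code can be generated, in the spirit of TOTP (RFC 6238): fob and server share a secret and derive a six-digit code from the current 30-second time window. Real RSA SecurID tokens use a different, proprietary algorithm.

```python
import hashlib
import hmac
import struct

# TOTP-style rotating code sketch (in the spirit of RFC 6238).
# Not the actual RSA SecurID algorithm.

def rotating_code(secret: bytes, timestamp: int, step: int = 30) -> str:
    counter = struct.pack(">Q", int(timestamp) // step)       # time window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"                         # six-digit code

secret = b"shared-fob-secret"
# Fob and server agree within the same 30-second window (990-1019s here):
assert rotating_code(secret, 990) == rotating_code(secret, 1019)
code = rotating_code(secret, 990)
assert len(code) == 6 and code.isdigit()
```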

Another option we have is using something like an RFID tag.

Now, some employers have badges that you wear,

and there's an RFID tag built into it.

To log into your system,

you tap your badge onto the computer,

that's something you have.

And then you enter a PIN or password,

something you know.

Again, this gives you a good

two factor authentication solution,

something you have and something you know.

The next factor we have is something you are.

Now, this is also referred to as an inherence factor.

Now the inherence factor is things like fingerprints

because only I have my fingerprints

or retina scan because only I have my eyes.

Voice prints are also included in this

because the way I talk

is different than the way other people talk.

And my voiceprint is unique to me.

Now, all of these things can be used

as an inherence factor or something you are.

Now, these are not as commonly used

as something you know

or something you have,

because something you are, these inherence factors,

can be very intrusive to deal with.

For instance,

if every time I wanted to log into my computer,

I had to put my eyeball up to a scanner,

that's pretty intrusive, right?

And I wouldn't want to log in to my computer very often.

So instead, most two factor authentication schemes

are going to be using something you know or something you have.

Now, something you are is often going to be used

in high security environments.

Usually something like a door lock

or something like that to keep people out

of a very secure room,

like a server room that holds top secret information.

Now, the next factor we have is something you do,

which is known as an action factor.

This might be the way you sign your name,

the way you draw a particular pattern on a screen,

or the way you say a particular passphrase.

All these are something you do that's unique to you.

That action can be measured by a computer

and used for authentication.

Usually you don't want to use this

as a single factor though, because it is prone to error.

So instead, you'll pair it with something you know,

and that'll give you two-factor authentication,

because people can forge your name and the way you sign.

But generally the way you press and the way you put pressure

on certain parts of your signature is more unique to you.

Our final factor is somewhere you are,

this is known as a location factor.

Now we do this one of two ways.

We can either use geotagging or geofencing.

When we do geotagging,

that's going to be based off the GPS in your phone

or the device you're using.

For example, if I'm trying to log into my local server

that uses geotagging,

it's going to check my GPS coordinates,

and if it sees that I'm sitting in Moscow

instead of being in Puerto Rico,

that means it's going to reject me

because it realizes it's not Jason.

Jason's not in Moscow, he's sitting in Puerto Rico.

So that person doesn't need to be on our network.

Now, the other way we can do this is by using geofencing.

Geofencing is used more

when we actually want to track a device

and see if it's going to leave a certain area.

And if it does leave that area,

which is defined by GPS coordinates,

it will then send an alert and let us know.

So maybe we have a bunch of mobile phones

for all of our employees,

but we don't want them to use them

anytime they leave our city.

Well, we can set up GPS location coordinates

around our city's borders,

and any time they cross those city lines,

it would actually send up an alert

back to our mobile device management suite

to say this person is no longer within our coverage area.

This is basically a location factor.

It's an additional way to provide protection,

additional authentication on top of something

like a username and password,

and to make sure your devices are within the area

where you want them to be used.
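A geofence check like the Moscow example can be sketched with a great-circle distance calculation. The fence center, radius, and coordinates below are made-up examples.

```python
from math import asin, cos, radians, sin, sqrt

# Illustrative geofencing check: flag a device whose GPS fix falls
# outside a circular fence. Coordinates and radius are assumptions.

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))   # 6371 km = mean Earth radius

def inside_fence(lat, lon, fence_lat, fence_lon, radius_km):
    return distance_km(lat, lon, fence_lat, fence_lon) <= radius_km

FENCE = (18.47, -66.11)   # assumed fence center near San Juan, Puerto Rico

assert inside_fence(18.40, -66.06, *FENCE, radius_km=25)      # nearby: allowed
assert not inside_fence(55.75, 37.62, *FENCE, radius_km=25)   # Moscow: alert
```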

Authentication methods.

In this lesson,

we're going to discuss different authentication methods

that are used in our networks,

including local authentication,

LDAP, Kerberos and SSO.

So what is authentication?

Authentication is the process of determining

whether someone or something is, in fact,

who or what it claims to be.

So, if you're walking into my classroom

on the first Monday of the semester,

and you say,

"Hi, Professor Dion,

my name is John Smith."

I might ask to see your student ID card

or your driver's license

so I could verify you are who you say you are.

After all, you and I have never met before.

So I need some way to validate your claim

and authenticate that you are really John Smith.

Now, the same thing will happen

when you go and take your certification exam.

Before you take your exam,

they're going to look over

your official government identification,

such as your driver's license or a passport,

and they're going to compare that to what you look like,

and also compare it to the name that you used

when you registered for the exam.

So, if you registered as Jason Dion,

but your driver's license says, Michael Jordan,

they're not going to let you take your exam

because you're going to fail the authentication process.

The first authentication mechanism you have on a system,

is known as local authentication.

When you turn on your personal laptop

and you enter your username and password,

you're going to be using local authentication.

Whenever you set up the laptop for the first time,

you can create a username and a password on that device

and the device will save it in an encrypted version

on your hard drive.

Now, every time you try to log on,

it's going to take what you enter, encrypt it

and compare it against what is stored as your username

and password.

If they match,

you're going to be authenticated

because you've met

this single factor authentication requirement

of a standard laptop by entering in

your username and password.
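In practice, that stored credential is usually a salted, slow one-way hash rather than reversible encryption. Here's a minimal sketch using Python's standard library (PBKDF2 is one reasonable choice; the iteration count is an assumed figure).

```python
import hashlib
import hmac
import os

# Minimal sketch of local credential storage and verification using a
# salted, slow hash (PBKDF2). Parameters here are illustrative.

def store_password(password: str):
    salt = os.urandom(16)   # unique random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)   # constant-time comparison

salt, stored = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong-guess", salt, stored)
```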

Now, the next type of authentication we have,

is known as LDAP, L-D-A-P.

Now, LDAP is the Lightweight Directory Access Protocol.

And it's essentially a database

that's used to centralize information

about your clients and your objects on your network.

LDAP is essentially a simplified version of X.500,

which is a directory service.

And it contains a hierarchical organization of the users,

groups, servers and systems inside your network.

LDAP is going to communicate over port 389

if you're going to be using the standard plain text version.

If instead, you're using the more secure version

known as LDAPS or LDAP Secure,

it's going to communicate over port 636

using either SSL or TLS encryption,

to be able to protect your data

as it crosses the network.

LDAP is going to be used for validating

a username and password combination

against an LDAP server as your form of authentication.

Now, it's going to be very similar to local authentication,

except that it's now going to be occurring

over the network.

With LDAP,

it is going to be considered a cross-platform system.

And that means it works on Unix, Linux, macOS,

and even Windows.

But Microsoft also created their own implementation of this

known as AD or Active Directory.

Now, in a Windows domain environment,

Active Directory is going to be used

to organize and manage everything on the network,

including those clients, servers, devices, users and groups

that we mentioned before.

Active Directory can be used

as part of your overall security policies

and access control through the use of group policies

as well.

Now, this brings us to Kerberos,

which is focused on authentication and authorization

within a Windows domain environment.

And it integrates with Active Directory.

Kerberos is designed to provide secure authentication

to services over an insecure network.

Kerberos is going to use tickets to authenticate a user

and completely avoid sending passwords across the network,

because it relies instead on the Kerberos ticketing system.

Inside of a Windows domain,

Kerberos is an authentication protocol

that provides two-way or mutual authentication.

When a user logs on to the domain,

they first contact the domain controller,

which acts as the key distribution center or KDC.

This KDC has two basic functions,

authentication and ticket granting.

So, if your client is authenticated properly,

the KDC will issue them a TGT or a ticket granting ticket.

This ticket granting ticket is then provided

to the domain controller

anytime that user wants to access a resource on the network.

Something like,

a file share or a printer, for example.

Now, the domain controller

can also provide that user with a service ticket

or a session key to use,

whichever one is going to be appropriate

for their current needs,

based on what they're trying to access on the network.

These tickets are presented to the resource

and then access is going to be granted to the user

because the resource

always trusts the tickets provided by the domain controller.

Now, if your domain controller is running Kerberos,

this is going to happen on port 88.

And so you have to keep that open on the domain controller,

so it can receive these inbound service login requests

from your clients.
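The ticket flow just described can be sketched as a toy model. Real Kerberos uses encrypted, time-limited tickets; the plain strings and password dictionary below are purely for illustration.

```python
# Toy sketch of the Kerberos ticket flow. Real Kerberos uses encrypted,
# time-limited tickets, not plain strings like these.

class KDC:
    """Plays the domain controller's two KDC roles:
    authentication and ticket granting."""

    def __init__(self, users):
        self.users = users          # username -> password (toy only)
        self.issued_tgts = set()

    def authenticate(self, user, password):
        if self.users.get(user) == password:
            tgt = f"TGT-for-{user}"
            self.issued_tgts.add(tgt)
            return tgt              # ticket granting ticket (TGT)
        return None

    def get_service_ticket(self, tgt, service):
        # Resources trust any ticket the KDC has issued
        if tgt in self.issued_tgts:
            return f"service-ticket:{service}"
        return None

kdc = KDC({"jason": "hunter2"})
tgt = kdc.authenticate("jason", "hunter2")
assert tgt is not None
assert kdc.get_service_ticket(tgt, "file-share") == "service-ticket:file-share"
assert kdc.get_service_ticket("forged-tgt", "file-share") is None
```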

Because Kerberos relies on the domain controller

to serve as that key distribution center,

this can become a single point of failure

in your domain though.

So be aware of that.

If your domain controller goes down,

ticket granting services are also going to be shut down.

To prevent this though,

most people will have a primary

and secondary domain controller

working in a clustered or active standby configuration.

That gives you redundancy to ensure Kerberos

is always up and available,

and that LDAP continues to run effectively

on a Windows environment.

Now, our final authentication method we need to discuss,

is known as SSO or Single Sign-On.

Due to the large number of resources and websites

that the average person has to access

on a daily basis,

many organizations are starting to adopt

a Single Sign-On environment.

When adopted, the organization establishes

a default user profile for each of their users.

And then, they link that profile

to all of the different resources

that the user is going to have access to.

Under a Single Sign-On system,

the user can have a single long, strong password

that they can memorize,

or they can use multi-factor authentication.

Now, this will replace

30 or 40 or 50 different login credentials

that the average person currently is using on a daily basis.

And instead, allows them to memorize

just one set of credentials.

This makes accessing new resources much quicker

and much easier because it simplifies user

and password management.

Now the one major drawback to using Single Sign-On,

is that if the user credentials have been compromised,

the attacker will now have access

to every resource that that user had access to.

I like to think about it like a master key.

Let's assume you had a single key

that opens your office, your car and your house,

but you went to the mall,

you dropped that key and you lost it.

Unfortunately for you,

now an evil person might find that key

and have access to all those things,

your home, your office and your car.

This is the biggest drawback to using Single Sign-On.

But again, if you're using multi-factor authentication,

this is going to help keep it more secure

than just using usernames and passwords.

Because now, somebody has to have both factors

in order to compromise your account.

That might be a username, a password,

and a key fob for instance.

Single Sign-On works by creating these trust relationships

between various applications and resources

that a user might need to access.

Then when the user tries to log onto their one account,

it's going to leverage that existing trust relationship.

So, the user's going to authenticate

with just one service provider,

but all the other resources they may need to access,

are going to trust that service provider

and allow that service provider to authenticate

the user's identity on behalf of all the other websites

and services out there.

Network Access Protocols.

In this lesson, we're going to discuss

the different network access protocols

that are used in our networks,

including RADIUS, TACACS+, 802.1x and EAP.

The first network access protocol we're going to discuss is

RADIUS, or the Remote Authentication Dial-In User Service.

RADIUS provides centralized administration of dial-up, VPN,

and wireless network authentication.

RADIUS supports the use of 802.1x and EAP,

the Extensible Authentication Protocol.

Both of which we're going to discuss in just a moment.

RADIUS is considered a client server protocol

and it operates within layer 7 of the OSI model.

This is the application layer.

It's going to utilize UDP for making its connections,

making it fairly fast

during authentication and authorization.

Now, to implement RADIUS,

you need to usually run it on a separate server,

but this can be loaded up on the same server

as your Windows domain controller

if you're using a smaller domain environment.

RADIUS is going to be used to authenticate users,

authorize them to services,

and account for their usage of those services.

This means it's considered an AAA,

or Authentication, Authorization and Accounting service.

For network connectivity, RADIUS is commonly

going to use port 1812 for authentication messages

and port 1813 for accounting messages.

But some proprietary versions of RADIUS

may also use ports 1645 and 1646 for these purposes.

Now, while RADIUS is a cross platform standard,

there is a proprietary protocol from Cisco known as TACACS+,

which is similar in functionality.

Now this is our second network access protocol

that we need to discuss.

This is known as Terminal Access Controller Access Control

System Plus, or TACACS+.

Now TACACS+ is going to be used to perform

the role of an authenticator in an 802.1x network,

just like RADIUS can.

It's really up to you to determine which one

is going to be best for your organization's needs,

coordinating with your network engineers

and your cybersecurity team to decide

if you're going to use RADIUS or TACACS+.

Personally, I like to use RADIUS almost exclusively

within my organization because I've found that TACACS+

is a little bit slower to operate

because it relies on TCP instead of UDP.

Also, if you're going to use TACACS+

you need to make sure you have port 49 open and available,

so clients can access the TACACS+ server

and communicate without any issues.

Now, even though I said, I prefer to use RADIUS,

TACACS+ does have some benefits.

In developing TACACS+, Cisco has included

some additional security, and its server

is going to independently conduct authentication,

authorization and accounting processes.

TACACS+ also supports all network protocols,

whereas RADIUS doesn't support the remote access protocol,

the NetBIOS Frame protocol, and a few others.

Overall, TACACS+ is a really good choice,

but only if you're going to be using Cisco devices

exclusively across your network.

If you're like me and you prefer

to have cross-platform capability,

then you're going to want to use RADIUS

for your implementations instead.

As we've been discussing RADIUS and TACACS+

I've mentioned 802.1x and EAP a couple of times.

So let's circle back and cover those topics right now.

The third network access protocol

we need to discuss is 802.1x.

Now, 802.1x is a standardized framework

that's used for port-based authentication

on both wired and wireless networks.

Now, since 802.1x is just a framework,

it's actually going to utilize other mechanisms

to do the real authentication for us,

things like RADIUS and TACACS+ that we just spoke about.

Now, for authentication to occur under 802.1x,

there are three roles that are required.

There's a supplicant, an authenticator,

and an authentication server.

First, we have our supplicant.

This is the device or the user that's requesting access

to the network, such as PC1 in this example.

Then we have the authenticator,

which is the device through which the supplicant

is requesting to access the network.

Normally, this is going to be something like a switch,

a wireless access point or a VPN concentrator.

Finally, there's the authentication server.

This is going to be the centralized device

that performs the authentication.

Normally this will be your RADIUS or your TACACS+ server.

802.1x is a great thing to have in your networks,

because it is one of the best protections that you can add

to your internal network connectivity,

to prevent rogue devices from getting access

to your organization's devices and connections.

As I said, this is port-based authentication.

So anything that connects to a switch

or wireless access point or a VPN concentrator

could be required to present itself

for authentication using 802.1x

prior to getting access to the entire network.

Another feature of 802.1x is the ability to encapsulate EAP.

This is our fourth network access protocol

that we need to discuss, known as EAP,

or the Extensible Authentication Protocol,

which can happen over a wired or wireless connection.

Now, EAP is actually not a single protocol by itself,

but rather a framework and a series of protocols

that allows for numerous different mechanisms

of authentication, including things like simple passwords,

digital certificates, and public key infrastructures.

There are many different variants of EAP,

such as EAP-MD5, EAP-TLS, EAP-TTLS,

EAP-FAST and EAP-PEAP.

Now, EAP-MD5 is a variant of EAP that utilizes

simple passwords and the challenge handshake

authentication process to provide

remote access authentication.

If you're using this method,

you have to ensure you're using long, strong,

and complex passwords in order for you to maintain

the security of your systems.

EAP-MD5 is a one-way authentication process,

and it's not going to provide mutual authentication

like some of the other versions will.

EAP-TLS is a form of EAP that uses public key infrastructure

with digital certificates being installed on both the client

and the server as a method of authentication.

This makes it immune to password-based attacks

since neither side is going to use a password.

And instead they're going to be using digital certificates

to identify themselves.

This is considered a form of mutual authentication

between both devices, the client and the server,

because each of them is going to authenticate

with the other one.

Now, another variant of this is known as EAP-TTLS.

This requires a digital certificate on the server,

but not on the client.

Instead, the client's going to use a password

for its authentication.

This makes it more secure than a traditional EAP-MD5,

which just uses passwords,

but it is less secure than EAP-TLS

because we're now only using one digital certificate

instead of two.

Our fourth variant of EAP is EAP-FAST,

or EAP Flexible Authentication via Secure Tunneling.

EAP-FAST is going to use protected access credentials

instead of a certificate to establish mutual authentication

between the two devices.

Now, our fifth and final variant of EAP is known as PEAP,

or Protected EAP.

This variant also supports mutual authentication

by using server certificates

and the Microsoft active directory database

for it to authenticate a password from the client.

Now, in addition to all these cross-platform

variants of EAP,

there's also a proprietary protocol

that was developed by Cisco,

known as LEAP or the Lightweight EAP.

Now, since it's proprietary,

it only works on Cisco based devices.

So, unless you have all Cisco devices in your network,

you should stick with using standard EAP

in your networks instead.

Network Access Protocols.

In this lesson, we're going to discuss

the different network access protocols

that are used in our networks,

including RADIUS, TACACS+, 802.1x and EAP.

The first network access protocol we're going to discuss is

RADIUS, or the Remote Authentication Dial-In User Service.

RADIUS provides a centralized administration of dial-up VPN

and wireless network authentication.

RADIUS supports the use of 802.1x and EAP,

the Extensible Authentication Protocol.

Both of which we're going to discuss in just a moment.

RADIUS is considered a client server protocol

and it operates within layer 7 of the OSI model.

This is the application layer.

It's going to utilize UDP for making its connections,

making it fairly fast

during authentication and authorization.

Now, to implement RADIUS,

you usually need to run it on a separate server,

but this can be loaded up on the same server

as your Windows domain controller

if you're using a smaller domain environment.

RADIUS is going to be used to authenticate users,

authorize them to services,

and account for their usage of those services.

This means it's considered an AAA,

or Authentication, Authorization and Accounting service.

For network connectivity, RADIUS is commonly

going to use port 1812 for authentication messages

and port 1813 for accounting messages.

But some proprietary versions of RADIUS

may also use ports 1645 and 1646 for these purposes.
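As a quick reference, these port assignments can be captured in a small lookup table. This is just an illustrative sketch of the mapping described above, not part of any RADIUS implementation:

```python
# Standard RADIUS UDP ports (RFC 2865/2866) and the legacy
# ports still used by some proprietary implementations.
RADIUS_PORTS = {
    "standard": {"authentication": 1812, "accounting": 1813},
    "legacy": {"authentication": 1645, "accounting": 1646},
}

def radius_port(message_type: str, legacy: bool = False) -> int:
    """Return the UDP port to use for a RADIUS message type."""
    variant = "legacy" if legacy else "standard"
    return RADIUS_PORTS[variant][message_type]
```

So a standard deployment would send authentication traffic to port 1812, while a legacy one would use 1645.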

Now, while RADIUS is a cross platform standard,

there is a proprietary protocol from Cisco known as TACACS+,

which is similar in functionality.

Now this is our second network access protocol

that we need to discuss.

This is known as Terminal Access Controller Access Control

System Plus, or TACACS+.

Now TACACS+ is going to be used to perform

the role of the authentication server in an 802.1x network,

just like RADIUS can.

It's really up to you to determine which one

is going to be best for your organization's needs

in coordinating with your network engineers

and your cybersecurity team to decide

if you're going to use RADIUS or TACACS+.

Personally, I like to use RADIUS almost exclusively

within my organization because I've found that TACACS+

is a little bit slower to operate

because it relies on TCP instead of UDP.

Also, if you're going to use TACACS+

you need to make sure you have port 49 open and available,

so clients can access the TACACS+ server

and communicate without any issues.

Now, even though I said I prefer to use RADIUS,

TACACS+ does have some benefits.

In developing TACACS+, Cisco has included

some additional security, and its server

is going to independently conduct authentication,

authorization, and accounting processes.

TACACS+ also supports all network protocols,

whereas RADIUS doesn't support the Remote Access Protocol,

the NetBIOS Frame Protocol, and a few others.

Overall, TACACS+ is a really good choice,

but only if you're going to be using Cisco devices

exclusively across your network.

If you're like me and you prefer

to have cross-platform capability,

then you're going to want to use RADIUS

for your implementations instead.

As we've been discussing RADIUS and TACACS+

I've mentioned 802.1x and EAP a couple of times.

So let's circle back and cover those topics right now.

The third network access protocol

we need to discuss is 802.1x.

Now, 802.1x is a standardized framework

that's used for port-based authentication

on both wired and wireless networks.

Now, since 802.1x is just a framework,

it's actually going to utilize other mechanisms

to do the real authentication for us,

things like RADIUS and TACACS+ that we just spoke about.

Now, for authentication to occur under 802.1x,

there are three roles that are required.

There's a supplicant, an authenticator,

and an authentication server.

First, we have our supplicant.

This is the device or the user that's requesting access

to the network, such as PC1 in this example.

Then we have the authenticator,

which is the device through which the supplicant

is requesting to access the network.

Normally, this is going to be something like a switch,

a wireless access point or a VPN concentrator.

Finally, there's the authentication server.

This is going to be the centralized device

that performs the authentication.

Normally this will be your RADIUS or your TACACS+ server.

802.1x is a great thing to have in your networks,

because it is one of the best protections that you can add

to your internal network connectivity,

to prevent rogue devices from getting access

to your organization's devices and connections.

As I said, this is port-based authentication.

So anything that connects to a switch

or wireless access point or a VPN concentrator

could be required to present itself

for authentication using 802.1x

prior to getting access to the entire network.
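The three roles described above can be sketched in a few lines of Python. The class names, credential store, and return strings are illustrative assumptions, not a real 802.1x implementation, which would use EAPOL frames between the supplicant and authenticator:

```python
# Illustrative sketch of the 802.1x roles: the supplicant presents
# credentials, the authenticator relays them, and the authentication
# server (e.g., RADIUS or TACACS+) makes the decision.

class AuthenticationServer:
    """Centralized server that verifies credentials."""
    def __init__(self, credentials):
        self._credentials = credentials  # hypothetical user database

    def verify(self, username, password):
        return self._credentials.get(username) == password

class Authenticator:
    """Switch, access point, or VPN concentrator in the middle."""
    def __init__(self, server):
        self._server = server

    def request_access(self, username, password):
        # The authenticator never decides by itself; it relays
        # the supplicant's credentials to the server.
        if self._server.verify(username, password):
            return "port open"
        return "port blocked"

server = AuthenticationServer({"pc1-user": "s3cret"})
switch = Authenticator(server)
```

Here, PC1 acting as the supplicant would call `switch.request_access("pc1-user", "s3cret")`, and the switch port would only open after the server verifies those credentials.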

Another feature of 802.1x is the ability to encapsulate EAP.

This is our fourth network access protocol

that we need to discuss, known as EAP,

or the Extensible Authentication Protocol,

which can happen over a wired or wireless connection.

Now, EAP is actually not a single protocol by itself,

but rather a framework and series of protocols

that allows for numerous different mechanisms

of authentication, including things like simple passwords,

digital certificates, and public key infrastructures.

There are many different variants of EAP,

such as EAP-MD5, EAP-TLS, EAP-TTLS,

EAP-FAST and PEAP.

Now, EAP-MD5 is a variant of EAP that utilizes

simple passwords and the challenge handshake

authentication process to provide

remote access authentication.

If you're using this method,

you have to ensure you're using long, strong,

and complex passwords in order for you to maintain

the security of your systems.

EAP-MD5 is a one-way authentication process,

and it's not going to provide mutual authentication

like some of the other versions will.

EAP-TLS is a form of EAP that uses public key infrastructure

with digital certificates being installed on both the client

and the server as a method of authentication.

This makes it immune to password-based attacks

since neither side is going to use a password.

And instead they're going to be using digital certificates

to identify themselves.

This is considered a form of mutual authentication

between both devices, the client and the server,

because each of them is going to authenticate

with the other one.

Now, another variant of this is known as EAP-TTLS.

This requires a digital certificate on the server,

but not on the client.

Instead, the client's going to use a password

for its authentication.

This makes it more secure than a traditional EAP-MD5,

which just uses passwords,

but it is less secure than EAP-TLS

because we're now only using one digital certificate

instead of two.

Our fourth variant of EAP is EAP-FAST,

or EAP Flexible Authentication via Secure Tunneling.

EAP-FAST is going to use protected access credentials

instead of a certificate to establish mutual authentication

between the two devices.

Now, our fifth and final variant of EAP is known as PEAP,

or Protected EAP.

This variant also supports mutual authentication

by using server certificates

and the Microsoft Active Directory database

for it to authenticate a password from the client.

Now, in addition to all these cross-platform

variants of EAP,

there's also a proprietary protocol

that was developed by Cisco,

known as LEAP or the Lightweight EAP.

Now, since it's proprietary,

it only works on Cisco based devices.

So, unless you have all Cisco devices in your network,

you should stick with using standard EAP

in your networks instead.
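The differences between these variants can be summarized in a small comparison table. The fields chosen here are an editorial summary of the points above, so treat it as a study aid rather than an authoritative matrix:

```python
# Summary of the EAP variants discussed above.
# "client_cert"/"server_cert": whether a digital certificate is
# required on that side. EAP-FAST uses a Protected Access
# Credential (PAC) instead of certificates; EAP-TTLS and PEAP
# authenticate the client with a password behind a server cert.
EAP_VARIANTS = {
    "EAP-MD5":  {"mutual_auth": False, "client_cert": False, "server_cert": False},
    "EAP-TLS":  {"mutual_auth": True,  "client_cert": True,  "server_cert": True},
    "EAP-TTLS": {"mutual_auth": True,  "client_cert": False, "server_cert": True},
    "EAP-FAST": {"mutual_auth": True,  "client_cert": False, "server_cert": False},
    "PEAP":     {"mutual_auth": True,  "client_cert": False, "server_cert": True},
}

# Only the variant with certificates on both sides is immune
# to password-based attacks.
password_immune = [name for name, v in EAP_VARIANTS.items()
                   if v["client_cert"] and v["server_cert"]]
```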

Network Access Control.

In this lesson, we're going to talk all about

network access control.

Now, network access control or NAC is used to protect your

network from both known and unknown devices.

This is achieved because NAC ensures the device is scanned

to determine its current state of security prior to being

allowed access to your network.

Network access control can be used for computers

that are within your internal network

that are physically located inside

your buildings and are connected to it.

Or it can also be applied to devices that are connected

to your network remotely through a VPN.

Now, when a device attempts to connect to the network,

it's placed into a virtual holding area

while it's being scanned.

This scan can be extremely simple,

like performing EAP-based authentication

and just making sure that the device has

the right username and password, or it can be more intensive.

For example, the device can be checked against

a number of different factors,

including whether its antivirus definitions are up to date,

whether its security patches are up to date,

and other items that might

introduce security threats into your network

if you allow that device to connect.

Now, if a device passes this inspection,

it's then going to be allowed to enter and receive access

to all of the organizational resources

that are provided by your network.

If the device fails the inspection, though,

it's going to be placed into a digital quarantine area

and there it's going to await remediation.

While it's in this area,

the device can receive its antivirus updates.

It can get operating system patches.

And any other security configurations and services

that it might need.

But it largely cannot communicate with the other portions

of the network because it's trapped inside

this screened subnet that's reserved for quarantined devices.

Like a bad child, this device has been placed into a timeout

and there it has to sit until it's rehabilitated.

Once it's been rehabilitated and can meet the requirements

of the initial NAC inspection,

it can then be moved into the regular network and receive

full access again, to all the organization's resources.
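The admission flow just described can be sketched as a simple posture check. The field names and the two checks are illustrative; real NAC products evaluate many more items:

```python
def nac_admission(device: dict) -> str:
    """Return 'allow' or 'quarantine' based on a simple health scan.

    The two checks mirror the examples above: antivirus
    definitions and security patching both being up to date.
    Missing information is treated as a failed check.
    """
    checks = [
        device.get("antivirus_up_to_date", False),
        device.get("patches_up_to_date", False),
    ]
    return "allow" if all(checks) else "quarantine"

healthy = {"antivirus_up_to_date": True, "patches_up_to_date": True}
stale = {"antivirus_up_to_date": True, "patches_up_to_date": False}
```

A quarantined device would then receive its updates inside the screened subnet and be rescanned before being moved to the regular network.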

NAC solutions can either run as a persistent

or non-persistent agent.

Now, persistent agents are a piece of software

that's installed on a device

that's requesting access to the network.

This works well in a corporate environment

because the organization might own

and control all the different devices,

and it can then understand what the software baselines are.

But this doesn't work well if you're in an environment

where people are going to bring their own devices (BYOD).

Now, instead in those cases,

you're going to want to use a non-persistent agent.

A non-persistent agent solution is very popular,

especially in college campuses where people bring their own

devices and connect them to the network.

These non-persistent agents are going to require the users

to connect to the network, usually over Wi-Fi.

And then they're going to go to a web based captive portal

where they're going to log in.

Once they do that, they're going to click on a link

that's going to download an agent onto their computer.

It's going to scan the device for compliance

and then delete itself from the user's machine

once it's done with the inspection.

If they pass the inspection,

they'll be granted access to the network;

if they don't, they'll be placed into quarantine.

Network access control can be offered as either a hardware

or a software based solution when you're implementing it

inside your networks.

One of the most commonly used network access control

mechanisms is known as the IEEE standard 802.1x

which is port based network access control.

Most modern NACs are going to be built

on top of the 802.1x standard,

adding additional features and capabilities to it as well.

In addition to this NAC health policy,

where we're checking to make sure everybody meets

a minimum security standard,

there could also be different rule-based methods

that you can use for granting or denying access

to your networks using NAC.

There's going to be more than just this health policy.

We can do it with lots of different things.

We can check things like the time, the location,

the role or rules to decide whether or not this device

should be granted entry to our network.

With time-based factors,

we're going to define access periods for given hosts,

using a time-based schedule.

For example, you might work in a company

that only operates from 9:00 in the morning,

till 5:00 in the afternoon.

And if you try to log in at 2:00 in the morning,

you're going to be denied access.

Now you have to be careful in using these time-based

approaches though,

or you could block legitimate access by mistake.

In my company,

we have employees that work on both sides of the world.

So when I'm sleeping at 2:00 in the morning here,

some of my employees are over in Asia and they're accessing

our networks and we need to make sure they can do their job.

So I can't block people at 2:00 in the morning.

But we could make it so they can only access things

during their daytime hours,

and we can only access things during our daytime hours,

if we want to use a time-based model.
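A time-based policy like this can be sketched as a per-region access window check. The region names and hours here are hypothetical:

```python
from datetime import time

# Hypothetical working-hours windows, one per region.
# The Asia window wraps past midnight in the server's local time.
ACCESS_WINDOWS = {
    "us": (time(9, 0), time(17, 0)),
    "asia": (time(21, 0), time(5, 0)),
}

def access_allowed(region: str, now: time) -> bool:
    """Allow access only inside the region's scheduled window."""
    start, end = ACCESS_WINDOWS[region]
    if start <= end:
        return start <= now <= end
    # Wrap-around window (e.g., 21:00 through 05:00)
    return now >= start or now <= end
```

With this model, a 2:00 a.m. login is denied for US-based staff but allowed for staff in the Asia window, which avoids blocking legitimate overseas access.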

Now, another way you can do things is

with location-based factors.

With location-based factors,

we're going to evaluate the location of the endpoint

requesting access using geolocation of its IP address,

its GPS, or other mechanisms.

For example, if I know that one of my employees

always logs in from Florida,

but now all of a sudden they're logging in from Italy,

that would be something that will be flagged and we might

want to put them into remediation until we figure out,

are they really in Italy

or is it somebody attacking their account?

Now, after all,

maybe that person is on vacation and they're accessing their

work email from Italy, or maybe they're not,

and somebody has actually hacked their account

and is using their credentials.

So we need to validate that before we give them access

to the network.

Both of these cases could be caught using location-based access.
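That kind of anomaly detection can be sketched as a comparison of the current geolocation against the user's usual one. The lookup table, location codes, and return values are hypothetical:

```python
# Hypothetical record of where each user normally logs in from.
USUAL_LOCATION = {"alice": "US-FL"}

def location_check(user: str, current_location: str) -> str:
    """Flag a login from an unexpected location for remediation."""
    usual = USUAL_LOCATION.get(user)
    if usual is None or current_location != usual:
        # Hold the session until a human (or second factor)
        # verifies whether the user is really traveling.
        return "remediation"
    return "allow"
```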

Now another one we have is role-based factors,

and this is going to reevaluate a device's authentication

when it's being used to do something.

This is known as adaptive NAC.

Now, for example, let's say a device tries

to join a subnet that's used for server management,

but it's using a user account on a user laptop.

This should be rejected.

We shouldn't let that user account and user laptop connect

directly to our server management subnet.

But if I tried to connect a server to that subnet,

it would be allowed because that's an authorized

function for that server based on its role.

Now, by using adaptive NAC,

we're going to be looking at the role of the device

and figuring out if it's doing something that it should

or should not be allowed to do.

And we can then adapt based on that.
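The laptop-versus-server example can be sketched as a mapping from subnets to the device roles allowed to join them. The role and subnet names are illustrative:

```python
# Hypothetical policy: which device roles may join which subnets.
SUBNET_POLICY = {
    "server-management": {"server"},
    "user-lan": {"server", "user-laptop"},
}

def adaptive_nac(device_role: str, subnet: str) -> bool:
    """Reject a device whose role isn't authorized for the subnet."""
    return device_role in SUBNET_POLICY.get(subnet, set())
```

So a user laptop asking to join the server-management subnet is rejected, while a server making the same request is allowed.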

The final one we have is rule-based factors,

where we can use a complex admission policy

to enforce a series of rules.

We basically write these up as a bunch

of logical statements: if this, then that;

if this, then this other thing;

if this, then this third thing.

For example, if the user is Jason and he's an instructor,

let him access this folder; if the user is Jason

and he's a student, deny him access.

That's the idea of this rule-based mentality.

Now, this is obviously a very simple example,

but hopefully you're getting the idea of how you can create

rules to secure your network using network access control.

Our goal here is to make a policy based on a series of rules

and then allow or deny things to those people

based on the different conditions.
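The Jason example can be written as an ordered list of if-this-then-that rules, where the first matching rule wins. The rule format here is an illustrative sketch, not any particular NAC product's syntax:

```python
# Ordered rules: the first matching condition determines the
# decision, and anything unmatched falls through to the default.
RULES = [
    ({"role": "instructor"}, "allow"),
    ({"role": "student"}, "deny"),
]

def evaluate(user: dict, default: str = "deny") -> str:
    """Walk the rule list and return the first matching decision."""
    for condition, decision in RULES:
        if all(user.get(k) == v for k, v in condition.items()):
            return decision
    return default

jason_as_instructor = {"name": "Jason", "role": "instructor"}
jason_as_student = {"name": "Jason", "role": "student"}
```

Defaulting to deny when no rule matches keeps the policy aligned with a zero trust posture.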

As you can see, NAC can be extremely useful

as part of our defense-in-depth strategy,

and it helps to enforce a zero trust architecture

within our networks.

Physical Security.

In this lesson, we're going to discuss

the importance of physical security to our network security.

After all,

if an attacker can touch your networking equipment,

such as your switches, your routers,

your firewalls and other devices,

they could change your configurations

or completely bypass the devices

by patching around them with patch cables.

So, as we move through this lesson,

we're going to cover both detection and prevention mechanisms

that you can use to protect your network

and its critical equipment.

First, we're going to start with detection mechanisms,

things like cameras, motion detectors,

asset tags, and tamper detection.

When it comes to detection mechanisms

and detective controls,

we're going to be talking about security controls

that are used during an event

to find out whether or not something malicious

may have happened.

Detection controls are going to be useful

in determining if something happened,

but only after the fact.

These types of controls won't stop something from happening,

but they can provide a method to log it

or identify if a piece of networking equipment

has remained under your control the entire time.

When it comes to cameras,

these are often mounted inside

and outside of rooms that contain your networking gear.

For example, you should have a camera

pointed at the entrance door to your main distribution frame,

your intermediate distribution frames,

your data center, and your telecommunication closets.

When it comes to placing your cameras,

it's really important to understand

where your high value targets are,

and that way you can understand what you need to protect.

If you're pointing the cameras at the entrance or exit,

you'll be able to see when people enter

or leave a data center

or your various telecommunication closets.

In addition to this,

you may wish to have cameras covering the entrance

and exits to your building and your parking lot

to detect if anyone is out there

parked for a long period of time,

trying to connect to your wireless network

and be able to remotely access you from there.

By utilizing a closed-circuit TV system or CCTV,

you can either actively monitor these doors

with a security guard

or you can record the video feeds

and play them back after a suspected incident.

This way you can verify if anyone entered

or left the rooms containing your network equipment.

Surveillance cameras are going to come in many different types,

including both wired and wireless versions.

A wired camera allows the device to physically be cabled

from the camera all the way to a central monitoring station,

usually using a coaxial cable for analog solutions

or unshielded twisted pair for digital solutions.

If you're instead using a wireless camera solution,

you don't have to run physical cables.

Instead, they're going to be easier to install,

but you do have some things you have to worry about,

such as interference with other wireless systems

like 802.11 wireless networks.

Also, if they're wireless, an attacker could jam

or break those signals,

they could prevent them

from communicating with the central monitoring station,

and that way you won't be able to see the video feed.

In addition to wired and wireless varieties,

there's also indoor

and outdoor versions of surveillance cameras.

If you're going to be monitoring the parking lot

or external door,

you need to use an outdoor camera

that can withstand the elements like snow,

wind and rain.

If instead, you're going to be monitoring things

contained inside your building,

then you can choose an indoor camera

because they're going to be a little bit lighter,

cheaper, and easier to install.

Normally, when a camera is mounted,

it's going to be put in one location

with a fixed field of view.

On some higher end models though,

you can purchase what's known as a PTZ camera.

Now PTZ means pan, tilt and zoom.

If you've watched any bank heist movies,

you've probably seen these PTZ cameras used.

With these systems,

a security guard can move the camera

from a central monitoring station by panning,

moving it left or right,

tilting, moving it up or down,

or zooming by getting closer

or further with the field of vision,

all from a joystick at the control panel.

Another option to consider

is whether or not to purchase an infrared camera system.

Now an infrared camera system will display images

based on the amount of heat in a room.

These are really helpful inside our data centers

and telecommunication closets for two reasons.

First, you can quickly and easily identify where a person is

inside a given room.

If an attacker manages to get themselves

into the data center,

they're not going to be able to hide from you,

because their body heat will create a bright orange

or red glow on an infrared camera.

Second, even if no one is in the room,

you can use an infrared camera

to identify hotspots in the room,

such as when a fan in a server rack isn't working anymore,

and therefore heat starts building up.

Using this type of camera could be a way to detect gear

that may overheat before it actually overheats.

The final type of surveillance camera you can purchase

is known as an ultrasonic camera,

which is going to use sound-based detection.

If you've ever watched the Mission Impossible movies,

they had an ultrasonic system

that would sit there and listen.

Even if a pin was dropped on the floor,

it would be detected by the camera

and its super sensitive microphones.

Then the guards could come and arrest the attacker.

Another detection mechanism that we use in our organizations

is going to be related

to maintaining our configuration baseline

and an inventory of all of our equipment.

This is going to use asset tags on all of our equipment,

both the equipment currently in our network closets

and the equipment sitting on the shelf

as spares or replacements in our closets.

Now an asset tag

is a label that identifies a piece of equipment

using a unique serial number, code or barcode.

This asset tag

is going to be physically attached to the device,

things like your switches, your routers,

your firewalls, and other network devices.

And it contains information

such as a unique identification code,

the manufacturing date of the device,

and the date it was placed into use on the network.

By having a unique asset tag on an asset,

you can increase the security

by decreasing the risk of theft,

since each of these devices is going to be accounted for

and tracked by your organization.

Also, this asset tag can be used

to uniquely identify the device

for the purposes of configuration management

and supply chain management.
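The asset tag fields described above can be modeled as a simple inventory record. The field names and example values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AssetTag:
    """Mirrors the fields described above; values are examples."""
    asset_id: str           # unique identification code or barcode
    device_type: str        # switch, router, firewall, etc.
    manufactured: str       # manufacturing date of the device
    placed_in_service: str  # date placed into use on the network

# Hypothetical inventory, keyed by the unique asset tag.
inventory = {
    "NET-00042": AssetTag("NET-00042", "switch", "2022-03-01", "2022-05-15"),
}

def is_tracked(asset_id: str) -> bool:
    """An untracked device on the network is a red flag for theft
    or a rogue device."""
    return asset_id in inventory
```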

In conjunction with that asset tag,

you should also use a tamper detection method.

After all, how can you ensure

that nobody has modified the switch or router

once you place it into the supply closet,

or into that rack of gear in a telecommunications closet

that nobody is watching 24 hours a day?

One of the ways you can do this

is by using anti-tamper techniques.

This can be as simple as a sticker

affixed to the corner of a switch.

If that sticker is removed or broken,

it would indicate that somebody unscrewed the chassis

and added or removed something

from the inner components of that switch.

Some of our networking devices

also have an eFuse built into them

to help with this detection.

This eFuse is an electronic detection mechanism

that can record the version of the IOS

being used by a switch,

and if you attempt to downgrade it to a lower

or more insecure version,

the eFuse would trip

and indicate the firmware has been modified

and can no longer be trusted,

and then it would refuse to boot up the device.
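The eFuse behavior can be modeled as a monotonic version check: once a higher firmware version is recorded, booting anything lower trips the fuse. This is a simplified sketch, not actual vendor firmware logic:

```python
class EFuse:
    """Simplified model of an anti-rollback eFuse."""
    def __init__(self, recorded_version: int):
        self.recorded_version = recorded_version
        self.tripped = False

    def check_boot(self, firmware_version: int) -> bool:
        """Refuse to boot if the firmware was downgraded."""
        if firmware_version < self.recorded_version:
            self.tripped = True   # tamper/downgrade detected
            return False          # device refuses to boot
        # Upgrades (or the same version) ratchet the record forward.
        self.recorded_version = firmware_version
        return True

fuse = EFuse(recorded_version=15)
```

Once the fuse trips, the firmware can no longer be trusted and the device stays down until it's remediated.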

On the other side of physical security,

we also have controls we put in place

to prevent things from happening.

Some prevention methods that we use

are things like: access control hardware,

like badge readers and biometrics;

access control vestibules,

like a mantrap; smart lockers;

locking racks; locking cabinets;

and employee training.

When it comes to access control hardware,

we usually are going to rely on either a badge reader

or a biometric reader to control access

to a secure telecommunications closet or data center.

A badge reader is going to rely on either a magnetic strip,

like a credit card, a chip card,

like a smart card

or RFID or radio-frequency ID card.

In general, it's going to be a great practice

to require all your employees

to wear an identification badge within your buildings.

This badge is then used to unlock an electronic lock

on the different doors of your data center

or other secured spaces,

in combination with some kind of a knowledge factor,

like a PIN,

that's unique to that employee.

By doing this,

we now have a two factor authentication system,

something you have, your badge

and something you know, your PIN.
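A badge-plus-PIN door check combines the two factors just described. The badge IDs and PINs here are made up for illustration:

```python
import hmac

# Hypothetical enrollment records: badge ID -> PIN.
ENROLLED = {"badge-1001": "4821"}

def unlock_door(badge_id: str, pin: str) -> bool:
    """Both factors must match: something you have (the badge)
    plus something you know (the PIN)."""
    expected = ENROLLED.get(badge_id)
    if expected is None:
        return False  # unknown badge: first factor fails
    # Constant-time comparison avoids leaking PIN digits via timing.
    return hmac.compare_digest(expected, pin)
```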

Some of these locks

can also choose to use a biometric authentication

instead of a badge.

For example, I've seen some data centers

protected by a fingerprint reader,

a retina scan, or a voiceprint and a PIN.

Again, by adding the PIN to the biometric factor,

we're gaining two factor authentication,

and making our data center much more secure.

In high security facilities,

they also use an access control vestibule,

in combination with an access control hardware or lock.

An access control vestibule is also known as a mantrap.

An access control vestibule or mantrap

is an area between two doorways that holds people

until they're identified and authenticated.

Sometimes these are automated,

like using an electronic badge and pin system

that we just discussed.

But sometimes they're actually manned

by security personnel

who's going to physically look at your badge

to verify you are who you claim to be.

The most common placement of a mantrap

is actually the entrance to your office building.

So as you enter the building,

there might be an open lobby that anyone can access.

But then there's this set of turnstiles

where you're going to scan your badge

and enter your PIN.

This will allow you to go past those turnstiles

and get into the building.

This area between the front door and the turnstiles,

that is considered the access control vestibule or mantrap.

Once you get past that turnstile,

you're now in the secure area

and you've already been authenticated,

and now you can be trusted.

In some organizations,

they instead opt to have less security at the main floor,

and instead they're going to use access control vestibules

located at key choke points throughout the building,

as you go into certain

higher security areas of the building.

Sometimes you may have both a mantrap

at the entrance of the building that everyone goes in,

who works in that facility,

but then you have an additional mantrap

or access control vestibule

going into a higher security area,

such as your data center.

In this case, the first access control vestibule

at the main entrance of the building gives you access

to a generic level of organizational security.

But if you then need to go into a top-secret area,

you need to go through a second verification

at another access control vestibule.

Next let's talk about personal electronic devices.

A lot of organizations

prevent the use of personal electronic devices

like cell phones, smartphones, and tablets

inside their office spaces.

This is done as a form of data loss prevention.

After all, if I can simply carry my smartphone with me

into a top-secret building,

I could take pictures of some highly classified documents

from my computer screen,

and then those documents could walk

right out the front door with me on my cell phone.

This would not be very secure.

So these organizations,

which are usually going to be located

in something like a government or military building,

are going to place a smart locker at the entrance,

right before the access control vestibule.

This way you can drop off your cell phone

and lock it up securely in the smart locker.

Now a smart locker is a fully integrated system

that looks a lot like a large vending machine.

Basically, you're going to scan your employee badge

at the smart locker on a digital badge reader.

Then you pick an open locker from the display screen,

and once you select it,

that is going to unlock and open up that locker.

You then put in your laptop,

your tablet, your smartphone,

your smartwatch, or other valuables inside the locker.

And once you shut the door to the locker,

it automatically is going to lock

and you can go about the rest of your day.

Whenever you need to get your things back,

you go back to the display screen,

click Open Locker, and scan your badge.

At that point, the locker will unlock again,

and you can retrieve your items from the locker.

Next, let's talk about protecting the things

that are inside of our data center.

Now, once we get inside of our data center

or telecommunication closets,

you're going to see racks or cabinets

that contain the different networking equipment

and servers that we're going to use.

A standard server and networking equipment rack

is about 48 units high,

also known as 48U.

This is also about 50 inches deep

and about 20 inches wide.

These can house networking equipment

like switches and routers,

patch panels, servers,

rack-mounted uninterruptible power supplies,

and much more.

To protect these devices from being tampered with,

these racks and cabinets may contain a small key lock.

By locking these racks and cabinets,

you can control physical access to your equipment.

Generally, one person will be the key custodian

for the data center,

and that person will also maintain a log

of who has which keys and when,

in case that information is needed

during an incident response.

Finally, we need to talk about

the most beneficial prevention mechanism

that you can invest in.

And this is employee training.

According to a study by Forrester Research,

providing employee cyber security awareness training

produced a 69% return on investment

for small to medium-sized companies,

and a 248% return on investment

for large enterprise organizations.

The data supports it.

Employee training is a great investment.

This is particularly true

because the biggest weakness in our networks is our users

and our administrators,

since these are the people

who use and run our networks.

And they're going to cause a lot of problems for us

if they're not trained properly.

If an administrator misconfigures a device, for example,

that creates a vulnerability

that could be exploited by an attacker.

If an end user clicks on a link in a phishing email,

they can cause the organization to get infected with malware

and spend a lot of time and effort cleaning up this mess

during an incident response.

When you're conducting your employee training,

it's important to stress the proper policies

and procedures that the employees need to follow

in regards to both their physical security,

such as challenging personnel

to show their employee badges

or questioning why certain people

are trying to access certain areas,

and technical security such as malware prevention

and anti-phishing training.

Asset Disposal.

In this lesson, we're going to talk about asset disposal,

which is critical to the physical security of your devices

and the data they process.

Asset disposal occurs whenever a system is no longer needed

by an organization.

Now, once you're done using a device like a router,

or a switch, or a firewall,

and you replace it with a newer version,

what are you going to do with that old device?

Are you simply going to take it out of the rack

and throw it in the corner?

Are you going to give it to another organization

like a charity,

or are you going to throw it away?

Well, this is really a question you have to think about

in terms of risk tolerance

and how your organization views its security posture.

Whichever method you decide on

should be documented inside your organization's disposal policy,

and it's something you need to think through.

Now, regardless of which method you choose to use,

you should first make sure that you're following

your organization's policies for proper asset disposal,

because these devices may still contain information

that would be valuable in the hands of an attacker.

For example, if you're disposing of a switch,

router or a firewall,

your old configurations may still be located

in the storage of that device.

If you throw it in the dumpster

and an attacker grabs it as part of dumpster diving,

then they're going to see every single access control rule

you have in that device

and essentially, they can map out their attacks

around your protections.

So, when it comes time to dispose of an asset,

it's important that we perform a factory reset,

wipe the configuration, or sanitize the device for disposal.

Now, many of our other network appliances

are essentially just going to be Linux servers

running specialized software.

So even our old intrusion detection systems

or intrusion prevention systems,

data loss prevention systems,

and unified threat management systems

will still need to be properly disposed of

to ensure we don't let critical information

fall into the wrong hands.

For the rest of this lesson, I am going to talk specifically

about proper disposal of networking equipment

like routers and switches,

but similar processes and procedures are available

for all devices, appliances, and servers on your network.

Now, the quickest and easiest way

to prepare your devices for retirement

is to perform a factory reset.

A factory reset is a procedure

that will remove all the customer-specific data

that's been added to that network device

since the time it was shipped from the manufacturer.

This includes your IOS images, your boot images,

configurations, log files, boot variables, core files,

FIPS-related security keys, and credentials.

To perform a factory reset on a Cisco device, for example,

you simply need to enter the enable command

from the command line interface

to enter the privileged EXEC mode,

then enter the factory-reset all command.

Within a few minutes, you'll have a freshly-formatted

and ready to retire Cisco router or switch.

Now, on the other hand,

if you only need to perform a wipe of your configurations,

you can do this on a Cisco device

using the write erase command.

This will erase the NVRAM file system

and remove the startup configuration file;

the running configuration is cleared once you reload without saving.
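Putting those two options side by side, the command sequences look roughly like this on a Cisco IOS device. Treat this as a sketch: the exact keywords vary by platform and software version, so check the documentation for your specific device.

```
! Full factory reset: removes configurations, logs, boot variables,
! images, security keys, and credentials
Switch> enable
Switch# factory-reset all

! Configuration wipe only: erases the NVRAM file system,
! removing the startup-config
Switch> enable
Switch# write erase
Switch# reload
```

After the write erase, answer no if the reload prompts you to save the running configuration, or the old configuration will be written right back to NVRAM.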

But when you use either of these two options,

you're essentially just running a format command

on the storage of these switches.

If you're a bit more paranoid

or you're dealing with a high security environment,

you should take the extra step

of sanitizing your devices for disposal.

Now, the challenge is that sanitization procedures

usually involve conducting a format

and overwriting the non-volatile memory or storage

with a series of randomized ones and zeros multiple times

in order to prevent any reconstruction of the data remnants

from the storage devices.

So if you're using a network appliance

that's essentially a Linux workstation,

you can follow the standard overwrite procedures

for sanitization of hard drives and solid state drives

that you learned back in A+.
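On a Linux-based appliance, one common overwrite tool is GNU shred from coreutils. Here's an illustrative sketch run against an ordinary file standing in for the storage, so it's safe to try; the file name is made up for this example. Against a real drive you would target the block device instead.

```shell
# Create a scratch file standing in for the storage we want to sanitize
printf 'sensitive-config-data' > /tmp/demo-disk.img

# Overwrite it 3 times with random data, add a final zero pass (-z),
# then unlink it (-u). Against a real drive you would target the
# block device, e.g. /dev/sdX, and drop the -u flag.
shred -n 3 -z -u /tmp/demo-disk.img

# The file's contents were overwritten before it was removed
ls /tmp/demo-disk.img 2>/dev/null || echo "sanitized"
```

Raising the pass count (for example, shred -n 7) mirrors the multi-pass overwrite standards used in higher-security environments.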

But unfortunately, if you're using a router or a switch,

this may not be possible.

In those cases, you have to resort to the removal

and physical destruction of the NVRAM and the flash modules

from these devices.

The NVRAM module stores your configuration files

and the flash module stores the Cisco IOS image;

both of these are removable

on most Cisco routers and switches.

So you can take them out and destroy them.

If you happen to work for the government or the military

on a classified system,

they will usually go to this level of effort

to sanitize and destroy the NVRAM and flash modules

prior to disposal, recycling or destruction

of their secret and top secret routers and switches.

That said, most businesses and organizations

are instead going to rely on a factory reset

or simply wiping the configurations

prior to disposing of the devices.

Now, when it comes to disposing of the information

from our systems,

we have some other methods that we can use as well.

For example, if you have a bunch of backup tapes

that need to be disposed of,

you can actually shred those tapes

by pulling the magnetic ribbons out of the casing,

and then you can put them through the shredder,

or you can even burn them

as a method of physical destruction.

If your organization is using hard drives for storage,

these can be destroyed using a degaussing process.

Degaussing is going to expose the hard disk

to a powerful magnetic field

and this causes the previously written data

to be wiped from the drive

and the drive becomes a blank slate once again.

Now, I've also seen organizations

that take this a step further

and they physically destroy those hard drives

to prevent the data from being exposed.

And they do this by hitting them with axes,

smashing them with hammers,

or even using industrial shredders

to turn that hard disk into tiny little pieces.

Now, if all that sounds a little too violent for you,

that's okay,

there are electronic mechanisms to do this as well.

This is known as purging or sanitizing.

Now purging or sanitizing is the act of removing data

in such a way that it cannot be reconstructed

using any known forensic techniques.

This includes using special bit-by-bit erasing software

that can allow you to rewrite the hard drive many times over

with a series of ones and zeros.

If you do this seven times, or even 35 times

for really high-security applications,

you can actually erase that drive and reuse it again.

Another technique you can use is to encrypt the drive.

And if you destroy the encryption key,

this again makes the data on that drive impossible to read;

that's another way to sanitize your drive.
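This crypto-erase idea can be sketched with OpenSSL. This is an illustration with made-up file names, not a full-disk encryption setup: once the key file is destroyed, the remaining ciphertext is computationally unreadable.

```shell
# Generate a random 256-bit key and encrypt the data with it
openssl rand -hex 32 > /tmp/demo.key
printf 'router-acl-rules' > /tmp/demo-data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:/tmp/demo.key \
    -in /tmp/demo-data.txt -out /tmp/demo-data.enc
rm /tmp/demo-data.txt

# "Crypto-erase": destroy the key, and the ciphertext becomes unrecoverable
shred -u /tmp/demo.key 2>/dev/null || rm -f /tmp/demo.key

# Only unreadable ciphertext remains on disk
grep -q 'router-acl-rules' /tmp/demo-data.enc || echo "data unreadable"
```

Self-encrypting drives use the same principle in hardware: a cryptographic erase simply discards the drive's internal media encryption key.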

Now, if you want to reuse that old hard drive

more easily, though,

you would instead use a clearing technique.

Now, a clearing technique is the removal of data

with a certain amount of assurance

that it can't be reconstructed.

For example,

if you delete a file or folder from your hard disk,

and then you replace that area that it was stored on

with a series of zeros, this would constitute clearing.
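A single zero pass like the one just described can be sketched with dd. Again, this is shown against an ordinary file for safety, and the file name is made up for this example.

```shell
# A scratch file standing in for the region a deleted file occupied
printf 'old-startup-config' > /tmp/demo-region.img
size=$(wc -c < /tmp/demo-region.img)

# One pass of zeros over the same bytes: this constitutes "clearing"
dd if=/dev/zero of=/tmp/demo-region.img bs=1 count="$size" conv=notrunc 2>/dev/null

# The original data is no longer present in the file
grep -q 'old-startup-config' /tmp/demo-region.img || echo "cleared"
```

Note the conv=notrunc flag, which makes dd overwrite the existing bytes in place rather than truncating the file first.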

This is also how the secure erase function

inside of some operating systems works.

Now, unfortunately, clearing is not foolproof,

and with special forensic techniques and procedures,

an attacker may be able to recover that data,

but again, the likelihood is very low.

Also, if you want to conduct something

like a low level format of a hard disk,

this could be considered clearing as well.

The bottom line is,

if you're working in a high security environment,

you really shouldn't be using clearing.

Instead, you should opt for purging or physical destruction.

Now, when it comes down to it,

the major security concern here that we're dealing with

is data remnants.

These are the leftover pieces of data

that may still exist on the hard drive after we no longer need them.

For example, let's say I took an old network appliance

that runs on a Linux server,

and I want to sell it to another person,

I would want to ensure that they can't access any of the data

that was previously stored on there, right?

Well, to do that, I can remove the hard drive,

but this would make that network appliance

essentially unusable or unsellable.

So instead, I can purge it or sanitize it

by going through and using the overwrite procedures

and then re-install the operating system

for this appliance and all of the software

as if I just received it from the factory again.

As long as I overwrote every single sector

of that hard drive multiple times,

the fear of the data being recovered would be mitigated.

Now, there is no right or wrong answer

when it comes to deciding

if an asset should be physically destroyed or reused.

This is a decision you have to make

as a cybersecurity professional based on the cost,

the business case and the security issues

that are involved in your organization

and based on its asset disposal policies.