Network Hardening

In this section of the course,

we're going to talk all about network hardening.

Now, as a network technician,

part of your job is to help make sure

our networks are secure by conducting network hardening.

The term hardening in cybersecurity

simply means to secure a system

by reducing its attack surface

or the surface of vulnerabilities.

So, if you have a network with all of your ports open,

you are really vulnerable.

But on the other hand,

if you have a network that blocks traffic

on every single port going outbound or inbound,

you have an isolated network that isn't very helpful

or useful to your business.

So for this reason, we have to find a healthy balance

between operations and security.

And the best way to do that,

is by following a series of best practices

in order to harden our network devices

and our clients.

So in this section, we're going to focus on just one domain

and one objective.

We'll be talking specifically about

domain 4, network security, and focusing on objective 4.3.

Objective 4.3 states, given a scenario,

you must apply network hardening techniques,

which is why this section is called network hardening.

Now, this includes a lot of best practices

surrounding patch management for our clients,

password security for all of our devices,

shutting down unneeded services,

increasing network security

by using port security and VLANs,

conducting inspections and policing, securing SNMP,

utilizing access control lists properly,

ensuring the security of our wireless devices,

and even taking a look at some of our internet of things

and the security considerations surrounding those.

So, let's get started talking all about the different ways

for us to harden our networks in this section of the course.

Patch management.

In this lesson, we're going to discuss

a network hardening technique known as patch management.

So what exactly is patch management?

Well, patch management is the planning, testing,

implementing and auditing of software patches.

Patch management is critical to providing

security and increasing uptime inside your network,

as well as ensuring compliance

and improving features in your network devices,

your servers and your clients.

Now, patch management is going to increase

the security of your network

by fixing known vulnerabilities

inside of your network devices, things like your servers,

your clients, and your routers and switches.

Now, in terms of our servers and clients,

patch management is going to be conducted

by installing software and operating system patches

in order to fix bugs in the system software.

Patch management can also increase

the uptime of your systems

by ensuring your devices and software are up to date

and they don't suffer from resource exhaustion

or crashes due to vulnerabilities within their code.

Patch management is also used

to support your compliance efforts.

One of the biggest things that's looked at

within a compliance assessment,

is how well your patch management program is being run

and being conducted.

This way, you can ensure it's effective

and making sure your systems are up to date

and patched against all known vulnerabilities,

such as CVEs or common vulnerabilities and exposures

that have patches associated with them.

Now, patch management is also going to be used

to provide improvements and upgrades

to your existing feature set as well.

Many of your patches don't just fix things

or existing problems inside of them,

but they can also add other things like features

and functionality when you do those upgrades.

By ensuring that you're running

the latest version of the software

and that it's fully patched,

you can ensure you have the best feature set

with the highest security available.

Now, as you can imagine,

there are a lot of different patches out there,

because each manufacturer

is going to create their own patches

for their specific applications and software.

Part of your job inside of the patch management process,

is keeping track of all the various updates

and ensuring they're getting installed properly

throughout all of your network devices.

This includes your switches, your routers, your firewalls,

and your servers and clients.

Patch management is not just concerned with ensuring

that a patch gets installed though,

it's also important to ensure

it doesn't create new problems for you

when you do that installation.

After all, patches themselves can have bugs in them too,

just like any other software can.

Therefore, it's really important

for you to effectively conduct patch management

by following four critical steps.

First, planning, second, testing,

third, implementing and fourth, auditing.

Step one, planning.

Now, planning consists of creating policies,

procedures, and systems

to track the availability of patches and updates

and a method to verify

that they're actually compatible with your systems.

Planning is also going to be used

to determine how you're going to test and deploy each patch.

Now, a good patch management tool

can tell you whether or not

the patches have actually been deployed,

installed, and verified as functional

on a given server or a client.

For example, in large enterprise networks,

you may use the Endpoint Configuration Manager by Microsoft

or you can buy a third-party tool

to conduct your patch management.

Step two, testing.

When conducting patch management,

it's really important to test any patch you receive

from your manufacturer

before you automate its deployment

throughout your entire network.

As I said before, a patch is designed to solve one problem,

but it can also create new ones for you

if you're not careful.

Within your organization, you need to ensure

that you have a small test network, a lab

or at the very least a single machine

that you're going to use for testing new patches

before you deploy it across your entire network.

After all, many of our organizations

have unique configurations within our networks

and these patches can break things.

So while a manufacturer attempts

to make sure that a patch is not going to harm our systems,

they cannot guarantee this,

because everyone has different configurations.

Instead, it is better to find out

if a patch is causing issues in your lab environment

before you push it out to 10,000 workstations

across the entire enterprise network.

Because if you do that,

you're going to have a lot of end users yelling

and screaming at you when their systems crash.

Step three, implementation.

After testing the patch,

it's going to be time to deploy it

to all of the workstations and servers that need that patch.

You can do this manually or you can do it automatically

by deploying this patch

to all of your client workstations and servers

and have it installed and moved into production for you.

Now, if you have a small network

of only a few clients and servers,

then you may choose to manually install the patches

across all of these devices,

but if you have a large network,

you're going to want to use some sort of tool.

As I said earlier, Microsoft provides us

with the Endpoint Configuration Manager tool,

but you can also use

third-party patch management tools as well.

Some organizations rely on automatic updates

from the Windows Update System,

while others decide they want to have complete control

over the installation of patches.

For large organizations, it is highly recommended

that you centrally manage updates through an update server,

instead of using the Windows Update tool itself.

This will allow you to test the patch

prior to deploying it into your environment.

To disable Windows Update,

you simply need to disable the Windows Update Service

from running automatically on the given workstations

in your network.
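As a rough sketch, on a single workstation you could do this from an elevated command prompt (in practice, you'd normally push this setting through Group Policy or your management tool instead of touching machines one at a time):

```
REM Stop the Windows Update service (wuauserv) if it's currently running
net stop wuauserv

REM Prevent the service from starting automatically on boot
sc config wuauserv start= disabled
```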

If you have a lot of mobile devices throughout your network,

you also have to figure out

how you're going to do patch management for those devices.

The easiest way to do this

is by using a mobile device manager or MDM,

which works like one of these patch management servers,

but has additional features as well.

All right, now when it comes to testing,

you may not have your own dedicated test network

or lab environment to use, but you still need to do testing.

So what are you going to do?

Well, one thing you can do,

is split up your production network into smaller groups.

In organizations I've led in the past,

we used the concept of patch rings

when we deploy out new patches.

In patch ring one, we have 10 or 20 end user machines

that we'll deploy our patches to first.

If it doesn't break anything on those machines,

then we'll move out into patch ring two,

which has 100 or 200 people

and this will include things like our system administrators

and our service desk workstations,

so we can instantly figure out if things are going wrong.

If that works successfully,

we'll then go into patch ring three,

which contains 1000 or 2000 machines.

And finally, we'll move out to patch ring four

which includes everybody else,

and that may be another 10 or 20,000 machines.

Now, the benefit of doing the deployments this way

as we move through the various patch rings,

is that if there is an issue,

I'm only affecting a smaller group of users

before I break all the users on the network.

If I did it to everybody at once,

I'd have 20 or 30,000 people who are complaining

when things break, but by doing it in these smaller steps,

I only have 10 or 15 people who are yelling at me

and I can fix things quicker.

All right, step four, auditing.

Now, auditing is important,

because you have to understand the client status

after you conduct your patch deployment.

So I pushed out the patch, did it work?

During auditing, you're going to be able

to scan the network and determine whether the patch

that you pushed out actually installed properly,

or whether there were any kind of unexpected failures

that meant the patch wasn't really installed

or isn't providing the protection it's supposed to.

Again, if you're using a tool

like the Microsoft System Center Configuration Manager,

SCCM, or a third-party management tool,

you'll be able to conduct scanning

and verification of your workstations and servers

to ensure that the patches have been installed properly

and with no issues.

Now, if you're using Linux or macOS,

they also have built in patch management systems.

For example, Red Hat Linux uses a package manager

to deploy RPMs, which are packages of patches

to your servers and workstations,

so the same concepts and principles are going to apply here.
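For instance, on a Red Hat-based system, a quick check-and-apply cycle from the command line might look like this (newer releases use dnf in place of yum):

```
# List packages that have pending updates
yum check-update

# Download and install all available package updates
sudo yum update -y
```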

Now, in addition to conducting patch management

across our workstations and servers,

it's also important for us to conduct firmware management

for our network devices.

After all, all of our network devices

are running some form of software

and this is known as firmware inside of our routers,

our switches, our firewalls,

and our other network appliances.

If your network devices don't contain the latest

and most up-to-date firmware versions,

then you could have security vulnerabilities

and software bugs that could be exploited by an attacker.

If you look at the common vulnerabilities

and exposures or CVE website,

you're going to see a long list of vulnerabilities

that we have for all sorts of different networking devices.

Just select the Cisco devices

and you'll see a long laundry list of those

that have been patched and fixed over time.

So just like you need to patch your operating system

for a Windows or Linux computer,

you also need to update the operating system

of your network devices.

In a Cisco device, this is known as the Cisco IOS

or Internetwork Operating System.

Now, to update the IOS version,

you need to flash the firmware on that networking device.
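As a hedged sketch of what that might look like on a Cisco router (the TFTP server address and image filename below are made-up placeholders for illustration):

```
! Copy the new IOS image from a TFTP server into flash memory
copy tftp://192.0.2.10/c2900-universalk9-mz.example.bin flash:

! Tell the device to boot from the new image, then save and reload
configure terminal
boot system flash:c2900-universalk9-mz.example.bin
end
copy running-config startup-config
reload
```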

Some device manufacturers like Cisco,

provide a centralized method

of conducting firmware management

in your enterprise network.

For example, Cisco uses the Cisco UCS Manager

to centralize the management of resources and devices,

as well as conduct firmware management

for your server network interfaces and server devices.

There's also a lot of third-party tools out there,

like the Network Configuration Manager

by ManageEngine, that will allow you to upgrade,

downgrade and manage the configuration

of the firmware of all of your network devices

using automation, orchestration and scripting.

The bottom line here,

is that you need to have firmware management

to ensure that you have the right firmware versions loaded

onto your network devices.

This ensures the security of those devices,

just like we do for our workstations and clients

by using patch management.

Password security.

In this lesson, we're going to discuss

some of the best practices that affect our password security

in our networks and devices.

In general, the strength of our passwords and the level of security

is going to be defined in our password policies.

A password policy is simply a policy document

that promotes strong passwords

by specifying a minimum password length,

requiring complex passwords,

requiring periodic password changes

and placing limits on the reuse of passwords.

Password policies are used to mitigate

the risk of an attacker

being able to compromise a user,

administrator, or service account

on a given network device, server, or workstation.

Utilizing two factor authentication is always going to be

a lot more secure than using just a password,

which is considered a knowledge factor.

But many of our network devices may only support a username

and password for their authentication.

If this is the case, then you need to make sure

you're at least using a good, strong password.

Now, a strong password is defined as one

whose complexity and length are sufficient

to create a large amount of possible combinations

so that brute force attacks can not be completed

in a reasonable amount of time.

Now, there is some debate

amongst cybersecurity professionals as to whether or not

you should use a long password or a complex password.

Traditionally, you may have heard

cyber security professionals promoting the fact

that you need to use a complex password.

Something that includes uppercase letters,

lowercase letters, numbers, and special characters

or symbols in order to have a strong and complex password.

But there is a big vulnerability

with using a complex password

and that's most people have trouble remembering them.

So people being people will tend to write down

these long passwords and reuse the same password

across multiple devices or websites.

This reduces the security of these complex passwords

and leads to them being compromised.

So as of the latest guidance

from NIST Special Publication 800-63B,

also known as the Digital Identity Guidelines,

it's recommended that password complexity rules

should no longer be enforced.

Instead, this special publication from NIST

recommends that you should use a long password

of up to 64 ASCII characters,

even if you're only using uppercase and lowercase letters.

This long password has a sufficient key space

to make brute forcing the password much more difficult.
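To put rough numbers on that key space idea, the number of possible passwords is the character set size raised to the password length. Assuming roughly 94 printable ASCII characters for a fully complex password, and 52 characters when you only use upper- and lowercase letters:

```latex
% Key space = (character set size)^(password length)
\underbrace{94^{8}}_{\text{8-char complex}} \approx 6.1 \times 10^{15}
\qquad
\underbrace{52^{16}}_{\text{16-char letters only}} \approx 2.9 \times 10^{27}
```

So even a letters-only password, if it's twice as long, gives an attacker more than a hundred billion times as many combinations to brute force.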

If you add some numbers

and special characters to it as well,

while still making it something you can remember,

you're adding additional complexity to it too,

and this makes it even stronger.

But again, even if you just have a really long string

of letters, as long as it isn't

repetitive in nature,

that longer password can be just as secure

as an eight-character complex password.

Now, another common set of password policy guidance

that was previously followed closely

was in terms of password aging.

Now essentially, the old guidance recommended

that you should change your password every 60 days,

but under the new guidance from NIST,

they claim that password aging policies

should no longer be enforced.

Again, this goes back to the same problem

we had with long and complex passwords.

If you have some long, complex password,

and you have to change it every 60 days,

you're not going to remember it.

So you're just going to write it down.

So again, they recommend allowing longer periods of time

in-between password changes,

or even not requiring you to change your password at all

if they're long enough and strong.

Speaking of changing your password,

there's another policy out there

that's often enforced in organizations.

This policy requires that users cannot reuse an old password

when they change their current password.

Sometimes this setting is created

so a password can't be reused

within the last five password changes,

while others may make it so you can't use a password

that you've used within the last 25 changes.

Either way, increasing the password history length

makes it so you can't reuse old passwords again.

Another best practice in terms of password security

is that you need to ensure

that all default passwords

are being changed on network devices, servers,

workstations, and service accounts.

By default, when you install a new device like a router,

a switch, or a firewall, or an access point,

they're usually going to have a default username

and password that's set up by the manufacturer.

This allows you to log in the first time

and be able to make changes.

For example, if you buy your internet service

from Verizon Fios, they're going to come

to your home or office and give you a default gateway.

This router device, their gateway, is going to use

the username of admin and the password of admin

as its default setting.

So if you just install this as your new border router

or gateway, and you didn't change

the default username and password from admin/admin,

anyone can simply log into the device and gain full control

over the device and control all the things

that are entering or leaving your network

through that device.

This is obviously not a good thing for security

and makes for a really weak network.

Now, if you're using Cisco devices in your network,

they come with default usernames and passwords

enabled by default as well, depending on the model,

it's either going to be something like admin and admin

or Cisco and Cisco or something like that.

To figure it out, you can just Google it and say,

this is my model number, what's the default password?

And you'll find it really quickly.

Now your servers and workstations

also do something very similar, with Windows for example,

there's an administrator account

that's created by default and up until recently,

the default password for this account was set as blank,

meaning it didn't even have a password.

This is completely insecure.

So you want to make sure you're always checking

these default accounts.

Remember, it's important to set up your password policies

to require users to have a long, strong password,

but it's also important that you change

that default password for any devices,

as soon as you install them

and connect them to your network.
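On a Cisco device, for example, replacing the default credentials might look something like this sketch (the username and passphrases shown are placeholders, not recommendations):

```
configure terminal
! Create a local admin account with a hashed (secret) password
username netadmin privilege 15 secret Example-L0ng-Passphrase!
! Require a hashed password for privileged EXEC mode
enable secret An0ther-Example-Passphrase!
! Obscure any remaining plaintext passwords in the config
service password-encryption
end
copy running-config startup-config
```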

Unneeded services.

In this lesson,

we're going to discuss the best practices

of disabling unneeded services and switch ports

on our network devices, servers and workstations.

So, let's first talk about unneeded services.

Now, before we talk too much about unneeded services,

it's important to define exactly what a service is

in case you're not aware.

Now, a service is a type of application

that runs in the background

of the operating system or device

to perform a specific function.

For example, on your Windows server,

if you have a shared printer in your network,

you might be running a Print Spooler, and this is a service.

Or you might be running a DHCP server on your network.

And this allows you to automatically configure

IP addresses for all your clients.

This DHCP server is a type of service.

To ensure the highest levels of security for our network

and reduce the attack surface of our devices,

it's important that we disable any services

that are not needed for business operations.

For example, if I'm going to be using static IP addresses

for all my network devices,

I don't need to have a DHCP server.

So, I can shut down the DHCP server

and the associated services on the network for DHCP.

You see, each service that we're running

has to be installed on some kind of a device.

Then that device is now using valuable disk space,

and more importantly, it's introducing additional code

that could contain vulnerabilities.

So to combat this,

administrators attempt to practice a concept

known as least functionality.

Now least functionality is the process

of configuring a device, a server or a workstation

to only provide essential services

that are required by the user.

Now, to create an environment of least functionality,

administrators should disable unneeded services,

ports and protocols on the network.

When dealing with hardening of your network devices,

you may also want to disable infrequently used services

that could be used by an attacker for malicious purposes.

Things like a denial of service attack, for instance.

For example, the Echo service runs on port seven,

Discard runs on port nine.

Daytime runs on port 13.

And Chargen runs on port 19.

These are all examples of some smaller TCP and UDP services

that we hardly ever use in modern networks.

But, if they're enabled,

you want to make sure they're disabled

to better harden your devices.

Now, even if you are running a service on your network,

you may not need to run it on every single device.

So, you need to figure out which devices you need it on,

and disable it on all the other devices,

because otherwise, it's just additional vulnerabilities

you're accepting.

For example, if you're running your DHCP server

on your Windows domain controller,

then you can disable the DHCP server

that's built into one of your network devices.

Or, if you're using DHCP, but you never plan to use BOOTP,

which is an older dynamic configuration protocol,

you can disable BOOTP on all of your network devices.

The key here is that

if you're not using a particular service,

you need to disable it.

So to help you disable those unneeded services,

Cisco Network Devices provide the auto secure

command line interface command

that will disable any unnecessary services

while enabling necessary security services

on your network devices.
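For example, on a Cisco router you could run that interactive hardening wizard, or manually disable the small TCP and UDP services mentioned earlier (echo, discard, daytime, and chargen); a minimal sketch:

```
! One-shot interactive hardening wizard
auto secure

! Or disable the legacy small servers manually
configure terminal
no service tcp-small-servers
no service udp-small-servers
end
```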

Finally, let's talk about the switch ports

on your network devices themselves.

As a best practice,

if nothing is connected to a given switch port

on a switch or router, that interface should be disabled.

To disable a switch port on a router or switch,

you're going to go into the configuration

command line interface

and enter the interface you want to disable

such as interface FastEthernet 0/1.

Then you're going to enter the command shutdown.

At this point, even if somebody plugs a cable

into FastEthernet 0/1 on that switch port,

it's not going to do anything

because that port has been electronically

shutdown and disabled.

Now, let's pretend that that switch port

that goes with FastEthernet 0/1

is connected to the patch panel.

And that patch panel is connected to a wall jack

inside an empty office.

Your company just hires a new employee,

and now they're going to put that employee in that office.

What are you going to do?

Well, you need to re-enable that wall jack.

So you're going to log back into the switch.

You're going to go into the

configuration command line interface,

and you're going to enter interface FastEthernet 0/1

and then you're going to enter, no shutdown, that's it.

The switch port will immediately turn itself back on

and it's ready to send and receive traffic again.

So it really doesn't take a lot of effort

to turn these switch ports off or on.

That's why it's a best practice to always disable them

when they're not in use.
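Putting those commands together, here's a minimal Cisco-style sketch of disabling a block of unused ports and re-enabling one later (the interface numbers are just examples):

```
! Shut down a range of unused switch ports
configure terminal
interface range FastEthernet0/1 - 24
 shutdown
exit

! Later, re-enable a single port for a new employee
interface FastEthernet0/1
 no shutdown
end
```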

Now, each network device manufacturer

is going to use their own commands.

The ones I just covered were for Cisco devices,

because they're most commonly used

in large business networks.

If you're going to be using a different brand or manufacturer,

simply check your manual or Google,

how to shut down and enable switch ports on those devices.

After all, even if there's no cables

connected to a switch port,

an open switch port is going to represent a vulnerability.

Think about it this way.

What would happen if an attacker

made their way past your security

and were able to get to where the switch is?

They could just plug into it

and immediately have access to the network.

That would be a really bad thing

because all the switch ports are enabled by default.

So we want to make sure we disable

any of them that we're not using.

Remember just like an unneeded service,

if there's an unneeded switch port,

you should always disable it

to reduce your attack surface

and increase the security of your networks.

Port Security and VLANs.

In this lesson, we're going to discuss the best practices

of using port security, private VLANs,

and how to securely configure the default VLANs

on our networks.

First, let's talk about Port Security.

Now, Port Security is a dynamic feature that prevents

unauthorized access to a switchport by restricting input

to an interface by identifying and limiting

the MAC addresses of the hosts

that are allowed to connect to it.

Basically Port Security refers to blocking unauthorized

access to the physical switchport that's being used

to allow the host to communicate on the Local Area Network.

Now, Port Security also helps us mitigate MAC flooding

by an attacker because only specific MAC addresses

can communicate over a given switchport.

Now, once you enable Port Security,

this is going to make sure that only packets

with a permitted source MAC address can send traffic

through that switchport.

This permitted MAC address is called a Secure MAC address.

Now, Port Security can create a list of authorized

MAC addresses by using their static configurations

or through dynamic learning.

Static Configurations allow you as an administrator

to define the static MAC addresses to use

on a given switchport.

This works well for static environments,

like a server farm, demilitarized zone,

screened subnet, or data center.

Now, with dynamic learning of MAC addresses,

we're going to use this when there's a maximum number

of MAC addresses that are being defined for a given port.

Then, whenever that number is reached,

the port will stop allowing new devices to connect to it,

and instead it will block any devices that it didn't already

learn about and add to its learned list.

Sometimes you'll hear this dynamic learning

referred to as Sticky MAC.

Basically, this is a great way to configure your switchports

when you're going to be using them with end-user devices.

You can set up all the switchports to allow

only one MAC address to be learned per switchport.

Then, whatever device connects

to that switchport first is going to be learned,

and all the others will be rejected by that switch

if someone tries to connect them to that switchport.

Now, if you need to move that person to another office,

for instance, you could go into the switch,

clear the secure MAC for that switchport,

and then the next device that connects would become

the secure MAC for that switchport.
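A minimal sticky MAC configuration on a Cisco access port might look like this sketch (the interface number is just an example):

```
configure terminal
interface FastEthernet0/2
 switchport mode access
 ! Enable port security and learn the first MAC address dynamically
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 ! Shut the port down if an unauthorized MAC address connects
 switchport port-security violation shutdown
end
```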

Next, let's talk about Private VLANS.

Private VLANs are also known as port isolation,

and it's a technique where a VLAN contains switchports

that are restricted to using a single uplink.

Now, the purpose of a Private VLAN or Port Isolation

is to divide a primary VLAN into secondary

or sub VLANs while still using the existing subnet.

While a normal VLAN has a single broadcast domain,

a Private VLAN can break up that broadcast domain

into multiple smaller broadcast sub domains.

There are three different types of VLANs.

We have primary VLANs, secondary isolated VLANs,

and secondary community VLANs.

A primary VLAN is simply the original VLAN

when it isn't being used with private VLANs.

Primary VLANs are used to forward frames downstream

to all secondary VLANs.

The secondary VLANs are broken down into two types,

isolated VLANs and community VLANs.

Now an isolated VLAN is a type of secondary VLAN

that's going to include any switchports that can reach

the primary VLAN but not other secondary VLANs.

An isolated VLAN will also prevent its hosts

from communicating with each other.

So this gives you true port isolation

and adds to the security of your network.

Now, a community VLAN is the second type of VLAN

that includes any switchports that can communicate

with each other and the primary VLAN

but not with other secondary VLANs.

In this case, we don't see port isolation

between hosts in a given secondary VLAN

but instead we only receive isolation between various groups

of hosts in other secondary VLANs.

When you're working with private VLANs,

there are a few different types of ports

that you're going to come across.

First, we have Promiscuous Ports or P-Ports.

These are switchports that connect to the router,

the firewall, or other common gateway devices.

These ports can communicate with anything that's connected

to either the primary or secondary VLANs.

Basically this type of port is not isolated,

and instead it can send and receive frames to and from

any other port inside of the VLAN.

Second, we have Host Ports.

These are going to be broken down into isolated ports

or I-Ports and community ports, or C-Ports.

Isolated ports or I-Ports are going to be used

to connect a regular host

that's going to reside on an isolated VLAN.

These I-Ports can only communicate upwards to a P-Port

and they can not talk to other I-Ports.

Community ports or C-Ports are going to be used

to connect a regular host that resides on a community VLAN.

The C-Ports can communicate upwards to P-Ports

and across the other C-Ports within the same community VLAN.
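As a rough Cisco-style sketch of how these pieces fit together (the VLAN and interface numbers are arbitrary examples), you might map one isolated and one community VLAN to a primary VLAN and tag the ports accordingly:

```
configure terminal
vlan 101
 private-vlan isolated
vlan 102
 private-vlan community
vlan 100
 private-vlan primary
 private-vlan association 101,102
! Promiscuous port (P-Port) toward the gateway
interface GigabitEthernet0/1
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101,102
! Host port on the isolated VLAN (I-Port)
interface GigabitEthernet0/2
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
end
```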

Next, let's talk about default VLANs.

By default, you want to ensure all your switchports

are assigned to a VLAN.

If you don't have them assigned to a particular user VLAN

something like sales or marketing or human resources,

then they're all going to get assigned to the default VLAN

as part of the unassigned switchports.

Now, if you're using a Cisco device,

they're going to do this for you automatically.

If you're not, you may have to manually do it.

Now, the default VLAN is known as VLAN 1.

Personally, I don't like assigning all my unused

switchports to the default VLAN because malicious attackers

know that many businesses use VLAN 1 by default,

and then attempt to use it to conduct VLAN hopping.

Instead, I prefer to create a separate VLAN called unused,

and I assign all my unused switchports to it.

This way, if an attacker connects to one of those

unused ports and they bypass my port security

and they enable the port somehow,

they're still going to be isolated and not communicating

with any of my other clients or servers.
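A sketch of that approach on a Cisco switch, using 999 as an arbitrary number for the unused VLAN:

```
configure terminal
vlan 999
 name UNUSED
exit
! Park all unused ports in the unused VLAN and shut them down
interface range FastEthernet0/10 - 24
 switchport mode access
 switchport access vlan 999
 shutdown
end
```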

So, what makes this default VLAN that we're talking

about so special?

Well, if a client is sending data to the network

and it doesn't contain a VLAN tag, then the switch considers

that traffic destined for the default VLAN.

So if you don't have VLANs configured in your network,

all your traffic is going to use

the default VLAN, VLAN 1.

By default, your default VLAN is also the same

as your Native VLAN.

These terms are often used interchangeably.

The Native VLAN is where untagged traffic is

going to go whenever it's received on a trunk port.

This allows our switches and other layer two devices

to support legacy devices or devices that don't

use tagging on their traffic,

and still get that traffic to this Native VLAN.

Now, this is really useful when you connect things like

wireless access points and network-attached devices

to your network.

And so it's important for you to understand

that the default VLAN is VLAN 1 and the Native VLAN

is also the default VLAN.
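If you do want untagged traffic on a trunk to land somewhere other than VLAN 1, you can move the Native VLAN. This is a minimal Cisco-style sketch, and the VLAN number and interface are just illustrative choices:

```
! Change the Native VLAN on a trunk away from the default of VLAN 1
interface gigabitethernet0/1
 switchport mode trunk
 switchport trunk native vlan 99
! VLAN 99 should exist and should not carry ordinary user traffic
```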

Inspection and policing.

In this lesson,

we're going to discuss how we conduct inspection and policing

on our networks to increase their security.

This includes dynamic ARP inspection, DHCP snooping,

Router Advertisement Guard, and control plane policing.

First, we have dynamic ARP inspection.

Dynamic ARP inspection, or DAI, is a security feature

that validates the address resolution protocol

or ARP packets within your network.

Dynamic ARP inspection allows a network administrator

to intercept, log, and discard ARP packets

with invalid MAC address to IP address bindings.

This protects the network from certain on-path

or man-in-the-middle attacks.

To prevent ARP cache poisoning attacks,

a switch needs to ensure that only valid ARP requests

and responses are being relayed across the network device.

Dynamic ARP inspection

inspects all ARP requests and responses,

and then verifies them for valid MAC address

to IP address bindings before the local ARP cache

is going to be updated or that packet gets forwarded

to the appropriate destination.

If an invalid ARP packet is found,

it's going to be dropped and it will not be forwarded.

For dynamic ARP inspection to work,

the system must maintain a trusted database

of MAC address and IP address bindings.

As each ARP packet is inspected,

it's going to be checked against this trusted database.

To create this database,

the network devices will conduct DHCP snooping

in order to build their list of bindings.

In addition to this,

you can also configure your network devices

to use user-configured ARP access control lists

that contain statically configured MAC address

to IP address bindings.

Finally, dynamic ARP inspection

can also drop any ARP packets

where the IP addresses in the packet are invalid

or where the MAC addresses in the body of the ARP packet

do not match the address specified in the ethernet header.
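On a Cisco-style switch, enabling dynamic ARP inspection might look something like this sketch; the VLAN number and uplink interface are assumptions for illustration:

```
! Inspect ARP packets on the user VLAN
ip arp inspection vlan 10
! Trust the uplink toward the distribution switch and DHCP server,
! so its ARP traffic isn't checked against the binding table
interface gigabitethernet0/1
 ip arp inspection trust
```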

Second, we have DHCP snooping.

DHCP snooping is a DHCP security feature

that provides security by inspecting DHCP traffic

and filtering untrusted DHCP messages

by building and maintaining a DHCP snooping binding table.

Now, an untrusted message

is any message that's received from outside of the network

or outside of the firewall

and that could be used to create an attack

within your network.

The DHCP snooping binding table

is going to contain the MAC address, the IP address,

the lease time, the binding type, the VLAN number,

and the interface information

that corresponds to the local untrusted interface

of a switch.

The binding table does not contain information

regarding hosts interconnected

with a trusted interface though,

only the untrusted interfaces.

So this is used outside of your network

on the way in or out of that network,

not within your network.

Now when we talk about an untrusted interface,

this is any interface that's configured to receive messages

from outside your network or firewall.

Since they're outside of your network,

they're automatically considered to be untrusted.

A trusted interface, on the other hand,

is any interface that is configured

to receive only messages from within your network.

Remember, if they're coming from inside your network,

we consider it trusted.

If it's coming from outside of your network,

we consider it untrusted.

Essentially, when we use DHCP snooping,

it's going to act like a firewall

between untrusted hosts and DHCP servers.

It provides us with a way

to differentiate between untrusted interfaces

connected to an end-user device

and trusted interfaces connected to the DHCP server

or another switch.

For DHCP snooping to be effective,

you need to configure your switches and your VLANs

to allow DHCP snooping by your network devices.
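A minimal Cisco-style DHCP snooping configuration might look like this; the VLAN number and the trusted uplink interface are illustrative assumptions:

```
! Enable DHCP snooping globally and on the user VLAN
ip dhcp snooping
ip dhcp snooping vlan 10
! Only the uplink toward the legitimate DHCP server is trusted;
! every other interface remains untrusted by default
interface gigabitethernet0/1
 ip dhcp snooping trust
```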

Next, we have Router Advertisement Guard.

The IPv6 Router Advertisement Guard, or RA-Guard,

is a mechanism that's going to be commonly employed

to mitigate attack vectors

based on forged ICMPv6 router advertisement messages.

In IPv6, router advertisements can be used by network hosts

to automatically configure themselves

with their own IPv6 address

and pick out their own default router

based on the information they're seeing

within a router advertisement.

Now, this could introduce a security risk

to your network though

because a host could create a default route

out of the network based on a suspicious

or malicious router advertisement sent by an attacker.

So, to prevent this,

we need to configure IPv6 Router Advertisement Guards,

or RA-Guards, to filter router advertisements

as they're going across your network.

RA-Guards operate at layer two of the OSI model

for IPv6 networks.

Now, your configuration can be set up very easily

and very effectively

by simply adding a configuration that says,

"don't allow RAs on this interface".

With that simple line, the switch will then filter out

all router advertisements from the internet

and then your internal host devices

can't fall victim to setting up malicious routes

to a hacker-controlled gateway.
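On Cisco-style switches that support it, that "don't allow RAs on this interface" idea can be expressed with a one-line interface command, sketched here with an illustrative interface name:

```
! Drop any IPv6 router advertisements received on this host-facing port
interface gigabitethernet0/5
 ipv6 nd raguard
```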

Finally, we have Control Plane Policing, or CPP.

The Control Plane Policing, or CPP, feature

is going to allow users to configure a quality of service

or QoS filter that will manage the traffic flow

of control plane packets to protect the control plane

of your Cisco IOS routers and switches

against denial of service and reconnaissance attacks.

This helps to protect the control plane,

while maintaining packet forwarding and protocol states,

despite an attack or heavy load on that router or switch.

This is known as policing and not an inspection

because we're dealing with maintaining

a good quality of service level for this router.

Notice, we're talking all about

the control plane here as well.

This means we're looking at the switch or router

in terms of its logical functional components.

Things like the data plane, the management plane,

the control plane, and the service plane,

just as we would be in a software defined network or SDN.

This control plane policing

ensures that the rate limiting of traffic

is modified dynamically

to ensure the device doesn't become overloaded,

that it doesn't have an overly high CPU utilization,

and that it doesn't create

an unacceptably low quality of service

due to periods of high demands or malicious attacks.
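Here's a simplified sketch of what control plane policing can look like on a Cisco-style device. The ACL number, class and policy names, and the police rate are all illustrative values, not a recommended baseline:

```
! Identify SSH traffic aimed at the device's control plane
access-list 120 permit tcp any any eq 22
class-map match-all CPP-MGMT
 match access-group 120
! Police that class so a flood can't drive up CPU utilization
policy-map CPP-POLICY
 class CPP-MGMT
  police 64000 conform-action transmit exceed-action drop
! Attach the QoS policy to the control plane itself
control-plane
 service-policy input CPP-POLICY
```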

Securing SNMP.

In this lesson, we're going to discuss how we can best secure

the Simple Network Management Protocol or SNMP.

SNMP is a great helper protocol inside of our networks

and allows us to easily gather information

from various network devices

back to our centralized management server.

In the past, SNMP relied on the use of a shared string

called a community string to grant access to portions

of the device management planes.

This led to widespread abuse of SNMP by attackers though,

because it allowed them to gain access and control

over network devices.

So our first step to securing SNMP in your network

is to ensure you are not using SNMP v1 or SNMP v2.

This is because both version 1 and version 2,

use an insecure version of a community string.

The only version of SNMP that you should be using

is version 3, because it adds the ability

to use authentication and encryption of your SNMP payloads

as they're being sent across the network.

SNMPv3 instead is going to use encoded parameters to provide

its authentication as part of the SNMP architecture.

By using SNMP V3, instead of V1 or V2,

you're going to prevent replay, on-path,

or man-in-the-middle attacks on your SNMP architecture.

Now this alone isn't enough to call SNMP V3 secure though,

because hackers can continue to find ways

to abuse the protocol and use it for their own advantages.

To better secure SNMP,

you should also combine the use of SNMP V3

with using whitelisting of the management information base

or MIB by implementing different SNMP views.

This will ensure that even if the credentials are exploited,

your information can not be read from a device

or written to a device, unless the information is needed

as part of normal monitoring

or normal device reconfiguration techniques.

Another solution to help you secure SNMPv3

is to use authPriv on all your devices.

This will include authentication and encryption features.

For this to work,

you need to use a newer network device though

that supports a cryptographic feature set.

Also, you need to ensure that

all of your SNMP administrative credentials

are being configured with strong passwords

for authentication and encryption.

You should also follow the principles of least privilege.

This includes using role separation

between polling and receiving traps for reading,

and configuring users or groups for writing,

because many SNMP managers require login credentials

to be stored on the disk in order to receive traps

from an agent.

Access control lists, or ACLs, should also be applied

and extended to block unauthorized computers

from accessing the SNMP management network devices.

Access to devices with read and or write SNMP permissions

should be strictly controlled

to ensure the security of your network.

When it comes to your SNMP traffic,

I recommend that you segregate out your SNMP traffic

onto a separate management network or a separate VLAN.

Preferably, you're going to use a management network

that's out-of-band, if you can afford to design it that way,

if not, you're going to need to logically separate

the SNMP traffic into a separate VLAN

to keep it secure at a minimum.

Finally, remember that your MIB and SNMP management devices

are just another type of server,

and so you need to make sure you keep

their system images, software, and firmware

up to date

using good patch management principles.
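Pulling several of these recommendations together, a Cisco-style SNMPv3 configuration might look like this sketch. The view, group, and user names, the passwords, and the management station IP are all made-up values for illustration:

```
! Limit what can be read via a MIB view, and require authPriv (SNMPv3)
snmp-server view MONITOR-VIEW iso included
snmp-server group NETMON v3 priv read MONITOR-VIEW access 10
snmp-server user monitor1 NETMON v3 auth sha Str0ngAuthPass! priv aes 128 Str0ngPrivPass!
! Only the management station may speak SNMP to this device
access-list 10 permit host 10.0.99.5
```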

Access control lists.

In this lesson,

we're going to discuss how we can best secure our network

using access control lists, firewall rule-sets,

and how to configure role-based access.

Now, an access control list, or ACL,

is a list of permissions associated

with a given system or network resource.

An ACL can be applied to any packet filtering device,

such as a router, a layer 3 switch, or a firewall.

In an ACL, you're going to have a list of rules

that are being applied based on an IP address,

a port, or an application,

depending on the type of device

you're applying this ACL onto.

Now, as the access control list

is being processed by the device,

it's going to start at the top of the list

and work through each line

until it reaches the end of that list.

So we're going to work from top to bottom.

Therefore, you always want to make sure

your most specific rules are at the top,

and your most generic rules are at the bottom.

For example, let's pretend I had an access control list

as I'm working as a bouncer at a nightclub.

Now, at the top of the list,

I might have something very specific,

like somebody's name, John Smith.

He came in last week and he caused all sorts of trouble.

So John Smith cannot come in the club.

Now, as I move down the list,

I may get to something more generic.

So I might get to something that says anybody

whose driver's license says they live in Montana,

because I'm running a club in Florida.

If that was the case,

I might want to block that

because maybe we had a lot of people

coming in with fake IDs from Montana,

so we're not going to accept those anymore.

Now, as we get to the end of that list,

we might see something very generic.

Something like no men allowed.

Maybe this is a woman's only club.

Now, this is a pretty generic rule, right?

Because half of the people on the planet are men.

So this is a very generic way of saying things.

So as we go from the top to the bottom,

we go from very specific, to more general,

to the most general.

Now, the same thing happens in our networks.

If I'm going to create a rule to block SSH

for a single computer based on its IP address,

that's going to be towards the top of my list.

If I want to block any IP address that's using port 110,

that's going to be a bit more generic.

So it'll be somewhere in the middle.

Finally, if I want to block any IP going to any port,

that is going to be something that is really generic,

and it should be at the end of my list.
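Those three rules, ordered from most specific to most generic, might be written on a Cisco-style device like this; the host IP address is an illustrative value:

```
! Most specific first: block SSH (port 22) from a single computer
access-list 101 deny tcp host 10.0.2.50 any eq 22
! More generic: block any IP address using port 110 (POP3)
access-list 101 deny tcp any any eq 110
! Most generic last: block any IP going to any port
access-list 101 deny ip any any
```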

So let's talk about some things

that we may want to block using our ACLs

in order to help secure our networks better.

Now, first we want to make sure

we're blocking incoming requests

from internal or private loop back addresses,

or multicast IP ranges, or experimental ranges,

if we have something that's coming from outside

of the network going into our network.

So if you have something that says

it's coming from 192.168 dot something, dot something,

and it's coming from the internet interface,

well, that's a non-routable IP,

and it shouldn't be coming from there.

So you should be blocking that.

That should never be allowed

to come into your network from the internet,

because usually it's an attacker trying to spoof their IP.

Similarly,

if you start seeing source IP addresses

coming from areas that are reserved for things

like loop back or experimental IP ranges,

those things should also be blocked immediately.
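As a sketch, an inbound filter for these spoofed source ranges on a Cisco-style internet-facing interface could look like this; the ACL name and interface are illustrative:

```
ip access-list extended BLOCK-SPOOFED
 ! Private (RFC 1918) source addresses should never arrive from the internet
 deny ip 10.0.0.0 0.255.255.255 any
 deny ip 172.16.0.0 0.15.255.255 any
 deny ip 192.168.0.0 0.0.255.255 any
 ! Loopback, multicast, and experimental source ranges
 deny ip 127.0.0.0 0.255.255.255 any
 deny ip 224.0.0.0 15.255.255.255 any
 deny ip 240.0.0.0 15.255.255.255 any
 permit ip any any
interface gigabitethernet0/0
 ip access-group BLOCK-SPOOFED in
```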

Second,

you want to block incoming requests

from protocols that should only be used locally.

For example,

if you have ICMP, DHCP, OSPF, SMB,

and other things like that,

you want to block those at the firewall

as things try to enter your network.

Now, if you have something like Windows file sharing,

for instance, which operates over SMB,

that should not be happening over the internet.

That is something that should happen

inside the local network only.

So again,

you should be blocking that at the firewall

at the border of your network.

If somebody has a VPN and they're working from home,

they'll be able to tunnel through your firewall,

access the local network,

and then use SMB that way.

But they shouldn't be using it straight from their home

over the internet to your network.

They should only do it through a VPN.

Now, the third thing you want to consider

is how you want to configure IPv6.

Now, I recommend you either

configure IPv6 to block all IPv6 traffic,

or you allow it only to authorize hosts and ports

if you're using IPv6 in your network.

The reason for this is because a lot of hosts

will run dual stack TCP/IP implementations

with IPv6 enabled by default.

And if you're not aware of that,

you're going to be having a lot of these things open,

and you're letting people have

unfettered access to your network.

A lot of organizations are still running IPv4 only,

and if they're doing that,

they definitely need to turn off IPv6 on those hosts,

and they need to configure their firewall to block it.

If you don't do this,

you could have a misconfiguration

that could allow adversaries unfiltered access

into your networks by using that IPv6 address space,

because a lot of administrators

simply haven't locked down IPv6 well enough yet.

So keep this in mind as you're doing

your configurations on your firewalls

and your access control lists.

All right.

Now that we have some basic rules out of the way,

let's take a look at an access control list,

and walk through it together.

Now, this one is an example from a Cisco firewall,

but that really doesn't matter for this exam,

because when we're talking about CompTIA exams,

they are device agnostic.

This could've come from a router.

It could've come from a firewall.

It could've come from Cisco, or Juniper, or Brocade.

It really doesn't matter.

The point is I want you be able to read

a basic firewall like this and understand it,

because that will make sure you're doing well on the exam.

So let's start out with the first line.

ip access-list extended From-DMZ.

This just says that this is an access list.

And in this case, I'm using it for my DMZ.

The second line is a comment or remark line.

This is going to tell you what this section is about.

Basically, it's saying that we're

going to have responses to HTTP requests,

and that we're going to get a bunch of permit statements here.

Now, as we go through these permit statements,

we're going to look at them one at a time,

and it's going to tell us which things

are being permitted or denied.

Now, when we see the word permit,

that means we're going to allow something,

and in this case,

we're going to allow TCP traffic.

So we have permit tcp,

and then we have the IP address

that's going to be associated with it.

In this case, we're going to permit TCP traffic

coming from the IP address 10.0.2.0.

The next thing we have is going to be our wildcard mask,

which acts like a subnet mask.

Now, this looks a little funny

because it's a wildcard mask

and it's technically a reverse wild card,

and it's written as 0.0.0.255.

So if you want to read this as a subnet mask,

you actually have to convert it.

And essentially you're going to make it 255.255.255.0.

This is a Cisco thing.

When you see the zero in the reverse wildcard,

treat that as a 255.

If you see a 255, treat it as a zero.

Don't let this get you confused.

Essentially what we're saying here

is that we're permitting TCP traffic

from any IP that is 10.0.2 dot something,

because this is the 10.0.2.0 network,

and it has 256 possible IPs that we're going to use here.

Anything in this IP range will be permitted under this rule.

The next part you see is eq, which stands for equal.

So the IP address has whatever is beyond this equal sign,

and that's going to be allowed.

In this case, we're equaling www.

Now, what does that mean?

It means port 80.

Www is Cisco's way of saying this is web traffic.

Somebody can make a request over port 80,

and we're going to allow it.

Next we have the part that says any,

and this says that we're going to be going

to any IP address as our destination.

So we can go to any web server in the world over port 80,

and we're going to allow it.

This will allow us to make an established connection there,

and then start traffic.

So any time we want to make an established connection

from 10.0.2 dot something to some website over port 80,

we're going to allow that using TCP.

Essentially that's what we're saying.

People can go out and access a website

from our DMZ out to the internet,

and this is all we're saying with this particular line.

Now, as you go through

and you read all these different lines in the ACL,

you can start figuring out

what is permitted and what is denied.

In this case, everything shown here is permitted

because we're doing explicit allow permissions.

What we're saying is yes,

all of these things are allowed.

Permit them from this IP and this port

going to that IP and that port.

But as we go through to the bottom of this list,

you'll see one statement that looks a little different.

It says deny IP any any.

Now, this is what's known as an implicit deny.

This says that anything

that is not already allowed above in my ACL rule-set

is something we're just going to deny by default.

So if we get down this list

and you see things like www, 443, echo reply, domain,

these are all things that we're allowing.

And then when I talk about domain here,

I'm not really talking about domain in general,

but we're talking about DNS as a service,

because this is the way Cisco talks about DNS services.

When they say domain,

we are really talking about equaling port 53.

So in this case,

everything you see listed here

is all these different permit statements

that are going to allow traffic from our DMZ to the internet.

The DMZ can go out

and get web traffic over port 80 or port 443.

It can reply to echo requests, which is ICMP.

It can use port 53, which is domain, over UDP and TCP.

These are all things

that we're going to be allowed to do from this DMZ.

But when I get down to that last statement,

if any of those things didn't happen,

we are going to deny it.

So for example,

if somebody tries to go to port 21 and access FTP,

we're going to reach that deny IP any any statement,

and it's going to be blocked.

This is because that statement

will deny any IP going from any IP to any IP.

Essentially, this ACL is configured as a white list.

It's only going to allow things

that are being permitted explicitly listed in this list,

and everything else is going to be blocked.

This is a good way of doing your security.

Now, we just mentioned the concept of explicit allow,

but we can also have firewall rules

that will use explicit deny or implicit deny.

Now, when you have an explicit deny statement,

you're creating a rule to block

some kind of matching traffic.

In this example I showed you,

we didn't have any explicit deny statements,

but they would look exactly the same

as our permit statements,

except we would change the word permit to deny.

Now, this allows us to go

from an explicit allow to an explicit deny.

So let's say I wanted to block traffic

going to the IP address of 8.8.8.8.

I could create a rule that says

deny ip any 8.8.8.8 0.0.0.0.

And it's going to block all ports

and all protocols going to the IP address of 8.8.8.8.

Now, notice my reverse wildcard mask there was 0.0.0.0,

which tells me I only want to match this IP.

Not a whole network, just the IP of 8.8.8.8.

On the other hand, I can also use an implicit deny,

which blocks traffic to anything not explicitly specified.

In the example ACL I showed you,

that last statement had that implicit deny.

Basically anything not already explicitly allowed

by an allow statement is going to get blocked

because we had that deny IP any any statement

as the last statement at the end of our ACL.

Finally, we need to talk about role-based access.

Role-based access allows you to define the privileges

and responsibilities of administrative users

who control your firewalls and their ACL's.

With role-based access,

we put different accounts into groups

based on their roles or job functions.

Then based on those roles,

we're going to assign permissions

to which devices they can configure and modify.

So for example,

if I'm responsible for updating

and configuring the border gateway

or firewall for the network,

I would get access to add things to the ACL

that would open or restrict communication

between the internet and the internal network.

On the other hand,

if I'm just a switch technician

who's responsible for adding and removing users

when they're assigned to a new office,

my role would not allow me

to modify a layer 3 switch's ACLs,

but instead would only allow me

to shut down or reenable switchports

and configure port security.

Wireless Security.

In this lesson,

we're going to discuss the best practices

for hardening your networks using wireless security.

We're going to cover topics such as MAC filtering,

antenna placement, power levels, wireless client isolation,

guest network isolation, pre-shared keys,

EAP, geofencing, and captive portals.

First, let's talk about MAC Filtering.

MAC filtering allows you to define a list of devices

and only allow those devices

to connect to your wireless network.

Now, it does this by using an explicit allow

or an implicit allow list.

With an explicit allow,

we're going to work by creating a list

of all the allowed devices

and blocking any device whose MAC address

is not included on this list.

Essentially, it's a white list.

Now, implicit allow,

instead works by creating essentially a black list.

We're going to allow any device

whose MAC address does not appear in that list

to connect to our wireless network.

That's the difference between the explicit allow

and an implicit allow.

Explicit allow is a white list,

implicit allow is a blacklist.

All right, for best security,

you should always use explicit allow

when doing MAC filtering.

This is because it's really easy to conduct MAC spoofing

on a wireless device.

So, if you're using an implicit allow,

and you're using this blacklist method,

I can simply take my MAC address

and spoof it to something else,

and be on your network within about five seconds.

So it doesn't really stop a bad actor from getting on there.

Now, in this case,

that bad actor looks like they're a brand new user

and they haven't been a bad person before,

so you're going to let them connect.

This is why you always want to use

an explicit allow instead.

In theory, MAC filtering is supposed to give you

some decent protection,

but I'll tell you, in the real world,

MAC filtering just isn't that strong,

and can easily be bypassed.

So, while it may be a little inconvenient

for a skilled attacker to get into your network

if you're using an explicit allow,

it really won't slow them down for very long.

Therefore, if you're going to use MAC filtering,

don't rely on it as your only protection

for a wireless network.

For the exam, though,

CompTIA does say MAC filtering is a good thing,

and you should use it.
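For reference, explicit-allow MAC filtering on an older Cisco autonomous access point can be sketched like this; the MAC address shown is illustrative, and other vendors expose the same idea through their own management interfaces:

```
! MAC address ACLs use the 700-799 range on Cisco autonomous APs
access-list 700 permit 0011.2233.4455 0000.0000.0000
access-list 700 deny 0000.0000.0000 ffff.ffff.ffff
! Apply the list so only whitelisted MACs may associate
dot11 association mac-list 700
```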

Second, Antenna Placement.

Now, antenna placement is important

for our wireless networks,

both for our successful operations of those networks,

as well as for the security of those wireless networks.

Most wireless access points

come pre-installed with an omnidirectional antenna.

This means the wireless antenna is going to radiate out

radio frequency waves in all directions at an equal power.

For this reason,

you need to carefully consider

where you want to place that device

to provide adequate coverage for your entire office,

but also so that you can keep that signal

within the walls of your office,

and not out into the parking lot or other spaces.

For example, consider this floor plan for a small office.

Now, where you see a green area,

we have strong areas of signal strength,

and the area decreases down to yellow,

and then eventually down to red.

Due to the placement of the antennas

and the wireless access points,

we actually have some green and yellow signal

that's actually outside the physical office building.

Because of this,

an attacker could be sitting in the parking lot

and gaining access to this office's wireless network,

because there's usable coverage in this parking area.

For this reason,

it's important to consider the placement of your antennas,

especially if you're using omnidirectional antennas.

Additionally,

you can change out your omnidirectional antennas

on some access points to use directional antennas instead.

This will help you keep the signal inside the building.

So, instead of using four omnidirectional antennas

as they did in this office,

it would be better from a security standpoint

to replace some of those omnidirectional antennas

with directional antennas.

For example, on the left-most wall,

we could mount a right directional antenna

that would then only broadcast a signal

180 degrees to the right.

Meaning there's no radio waves

leaking out the left wall of that building.

Similarly, I could use a left directional antenna

on the right wall,

pushing the radio frequency waves inward into that building

all the way to the left,

but we still need to make sure the omni-directional

is sitting in the middle.

This way, we have good coverage for that middle section.

Now, instead of placing it close to an external wall

like they did in this diagram,

I would instead move that more

towards the middle of the building.

This will actually center its coverage area

and keep more of it within the walls of the office.

In addition to that,

we could also adjust the power level downward.

And by doing that,

keep those radio waves inside the building even more.

Now, this brings us to our third security measure,

Power Levels.

Each wireless access point

can radiate its radio frequency waves

at a variable power level.

If you're using more power,

you're going to cover more area, but by covering more area,

we also have radio waves leaving our building,

and that's not going to be good for security.

So it becomes important for us to consider

what power level you're going to be using

when you set up your wireless access points.

By conducting a site survey,

you can determine how much power is too much or not enough,

and you can balance the needs of network coverage

against your need for network security.

Fourth, Wireless Client Isolation.

Now, wireless client isolation is a security feature

that prevents wireless clients

from communicating with each other.

You see, by default, most wireless networks operate

as if your devices were all connected to a hub,

and this allows every device to communicate

with every other device on the wireless network.

But with wireless client isolation,

your wireless access point is going to begin to operate

like a switch when it's using private VLANs.

Now, this will ensure that each device

can only communicate with itself

or upwards to the access point and out of the network

through the wireless access point.

By doing this, it operates a lot like a private VLAN

using an isolated port or I-Port.

When using wireless client isolation,

these devices can communicate with other devices

on the local area network if the access control lists

are configured to allow them to do this.

Now, the fifth thing we want to talk about

is Guest Network Isolation.

Guest network isolation is a type of isolation

that keeps guests

away from your internal network communications.

With guest network isolation, your wireless access point

will create a new wireless network that's used by guests

to your home or your office.

This wireless network

simply provides them with a direct path out to the internet

and bypasses your entire local area network.

If you have a network device,

something like a printer or a file share,

the people in the guest network can not get to it

because they're isolated from your local area network.

This is a great security measure,

and ensures your local area network is protected

from those who are using the guest wireless network.

Sixth, Pre-Shared Keys, or PSKs.

Now, pre-shared keys

are used to secure our wireless networks

by using encryption, things like WEP, WPA, WPA2, and WPA3.

The pre-shared key is used with these encryption schemes,

and it's a shared secret

between the client and the access point,

and that has to be shared ahead of time

before you connect to it over some secure channel.

So, for example, let's say you came over to my house

and I'm using WPA2 with a pre-shared key.

Now, you're going to select my wireless network,

and then you're going to enter the password for that network.

That password is your pre-shared key.

For you to get that pre-shared key,

I had to give it to you, though, right?

These pre-shared keys are only as strong

as the passwords that are representing them.

So, if you're going to use a pre-shared key,

make sure you're using a long and strong password.

The biggest challenge we have with these pre-shared keys,

though, is that everyone needs to know it,

to be able to get onto the network,

so we're all using the same password.

But when a lot of people are using the same pre-shared key,

it becomes vulnerable to compromise

because somebody could lose it

or tell somebody else what it is,

and then everybody knows what that PSK is.

Seventh, EAP or the Extensible Authentication Protocol.

Now, EAP is a protocol that acts as a framework

and transport for other authentication protocols.

In our wireless networks,

if we want to move beyond using a pre-shared key

for authentication, we can instead use EAP,

and this is used at a lot of enterprise networks.

With EAP,

we can combine it with the 802.1X port access protocol

to use digital certificates

and pass them to an authentication server,

such as a RADIUS or TACACS+ server, using EAP.

Now, this is going to provide us with higher levels of security

than a pre-shared key,

and we can individually identify which device or user

is connected to the network using that digital certificate.

Eighth, Geofencing.

Geofencing is a virtual fence

created within a certain physical location.

Now, when you combine this with a wireless network,

we can actually set up our wireless network

to only allow a user to connect to it

if they're located within a certain geofenced area.

For example, let's say I'm running a restaurant in the mall.

I could set up a wireless network

to only allow people sitting in my restaurant

to connect to the wireless network

and deny anyone whose device says their GPS coordinates

are not within the four walls of my restaurant.

So, if somebody's sitting

at the competitor's restaurant next door,

they can't use my wireless network.

Only my clients can,

because they're sitting within my geofence.
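As a rough illustration (my own sketch, not something from the course), a geofence check often boils down to a distance calculation: take the GPS coordinates the device reports and test whether they fall within some radius of the venue. The coordinates below are hypothetical, and a real restaurant fence would more likely be a polygon than a circle.

```python
import math

def within_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if point (lat, lon) is inside a circular geofence of radius_m meters."""
    earth_r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat), math.radians(fence_lat)
    dphi = math.radians(fence_lat - lat)
    dlam = math.radians(fence_lon - lon)
    # Haversine formula for great-circle distance between the two points
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance = 2 * earth_r * math.asin(math.sqrt(a))
    return distance <= radius_m
```

A point roughly 111 meters from the fence center would pass a 200-meter fence but fail a 50-meter one.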

Ninth, Captive Portals.

Now, a captive portal is a webpage

that's accessed with a browser

that's going to be displayed to newly-connected users

of a wireless network

before they're granted broader access to network resources.

If you've ever used the wireless network at a hotel

or on an airplane, you've used a captive portal.

For example, you connect to your hotel's Wi-Fi

and a webpage pops up

and asks you to enter your last name and room number.

Then, if you enter the details

and they match your registration from the front desk,

they're going to let you in to access the internet

from that network.

Often, captive portals are going to be used

to collect billing information

or consent from a network user,

but it can also be used

in combination with network access control, or a NAC system,

to conduct an agentless scan of the device

before it allows them to join the full network

to make sure it meets the minimum security requirements

for use on that network.

This is commonly used in colleges

and universities in this way.

As you can see,

there are a lot of different things we can do

to help secure our wireless networks.

This includes implementing MAC filtering,

adjusting your antenna placement,

lowering your power levels,

enabling wireless client isolation,

enabling guest network isolation,

creating a secure pre-shared key,

migrating to EAP instead of using a pre-shared key,

enforcing geofencing, and using captive portals.

IoT considerations.

In this lesson,

we're going to discuss how you can best secure

your Internet of Things devices,

when you connect them to your network.

When it comes to IoT,

I believe there are many things you should be doing

within your organization to best protect yourself.

First, you need to understand your endpoints.

Each new IoT device brings with it new vulnerabilities.

So you need to understand your endpoints

and what their security posture is.

If you're adding a new wireless camera

or a new smart thermostat,

each one of those brings different vulnerabilities

that you need to consider before connecting those devices

to your network.

Second, track and manage your IoT devices.

You need to be careful and not just let anyone connect any

new IoT device to your network.

Instead,

you need to ensure you have a good configuration management

for your network and follow the proper processes to test,

install, and operate these IoT devices,

when you connect them.

Third, patch vulnerabilities.

IoT devices can be extremely insecure.

If you're deploying a device,

you need to understand the vulnerabilities

and patch them the best you can.

After that,

you're still going to be left with some residual risk here,

because there may not be a bug fix or security patch available

for that IoT device.

If that's the case,

you need to conduct some risk management

and determine if you're willing to accept the risk,

or if you need to put additional mitigation in place,

like putting them on a separate VLAN.

Fourth, conduct test and evaluation.

Before you connect any IoT device to your network,

you should fully test it and evaluate it,

using penetration testing techniques.

It is not enough to trust your manufacturer when they say

their devices are secure because many of these devices

are not.

Therefore, always conduct your own assessments

of their security by testing the device on a test network

or lab before you attach it to your production network.

Fifth, change default credentials.

Just like network devices,

each IoT device has a default username and password

that allows you to connect to it and configure it.

These default credentials present a huge vulnerability.

So they have to be changed before you allow the IoT device

to go into production on your network.
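One way to enforce this is a periodic audit of your device inventory against a list of known factory defaults. The sketch below is purely illustrative (my own, with hypothetical device names and a deliberately tiny credential list); a real audit would pull the inventory from your configuration management system.

```python
# A few well-known factory defaults (illustrative only, not a complete list).
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"),
                  ("root", "root"), ("cisco", "cisco")}

def find_default_creds(inventory):
    """Return the names of devices still configured with a factory-default login."""
    return [dev["name"] for dev in inventory
            if (dev["username"].lower(), dev["password"].lower()) in KNOWN_DEFAULTS]

# Hypothetical inventory entries for demonstration.
devices = [
    {"name": "cam-lobby", "username": "admin", "password": "admin"},
    {"name": "thermostat-2f", "username": "svc_hvac", "password": "Xk9!vQ2m"},
]
```

Any device the function returns should have its credentials changed before it's allowed into production.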

Sixth, using encryption protocols.

IoT devices are inherently insecure.

So it's important that you utilize encryption protocols

to the maximum extent possible to better secure the data

being sent and received by these IoT devices.

Seventh, segment IoT devices.

The Internet of Things devices should be placed

in their own VLAN and their own subnet

to ensure they don't interfere

with the rest of your production network.

If you can afford it,

you may even want to have a separate IoT only network

to provide physical isolation, as well.

As you can see,

there are lots of different considerations

that you need to think about when it comes

to connecting Internet of Things devices to your network.

Patch management.

In this lesson, we're going to discuss

a network hardening technique known as patch management.

So what exactly is patch management?

Well, patch management is the planning, testing,

implementing and auditing of software patches.

Patch management is critical to providing security

and increasing uptime inside your network,

as well as ensuring compliance

and improving features in your network devices,

your servers and your clients.

Now, patch management is going to increase

the security of your network

by fixing known vulnerabilities

inside of your network devices, things like your servers,

your clients, and your routers and switches.

Now, in terms of our servers and clients,

patch management is going to be conducted

by installing software and operating system patches

in order to fix bugs in the system software.

Patch management can also increase

the uptime of your systems

by ensuring your devices and software are up to date

and they don't suffer from resource exhaustion

or crashes due to vulnerabilities within their code.

Patch management is also used

to support your compliance efforts.

One of the biggest things that's looked at

within a compliance assessment,

is how well your patch management program

is being run and conducted.

This way, you can ensure it's effective

at making sure your systems are up to date

and patched against all known vulnerabilities,

such as CVEs, or common vulnerabilities and exposures,

that have patches associated with them.

Now, patch management is also going to be used

to provide improvements and upgrades

to your existing feature set as well.

Many of your patches don't just fix bugs

or existing problems inside the software,

but they can also add other things like features

and functionality when you do those upgrades.

By ensuring that you're running

the latest version of the software

and that it's fully patched,

you can ensure you have the best feature set

with the highest security available.

Now, as you can imagine,

there are a lot of different patches out there,

because each manufacturer

is going to create their own patches

for their specific applications and software.

Part of your job inside of the patch management process,

is keeping track of all the various updates

and ensuring they're getting installed properly

throughout all of your network devices.

This includes your switches, your routers, your firewalls,

and your servers and clients.

Patch management is not just concerned with ensuring

that a patch gets installed, though,

it's also important to ensure

it doesn't create new problems for you

when you do that installation.

After all, patches themselves can have bugs in them too,

just like any other software can.

Therefore, it's really important

for you to effectively conduct patch management

by following four critical steps.

First, planning, second, testing,

third, implementing, and fourth, auditing.

Step one, planning.

Now, planning consists of creating policies,

procedures, and systems

to track the availability of patches and updates

and a method to verify

that they're actually compatible with your systems.

Planning is also going to be used

to determine how you're going to test and deploy each patch.

Now, a good patch management tool

can tell you whether or not

the patches have actually been deployed,

installed and verified functionality wise

on a given server or a client.

For example, in large enterprise networks,

you may use the Endpoint Configuration Manager by Microsoft

or you can buy a third-party tool

to conduct your patch management.

Step two, testing.

When conducting patch management,

it's really important to test any patch you receive

from your manufacturer

before you automate its deployment

throughout your entire network.

As I said before, a patch is designed to solve one problem,

but it can also create new ones for you

if you're not careful.

Within your organization, you need to ensure

that you have a small test network, a lab

or at the very least a single machine

that you're going to use for testing new patches

before you deploy it across your entire network.

After all, many of our organizations

have unique configurations within our networks

and these patches can break things.

So while a manufacturer attempts

to make sure that patch is not going to harm our systems,

they cannot guarantee this,

because everyone has different configurations.

Instead, it is better to find out

if a patch is causing issues in your lab environment

before you push it across 10,000 workstations

across the entire enterprise network.

Because if you do that,

you're going to have a lot of end users yelling

and screaming at you when their systems crash.

Step three, implementation.

After testing the patch,

it's going to be time to deploy it

to all of the workstations and servers that need that patch.

You can do this manually or you can do it automatically

by deploying this patch

to all of your client workstations and servers

and have it installed and moved into production for you.

Now, if you have a small network

of only a few clients and servers,

then you may choose to manually install the patches

across all of these devices,

but if you have a large network,

you're going to want to use some sort of tool.

As I said earlier, Microsoft provides us

with the Endpoint Configuration Manager tool,

but you can also use

third-party patch management tools as well.

Some organizations rely on automatic updates

from the Windows Update System,

while others decide they want to have complete control

over the installation of patches.

For large organizations, it is highly recommended

that you centrally manage updates through an update server,

instead of using the Windows Update tool itself.

This will allow you to test the patch

prior to deploying it into your environment.

To disable Windows Update,

you simply need to disable the Windows Update Service

from running automatically on the given workstations

in your network.

If you have a lot of mobile devices throughout your network,

you also have to figure out

how you're going to do patch management for those devices.

The easiest way to do this

is by using a mobile device manager or MDM,

which works like one of these patch management servers,

but has additional features as well.

All right, now when it comes to testing,

you may not have your own dedicated test network

or lab environment to use, but you still need to do testing.

So what are you going to do?

Well, one thing you can do,

is split up your production network into smaller groups.

In organizations I've led in the past,

we use the concept of patch rings

when we deploy out new patches.

In patch ring one, we have 10 or 20 end user machines

that we'll deploy our patches to first.

If it doesn't break anything on those machines,

then we'll move out into patch ring two,

which has 100 or 200 people

and this will include things like our system administrators

and our service desk workstations,

so we can instantly figure out if things are going wrong.

If that works successfully,

we'll then go into patch ring three,

which contains 1000 or 2000 machines.

And finally, we'll move out to patch ring four

which includes everybody else,

and that may be 10 or 20,000 machines.

Now, the benefit of doing the deployments this way

as we move through the various patch rings,

is that if there is an issue,

I'm only affecting a smaller group of users

before I break all the users on the network.

If I did it to everybody at once,

I'd have 20 or 30,000 people who are complaining

when things break, but by doing it in these smaller steps,

I only have 10 or 15 people who are yelling at me

and I can fix things quicker.
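The patch-ring idea described above can be sketched in a few lines of code. This is my own illustrative model, not something the course provides: deploy ring by ring, and halt the rollout the moment a ring reports a failure, so only that small group is affected.

```python
def deploy_in_rings(rings, install):
    """Deploy a patch one ring at a time; stop before the next ring on any failure.

    `rings` is a list of host lists, smallest ring first.
    `install` is a callable returning True if the patch installed cleanly.
    """
    patched = []
    for ring in rings:
        # Attempt the install on every host in the current ring.
        results = {host: install(host) for host in ring}
        patched.extend(host for host, ok in results.items() if ok)
        if not all(results.values()):
            break  # investigate failures before touching the larger rings
    return patched

# Hypothetical rings: a small pilot group, then a larger one.
rings = [["it01", "it02"], ["hr01", "hr02", "hr03"]]
```

If the install function reports a failure in ring one, the larger rings are never touched, which is exactly the benefit described above.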

All right, step four, auditing.

Now, auditing is important,

because you have to understand the client status

after you conduct your patch deployment.

So I pushed out the patch, did it work?

During auditing, you're going to be able

to scan the network and determine if the patch

that you pushed out actually installed properly,

or whether there were any unexpected failures

that may have happened,

meaning that the patch wasn't really installed

or isn't providing the protection it's supposed to.

Again, if you're using a tool

like the Microsoft System Center Configuration Manager,

SCCM, or a third-party management tool,

you'll be able to conduct scanning

and verification of your workstations and servers

to ensure that the patches have been installed properly

and with no issues.
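At its core, that audit step is a comparison between what the scan reports and what you required. Here's a minimal sketch of that idea (my own illustration; the hostnames and patch IDs are hypothetical, and a real tool like SCCM does far more).

```python
def audit_patch(reported, required_patch):
    """Split scanned hosts into compliant and non-compliant for one patch.

    `reported` maps each hostname to the set of patch IDs the scan found.
    Returns (compliant_hosts, missing_hosts), each sorted by name.
    """
    compliant = sorted(h for h, patches in reported.items() if required_patch in patches)
    missing = sorted(h for h, patches in reported.items() if required_patch not in patches)
    return compliant, missing

# Hypothetical scan results for demonstration.
scan = {"ws01": {"KB5001", "KB5002"}, "ws02": {"KB5001"}, "srv01": {"KB5002"}}
```

Hosts in the missing list are the ones where the deployment failed or never ran, so they get redeployed or investigated.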

Now, if you're using Linux or macOS,

they also have built in patch management systems.

For example, Red Hat Linux uses a package manager

to deploy RPMs, which are packages of patches

to your servers and workstations,

so the same concepts and principles are going to apply here.

Now, in addition to conducting patch management

across our workstations and servers,

it's also important for us to conduct firmware management

for our network devices.

After all, all of our network devices

are running some form of software

and this is known as firmware inside of our routers,

our switches, our firewalls,

and our other network appliances.

If your network devices don't contain the latest

and most up-to-date firmware versions,

then you could have security vulnerabilities

and software bugs that could be exploited by an attacker.

If you look at the common vulnerabilities

and exposures or CVE website,

you're going to see a long list of vulnerabilities

that we have for all sorts of different networking devices.

Just select the Cisco devices

and you'll see a long laundry list of those

that have been patched and fixed over time.

So just like you need to patch your operating system

for a Windows or Linux computer,

you also need to update the operating system

of your network devices.

In a Cisco device, this is known as the Cisco IOS

or Internetwork Operating System.

Now, to update the IOS version,

you need to flash the firmware on that networking device.

Some device manufacturers like Cisco,

provide a centralized method

of conducting firmware management

in your enterprise network.

For example, Cisco uses the Cisco UCS Manager

to centralize the management of resources and devices,

as well as conduct firmware management

for your server network interfaces and server devices.

There's also a lot of third-party tools out there,

like the network configuration manager

by ManageEngine, that will allow you to upgrade,

downgrade and manage the configuration

of the firmware of all of your network devices

using automation, orchestration and scripting.

The bottom line here,

is that you need to have firmware management

to ensure that you have the right firmware versions loaded

onto your network devices.

This ensures the security of those devices,

just like we do for our workstations and clients

by using patch management.

Password security.

In this lesson, we're going to discuss

some of the best practices that affect our password security

in our networks and devices. In general,

the strength of our passwords and the level of security

is going to be defined in our password policies.

A password policy is simply a policy document

that promotes strong passwords

by specifying a minimum password length,

requiring complex passwords,

requiring periodic password changes

and placing limits on the reuse of passwords.

Password policies are used to mitigate

against the risk of an attacker,

being able to compromise a user,

administrator or service account

on a given network device, server, or workstation.

Utilizing two factor authentication is always going to be

a lot more secure than using just a password,

which is considered a knowledge factor.

But many of our network devices may only support a username

and password for their authentication.

If this is the case, then you need to make sure

you're at least using a good, strong password.

Now, a strong password is defined as one

whose complexity and length are sufficient

to create a large number of possible combinations

so that brute force attacks cannot be completed

in a reasonable amount of time.

Now, there is some debate

amongst cybersecurity professionals as to whether or not

you should use a long password or a complex password.

Traditionally, you may have heard

cyber security professionals promoting the fact

that you need to use a complex password.

Something that includes uppercase letters,

lowercase letters, numbers, and special characters

or symbols in order to have a strong and complex password.

But there is a big vulnerability

with using a complex password

and that's most people have trouble remembering them.

So people being people will tend to write down

these complex passwords and reuse the same password

across multiple devices or websites.

This reduces the security of these complex passwords

and leads to them being compromised.

So as of the latest guidance

from NIST Special Publication 800-63B,

known as the Digital Identity Guidelines,

it's recommended that password complexity rules

should no longer be enforced.

Instead, this special publication from NIST

recommends that you should use a long password

of up to 64 ASCII characters,

even if you're only using uppercase and lowercase letters.

This long password has a sufficient key space

to make brute forcing the password much more difficult.

If you add some numbers

and special characters to it as well,

while still making it something you can remember,

you're adding additional complexity to it too.

And this makes it even stronger.

But again, even if you just have a really long string

with nothing special in it,

as long as it isn't repetitive in nature,

the longer password can be just as secure

as an eight-character complex password.
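You can check that claim with some quick keyspace math. The sketch below (my own illustration, not from the course) computes bits of entropy for a randomly chosen password as length times log2 of the character-set size: a 16-character letters-only password comfortably beats an 8-character password drawn from all 94 printable ASCII characters.

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a randomly chosen password of the given length."""
    return length * math.log2(charset_size)

complex_8 = entropy_bits(94, 8)   # 8 chars from all 94 printable ASCII characters
long_16 = entropy_bits(52, 16)    # 16 chars, uppercase and lowercase letters only
```

The 8-character complex password works out to about 52 bits, while the 16-character letters-only password is about 91 bits, so length wins here.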

Now, another common set of password policy guidance

that was previously followed closely

was in terms of password aging.

Now essentially, the old guidance recommended

that you should change your password every 60 days,

but under the new guidance from NIST,

they claim that password aging policies

should no longer be enforced.

Again, this goes back to the same problem

we had with long and complex passwords.

If you have some long, complex password,

and you have to change it every 60 days,

you're not going to remember it.

So you're just going to write it down.

So again, they recommend allowing longer periods of time

in-between password changes,

or even not requiring you to change your password at all

if they're long and strong enough.

Speaking of changing your password,

there's another policy out there

that's often enforced in organizations.

This policy requires that users cannot reuse an old password

when they change their current password.

Sometimes this setting is created

so a password can't be reused

if it was used within the last five password changes,

while others may make it so you can't use a password

that you've used within the last 25 changes.

Either way, this increases the password history length

and makes it so you can't reuse old passwords again.
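Conceptually, a password-history check compares the proposed password against the stored hashes of recent passwords. Here's a deliberately simplified sketch (my own; real systems store salted, slow hashes such as bcrypt rather than plain SHA-256, which is used here only to keep the example short).

```python
import hashlib

def can_change_to(new_password, history_hashes, depth=5):
    """Allow a change only if the new password misses the last `depth` stored hashes.

    NOTE: plain SHA-256 with no salt is for illustration only; production
    systems should use a salted, slow password hash.
    """
    digest = hashlib.sha256(new_password.encode()).hexdigest()
    return digest not in history_hashes[-depth:]
```

Raising `depth` from 5 to 25 is exactly the "password history length" setting described above.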

Another best practice in terms of password security

is that you need to ensure

that all default passwords

are being changed on network devices, servers,

workstations, and service accounts.

By default, when you install a new device like a router,

a switch, or a firewall, or an access point,

they're usually going to have a default username

and password that's set up by the manufacturer.

This allows you to log in the first time

and be able to make changes.

For example, if you buy your internet service

from Verizon Fios, they're going to come

to your home or office and give you a default gateway.

This router device, or gateway, is going to use

the username of admin and the password of admin

as their default setting.

So if you just install this as your new border router

or gateway, and you didn't change

the default username and password from admin/admin,

anyone can simply log into the device and gain full control

over the device and control all the things

that are entering or leaving your network

through that device.

This is obviously not a good thing for security

and makes for a really weak network.

Now, if you're using Cisco devices in your network,

they come with default usernames and passwords

enabled by default as well, depending on the model,

it's either going to be something like admin and admin

or Cisco and Cisco or something like that.

To figure it out, you can just Google it and say,

this is my model number, what's the default password?

And you'll find it really quickly.

Now your servers and workstations

also do something very similar, with Windows for example,

there's an administrator account

that's created by default and up until recently,

the default password for this account was set as blank,

meaning it didn't even have a password.

This is completely insecure.

So you want to make sure you're always checking

these default accounts.

Remember, it's important to set up your password policies

to require users to have a long, strong password,

but it's also important that you change

that default password for any devices,

as soon as you install them

and connect them to your network.

Unneeded Services, in this lesson,

we're going to discuss the best practices

of disabling unneeded services and switch ports

on our network devices, servers and workstations.

So, let's first talk about unneeded services.

Now, before we talk too much about unneeded services,

it's important to define exactly what a service is

in case you're not aware.

Now, a service is a type of application

that runs in the background

of the operating system or device

to perform a specific function.

For example, on your Windows server,

if you have a shared printer in your network,

you might be running a Print Spooler, and this is a service.

Or you might be running a DHCP server on your network.

And this allows you to automatically configure

IP addresses for all your clients.

This DHCP server is a type of service.

To ensure the highest levels of security for our network

and reduce the attack surface of our devices,

it's important that we disable any services

that are not needed for business operations.

For example, if I'm going to be using static IP addresses

for all my network devices,

I don't need to have a DHCP server.

So, I can shut down the DHCP server

and the associated services on the network for DHCP.

You see, each service that we're running

has to be installed on some kind of a device.

Then that device is using valuable disk space,

and more importantly, it's introducing additional code

that could contain vulnerabilities.

So to combat this,

administrators attempt to practice a concept

known as least functionality.

Now least functionality is the process

of configuring a device, a server or a workstation

to only provide essential services

that are required by the user.

Now, to create an environment of least functionality,

administrators should disable unneeded services,

ports and protocols on the network.

When dealing with hardening of your network devices,

you may also want to disable infrequently used services

that could be used by an attacker for malicious purposes.

Things like a denial of service attack, for instance.

For example, the Echo service runs on port seven,

Discard runs on port nine.

Daytime runs on port 13.

And Chargen runs on port 19.

These are all examples of some smaller TCP and UDP services

that we hardly ever use in modern networks.

But, if they're enabled,

you want to make sure they're disabled

to better harden your devices.
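Those legacy small services and their ports can be checked for mechanically. This is a tiny sketch of my own (the port numbers for echo, discard, daytime, and chargen are standard; the scanning itself is left to whatever tool you already use to enumerate open ports).

```python
# Legacy TCP/UDP "small services" that modern networks almost never need.
SMALL_SERVICES = {"echo": 7, "discard": 9, "daytime": 13, "chargen": 19}

def flag_unneeded(open_ports):
    """Return the names of legacy small services found among a host's open ports."""
    return sorted(name for name, port in SMALL_SERVICES.items() if port in open_ports)
```

Feed it the set of open ports from a scan, and anything it returns is a candidate for being disabled.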

Now, even if you are running a service on your network,

you may not need to run it on every single device.

So, you need to figure out which devices you need it on,

and disable it on all the other devices,

because otherwise, it's just additional vulnerabilities

you're accepting.

For example, if you're running your DHCP server

on your Windows domain controller,

then you can disable the DHCP server

that's built into one of your network devices.

Or, if you're using DHCP, but you never plan to use BOOTP,

which is an older dynamic configuration protocol,

you can disable that on all of your network devices,

so BOOTP is now disabled.

The key here is that

if you're not using a particular service,

you need to disable it.

So to help you disable those unneeded services,

Cisco network devices provide the auto secure

command line interface command

that will disable any unnecessary services

while enabling necessary security services

on your network devices.

Finally, let's talk about the switch ports

on your network devices themselves.

As a best practice,

if nothing is connected to a given switch port

on a switch or router, that interface should be disabled.

To disable a switch port on a router or switch,

you're going to go into the configuration

command line interface

and enter the interface you want to disable

such as interface FastEthernet 0/1.

Then you're going to enter the command shutdown.

At this point, even if somebody plugs a cable

into FastEthernet 0/1 on that switch port,

it's not going to do anything

because that port has been electronically

shutdown and disabled.

Now, let's pretend that that switch port

that goes with FastEthernet 0/1

is connected to the patch panel.

And that patch panel is connected to a wall jack

inside an empty office.

Your company just hires a new employee,

and now they're going to put that employee in that office.

What are you going to do?

Well, you need to re-enable that wall jack.

So you're going to log back into the switch.

You're going to go into the

configuration command line interface,

and you're going to enter interface FastEthernet 0/1

and then you're going to enter no shutdown, and that's it.

The switch port will immediately turn itself back on

and it's ready to send and receive traffic again.

So it really doesn't take a lot of effort to turn off

or turn on these switch ports.

That's why it's a best practice to always disable them

when they're not in use.
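If you manage many switches, you can script this. Here's a small sketch of my own that builds the Cisco-style shutdown commands for every interface not on an in-use list; the interface names are hypothetical, and you'd push the resulting lines with whatever configuration tool you already use.

```python
def shutdown_unused(all_interfaces, in_use):
    """Build Cisco-style config lines that disable every interface not in use."""
    lines = []
    for intf in all_interfaces:
        if intf not in in_use:
            # Enter interface configuration mode, then disable the port.
            lines.append(f"interface {intf}")
            lines.append(" shutdown")
    return lines
```

Running it against three interfaces with only one in use yields shutdown commands for the other two.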

Now, each network device manufacturer

is going to use their own commands.

The ones I just covered were for Cisco devices,

because they're most commonly used

in large business networks.

If you're going to be using a different brand or manufacturer,

simply check your manual or Google

how to shut down and enable switch ports on those devices.

After all, even if there's no cables

connected to a switch port,

an open switch port is going to represent a vulnerability.

Think about it this way.

What would happen if an attacker

made their way past your security

and were able to get to where the switch is?

They could just plug into it

and immediately have access to the network.

That would be a really bad thing

because all the switch ports are enabled by default.

So we want to make sure we disable

any of them that we're not using.

Remember just like an unneeded service,

if there's an unneeded switch port,

you should always disable it

to reduce your attack surface

and increase the security of your networks.

Port Security and VLANs.

In this lesson, we're going to discuss the best practices

of using Port Security, Private VLANs,

and how to securely configure your default VLANS

on our networks.

First, let's talk about Port Security.

Now, Port Security is a dynamic feature that prevents

unauthorized access to a switchport by restricting input

to an interface by identifying and limiting

the MAC addresses of the hosts

that are allowed to connect to it.

Basically Port Security refers to blocking unauthorized

access to the physical switchport that's being used

to allow the host to communicate on the Local Area Network.

Now, Port Security also helps us mitigate MAC flooding

by an attacker because only specific MAC addresses

can communicate over a given switchport.

Now, once you enable Port Security,

this is going to make sure that only packets

with a permitted source MAC address can send traffic

through that switchport.

This permitted MAC address is called a Secure MAC address.

Now, Port Security can create a list of authorized

MAC addresses by using static configurations

or through dynamic learning.

Static Configurations allow you as an administrator

to define the static MAC addresses to use

on a given switchport.

This works well for static environments,

like a server farm, demilitarized zone,

screened subnet, or data center.

Now with dynamic learning of MAC addresses,

we're going to use this when there's a maximum number

of MAC addresses that are being defined for a given port.

Then whenever that number is reached,

the port will stop allowing new devices to connect to it,

and will instead block any device that it didn't already

learn about and add to its learned list.

Sometimes you'll hear this dynamic learning

referred to as Sticky MAC.

Basically, this is a great way to configure your switchports

when you're going to be using them with end-user devices.

You can set up all the switchports to allow

only one MAC address to be learned per switchport,

then whatever device first connects

to that switchport is going to be learned,

and all the others will be rejected by that switch

if somebody tries to connect them to that switchport.

Now, if you need to move that person to another office,

for instance, you could go into the switch,

clear the secure MAC for that switchport,

and then the next device that connects would become

the secure MAC for that switchport.
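
As a hedged example, a sticky-MAC port security configuration for an end-user port in Cisco IOS syntax might look something like this (the interface name is a placeholder):

```
Switch(config)# interface gigabitethernet0/5
Switch(config-if)# switchport mode access
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 1
Switch(config-if)# switchport port-security mac-address sticky
Switch(config-if)# switchport port-security violation shutdown
! later, to clear the learned secure MAC when a user moves offices:
Switch# clear port-security sticky interface gigabitethernet0/5
```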

Next, let's talk about Private VLANs.

Private VLANs are also known as Port Isolation,

and it's a technique where a VLAN contains switchports

that are restricted to using a single uplink.

Now, the purpose of a Private VLAN or Port Isolation

is to divide a primary VLAN into secondary

or sub-VLANs while still using the existing subnet.

While a normal VLAN has a single broadcast domain,

a Private VLAN can break up that broadcast domain

into multiple smaller broadcast subdomains.

There are three different types of VLANs here.

We have primary VLANs, secondary isolated VLANs,

and secondary community VLANs.

A primary VLAN is simply the original VLAN

that's being subdivided by the private VLANs.

Primary VLANs are used to forward frames downstream

to all secondary VLANs.

The secondary VLANs are broken down into two types,

isolated VLANs and community VLANs.

Now an isolated VLAN is a type of secondary VLAN

that's going to include any switchports that can reach

the primary VLAN, but not other secondary VLANs.

An isolated VLAN will also prevent each host

from communicating with each other.

So this gives you true port isolation

and adds to the security of your network.

Now, a community VLAN is the second type of secondary VLAN

that includes any switchports that can communicate

with each other and the primary VLAN

but not with other secondary VLANs.

In this case, we don't see port isolation

between hosts in a given secondary VLAN

but instead we only receive isolation between various groups

of hosts in other secondary VLANs.

When you're working with private VLANs,

there are a few different types of ports

that you're going to come across.

First, we have Promiscuous Ports or P-Ports.

These are switchports that connect to the router,

the firewall, or other common gateway devices.

These ports can communicate with anything that's connected

to either the primary or secondary VLANs.

Basically this type of port is not isolated,

and instead it can send and receive frames to and from

any other port inside of the VLAN.

Second, we have Host Ports.

These are going to be broken down into isolated ports

or I-Ports and community ports, or C-Ports.

Isolated ports or I-Ports are going to be used

to connect a regular host

that's going to reside on an isolated VLAN.

These I-Ports can only communicate upwards to a P-Port

and they cannot talk to other I-Ports.

Community ports or C-Ports are going to be used

to connect a regular host that resides on a community VLAN.

The C-Ports can communicate upwards to P-Ports

and across the other C-Ports within the same community VLAN.
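
On Cisco IOS switches that support private VLANs, the port types described above might be sketched like this (VLAN and interface numbers are purely illustrative):

```
Switch(config)# vlan 101
Switch(config-vlan)# private-vlan isolated
Switch(config)# vlan 102
Switch(config-vlan)# private-vlan community
Switch(config)# vlan 100
Switch(config-vlan)# private-vlan primary
Switch(config-vlan)# private-vlan association 101,102
! P-Port facing the router or firewall
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# switchport mode private-vlan promiscuous
Switch(config-if)# switchport private-vlan mapping 100 101,102
! I-Port for a host on the isolated secondary VLAN
Switch(config)# interface gigabitethernet0/2
Switch(config-if)# switchport mode private-vlan host
Switch(config-if)# switchport private-vlan host-association 100 101
```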

Next, let's talk about default VLANs.

By default, you want to ensure all your switchports

are assigned to a VLAN.

If you don't have them assigned to a particular user VLAN,

something like sales, marketing, or human resources,

then they're all going to get assigned to the default VLAN

as part of the unassigned switchports.

Now, if you're using a Cisco device,

they're going to do this for you automatically.

If you're not, you may have to manually do it.

Now, the default VLAN is known as VLAN 1.

Personally, I don't like assigning all my unused

switchports to the default VLAN because malicious attackers

know that many businesses use VLAN 1 by default,

and then attempt to use it to conduct VLAN hopping.

Instead, I prefer to create a separate VLAN called unused,

and I assign all of my unused switchports to it.

This way, if an attacker connects to one of those

unused ports and they bypass my port security

and they enable the port somehow,

they're still going to be isolated and not communicating

with any of my other clients or servers.
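
A minimal sketch of that "unused" VLAN practice in Cisco IOS syntax (the VLAN number and port range are arbitrary choices, not anything the course mandates):

```
Switch(config)# vlan 999
Switch(config-vlan)# name UNUSED
Switch(config)# interface range gigabitethernet0/10 - 24
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 999
! shut the ports down as well, for defense in depth
Switch(config-if-range)# shutdown
```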

So, what makes this default VLAN that we're talking

about so special?

Well, if a client is sending data to the network

and it doesn't contain a VLAN tag, then the switch considers

that traffic destined for the default VLAN.

So if you don't have VLANs configured on your network,

all your traffic is going to use

the default VLAN, VLAN 1.

By default, your default VLAN is also the same

as your Native VLAN.

These terms are often used interchangeably.

The Native VLAN determines where untagged traffic is

going to go whenever it's received on a trunk port.

This allows our switches and other layer two devices

to support legacy devices or devices that don't

use tagging on their traffic,

and still get that traffic to this Native VLAN.

Now, this is really useful when you connect things like

wireless access points and network attached devices

to your network.

And so it's important for you to understand

that the default VLAN is VLAN 1 and the Native VLAN

is also the default VLAN.

Inspection and policing.

In this lesson,

we're going to discuss how we conduct inspection and policing

on our networks to increase their security.

This includes dynamic ARP inspection, DHCP snooping,

Router Advertisement Guard, and control plane policing.

First, we have dynamic ARP inspection.

Dynamic ARP inspection, or DAI, is a security feature

that validates the address resolution protocol

or ARP packets within your network.

Dynamic ARP inspection allows a network administrator

to intercept, log, and discard ARP packets

with invalid MAC address to IP address bindings.

This protects the network from certain on-path

or man-in-the-middle attacks.

To prevent ARP cache poisoning attacks,

a switch needs to ensure that only valid ARP requests

and responses are being relayed across the network device.

Dynamic ARP inspection

inspects all ARP requests and responses,

and then verifies them for valid MAC address

to IP address bindings before the local ARP cache

is going to be updated or that packet gets forwarded

to the appropriate destination.

If an invalid ARP packet is found,

it's going to be dropped and it will not be forwarded.

For dynamic ARP inspection to work,

the system must maintain a trusted database

of MAC address to IP address bindings.

As each ARP packet is inspected,

it's going to be checked against this trusted database.

To create this database,

the network devices will conduct DHCP snooping

in order to build their list of bindings.

In addition to this,

you can also configure your network devices

to use user-configured ARP access control lists

that contain statically configured MAC address

to IP address bindings.

Finally, dynamic ARP inspection

can also drop any ARP packets

where the IP addresses in the packet are invalid

or where the MAC addresses in the body of the ARP packet

do not match the address specified in the ethernet header.
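
On a Cisco IOS switch, enabling dynamic ARP inspection for a VLAN might look like this hedged sketch (it assumes DHCP snooping is already building the binding table; the VLAN and interface numbers are placeholders):

```
Switch(config)# ip arp inspection vlan 10
! uplinks toward other switches are typically marked as trusted,
! so their ARP traffic is not inspected
Switch(config)# interface gigabitethernet0/24
Switch(config-if)# ip arp inspection trust
```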

Second, we have DHCP snooping.

DHCP snooping is a DHCP security feature

that provides security by inspecting DHCP traffic

and filtering untrusted DHCP messages

by building and maintaining a DHCP snooping binding table.

Now, an untrusted message

is any message that's received from outside of the network

or outside of the firewall

and that could be used to create an attack

within your network.

The DHCP snooping binding table

is going to contain the MAC address, the IP address,

the lease time, the binding type, the VLAN number,

and the interface information

that corresponds to the local untrusted interface

of a switch.

The binding table does not contain information

regarding hosts interconnected

with a trusted interface though,

only the untrusted interfaces.

So this is used outside of your network

on the way in or out of that network,

not within your network.

Now when we talk about an untrusted interface,

this is any interface that's configured to receive messages

from outside your network or firewall.

Since they're outside of your network,

they're automatically considered to be untrusted.

A trusted interface, on the other hand,

is any interface that is configured

to receive only messages from within your network.

Remember, if they're coming from inside your network,

we consider it trusted.

If it's coming from outside of your network,

we consider it untrusted.

Essentially, when we use DHCP snooping,

it's going to act like a firewall

between untrusted hosts and DHCP servers.

It provides us with a way

to differentiate between untrusted interfaces

connected to an end-user device

and trusted interfaces connected to the DHCP server

or another switch.

For DHCP snooping to be effective,

you need to configure your switches and your VLANs

to allow DHCP snooping by your network devices.
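
A hedged Cisco IOS sketch of enabling DHCP snooping and marking only the port toward the legitimate DHCP server as trusted (numbers are placeholders):

```
Switch(config)# ip dhcp snooping
Switch(config)# ip dhcp snooping vlan 10
! only the interface facing the legitimate DHCP server is trusted;
! all other ports remain untrusted and are filtered
Switch(config)# interface gigabitethernet0/24
Switch(config-if)# ip dhcp snooping trust
```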

Next, we have Router Advertisement Guard.

The IPv6 Router Advertisement Guard, or RA-Guard,

is a mechanism that's going to be commonly employed

to mitigate attack vectors

based on forged ICMPv6 router advertisement messages.

In IPv6, router advertisements can be used by network hosts

to automatically configure themselves

with their own IPv6 address

and pick out their own default router

based on the information they're seeing

within a router advertisement.

Now, this could introduce a security risk

to your network though

because a host could create a default route

out of the network based on a suspicious

or malicious router advertisement sent by an attacker.

So, to prevent this,

we need to configure IPv6 Router Advertisement Guards,

or RA-Guards, to filter router advertisements

as they're going across your network.

RA-Guards operate at layer two of the OSI model

for IPv6 networks.

Now, your configuration can be set up very easily

and very effectively

by simply adding a configuration that says,

"don't allow RAs on this interface".

With that simple line, the switch will then filter out

all router advertisements from the internet

and then your internal host devices

can't fall victim to setting up malicious routes

to a hacker-controlled gateway.
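
On Cisco IOS switches, that "don't allow RAs on this interface" idea can be expressed with an RA guard policy, sketched roughly like this (the policy and interface names are placeholders):

```
Switch(config)# ipv6 nd raguard policy HOST-PORTS
Switch(config-nd-raguard)# device-role host     ! drop any RAs received on these ports
Switch(config)# interface gigabitethernet0/2
Switch(config-if)# ipv6 nd raguard attach-policy HOST-PORTS
```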

Finally, we have Control Plane Policing, or CPP.

The Control Plane Policing, or CPP, feature

is going to allow users to configure a quality of service

or QoS filter that will manage the traffic flow

of control plane packets to protect the control plane

of your Cisco IOS routers and switches

against denial of service and reconnaissance attacks.

This helps to protect the control plane,

while maintaining packet forwarding and protocol states,

despite an attack or heavy load on that router or switch.

This is known as policing and not an inspection

because we're dealing with maintaining

a good quality of service level for this router.

Notice, we're talking all about

the control plane here as well.

This means we're looking at the switch or router

in terms of its logical functional components.

Things like the data plane, the management plane,

the control plane, and the service plane,

just as we would in a software-defined network, or SDN.

This control plane policing

ensures that the rate limiting of traffic

is modified dynamically

to ensure the device doesn't become overloaded,

that it doesn't have an overly high CPU utilization,

and that it doesn't create

an unacceptably low quality of service

due to periods of high demands or malicious attacks.
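
As a rough sketch of control plane policing on a Cisco IOS device (the match criteria and policing rate here are purely illustrative, not recommended values):

```
Router(config)# access-list 120 permit tcp any any eq telnet
Router(config)# class-map match-all CPP-MGMT
Router(config-cmap)# match access-group 120
Router(config)# policy-map CPP-POLICY
Router(config-pmap)# class CPP-MGMT
Router(config-pmap-c)# police 32000 conform-action transmit exceed-action drop
! attach the QoS policy to traffic destined for the control plane
Router(config)# control-plane
Router(config-cp)# service-policy input CPP-POLICY
```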

Securing SNMP.

In this lesson, we're going to discuss how we can best secure

the Simple Network Management Protocol or SNMP.

SNMP is a great helper protocol inside of our networks

and allows us to easily gather information

from various network devices

back to our centralized management server.

In the past, SNMP relied on the use of a secure string

called a community string to grant access to portions

of the device management planes.

This led to widespread abuse of SNMP by attackers though,

because it allowed them to gain access and control

over network devices.

So our first step to securing SNMP in your network

is to ensure you are not using SNMP v1 or SNMP v2.

This is because both version 1 and version 2,

use an insecure version of a community string.

The only version of SNMP that you should be using

is version 3, because it adds the ability

to use authentication and encryption of your SNMP payloads

as they're being sent across the network.

SNMPv3 instead is going to use encoded parameters to provide

its authentication as part of the SNMP architecture.

By using SNMP V3, instead of V1 or V2,

you're going to prevent replay, on-path,

or man-in-the-middle attacks on your SNMP architecture.

Now this alone isn't enough to call SNMP V3 secure though,

because hackers can continue to find ways

to abuse the protocol and use it for their own advantages.

To better secure SNMP,

you should also combine the use of SNMP V3

with using whitelisting of the management information base

or MIB by implementing different SNMP views.

This will ensure that even if the credentials are exploited,

your information can not be read from a device

or written to a device, unless the information is needed

as part of normal monitoring

or normal device reconfiguration techniques.

Another solution to help you secure SNMP V3

is to use authPriv on all your devices.

This will include authentication and encryption features.

For this to work,

you need to use a newer network device though

that supports a cryptographic feature set.

Also, you need to ensure that

all of your SNMP administrative credentials

are being configured with strong passwords

for authentication and encryption.
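
A minimal Cisco IOS sketch of an SNMPv3 authPriv setup with a restricted view (the view, group, user names, and passwords are all placeholders):

```
Router(config)# snmp-server view MONITOR-VIEW iso included
Router(config)# snmp-server group NETOPS v3 priv read MONITOR-VIEW
! authPriv user: SHA authentication plus AES-128 encryption
Router(config)# snmp-server user monitor1 NETOPS v3 auth sha AuthPass123 priv aes 128 PrivPass123
```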

You should also follow the principles of least privilege.

This includes using role separation

between polling and receiving traps for reading,

and configuring users or groups for writing,

because many SNMP managers require login credentials

to be stored on the disk in order to receive traps

from an agent.

Access control lists, or ACLs, should also be applied

and extended to block unauthorized computers

from accessing the SNMP management network devices.

Access to devices with read and or write SNMP permissions

should be strictly controlled

to ensure the security of your network.

When it comes to your SNMP traffic,

I recommend that you segregate out your SNMP traffic

onto a separate management network or a separate VLAN.

Preferably, you're going to use a management network

that's out-of-band, if you can afford to design it that way;

if not, you're going to need to logically separate

the SNMP traffic into a separate VLAN

to keep it secure at a minimum.

Finally, remember that an MIB and SNMP management devices

are just another type of server,

and so you need to make sure you keep

their system images, software, and firmware

up-to-date

using good patch management principles.

Access control lists.

In this lesson,

we're going to discuss how we can best secure our network

using access control lists, firewall rule-sets,

and how to configure role-based access.

Now, an access control list, or ACL,

is a list of permissions associated

with a given system or network resource.

An ACL can be applied to any packet filtering device,

such as a router, a layer 3 switch, or a firewall.

In an ACL, you're going to have a list of rules

that are being applied based on an IP address,

a port, or an application,

depending on the type of device

you're applying this ACL onto.

Now, as the access control list

is being processed by the device,

it's going to start at the top of the list

and work through each line

until it reaches the end of that list.

So we're going to work from top to bottom.

Therefore, you always want to make sure

your most specific rules are at the top,

and your most generic rules are at the bottom.

For example, let's pretend I have an access control list

as I'm working as a bouncer at a nightclub.

Now, at the top of the list,

I might have something very specific,

like somebody's name, John Smith.

He came in last week and he caused all sorts of trouble.

So John Smith cannot come in the club.

Now, as I move down the list,

I may get to something more generic.

So I might get to something that says anybody

whose driver's license says they live in Montana,

because I'm running a club in Florida.

If that was the case,

I might want to block that

because maybe we had a lot of people

coming in with fake IDs from Montana,

so we're not going to accept those anymore.

Now, as we get to the end of that list,

we might see something very generic.

Something like no men allowed.

Maybe this is a women-only club.

Now, this is a pretty generic rule, right?

Because half of the people on the planet are men.

So this is a very generic way of saying things.

So as we go from the top to the bottom,

we go from very specific, to more general,

to the most general.

Now, the same thing happens in our networks.

If I'm going to create a rule to block SSH

for a single computer based on its IP address,

that's going to be towards the top of my list.

If I want to block any IP address that's using port 110,

that's going to be a bit more generic.

So it'll be somewhere in the middle.

Finally, if I want to block any IP going to any port,

that is going to be something that is really generic,

and it should be at the end of my list.

So let's talk about some things

that we may want to block using our ACLs

in order to help secure our networks better.

Now, first we want to make sure

we're blocking incoming requests

from internal or private loopback addresses,

or multicast IP ranges, or experimental ranges,

if we have something that's coming from outside

of the network going into our network.

So if you have something that says

it's coming from 192.168 dot something, dot something,

and it's coming from the internet interface,

well, that's a non-routable IP,

and it shouldn't be coming from there.

So you should be blocking that.

That should never be allowed

to come into your network from the internet,

because usually it's an attacker trying to spoof their IP.

Similarly,

if you start seeing source IP addresses

coming from areas that are reserved for things

like loopback or experimental IP ranges,

those things should also be blocked immediately.

Second,

you want to block incoming requests

from protocols that should only be used locally.

For example,

if you have ICMP, DHCP, OSPF, SMB,

and other things like that,

you want to block those at the firewall

as things try to enter your network.

Now, if you have something like Windows file sharing,

for instance, which operates over SMB,

that should not be happening over the internet.

That is something that should happen

inside the local network only.

So again,

you should be blocking that at the firewall

at the border of your network.

If somebody has a VPN and they're working from home,

they'll be able to tunnel through your firewall,

access the local network,

and then use SMB that way.

But they shouldn't be using it straight from their home

over the internet to your network.

They should only do it through a VPN.

Now, the third thing you want to consider

is how you want to configure IPv6.

Now, I recommend you either

configure IPv6 to block all IPv6 traffic,

or you allow it only for authorized hosts and ports

if you're using IPv6 in your network.

The reason for this is because a lot of hosts

will run dual stack TCP/IP implementations

with IPv6 enabled by default.

And if you're not aware of that,

you're going to be having a lot of these things open,

and you're letting people have

unfettered access to your network.

A lot of organizations are still running IPv4 only,

and if they're doing that,

they definitely need to turn off IPv6 on those hosts,

and they need to configure their firewall to block it.

If you don't do this,

you could have a misconfiguration

that could allow adversaries unfiltered access

into your networks by using that IPv6 IP address area,

because a lot of administrators

simply haven't locked down IPv6 well enough yet.

So keep this in mind as you're doing

your configurations on your firewalls

and your access control lists.

All right.

Now that we have some basic rules out of the way,

let's take a look at an access control list,

and walk through it together.

Now, this one is an example from a Cisco firewall,

but that really doesn't matter for this exam,

because when we're talking about CompTIA exams,

they are device agnostic.

This could've come from a router.

It could've come from a firewall.

It could've come from Cisco, or Juniper, or Brocade.

It really doesn't matter.

The point is I want you to be able to read

a basic firewall ACL like this and understand it,

because that will make sure you're doing well on the exam.

So let's start out with the first line.

ip access-list extended From-DMZ.

This just says that this is an access list.

And in this case, I'm using it for my DMZ.

The second line is a comment or remark line.

This is going to tell you what this section is about.

Basically, it's saying that we're

going to have responses to HTTP requests,

and that we're going to get a bunch of permit statements here.

Now, as we go through these permit statements,

we're going to look at them one at a time,

and it's going to tell us which things

are being permitted or denied.

Now, when we see the word permit,

that means we're going to allow something,

and in this case,

we're going to allow TCP traffic.

So we have permit tcp,

and then we have the IP address

that's going to be associated with it.

In this case, we're going to permit TCP traffic

coming from the IP address 10.0.2.0.

The next thing we have is going to be our wildcard mask,

which acts like a subnet mask.

Now, this looks a little funny

because it's a wildcard mask,

and it's technically a reverse wildcard,

and it's written as 0.0.0.255.

So if you want to read this as a subnet mask,

you actually have to convert it.

And essentially you're going to make it 255.255.255.0.

This is a Cisco thing.

When you see the zero in the reverse wildcard,

treat that as a 255.

If you see a 255, treat it as a zero.

Don't let this get you confused.

Essentially what we're saying here

is that we're permitting TCP traffic

from any IP that is 10.0.2 dot something,

because this is the 10.0.2.0 network,

and it has 256 possible IPs that we're going to use here.

Anything in this IP range will be permitted under this rule.

The next part you see is eq, which stands for equal.

So whatever port comes after this equals keyword

is the one that's going to be matched and allowed.

In this case, we're equaling www.

Now, what does that mean?

It means port 80.

Www is Cisco's way of saying this is web traffic.

Somebody can make a request over port 80,

and we're going to allow it.

Next we have the part that says any,

and this says that we're going to be going

to any IP address as our destination.

So we can go to any web server in the world over port 80,

and we're going to allow it.

This will allow us to make an established connection there,

and then start traffic.

So any time we want to make an established connection

from 10.0.2 dot something to some website over port 80,

we're going to allow that using TCP.

Essentially that's what we're saying.

People can go out and access a website

from our DMZ out to the internet,

and this is all we're saying with this particular line.

Now, as you go through

and you read all these different lines in the ACL,

you can start figuring out

what is permitted and what is denied.

In this case, everything shown here is permitted

because we're doing explicit allow permissions.

What we're saying is yes,

all of these things are allowed.

Permit them from this IP and this port

going to that IP and that port.

But as we go through to the bottom of this list,

you'll see one statement that looks a little different.

It says deny IP any any.

Now, this is what's known as an implicit deny.

This says that anything

that is not already allowed above in my ACL rule-set

is something we're just going to deny by default.

So if we get down this list

and you see things like www, 443, echo reply, domain,

these are all things that we're allowing.

And then when I talk about domain here,

I'm not really talking about domain in general,

but we're talking about DNS as a service,

because this is the way Cisco talks about DNS services.

When they say domain,

we are really talking about equaling port 53.

So in this case,

everything you see listed here

is all these different permit statements

that are going to allow traffic from our DMZ to the internet.

The DMZ can go out

and get web traffic over port 80 or port 443.

It can reply to echo requests, which is ICMP.

It can use port 53, which is domain, over UDP and TCP.

These are all things

that we're going to be allowed to do from this DMZ.

But when I get down to that last statement,

if any of those things didn't happen,

we are going to deny it.

So for example,

if somebody tries to go to port 21 and access FTP,

we're going to reach that deny IP any any statement,

and it's going to be blocked.

This is because that statement

will deny any IP going from any IP to any IP.

Essentially, this ACL is configured as a white list.

It's only going to allow things

that are being permitted explicitly listed in this list,

and everything else is going to be blocked.

This is a good way of doing your security.
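
Putting the pieces described above together, one plausible reconstruction of an ACL like this (the exact lines on the slide may differ, so treat this as illustrative only) would be:

```
ip access-list extended From-DMZ
 remark Responses to HTTP requests
 permit tcp 10.0.2.0 0.0.0.255 eq www any
 permit tcp 10.0.2.0 0.0.0.255 eq 443 any
 permit icmp 10.0.2.0 0.0.0.255 any echo-reply
 permit udp 10.0.2.0 0.0.0.255 any eq domain
 permit tcp 10.0.2.0 0.0.0.255 any eq domain
 deny ip any any
```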

Now, we just mentioned the concept of explicit allow,

but we can also have firewall rules

that will use explicit deny or implicit deny.

Now, when you have an explicit deny statement,

you're creating a rule to block

some kind of matching traffic.

In this example I showed you,

we didn't have any explicit deny statements,

but they would look exactly the same

as our permit statements,

except we would change the word permit to deny.

Now, this allows us to go

from an explicit allow to an explicit deny.

So let's say I wanted to block traffic

going to the IP address of 8.8.8.8.

I could create a rule that says

deny ip any 8.8.8.8 0.0.0.0.

And it's going to block all ports

and all protocols going to the IP address of 8.8.8.8.

Now, notice my reverse wildcard mask there was 0.0.0.0,

which tells me I only want to match this IP.

Not a whole network, just the IP of 8.8.8.8.

On the other hand, I can also use an implicit deny,

which blocks traffic to anything not explicitly specified.

In the example ACL I showed you,

that last statement had that implicit deny.

Basically anything not already explicitly allowed

by an allow statement is going to get blocked

because we had that deny IP any any statement

as the last statement at the end of our ACL.

Finally, we need to talk about role-based access.

Role-based access allows you to define the privileges

and responsibilities of administrative users

who control your firewalls and their ACLs.

With role-based access,

we put different accounts into groups

based on their roles or job functions.

Then based on those roles,

we're going to assign permissions

to which devices they can configure and modify.

So for example,

if I'm responsible for updating

and configuring the border gateway

or firewall for the network,

I would get access to add things to the ACL

that would open or restrict communication

between the internet and the internal network.

On the other hand,

if I'm just a switch technician

who's responsible for adding and removing users

when they're assigned to a new office,

my role would not allow me

to modify a layer 3 switch's ACLs,

but instead would only allow me

to shut down or reenable switchports

and configure port security.

Wireless Security.

In this lesson,

we're going to discuss the best practices

for hardening your networks using wireless security.

We're going to cover topics such as MAC filtering,

antenna placement, power levels, wireless client isolation,

guest network isolation, pre-shared keys,

EAP, geofencing, and captive portals.

First, let's talk about MAC Filtering.

MAC filtering allows you to define a list of devices

and only allow those devices

to connect to your wireless network.

Now, it does this by using an explicit allow

or an implicit allow list.

With an explicit allow,

we're going to work by creating a list

of all the allowed devices

and blocking any device whose MAC address

is not included on this list.

Essentially, it's a white list.

Now, implicit allow

instead works by creating, essentially, a blacklist.

We're going to allow any device

whose MAC address does not appear in that list

to connect to our wireless network.

That's the difference between the explicit allow

and an implicit allow.

Explicit allow is a white list,

implicit allow is a blacklist.

All right, for best security,

you should always use explicit allow

when doing MAC filtering.

This is because it's really easy to conduct MAC spoofing

on a wireless device.

So, if you're using an implicit allow,

and you're using this blacklist method,

I can simply take my MAC address

and spoof it to something else,

and be on your network within about five seconds.

So it doesn't really stop a bad actor from getting on there.

Now, in this case,

that bad actor looks like they're a brand new user

and they haven't been a bad person before,

so you're going to let them connect.

This is why you always want to use

an explicit allow instead.

In theory, MAC filtering is supposed to give you

some decent protection,

but I'll tell you, in the real world,

MAC filtering just isn't that strong,

and can easily be bypassed.

So, while it may be a little inconvenient

for a skilled attacker to get into your network

if you're using an explicit allow,

it really won't slow them down for very long.

Therefore, if you're going to use MAC filtering,

don't rely on it as your only protection

for a wireless network.

For the exam, though,

CompTIA does say MAC filtering is a good thing,

and you should use it.

Second, Antenna Placement.

Now, antenna placement is important

for our wireless networks,

both for our successful operations of those networks,

as well as for the security of those wireless networks.

Most wireless access points

come pre-installed with an omnidirectional antenna.

This means the wireless antenna is going to radiate out

radio frequency waves in all directions at an equal power.

For this reason,

you need to carefully consider

where you want to place that device

to provide adequate coverage for your entire office,

but also so that you can keep that signal

within the walls of your office,

and not out into the parking lot or other spaces.

For example, consider this floor plan for a small office.

Now, where you see a green area,

we have strong signal strength,

which decreases down to yellow,

and then eventually down to red.

Due to the placement of the antennas

and the wireless access points,

we actually have some green and yellow signal

that extends outside the physical office building.

Because of this,

an attacker could be sitting in the parking lot

and gaining access to this office's wireless network,

because there's usable coverage in this parking area.

For this reason,

it's important to consider the placement of your antennas,

especially if you're using omnidirectional antennas.

Additionally,

you can change out your omnidirectional antennas

on some access points to use directional antennas instead.

This will help you keep the signal inside the building.

So, instead of using four omnidirectional antennas

as they did in this office,

it would be better from a security standpoint

to replace some of those omnidirectional antennas

with directional antennas.

For example, on the left-most wall,

we could mount a right directional antenna

that would then only broadcast a signal

180 degrees to the right.

Meaning there's no radio waves

leaking out the left wall of that building.

Similarly, I could use a left directional antenna

on the right wall,

pushing the radio frequency waves inward into that building

all the way to the left,

but we still need to make sure the omnidirectional antenna

is sitting in the middle.

This way, we have good coverage for that middle section.

Now, instead of placing it close to an external wall

like they did in this diagram,

I would instead move that more

towards the middle of the building.

This will actually center its coverage area

and keep more of it within the walls of the office.

In addition to that,

we could also adjust the power level downward.

And by doing that,

keep those radio waves inside the building even more.

Now, this brings us to our third security measure,

Power Levels.

Each wireless access point

can radiate its radio frequency waves

at a variable power level.

If you're using more power,

you're going to cover more area, but by covering more area,

we also have radio waves leaving our building,

and that's not going to be good for security.

So it becomes important to consider

what power level you're going to use

when you set up your wireless access points.

By conducting a site survey,

you can determine how much power is too much or not enough,

and you can balance the needs of network coverage

against your need for network security.
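
To see how much difference the power setting makes, here's a rough free-space path-loss estimate in Python. The frequency, transmit powers, and the -70 dBm receiver sensitivity are illustrative assumptions; real indoor propagation through walls is much messier, which is why a site survey matters:

```python
import math

# Free-space path loss in dB (d in meters, f in MHz): a standard textbook
# model, used here only to illustrate the effect of transmit power.
def fspl_db(distance_m: float, freq_mhz: float = 2437.0) -> float:
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

def max_range_m(tx_dbm: float, sensitivity_dbm: float = -70.0,
                freq_mhz: float = 2437.0) -> float:
    # Distance at which the received signal falls to the client's sensitivity.
    loss = tx_dbm - sensitivity_dbm
    return 1000 * 10 ** ((loss - 20 * math.log10(freq_mhz) - 32.44) / 20)

print(round(max_range_m(20)))   # ~310 m of free-space reach at 20 dBm
print(round(max_range_m(10)))   # ~98 m after a 10 dB power reduction
```

Dropping the transmit power by 10 dB shrinks the free-space reach by roughly a factor of three, which is often the difference between covering the parking lot and keeping the signal inside your walls.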

Fourth, Wireless Client Isolation.

Now, wireless client isolation is a security feature

that prevents wireless clients

from communicating with each other.

You see, by default, most wireless networks operate

as if your devices were all connected to a hub,

and this allows every device to communicate

with every other device on the wireless network.

But with wireless client isolation,

your wireless access point is going to begin to operate

like a switch when it's using private VLANs.

Now, this will ensure that each device

can only communicate with itself

or upwards to the access point and out of the network

through the wireless access point.

By doing this, it operates a lot like a private VLAN

using an isolated port, or I-port.

When using wireless client isolation,

these devices can communicate with other devices

on the local area network if the access control lists

are configured to allow them to do this.
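
The forwarding decision an access point makes with client isolation enabled can be sketched like this (a hypothetical model, not a real AP's API):

```python
# Sketch of an access point's forwarding decision with client isolation on.
UPLINK = "uplink"   # the AP's path out of the wireless network

def may_forward(src: str, dst: str, isolation: bool = True) -> bool:
    if not isolation:
        return True               # hub-like: every client reaches every client
    return UPLINK in (src, dst)   # clients may only talk through the uplink

print(may_forward("laptop", "phone"))   # False: client-to-client is blocked
print(may_forward("laptop", UPLINK))    # True: outbound traffic still works
```

This mirrors the isolated-port behavior of a private VLAN: each client can reach the uplink, but never its wireless neighbors directly.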

Now, the fifth thing we want to talk about

is Guest Network Isolation.

Guest network isolation is a type of isolation

that keeps guests

away from your internal network communications.

With guest network isolation, your wireless access point

will create a new wireless network that's used by guests

of your home or your office.

This wireless network

simply provides them with a direct path out to the internet

and bypasses your entire local area network.

If you have a network device,

something like a printer or a file share,

the people on the guest network cannot get to it

because they're isolated from your local area network.

This is a great security measure,

and ensures your local area network is protected

from those who are using the guest wireless network.

Sixth, Pre-Shared Keys, or PSKs.

Now, pre-shared keys

are used to secure our wireless networks

by using encryption, things like WEP, WPA, WPA2, and WPA3.

The pre-shared key is used with these encryption schemes;

it's a shared secret

between the client and the access point,

and it has to be shared ahead of time,

over some secure channel, before you connect.

So, for example, let's say you came over to my house

and I'm using WPA2 with a pre-shared key.

Now, you're going to select my wireless network,

and then you're going to enter the password for that network.

That password is your pre-shared key.

For you to get that pre-shared key,

I had to give it to you, though, right?

These pre-shared keys are only as strong

as the passwords that represent them.

So, if you're going to use a pre-shared key,

make sure you're using a long and strong password.

The biggest challenge we have with these pre-shared keys,

though, is that everyone needs to know it,

to be able to get onto the network,

so we're all using the same password.

But when a lot of people are using the same pre-shared key,

it becomes vulnerable to compromise,

because somebody could lose it

or tell somebody else what it is,

and then that PSK is no longer a secret.
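
In WPA2-Personal, the passphrase you type is stretched into the actual 256-bit pairwise master key (PMK) using PBKDF2-HMAC-SHA1 with 4,096 iterations and the SSID as the salt. A short sketch using Python's standard library (the passphrase and SSID are made-up examples):

```python
import hashlib

# WPA2-Personal: PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID,
#                                       iterations=4096, length=32 bytes)
def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

pmk = wpa2_pmk("a long and strong passphrase", "ExampleSSID")
print(len(pmk))   # 32 bytes = 256 bits
```

Because the derivation is deterministic, anyone who learns the passphrase can compute the same PMK, which is exactly why the key is only as strong as the password behind it.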

Seventh, EAP or the Extensible Authentication Protocol.

Now, EAP is a protocol that acts as a framework

and transport for other authentication protocols.

In our wireless networks,

if we want to move beyond using a pre-shared key

for authentication, we can instead use EAP,

and this is used in a lot of enterprise networks.

With EAP,

we can combine it with the 802.1X port access protocol

to use digital certificates

and pass them to an authentication server,

such as a RADIUS or TACACS+ server, using EAP.

Now, this is going to provide us with higher levels of security

than a pre-shared key,

and we can individually identify which device or user

is connected to the network using that digital certificate.

Eighth, Geofencing.

Geofencing is a virtual fence

created within a certain physical location.

Now, when you combine this with a wireless network,

we can actually set up our wireless network

to only allow a user to connect to it

if they're located within a certain geofenced area.

For example, let's say I'm running a restaurant in the mall.

I could set up a wireless network

to only allow people sitting in my restaurant

to connect to the wireless network

and deny anyone whose device says their GPS coordinates

are not within the four walls of my restaurant.

So, if somebody's sitting

at the competitor's restaurant next door,

they can't use my wireless network.

Only my clients can,

because they're sitting within my geofence.

Ninth, Captive Portals.

Now, a captive portal is a webpage

that's accessed with a browser

that's going to be displayed to newly-connected users

of a wireless network

before they're granted broader access to network resources.

If you've ever used the wireless network at a hotel

or on an airplane, you've used a captive portal.

For example, you connect to your hotel's Wi-Fi

and a webpage pops up

and asks you to enter your last name and room number.

Then, if you enter the details

and they match your registration from the front desk,

they're going to let you in to access the internet

from that network.

Often, captive portals are going to be used

to collect billing information

or consent from a network user,

but it can also be used

in combination with network access control, or a NAC system,

to conduct an agentless scan of the device

before it allows them to join the full network

to make sure it meets the minimum security requirements

for use on that network.

This is commonly used in colleges

and universities in this way.
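
The core of the hotel example is a simple lookup against the front desk's records. A hypothetical sketch (the names and room numbers are invented):

```python
# Hypothetical captive-portal check: grant internet access only when the
# submitted last name and room number match a front-desk registration.
REGISTRATIONS = {("smith", "412"), ("garcia", "108")}

def grant_access(last_name: str, room: str) -> bool:
    return (last_name.strip().lower(), room.strip()) in REGISTRATIONS

print(grant_access("Smith", "412"))   # True: details match, user gets online
print(grant_access("Smith", "999"))   # False: user stays on the portal page
```

A production portal would sit behind an HTTP redirect and, in a NAC deployment, would also trigger the agentless posture scan before opening the gate.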

As you can see,

there are a lot of different things we can do

to help secure our wireless networks.

This includes implementing MAC filtering,

adjusting your antenna placement,

lowering your power levels,

enabling wireless client isolation,

enabling guest network isolation,

creating a secure pre-shared key,

migrating to EAP instead of using a pre-shared key,

enforcing geofencing, and using captive portals.

IoT considerations.

In this lesson,

we're going to discuss how you can best secure

your Internet of Things devices

when you connect them to your network.

When it comes to IoT,

I believe there are many things you should be doing

within your organization to best protect yourself.

First, you need to understand your endpoints.

Each new IoT device brings with it new vulnerabilities.

So you need to understand your endpoints

and what their security posture is.

If you're adding a new wireless camera

or a new smart thermostat,

each one of those brings different vulnerabilities

that you need to consider before connecting those devices

to your network.

Second, track and manage your IoT devices.

You need to be careful not to let just anyone

connect a new IoT device to your network.

Instead,

you need to ensure you have good configuration management

for your network, and follow the proper processes to test,

install, and operate these IoT devices,

when you connect them.

Third, patch vulnerabilities.

IoT devices can be extremely insecure.

If you're deploying a device,

you need to understand the vulnerabilities

and patch them the best you can.

After that,

you may still be left with some residual risk,

because there may not be a bug fix or security patch available

for that IoT device.

If that's the case,

you need to conduct some risk management

and determine if you're willing to accept the risk,

or if you need to put additional mitigation in place,

like putting them on a separate VLAN.

Fourth, conduct test and evaluation.

Before you connect any IoT device to your network,

you should fully test it and evaluate it,

using penetration testing techniques.

It is not enough to trust your manufacturer when they say

their devices are secure because many of these devices

are not.

Therefore, always conduct your own security assessments

on a test network or in a lab

before you attach a device to your production network.

Fifth, change default credentials.

Just like network devices,

each IoT device has a default username and password

that allows you to connect to it and configure it.

These default credentials present a huge vulnerability.

So they have to be changed before you allow the IoT device

to go into production on your network.
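
Auditing for unchanged defaults is easy to automate. A sketch, assuming a simple inventory format and an example list of well-known default pairs (all names and values here are hypothetical):

```python
# Sketch: flag devices still using factory-default credentials before
# they're allowed into production.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def still_default(user: str, password: str) -> bool:
    return (user, password) in KNOWN_DEFAULTS

inventory = [
    {"device": "camera-01",     "user": "admin", "password": "admin"},
    {"device": "thermostat-02", "user": "admin", "password": "T7!rq2vLm9"},
]

flagged = [d["device"] for d in inventory
           if still_default(d["user"], d["password"])]
print(flagged)   # ['camera-01'] -- must be changed before deployment
```

In practice you'd pull the default-credential list from the vendors' documentation and run the check as part of your configuration management process.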

Sixth, use encryption protocols.

IoT devices are inherently insecure.

So it's important that you utilize encryption protocols

to the maximum extent possible to better secure the data

being sent and received by these IoT devices.

Seventh, segment IoT devices.

The Internet of Things devices should be placed

in their own VLAN and their own subnet

to ensure they don't interfere

with the rest of your production network.

If you can afford it,

you may even want to have a separate IoT-only network

to provide physical isolation, as well.
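
A segmentation plan like this can be expressed as a simple mapping from device class to VLAN and subnet. A sketch with made-up example values:

```python
# Sketch of a segmentation plan: IoT endpoints land in their own VLAN and
# subnet, kept apart from the production network. All values are examples.
VLAN_PLAN = {
    "workstation": {"vlan": 10, "subnet": "10.0.10.0/24"},
    "iot":         {"vlan": 40, "subnet": "10.0.40.0/24"},
}

def placement(device_class: str) -> dict:
    # Anything that isn't an explicitly trusted class defaults to the
    # isolated IoT segment -- fail closed, not open.
    return VLAN_PLAN.get(device_class, VLAN_PLAN["iot"])

print(placement("smart-thermostat")["vlan"])   # 40: isolated IoT segment
print(placement("workstation")["vlan"])        # 10: production segment
```

Defaulting unknown device classes into the IoT VLAN means a forgotten or unrecognized gadget never lands on the production segment by accident.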

As you can see,

there are lots of different considerations

that you need to think about when it comes

to connecting Internet of Things devices to your network.