Ethernet Fundamentals

Transcript

Ethernet fundamentals.

In this section of the course,

we're going to cover layer 2 of the OSI model

in much more depth by digging into ethernet fundamentals.

As we go through this section,

we're going to be touching on domain one,

networking fundamentals,

domain two, network implementations,

domain four, network security,

and domain five, network troubleshooting.

We're going to cover a lot of stuff

across a lot of domains here.

Now, we're going to cover objectives 1.3,

2.1, 2.3, 4.4, and 5.5.

Objective 1.3 is to summarize

the types of cables and connectors

and explain which is the appropriate type for a solution

as we talk about our ethernet standards.

Objective 2.1 is going to be to compare

and contrast various devices,

their features and their appropriate placement

on the network.

Objective 2.3 is given a scenario,

configure and deploy common ethernet switching features.

And objective 4.4 is to compare and contrast

remote access methods and security implementations.

Finally, objective 5.5 is given a scenario,

troubleshoot general networking issues.

So, without further delay,

let's dive right into ethernet fundamentals.

And I'm going to warn you.

This is a bit of a longer video

because there's a lot of stuff we need to know.

Now, in early computer networks, there were so many

different networking technologies out there,

and each of them was competing

for a piece in the market share.

There wasn't much standardization

among the different types either.

And so, if you were using ethernet,

it didn't talk with other types of networks.

Now, when I first started working with computer networks,

we had lots of different things out there,

including ethernet, token bus, token ring,

fiber-distributed data interface or FDDI,

local talk, AppleTalk

as well as others that were all fighting

to be the dominant market leader.

Well, there was one clear winner in these layer 2 wars,

and that was ethernet for our local area networks.

In this entire section of the course,

we're going to focus specifically on ethernet

because it's just that important.

Now, ethernet has gotten extremely popular.

So, much so that if you don't understand ethernet,

you really don't understand the way

today's modern networks operate.

Ethernet has been with us a long time.

Originally, it was run over coaxial cables

using BNC connectors and vampire taps,

and these networks were called 10BASE2

and 10BASE5 ethernet networks.

Also nicknamed thinnet and thicknet

because of the relative size of their coaxial cables.

Now, the great thing about these networks is

they could cover a really long distance

up to 200 meters with the 10BASE2 networks

and 500 meters with the 10BASE5 networks.

Now, for the Network Plus exam,

you don't need to memorize anything

about 10BASE2 or 10BASE5 networks anymore

because they're old and antiquated,

and likely, you're never going to see them.

Now, the only reason I'm even talking about them is

because I want you to understand where ethernet came from

because these were the first networks that used ethernet

and it was introduced in the early 1980s.

Over time, though,

we migrated to what is known as 10BASE-T ethernet.

And ethernet became associated

with this twisted pair cabling

that could run 10 megabit per second networks over it.

Now, this twisted pair cabling is known as CAT 3

or category three wiring.

These twisted pairs could be unshielded or shielded,

and they were cheaper and easier to use

than the older 10BASE2 or 10BASE5 coaxial networks.

The only real disadvantage to the new 10BASE-T network

was that it could only cover distances of up to 100 meters

before they needed to have their signal repeated

by a switch or a hub.

Now, I know that 10 megabits per second here

doesn't sound like a lot of speed.

But back in the 1980s, this was super fast.

After all, a dial-up modem in those days was lucky

to reach speeds of 300 bits per second.

So, this internal network of 10 megabits per second

was a really fast speed.

That's 10,000,000 bits per second.

It was lightning fast to users back then.

Now, the big question when it comes to network devices,

especially when they start operating at these speeds, is,

how are they going to access the network and communicate?

And this is really one of the core questions

that ethernet had to answer.

Should the network be deterministic?

Or should it be contention-based?

Now, if you're going to use deterministic means,

this is going to be where network access

is going to be very organized and orderly.

Some of the competing technologies

like token bus and token ring networks

use this deterministic style of network access.

If a device wanted to transmit data onto the network,

the device had to wait its turn,

waiting for an electronic token to get to it.

And that way, they would know it was its turn to transmit.

Now, let's pretend for a moment,

we're all sitting in a classroom together.

There we are.

You, me, and 20 to 25 other students

all sitting in this room.

Now, if everybody talked at once, what would happen?

Nobody could hear or understand anything.

We'd have a whole bunch of collisions

as we call them in the networking world,

as everyone tries to talk over each other.

Instead, we need to create a deterministic system

to determine who's going to speak

at any given time in my classroom.

So, whenever I was teaching a classroom,

I would tell my students,

"You have to raise your hand and then I'm going to call on you,

and then it's your turn to talk."

That's deterministic.

Since I was the instructor, I was handing out the token.

In this case, calling on somebody.

And then they were given this virtual token,

and they could speak until they gave

that virtual token back to me and said they were done.

Or at any time, I could tell them to stop talking,

and I could take back that virtual token and speak myself.

Because again, I'm the instructor.

I'm the one in charge in the room.

Now, if you used a token ring or a token bus network,

that is essentially how they operated.

The great thing about this is there are zero collisions

because nobody's going to talk over each other.

They all wait their turn.

That's the great thing about using a deterministic network.

Now, the other way we can determine

who gets access to the network is

by using what's known as contention-based networks.

Now, contention-based networks are very chaotic.

Unlike my classroom example,

which was very deterministic

and I get to be in charge of telling everyone

when and how long they can talk,

a contention-based model is more like when you go to the pub

on Friday night with your friends.

Maybe you're hanging out with five of your friends

and you're sitting around a table there.

You're having some drinks.

Now, normally people know how to interact

in these situations and they carry on a conversation, right?

We've all been there before.

There isn't somebody in your group who says,

"Hold on, Alex. It's Tamara's turn to talk now,"

or "Okay, Tamara, you have 90 seconds.

Now, it's John's turn. Let's switch."

It doesn't work that way.

I don't know about you, but if I had friends like that,

I think I would leave that conversation

and find me some new friends.

Now, instead, we actually have a natural way

of talking with each other, right?

Each of us more or less takes some time to tell our stories

and the flow of conversation naturally just happens.

I pick up the conversation when there's a space,

and I fill it in with some of my stories.

And then somebody else picks up the conversation

and says their stories or what they want to talk about.

This is contention-based.

Essentially, as you're sitting there,

you hear a gap in the conversation and you begin to speak.

If somebody else is speaking, you listen

until you find an appropriate time for you to transmit,

or in this case say whatever it is you wanted

to add to that conversation.

Now, the problem with using

a contention-based method like this

is that you can have collisions.

Have you ever been at the pub

and you're having a conversation with some friends,

and all of a sudden you hear that gap in the conversation

and you go to speak

and you find that somebody else speaks

at the exact same time as you?

Well, that's a collision.

You both transmit at the same time.

When that occurs, you both spoke over each other

and nobody around you could hear what was being said

because it mumbled together.

So, likely, one of you paused,

let the other one finish their story,

and then tried talking again.

Now, this natural way of having

a conversation can lead to people talking over each other,

and therefore creating collisions.

Some people also might take up more time than others.

And there's other issues like that

when we deal with these contention-based models.

They can be very chaotic.

And oddly enough, this is how ethernet chose to work.

Ethernet, unlike token ring,

actually chose to use a contention-based network.

Now, why would they do that?

Well, because contention-based networks have lower overhead.

This means you don't have to pass

around this electronic token,

and this way you can make full use

of all the bandwidth in the network

because anyone can talk at any time.

Now, I know this may seem odd,

but it's actually a good thing.

You see, in some of these

traditional deterministic networks,

you would actually split up the communication

into chunks of time.

For instance,

let's say we have eight devices on our network.

We could say that everybody gets 1/8

of a second to communicate,

and then they're going to pass the token

to the next network device.

That device is going to communicate

for their 1/8 of a second.

And we keep doing that as we go in a circle.

So, of every one second,

you can only transmit 1/8 of a second.

That is, for a 10 megabit per second network,

you're going to get to transmit 1.25 megabits per second,

not the full 10.

Now, effectively,

we've just cut down the network speed by a factor of eight.

Because most of the time,

these devices don't have anything to communicate,

but we still gave them the time period.
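The time-slot arithmetic above can be sketched in a couple of lines of Python. This is purely illustrative, and the function name is hypothetical:

```python
# Deterministic time-slot math: with N devices sharing a link under
# equal time slots, each device only gets 1/N of every second to transmit.
def per_device_bandwidth(link_bps: float, devices: int) -> float:
    """Effective per-device bandwidth under equal time slots (hypothetical helper)."""
    return link_bps / devices

# A 10 megabit per second network split among 8 devices:
share = per_device_bandwidth(10_000_000, 8)
print(share)  # 1250000.0 -> 1.25 megabits per second, not the full 10
```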

Think about when you and your friends are sitting

around the table at the pub again.

If I said that each of you had to take turns

and can only speak for one minute,

some of your friends would be able to fill that minute

and then run out of time,

and others wouldn't be able to fill it at all.

They would tell a quick one-line joke

and then sit there for 45 seconds of silence.

This is the difference between the way people communicate.

For that reason,

deterministic models can waste a lot of resources

and we'd be sitting in silence.

So, ethernet chose instead to use contention-based networks

to maximize the efficiency of the network

by allowing anyone to use all of the bandwidth

at any time for as long as they need it.

But if any network device can transmit

at any time in the network,

how are we going to prevent collisions?

Well, the way ethernet approaches this is

by using something known as CSMA/CD.

This stands for carrier sense multiple access

with collision detection.

Now, what exactly is that?

Well, just like when you're sitting

around the table at the pub with all your friends,

you first listened and waited for the gap in conversation.

Once you heard that, you then tried to speak.

Now, if two of you spoke at the same time, you simply say,

"Oh, I'm sorry. We must have spoken over each other.

Let's try again."

And then we wait and we try again.

That's the idea here.

Well, in ethernet, we do the same thing using CSMA/CD.

Let's break down each part of this

and explain it a little bit.

The CS or carrier sense part of this means

that the ethernet needs to be able to do carrier sensing.

Now, a carrier is essentially

the line we're going to communicate on.

This means that it's going to listen to the network

and determine if there's a signal already being transmitted.

Now, this is known as carrier sensing.

Carrier is just a fancy word in electronics

for a signal that carries information or data.

So, we're going to try to sense if that data

is currently being transmitted.

Or to use our pub example,

we're going to see, is there a gap in the conversation?

Now, if there is a gap,

this is where the MA part of CSMA/CD comes into play.

MA stands for multiple access.

All that means is that there are many different devices

that hold the ability to access, listen to,

and transmit to that network at the same time.

So, ethernet is going to have lots of devices on the network,

and they're all going to be able to listen before they speak.

That is the CSMA part of CSMA/CD.

Now, we get to the important part,

the CD part, collision detection.

Since all these devices are listening to the network,

if they detect a collision has occurred

when they are transmitting something,

then those two devices who are both talking over each other

can decide who's going to transmit their data now,

and who's going to wait to retransmit.

To simplify this process,

ethernet uses a clever method to determine

who gets to transmit first.

Essentially, if a collision is detected,

both ethernet devices will stop transmitting

and pick a random number and then wait to retransmit.

So, if you and your friend

both talk over each other at the pub,

you might have a standing rule

that you both will stop what you're doing,

pick a random number,

and then count up to that number in your head.

And if nobody is speaking yet, you can then start to talk.

This is exactly what ethernet devices do

when they detect a collision.

They stop transmitting.

They pick a random number and they count.

This is known as a random back off timer.

It allows the two devices to attempt to retransmit again

when their timers hit zero.

Think of it this way.

Have you ever been walking down a hallway at work

and somebody is coming from the other direction?

It might be a small hallway and you walk up to each other,

and you kind of do that little dance

where I go left and you go right,

and you go right and I'll go left?

And you really don't say anything,

but you just kind of figure it out?

Well, if so, you're smack dab in the middle

of a CSMA/CD collision.

So, what should you do?

Well, what I usually do is exactly what ethernet does.

I stop and I count to three.

And then I look and see

if the other person has walked around me,

or if they're still in front of me.

If they haven't walked around me, I'll walk around them.

It effectively ends that little dance,

and the collision is over inside the hallway.

That's the same idea here

with carrier sense multiple access collision detection.

So, let's take a look at how this looks on a network.

Here's the example where I have six devices

on the network using a bus,

just to make it easier for us to see.

Yes, you could use a star, a bus, a ring,

or any topology you really want with ethernet.

So, for my illustration and to make it graphically easy,

we're going to use a bus.

Now, here we have six devices,

and they're all sharing the same wire.

If number four wants to talk to number five

and they want to communicate over this network segment,

that means there's no problem

because nobody else is talking right now.

But what happens if number two wants

to talk to number one right now?

Well, carrier sensing shows us that the line is clear.

So, two transmits and one receives.

There's no problem here.

Now, what happens if number three

and number five both try talking at the same time?

They listen to the carrier.

They didn't hear anybody else talking,

and they both started transmitting.

Well, this causes a collision,

just as we can see here with the red X.

So, now, we've detected a collision. What do we do?

Well, three and five are both going to stop,

and they're going to pick a random number

to serve as their random back off timer.

Now, in the case of three, it chose to wait 30 milliseconds.

In the case of five, it chose 150 milliseconds.

So, what happens next?

Well, 30 milliseconds goes by.

Number three listens to the carrier

and it doesn't detect anybody else transmitting.

So, it starts to transmit.

At the same time, number five is still waiting

because only 30 milliseconds have passed

and they chose to wait 150 milliseconds.

So, they still have 120 milliseconds

on their back off timer.

Now, what happens if it takes number three

300 milliseconds to transmit everything?

Well, number five is going to wait

for its back off timer of 150 milliseconds.

And then when this timer hits zero,

it listens to the carrier again.

This time, though, it hears somebody transmitting.

In this case, it's number three.

So, it's just going to wait until it hears an open spot

in the conversation, and then it will transmit.

In this case,

it should be when number three is done transmitting

about 180 milliseconds from now.

Now, when number three is done communicating

and the carrier seems clear for transmission,

number five will transmit.

If someone else happens to be transmitting at the same time,

again, we have a collision.

Both choose a random back off timer,

they wait, and the cycle repeats

until everybody transmits everything they want to send.
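The collision-and-back-off behavior described above can be sketched as a toy simulation. This is a rough illustration, not real CSMA/CD; the station names and timer range are made up:

```python
import random

# Toy CSMA/CD sketch: two stations collide, each picks a random
# back-off timer, and the station with the shorter timer senses a
# clear carrier first and transmits. The other station then hears
# the carrier busy and keeps waiting for a gap.
def resolve_collision(seed: int = 42) -> list:
    rng = random.Random(seed)
    # Each station picks a random back-off (in milliseconds, hypothetical range).
    backoffs = {
        "station3": rng.randint(0, 200),
        "station5": rng.randint(0, 200),
    }
    # Return the stations in the order their timers expire, which is
    # the order they will attempt to retransmit.
    return sorted(backoffs, key=backoffs.get)

order = resolve_collision()
print(order)  # transmit order after the collision
```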

Now, I know this may seem like a bad way of doing things

because as networks get busier and busier

and you have more clients,

you're going to have a lot more collisions.

And guess what? You would be exactly right.

The more devices you have communicating

on a single network segment,

the more collisions you are going to have.

Each area of the network that shares a single segment

is known as a collision domain.

In this example, we had six devices

all sharing a single bus cable.

That made up our collision domain.

With ethernet, anytime you have devices on the same cable,

or they're all connected to the same hub,

you are sharing the same collision domain.

Because of this,

all the devices need to operate in half-duplex mode

because they have to listen and then talk.

They can't do both at the same time.

If a device talks and listens at the same time,

it simply hears itself

and thinks somebody else is transmitting.

So, they cannot operate in full duplex

where they can listen and talk at the same time.

If you're using a hub or a shared network segment,

you have to be able to listen to prevent those collisions.

For this reason, we need to keep collision domains

very small inside our networks.

If you have a large collision domain,

there is a high probability of collisions,

and each collision requires that data to be retransmitted.

Again, waiting for a back off timer to pass,

and this is going to minimize your bandwidth.

This is going to slow down the entire network segment.

Now, if you have a hub with four devices,

it probably won't be a major issue.

But if you're using a hub with 24 or 48 ports,

the collisions can bring your network to a screeching halt.

Think about it this way.

If we're going to go to the pub

and we take a group of four friends,

we can make that conversation work.

Everyone can listen for the gaps and they can transmit,

and we won't have too many people talking over each other.

But if we go at the same time with 20 to 25 people,

we're going to have entirely too many collisions

and it just won't work.

So, we need to break that larger group into smaller groups.

For my pub example,

we may do this by breaking up our 20 to 25 friends

into groups of four or five people

and sit them each at a different table.

This breaks down our large single collision domain

of 20 to 25 people

into five to six smaller collision domains

of four to five people each.

In our networks, we do the exact same thing

to break down collision domains by using ethernet switches.

This drastically increases the scalability of our networks

by creating a lot of different collision domains.

Remember, lots of collision domains is good

because you have fewer devices in each one.

Every single switch port is actually considered

its own collision domain.

You can see here with my switch in the center,

I have four collision domains.

There is one in between each computer and the switch itself.

Now, if I was using a hub, all four devices

and the hub are all part of that single collision domain.

So, I have five devices in one collision domain.

But by simply replacing the hub with a switch,

I can increase the speed of my network.

Because now, nobody else is talking on that switch port,

except the device that I cabled to it.

Therefore, the switch port now can operate

in full duplex mode.

After all, it's my only device on that network segment.

So, there's no chance of collision.

It's like I can just ignore the fact that I had to listen.

If I'm sitting in my room alone,

I can talk all I want and I will never have a collision

because nobody's here to interrupt me.

I don't have to listen before I transmit.

I can just keep talking.

I have a full time dedicated channel

between my device and my switch.

So, I'm allowed to now operate in full duplex mode,

transmitting my data faster,

getting more bandwidth out of this connection,

and ensuring there are no collisions.
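The counting rule here can be captured in a tiny sketch. This is a hypothetical helper, assuming one host cabled per switch port:

```python
# Counting collision domains: every port on a switch is its own
# collision domain, while a hub and everything plugged into it
# all share a single collision domain.
def collision_domains(device: str, connected_hosts: int) -> int:
    if device == "switch":
        return connected_hosts   # one collision domain per cabled port
    if device == "hub":
        return 1                 # everyone shares one collision domain
    raise ValueError("unknown device type")

print(collision_domains("switch", 4))  # 4
print(collision_domains("hub", 4))     # 1
```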

All right, now that we've covered the basics

of how ethernet works,

let's dive into the different ways

that we classify ethernet.

These are known as the ethernet standards,

and they take the form of a number, the word "base,"

and then one or two letters.

For example, earlier I mentioned

that we use 10BASE-T for our CAT 3 ethernet networks.

10BASE-T is the slowest version of ethernet

and only operates at 10 megabits per second

for distances of up to 100 meters.

I already covered the basics of copper ethernet standards

in the lesson on copper cabling.

But in this lesson,

we're going to do a quick review of those standards

as we state which ethernet standard applies to each one,

and then we're going to talk about the ethernet standards

for fiber optic cables too.

So, let's do a quick review of copper ethernet standards.

We're going to start with the slowest

and move our way upward to the fastest.

First, we have 10BASE-T,

which operates at 10 megabits per second over CAT 3 cables.

This is known as ethernet.

Second, we have 100BASE-TX,

which operates at 100 megabits per second

over CAT 5 cables.

This is known as fast ethernet.

Third, we have 1000BASE-T,

which operates at 1,000 megabits per second

or one gigabit per second

over either CAT 5e or CAT 6 cables.

This is called gigabit ethernet.

Fourth, we have 10GBASE-T,

which operates at 10 gigabits per second

over CAT 6, CAT 6a, and CAT 7 cables.

This is known as 10 gigabit ethernet. Go figure.

And fifth, and finally, we have 40GBASE-T,

which operates at 40 gigabits per second over CAT 8 cables.

This of course is known as 40 gigabit ethernet.

As you can see, they stopped getting creative

with the names after CAT 5.
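The copper standards above can be summarized as a small lookup table. This is a study aid mirroring the list in this lesson, not an exhaustive reference:

```python
# Copper ethernet standards recap. Speeds are in megabits per second;
# the cable categories follow the lesson above.
COPPER_STANDARDS = {
    "10BASE-T":   (10,     ["CAT 3"]),
    "100BASE-TX": (100,    ["CAT 5"]),
    "1000BASE-T": (1_000,  ["CAT 5e", "CAT 6"]),
    "10GBASE-T":  (10_000, ["CAT 6", "CAT 6a", "CAT 7"]),
    "40GBASE-T":  (40_000, ["CAT 8"]),
}

speed, cables = COPPER_STANDARDS["1000BASE-T"]
print(f"1000BASE-T runs at {speed} Mbps over {' or '.join(cables)}")
```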

Now, when it comes to distances, remember,

most copper cabling used by ethernet can only go

up to 100 meters before the signal attenuates,

and you can no longer maintain signal integrity.

The only two exceptions to this rule are CAT 8,

which only goes 30 meters,

and CAT 6, which only goes 55 meters

if you want to reach the full 10 gigabits per second of speed.

Otherwise, you can go up to 100 meters with CAT 6

if you stick to a one gigabit per second speed.

Now, remember when I say speed,

I'm really talking about bandwidth.

Bandwidth is measured by how many bits

the network can transmit in one second,

known as bits per second.

We can get up to megabits,

and then that's millions of bits per second,

or gigabits, which is billions of bits per second as well.

Depending on the type of cable,

this is going to determine the capacity

for the bandwidth of your network.

So, if you replace all your old CAT 3 cables

and switches with newer CAT 7 ones,

you're going to increase your bandwidth

from 10 megabits per second to 10 gigabits per second,

making it 1,000 times faster.

Now, let's switch from copper to fiber,

and talk about the different ethernet standards

that we use with fiber cables.

Unlike copper,

fiber can go further than just a hundred meters.

Now, multimode fibers can reach distances

of 200 to 500 meters or more.

And single mode fibers can reach distances

of up to 40 kilometers

before you have to repeat that signal.

First, we have 100BASE-FX.

This is going to operate at 100 megabits per second

over multimode fiber,

and it can reach a distance of up to two kilometers.

Now, this is a bit special.

Because normally,

multimode isn't going to be able to go this far.

But this particular multimode range that's being used

is actually going to border on the single mode range

when we talk about the actual light source.

And this is why the distance is longer for 100BASE-FX,

even though it technically uses multimode fibers.

Now, second, 100BASE-SX is going to operate

at 100 megabits per second

at distances of up to 300 meters

using a multimode fiber.

But it's going to use a shorter wavelength than 100BASE-FX,

which makes it cheaper to produce

because it's going to use LEDs as the light source

instead of a laser.

Third, we have 1000BASE-SX,

which operates at 1000 megabits per second

or one gigabit per second over multimode fiber

using near-infrared wavelengths.

1000BASE-SX can reach distances of 200 to about 550 meters.

Fourth, we have 1000BASE-LX,

which is going to operate at 1,000 megabits per second

or one gigabit per second over a single mode fiber.

Now, 1000BASE-LX is going to use a long wavelength laser

as its light source,

and this means it can go further distances

of up to five kilometers.

1000BASE-LX is a bit strange though,

because you can also use it

with multimode fibers if you want to.

If you use it with the cheaper multimode fiber,

then the speed is going to remain the same,

but your distance is going to drop down to about 550 meters.

Fifth, we have 10GBASE-SR.

Now, this is going to operate at 10 gigabits per second

over multimode fiber.

SR stands for short range.

Because this uses a multimode fiber,

it's going to limit your maximum distance to about 400 meters.

And sixth, we have 10GBASE-LR,

which operates at 10 gigabits per second

over a single mode fiber.

Now, LR stands for long reach

because it uses a single mode fiber

that can reach distances of up to 10 kilometers.

All right, when it comes to fiber ethernet standards,

you don't have to memorize the fact

that they're 200 meters, or 500 meters,

or five kilometers, or 10 kilometers, or 40 kilometers.

What is important is to remember their relative distances.

When it comes to distances,

you really need to memorize just a few key things.

First, copper cables have a maximum distance

of a hundred meters.

Second, if you're using CAT 6

and you're going to be using it at 100 meters,

then you need to limit your speed to one gigabit per second

instead of 10 gigabits per second.

Third, if you're using CAT 6 and you're under 55 meters,

you can increase your speed to 10 gigabits per second

instead of one gigabit per second.

Fourth, if you're dealing with multimode fiber,

you're going to be dealing with shorter distances.

That means something around 200 to 500 meters.

Fifth, if you need longer distances,

you have to use single mode fibers.

This is where we start talking about things

in kilometers in distance instead of meters.

So, we're talking two or 10 or 40 kilometers.

For the exam, you may see questions

about specific cable lengths

or limitations when it comes to copper.

But for fiber, they're only going to ask you about the fact

that multimode distances are longer than copper,

but shorter than single mode.

Now, this is where you have to figure out

which cable you're going to use.

Copper is great for short distances.

Fiber, if you need short distances,

you're going to use multimode.

And if you need long distances,

you're going to use single mode.
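The rules of thumb above can be sketched as a simple selection helper. This just encodes the lesson's guidance, with a hypothetical function name, and is not a formal engineering rule:

```python
# Rough cable-selection rule of thumb: copper for runs up to 100
# meters, multimode fiber up to about 500 meters, and single mode
# fiber for anything longer (kilometer-scale links).
def pick_cable(distance_m: float) -> str:
    if distance_m <= 100:
        return "copper"
    if distance_m <= 500:
        return "multimode fiber"
    return "single mode fiber"

print(pick_cable(90))      # copper
print(pick_cable(300))     # multimode fiber
print(pick_cable(35_000))  # single mode fiber (like a 35 km branch office link)
```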

Now, another thing they're going to test you on

in regards to fiber is whether a fiber ethernet standard

is going to use multimode or single mode fibers.

For example, you might see a question like this.

You're a network technician,

and you need to select an ethernet standard

that will allow you to connect your main office

to your branch office that's located 35 kilometers away.

Which of the following should you use?

10GBASE-T,

1000BASE-SX,

10GBASE-LR,

or 1000BASE-T?

So, is this question really asking

about the length of the cable

and memorizing the distances?

Well, no, not really.

Instead, it's asking you about a single mode cable

and being able to figure out which of these is single mode.

Because we know copper cables are less than 100 meters

and multimode cables are less than 500 meters

with modern fiber cables.

So, if you can remember that 10GBASE-LR

is a single mode cable,

you've got your answer.

Now, is there an easy way to memorize

which cables are single mode and which are multimode?

Well, I personally use a little memory aid

to help me do this.

Remember, we covered six different types

of fiber cables in this lesson, right?

We covered 100BASE-FX, 100BASE-SX, 1000BASE-SX,

1000BASE-LX, 10GBASE-SR, and 10GBASE-LR.

Now, you may remember

that I said that 1000BASE-LX was special, right?

The reason it was special is because you could use

either single mode or multimode fiber with it.

So, the rest of them though,

we have a really simple little saying

that'll help us remember which ones are single

and which ones are multimode.

And it goes like this: S is not single. That's it.

If you remember that S is not single,

it will tell you whether or not that fiber is a single mode

or multimode fiber based on its name.

So, if you see an S there in the fiber ethernet standard,

like 100BASE-SX, 1000BASE-SX, 10GBASE-SR,

you know it's using a multimode fiber

because it's short range.

So, every time you see S in the name,

remember S is not single.

Therefore, if there's an S in the name,

it must be a multimode fiber.

If you don't see the S, it's used for longer distances.

This rule holds true for most everything out there,

except the 1000BASE-LX.

Because this one works

with both single mode and multimode fibers.
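The mnemonic can be sketched as a tiny helper. This follows the lesson's rule, with the exceptions called out in this lesson treated as special cases; the function name is hypothetical:

```python
# "S is not single": per the lesson's mnemonic, an S in the suffix
# (SX, SR) means multimode fiber; no S generally means single mode.
# 1000BASE-LX is the noted exception (it works with both), and
# 100BASE-FX is multimode despite lacking the S, since its light
# source borders on the single mode range as the lesson explains.
SPECIAL_CASES = {
    "1000BASE-LX": "single mode or multimode",
    "100BASE-FX": "multimode",
}

def fiber_mode(standard: str) -> str:
    if standard in SPECIAL_CASES:
        return SPECIAL_CASES[standard]
    suffix = standard.split("BASE-")[-1]
    return "multimode" if suffix.startswith("S") else "single mode"

print(fiber_mode("10GBASE-SR"))   # multimode
print(fiber_mode("10GBASE-LR"))   # single mode
print(fiber_mode("1000BASE-LX"))  # single mode or multimode
```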

All right, in this lesson,

we covered just a few of the ethernet fiber standards.

Specifically, the ones you are going to be asked

about on the Network Plus exam.

But be aware there are many others out there,

including things like 1000BASE-EX, 10GBASE-ER,

100GBASE-LR4, 100GBASE-ER4, and many others.

But for the exam,

you only need to know the six

that we covered in this lesson.

Network infrastructure devices.

Now, for the Network+ exam,

you have to be able

to identify network infrastructure devices.

This means, you have to identify their icons,

as well as knowing what they do,

what broadcast domains they're going to break up,

and what collision domains they're going to break up.

We're going to talk about all of that in this lesson.

Now, the primary devices that we use

in our networks today are routers and switches.

These devices actually evolved from bridges and hubs.

And we're going to talk about this by starting out with hubs

and working our way towards more modern things.

Now, as we look at hubs,

hubs are a layer 1 device,

they are a physical device.

They're used to connect multiple network devices

and workstations together.

You can identify them by a square icon

with an arrow pointing in both directions

when you see them on a chart.

These are known as multiport repeaters.

Now, there are basically three different types of hubs.

We have passive hubs, active hubs, and smart hubs.

A passive hub is simply going to repeat the signal,

but it's going to give no amplification.

Think about this like a splitter,

if I have an 8-port hub and something comes in one port,

it's going to go ahead and pass it out

to all the other ports, ports two through eight.

Now, if I have an active hub,

it's going to do exactly the same thing.

But the difference is

it's going to take the signal that got in,

and it's going to boost it back up, and then send it out.

Now, what I mean here when I talk about boosting the signal

is trying to overcome that 100-meter limitation

that we have with twisted pair cabling.

Because twisted pair can only go 100 meters,

if you hit a passive hub, guess what?

That's still part of your 100 meters.

So, if I have a 100-meter cable,

a passive hub, and a 50-meter cable,

it treats it like it's a 150-meter cable,

and your network's not going to work well.

So, if you need to go long distances, you need an active hub,

because it gets power, takes that signal in,

boosts it back up, and restarts that 100-meter count for you.

Now, for example, my office building here

is 300 meters in length.

I can only go 100 meters with Cat 5, right?

So, I might go 60 or 70 meters,

and then put an active hub in there,

then, I can go another 60 or 70 meters,

put another active hub in there,

then go another 60 or 70 meters, put an active hub in there.

And every time I do it, it restarts that 100-meter limit.

Now, notice I only went 60 to 70 meters

and then put the hub in there.

Why is that?

Well, it's just the best practice.

If you only use 60 to 70% of that cable length,

it's going to make sure

you're not coming up towards that 100 meters,

because sometimes you just don't count things right,

and you might go over

and then your network's going to have problems.

So, I like to stay well under that 100 meters.

But again, if you're using a passive hub,

all of those connections would have been added together

and it would have had about 300 meters of cable

and it wouldn't work.

Now, if I had three 60s in there,

that's going to be 180 meters of cable, right?

But if I put that active hub in there each time,

I have 60 and 60 and 60.

And so it's not 180.

It's three separate 60-meter runs that way.
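To make that distance math concrete, here's a small sketch: we sum the segment lengths for a passive hub and reset the count at each active hub. The function name and the 100-meter default are illustrative of the rule, not any real tool.

```python
def cable_runs(segments, hub_type, limit=100):
    """Return the run lengths the network 'sees' and whether they all
    fit within the twisted-pair limit (all values in meters)."""
    if hub_type == "active":
        runs = list(segments)    # each segment restarts the 100-meter count
    else:
        runs = [sum(segments)]   # passive hub: lengths add up into one run
    return runs, all(run <= limit for run in runs)

# Three 60-meter segments, like the office example:
print(cable_runs([60, 60, 60], "passive"))  # ([180], False) -> won't work
print(cable_runs([60, 60, 60], "active"))   # ([60, 60, 60], True) -> OK
```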

Now, the third thing we talked about is a smart hub,

and a smart hub is an active hub,

but it has enhanced features

like Simple Network Management Protocol

so that I can actively control that hub

and configure it from a distance.

It's not just a dumb device,

but it adds a little bit of intelligence this way.

Overall though, in modern networks,

you are not going to see hubs.

Almost exclusively, we're going to use switches.

And I'm going to show you why in just a few minutes.

The next thing we need to talk about here though,

is collision domains.

When I talk about a hub, it is a layer 1 device.

And like I said, it's dumb.

All it does is repeat what it's told.

And because it can be used

to connect multiple network segments together,

guess what we do?

We're actually going to make a bigger collision domain

by doing that.

Let's say we want to take five or six computers

and make them all talk together.

Well, we need a hub to do that.

Each LAN segment then becomes a separate collision domain.

But hubs don't break up collision domains.

Instead, they connect them.

So, if I use this diagram here,

you can see that there are two 4-port hubs.

I have three machines on the left side,

and they're talking to one hub.

I have two machines on the right side,

and they're talking to their hub.

And the hubs are communicating together.

This is as if all of these devices

were on one long bus cable,

they're all treated as one large collision domain.

And that becomes a big issue as we get into bigger networks,

and we start putting in a 24-port hub or a 48-port hub,

because all these machines start to talk at the same time,

and we're going to have a lot of collisions.

So, how do I fix that?

Well, that introduces the concept of a bridge.

A bridge is going to analyze

the source MAC address in the frame,

and it's going to populate an internal MAC address table.

Based on that table, it's going to make forwarding decisions

based on the destination MAC in those frames

because this is a layer 2 device.

In our earlier example, we had six machines on two hubs,

I can now put a bridge in between them

and break them up into two pieces.

This information allows traffic to only go across

the bridge when it needs to, based on its MAC address.

If instead it just wants to talk to another PC

that's on the hub it's sharing,

it never even has to go to that bridge.

And those three machines on the left

will never hear the communication.

This adds security and efficiency to our network

and it breaks up that collision domain into two parts.

If I take a hub and I take a bridge

and I marry them together, guess what I get?

I get a switch.

A switch is a layer 2 device just like a bridge.

It's used to connect multiple network segments together,

just like a hub.

Essentially, I want you to think of a switch

as a multiport bridge.

It's going to have every single port

act as if it was a hub with a bridge on every port.

This way, it breaks up the collision domain

into a single collision domain for each and every port.

It's going to learn the MAC addresses

of the things that are touching that port.

And it's going to make forwarding decisions

based on those MAC addresses, just like a bridge would.

It's going to analyze the source MAC address

and then it's going to decide where to send the information

based on its internal table, just like a bridge would.

So, when I have a switch, each port on there

is going to represent an individual collision domain.

But everything on that switch

is all part of the same broadcast domain.

And we'll talk more about this as we go through this lesson

and we go through this section.

Now, let's talk about how this works in the real world

when we're dealing with a switch.

Now, let's say we have,

I'm sitting here at PC1,

and I want to take remote control of the server

by using SSH or Secure Shell.

Well, how can I do that?

Well, if I'm sitting here on PC1,

and I have a MAC address, let's say of 12 Bs.

I want to talk to the server

who has a MAC address of all 12 Cs.

I'm going to refer to PC1's MAC address as BB,

and server's MAC address as CC, for simplicity's sake,

as I go through and talk in this lesson.

Now, notice I have the switch tables at the bottom

and right now, they're empty.

They don't even know who's connected to them,

but when PC1 talks the first time,

its MAC address at BB says to switch one,

"Hey, I want to talk to server CC."

So, it sends out a thing called an ARP packet.

That ARP packet is going to go to switch one

and check its table.

The switch checks its table and says,

"Well, I don't know how to get to CC.

So, I'm going to push out that ARP packet

to every other port that I have on my switch

and see if I can find it for you."

Now, before the switch starts

pushing that information out to try to find CC,

it does know one thing for certain.

It knows where BB is because it just talked to it.

So, since BB came up on port 0/1,

it wants to put that in its table,

and then it's going to push out

the ARP packet to everyone else.

So, PC2 then says,

"Hey, I'm not CC, so I'm going to ignore you".

PC5 goes,

"I'm not CC", and it ignores the switch, as well.

Switch two goes,

"I don't know who CC is,

but even though it's not in my MAC address table,

I'll forward that out to all the other people

on my switch and see if I can find it."

That's the idea here.

That's what we do with a broadcast.

So, it sends it out to its broadcast domain,

which has PC3, PC4, and server.

And as that ARP packet goes out,

switch two goes, "Hey, I also learned

that switch one knows where BB is.

So, I'm going to put that in my port table.

So, if people ask me for BB,

I know who to talk to."

And as the ARP goes out,

it goes out to all of the PCs and the server,

and the server goes, "Oh, hey, I'm CC."

So, it responds with an ARP packet back to switch two

and says, "Hey, CC?

That's me.

You should send me all that traffic."

So, what is switch two going to do?

It's going to populate its table with CC being on port two,

and it forwards that back to the requester on switch one.

When switch one gets it, it populates its table

and pushes it back to the requester at PC1.

At this point, everyone in the network

got queried to say, "Who is CC?"

And that was a lot of traffic

to figure out who the server was.

That worked pretty much just like a hub, right?

But at this point,

everybody now knows where CC is and BB is.

And so now that we know that,

when PC1 sends out the SSH packet

and says, "I want to talk."

Guess what happens?

It goes to switch one

and instead of bugging everybody,

switch one only sends it out port 0/2

because it knows that is where CC was.

That gets to switch two and when switch two gets it,

it's going to send it out its port 0/2,

because it knows that's where server was.

So, now we have a two-way connection that's been established

between the PC and the server through those two switches.

And all the other PCs out there,

PC2, PC3, PC4, and PC5,

they don't hear any of this

and they operate on their own

without dealing with that SSH traffic.

So, we have just minimized

the amount of bandwidth that's been eaten up

by five or six times

because we removed all that extraneous information

and all the extraneous equipment from this equation.

This is why switches improve our network performance

and our security so much.

And at this point, switch one and two

are only sending out the traffic

between PC1 and server across one line.

So, PC2, PC3, PC4, PC5 never hear it

and if PC2 and PC3 want to start talking,

they could at the same time

because these switches also support full duplex.

And so, I can have a communication

between the server and PC1

and another separate communication

between PC2 and PC3.

And it's not going to disturb each other.

This is where our efficiency comes in

when we deal with switching.
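The learn-then-forward behavior we just walked through can be sketched in a few lines of Python; the port names and two-letter MAC labels mirror the example above, and the class itself is just an illustration, not real switch firmware.

```python
class Layer2Switch:
    """Sketch of a switch's MAC learning and forwarding logic."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                    # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known: forward out one port
        # unknown destination: flood out every port except the ingress port
        return [p for p in self.ports if p != in_port]

sw = Layer2Switch(["0/1", "0/2", "0/3"])
print(sw.receive("0/1", "BB", "CC"))  # CC unknown -> flood: ['0/2', '0/3']
print(sw.receive("0/2", "CC", "BB"))  # BB already learned -> ['0/1']
```

Notice the second frame goes out exactly one port, which is why the other PCs never hear the SSH traffic.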

Now, the next thing we have to deal with is a router.

Because when we deal with switches,

we're dealing with layer 2 devices,

we're dealing with MAC addresses, right?

That's not going to help us if we go across to the Internet.

And so, if we want to connect two dissimilar networks,

like an internal network and an external network,

i.e. your LAN and the Internet,

then we need to make routing decisions.

To make these forwarding decisions,

also known as routing decisions,

this is going to be based on logical network information,

such as IPv4 or IPv6 addresses.

And switches aren't aware of that.

Switches only know about layer 2 and MAC addresses.

Routers are all about layer 3 and IP addresses.

Routers are much more feature-rich

and they support a broader range of interface types, as well.

And so, you might have a router that has a serial port on it.

It might have a copper RJ45 port on it,

it might have an ST fiber connector on it,

you might have a GBIC or an SFP or a QSFP on it.

These all give us multiple connector options,

so we can use a router

to connect different networks,

even ones running over different media types.

Switches, on the other hand, tend to be all one type.

They're either all fiber or all copper,

depending on which one you buy.

Now, routers have one distinct advantage over a switch.

And this is that they can actually

separate out broadcast domains.

Now, going back to our earlier example,

I had three PCs on the left and two on the right.

They're talking to those two switches.

Now, if the router wasn't there,

this would be just one big broadcast domain

with five collision domains in it.

Now, because I put a router in there,

I actually have two separate broadcast domains.

And that's going to help me reduce the traffic

and reduce the noise.

Now, this is going to lead

to a lot of efficiency in our networks.

We're not going to get into how routers work

at this particular point in time.

We will dig into that later in a separate lesson

because there is a lot to cover there.

Now, another thing you may come across

is what's known as a layer 3 switch.

And this tends to confuse a lot of students

because when we talk about switches being layer 2,

and routers being layer 3,

it's a lot cleaner and easier.

But over time, manufacturers decided to make

these things called layer 3 switches,

which really muddy the waters.

Now, just like we took hubs and bridges,

and we combine them to make a switch,

well, somebody got the idea

of taking a switch and a router and combining them,

and they call that a layer 3 switch.

Layer 3 switches are layer 3 devices

that are used to connect multiple networks together,

and they can perform routing functions.

Now, they can make routing decisions, just like a router.

And they can connect network segments, just like a switch.

Because they act like a router,

each of their ports is going to act

as its own broadcast domain and its own collision domain.

This is a really efficient way of doing things

on an internal network

because you can use these layer 3 switches

and do things quickly.

Now, if you have a very large network though,

I would not recommend

using layer 3 switches as your router

because they're not as efficient at routing

as a dedicated router would be.

If you're in a small office or a home office environment,

and you have 20 or 30 machines,

you can replace a router and a switch

with a single layer 3 switch.

And that will work well and save you some money

because they are cheaper than having two devices

because you only need one.

But if you're going to be in a very large network,

I do prefer having a dedicated router

because they are much faster

for large scale routing operations.

Now, the last thing I want to talk about here on the screen

is that I have a nice little summary chart for you

that's going to show you the five types of devices

that we just talked about.

We talked about hubs and bridges,

switches, multilayer switches,

also known as layer 3 switches, and routers.

It'll show you

all the possible collision domains that they have

and the possible broadcast domains that they have.

Remember, hubs are just like one shared cable,

one collision, one broadcast,

whereas a bridge adds a collision domain

for each port on that bridge,

and it still has one broadcast.

A switch is just like a bridge

and so it has one per port and one broadcast domain.

Routers and multilayer switches operate the same way,

so each port is its own collision domain

and its own broadcast domain.

Now, you can see the layer of operations

here on the right side.

Hubs are operating at layer 1,

bridges and switches operate at layer 2,

multilayer switches also known as layer 3 switches

and routers operate at layer 3.
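That summary chart boils down to a small lookup table; here it is as Python data, in case a flash-card style reference helps while you memorize it.

```python
# Summary chart: (collision domains, broadcast domains, OSI layer)
DEVICES = {
    "hub":               ("1 total",    "1 total",    1),
    "bridge":            ("1 per port", "1 total",    2),
    "switch":            ("1 per port", "1 total",    2),
    "multilayer switch": ("1 per port", "1 per port", 3),
    "router":            ("1 per port", "1 per port", 3),
}

for name, (collision, broadcast, layer) in DEVICES.items():
    print(f"{name:18} collision: {collision:11} "
          f"broadcast: {broadcast:11} layer {layer}")
```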

Now, one last thing I want to talk about

with multilayer switches or layer 3 switches,

and it's for the exam.

For the exam, I want you to remember

that anytime they mention a switch,

they are talking almost exclusively

about a layer 2 switch.

Whenever you hear the word switch on the exam,

always be thinking layer 2 devices

that are focused on MAC addresses.

If you hear routers, those are layer 3 devices

and they're focused on IP addresses.

Now, the only exception to this rule

is if the test specifically states in the question

the phrase multilayer switch or layer 3 switch.

If they use those terms of multilayer switch

or a layer 3 switch,

then, you can treat it like a router.

Otherwise, always treat a switch

as a layer 2 device for the exam.

This trips up a lot of my students

who are used to dealing

with switches and routers in the real world

because most switches you're going to buy nowadays

in a small office home office environment

are going to be layer 3 or multilayer switches.

But for the exam, a switch is a layer 2 device.

In this lesson, I want to show you

what some of these devices look like.

We talked in the last lecture about hubs and bridges,

and switches and routers, and all of their different uses

and placements throughout the network.

But when you look at a small office,

or a home office network, you might have everything

in a single device, something that looks like this.

Now, as we look at this a little bit closer,

you can see the front, it's going to have

all the blinking lights and tell you what's going on.

But if we look at the back, we'll actually see

that this single device actually has

multiple different functions in it.

The first function it has,

is what's called the wide area network.

And this is where we're going to connect a cable line,

or a coaxial cable that's going to provide our internet access.

Now that gives us our external network,

that wide area network.

But if we want to connect it to our internal network,

we have to have a routing function.

And so there's a built-in router that routes

the wide area network connection

into this internal, or LAN connection.

Now the internal connection

is going to happen over these ports.

And these switch ports are going to be just a simple switch

that's embedded into this device.

So we have a switch, and we have a router,

and we have this wide area network connection.

Additionally, we have over here, all the way over here,

this is where an antenna gets plugged in.

Because this device

has a wireless access point built in as well.

So now we have a router, a switch,

and a wireless access point, all in a single device.

But there's still something else here.

When I talk about this wide area network connection,

we are actually talking about plugging in coaxial cable,

but converting it so it can talk with these devices here

that are on cat five, or ethernet cables.

To do that, we're going to be using a media converter.

And so in this one single device,

we have four different functions.

We have our switch. We have our router.

We have our wide area connection, or media converter.

And we have a wireless access point.

Now, do all of these things have to exist in a single box?

Well, no.

And in fact, most office networks,

we're going to break these out into individual boxes,

or machines, or functions.

So let me show you what that looks like.

Let's start with the media converter.

So I talked about how, in the combined box, we were taking

coaxial cable in and converting it,

so we can talk over cat five or ethernet.

Well, media converters can convert

two dissimilar signals any way you want.

You might go from fiber to cable.

You might go from cable to ethernet.

You might go from fiber to ethernet.

You might go all sorts of different ways.

In fact right here I have two different media converters,

and they're actually part of a set.

So one is the transmitter, and one is the receiver.

So as I bring these over here so you can see them.

I have my transmitter on this side,

and my receiver on this side.

Now you'll notice that they both have HDMI jacks right here.

Now, if I turn them around,

you'll see that they have cat five or ethernet.

What these are used for is for me to be able to take

a long distance run of video over HDMI,

and be able to push it over a cat five or an ethernet cable.

Now, why would I want to do that, and send it

from this transmitter over HDMI,

over cat five, from cat five to the other one,

and then from cat five back to HDMI?

Well, because HDMI can only do short cable runs.

In fact, anything over about 20 feet,

you're going to have problems with your HDMI connections.

So if you're going to have a big home theater setup,

you're going to need some kind of a media converter

like this, to be able to run things

from your satellite dish to your TV,

or from your cable box to the projector,

or whatever those things are.

And that's exactly what we use this for.

This is to be able

to do a 50-foot run from where our cable boxes are

all the way up to where our projector is,

so that we can then display the information we want.

And being able to do that media conversion

from HDMI over to cat five allows us to do that.

Now, the next thing I want to talk about is a switch.

And we talked about the fact that this had

a switch built into it in this all-in-one device.

Well, switches can be small, or they can be big.

In fact, right here, I have a pocket sized switch.

This is an unmanaged switch.

Now this switch has five ports,

and it is an unmanaged switch,

meaning it's going to give me the benefits of a bridge,

and it's going to give you the benefits of a hub,

where I have a single collision domain per port,

and one broadcast domain for the entire five ports.

But other than that, this is a dumb switch.

It doesn't have VLANs, it doesn't have any of

the ethernet features that we're going to talk about

throughout the rest of this section.

Instead, this is just a simple, dumb switch.

And it cost me about five to seven dollars.

It will allow me to take the network connection coming in,

and then pass that out to the remaining ports.

That's all this is going to do.

And again, our combination device

already had one of those built in for us.

Now, the next thing we were talking about

was it had wireless capability.

And so for that, we're going to use a wireless access point.

Now, most of you, when you think of a wireless access point,

are probably actually thinking of a combination device.

If you look at your wireless access point,

and you turn around the backside,

and it has four of those switch ports in there,

it's actually a combination device.

It's a wireless access point,

with a router, and with a switch,

just like the one I showed you at the beginning.

But you also can have

just a wireless access point like this.

And as you look at this one, you'll see

that I have the two network antennas.

But if I turn it over to the side,

you're going to see there's only a single network jack.

And the reason for that is this is just a media converter

to go from cat five to wireless.

That's what a wireless access point is.

It's that simple. It's a very simple device.

It takes whatever's coming in over this cat five connection,

and it sends it out over these antennas.

That's all a wireless access point does.

Now, when I go back to the original one I showed you,

this one is a combination device.

It has a wireless access point,

but it also has a router, and it also has a switch.

This one does not. It's simply a wireless access point.

So I know we covered a lot in this lesson,

but what I really wanted to show you was that these devices

can be broken apart into individual components.

But a lot of times they're not.

And so when you go to the store and you say hey,

I want to buy a wireless router,

there's really no such thing as a wireless router.

There is a wireless access point switch and router

combination box that is marketed as a wireless router.

But there is no such thing as a wireless router.

Or if you look at something and you see that it says

it's wireless AC, which we're going to talk about later,

it's marketed as operating in two different spectrums.

But technically, wireless AC only operates in one spectrum.

Now, the reason is because people who are marketing things

are marketing them to consumers.

And so they're trying to dumb it down

to the easiest thing to understand.

But on the Network Plus exam, I want you to remember

that each of these boxes,

and each of these functions do a different thing.

If they're talking about a wireless access point,

that's not the same as a wireless router,

which has a switch, a router, and a wireless access point

all combined together.

So we'll talk more about this as we go through the course,

but hopefully this was helpful to give you a general idea

of what some of these devices are.

Now, I'll see you in the next lesson.

At this point, we've covered the basics

of ethernet with the cabling and the cable types

and some of the devices like routers

and switches and bridges and hubs,

but there's a lot more to ethernet out there.

And we're going to dive into that in this video.

When we talk about additional features of ethernet,

these features are there to enhance the network performance,

the redundancy, the security, the management,

the flexibility, or the scalability of our networks.

All of these are great things,

and we use different features and different devices

to give us these abilities.

Now, some of the common switch features

that we have are things like virtual LANs

or VLANs, trunking, spanning tree protocol,

or STP, link aggregation,

power over ethernet or POE,

port monitoring and user authentication.

The first three of these, VLANs,

trunking and STP are a little bit more in depth.

So we're going to cover each of those in their own video

as we go through the rest of the section.

But for this video,

we're going to focus on link aggregation,

power over ethernet, port monitoring,

and user authentication.

First, let's talk about link aggregation,

and this falls into the IEEE 802.3ad standard.

Now, if you're taking notes as we go along,

I want you to write this down.

Anytime you see a standard like IEEE 802.3ad,

you want to write it down and what it is.

So write down link aggregation 802.3ad,

because you're going to see questions on the test

where the answer is either listed as the number

like 802.3ad or they might ask you

something like, what is 802.3ad?

And you need to be able to answer that it's link aggregation

or power over ethernet

or port monitoring or stuff like that.

Now it's going to be important as we go through,

to write these things down and remember them.

With link aggregation, we have a problem in our networks,

and that is congestion can occur

when all the ports are operating at the same speed.

If you have 100 megabits per second network,

and each switch port on that network

can operate at 100 megabits per second,

this isn't a problem if everyone's taking their turn.

But if you remember in our last lesson,

we talked about the fact that switches are full duplex,

which means that every port can operate

at 100 megabits per second.

If I have three ports and I have PC one, two and three,

all sending data in at 100 megabits per second,

well guess what?

That means to send it out,

I need at least 300 megabits per second,

but that output port

is still only 100 megabits per second.

And that can cause a bottleneck

where traffic can be dropped, as shown in this picture.

Now to solve this, we use what's called link aggregation.

Now what link aggregation does is combine multiple

physical connections into a single logical connection.

So let's say I have a 24 port switch.

I can use 20 of those ports to provide service

for 20 different machines, and then take four ports

and combine them together

to give me one virtual 400 megabit per second connection.

This will help alleviate the congestion

by increasing the amount of bandwidth available

for uplink to the next router or switch.

Now, if I have four connections out

and I have 20 connections coming in,

is there a possibility there's going to be a backup?

Well, yes, but it's not going to necessarily happen

all the time.

In fact, it's really rare that every PC on your network

is using all 100 megabits of its connection

at the same time.

So, if you have a 24-port switch,

you're pretty safe using four ports for link aggregation.

And that way, you can use the 802.3ad protocol

to do this for you.
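The uplink math behind that 24-port example works out like this; a sketch with illustrative numbers, keeping in mind that a real 802.3ad bundle balances traffic per flow rather than acting as one perfect pipe.

```python
def bundle_bandwidth(uplink_ports, port_speed_mbps=100):
    """Logical bandwidth of a link aggregation bundle (idealized)."""
    return uplink_ports * port_speed_mbps

def oversubscription(host_ports, uplink_ports, port_speed_mbps=100):
    """Worst-case host demand divided by uplink capacity."""
    demand = host_ports * port_speed_mbps
    return demand / bundle_bandwidth(uplink_ports, port_speed_mbps)

# 24-port switch: 20 host ports, 4 ports bundled into the uplink.
print(bundle_bandwidth(4))       # 400 Mbps logical connection
print(oversubscription(20, 4))   # 5.0 -> acceptable for typical bursty traffic
```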

Next, we have power over ethernet.

There are two variants of this,

the power over ethernet and power over ethernet plus.

Now power over ethernet is 802.3af.

Power over ethernet plus is 802.3at.

But I would write both of these down

as part of your memorization guide.

Now the idea of power over ethernet

is that you can supply electrical power

to devices over ethernet.

That's the entire purpose of it.

The benefit of this is that if I'm using

a CAT five or higher cable,

I only need one cable to give both power and data to something

as opposed to having a power cable and a data cable.

Now, each cable can provide you

with up to 15.4 Watts of power to that device.

With power over ethernet plus, this actually can go up

to 25.5 Watts of power

because it does support a higher wattage.

Now, both of those numbers are things I would also add

to your memorization sheet.

So for power over ethernet, it's 15.4 Watts

and power over ethernet plus, it's 25.5 Watts.

There are two types of devices out there.

We have power sourcing equipment and power devices.

The power sourcing equipment or PSE,

is what is going to provide the power to our other devices.

This would normally be your switch.

Now the power devices are things like your phone

or a wireless access point.

These are the devices that are getting power over ethernet

and pulling it from our power sourcing equipment,

like our switches.

Now all of this is going to occur over an RJ45 connector

using a CAT five or higher cable.
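Those two wattage numbers are worth drilling. Here's a tiny sketch that checks a device against them; the example device wattages are made up for illustration.

```python
# Per-port power limits from the lesson:
POE_WATTS = {
    "802.3af": 15.4,   # power over ethernet (PoE)
    "802.3at": 25.5,   # power over ethernet plus (PoE+)
}

def can_power(device_watts, standard="802.3af"):
    """Can one PSE port of the given standard power the device?"""
    return device_watts <= POE_WATTS[standard]

print(can_power(12.0))             # a phone-sized load on PoE: True
print(can_power(20.0))             # too much for plain PoE: False
print(can_power(20.0, "802.3at"))  # but fine on PoE+: True
```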

Now the next one we have is what's known

as port monitoring or port mirroring.

This is not necessarily a number or a standard

you have to memorize,

but you do need to understand the concept of it.

Now, when we talk about port monitoring,

it's helpful to analyze packet flow over a network.

Each switch port is its own collision domain,

as you remember from our last lesson,

so you can't listen in on traffic from PC one to PC two

if you're not PC one or PC two,

because there's going to be

that individual connection for them.

Well, if you wanted to listen to that traffic

because you need to do something for your security system,

you would have to connect a network sniffer to a hub,

and then you'd be able to hear everything

because hubs broadcast everything to every port,

or you can do it in a switch by setting up

a port monitoring or port mirroring.

Now, what you do here

is if you have a 24 port switch, for instance,

and all your traffic is going from port one through port 23,

you can then have it all mirrored out over port 24

and attach your sensor there, your network analysis machine,

and be able to collect that data and read it.

Now for this to work, your switch requires

that port mirroring or port monitoring

is set up on the device and configured

to allow all that traffic to be mirrored

and copied over to that 24th port.

In the case of this envelope

that we want to send from PC one to PC two,

a copy of it is made by the switch

as it's sent over the network,

and port mirroring sends that copy over

to the network analysis machine

so we can analyze it using something

like a network analysis tool

such as Wireshark or some other network sensor.
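Conceptually, port mirroring is just "deliver the frame, plus copy it out the mirror port," which we can sketch like this; the port numbers are illustrative.

```python
class MirroringSwitch:
    """Sketch of the copy-on-forward behavior of port mirroring."""
    def __init__(self, mirror_port=None):
        self.mirror_port = mirror_port   # None means mirroring is not set up

    def forward(self, in_port, out_port, frame):
        deliveries = [(out_port, frame)]                   # normal delivery
        if self.mirror_port is not None:
            deliveries.append((self.mirror_port, frame))   # copy to analyzer
        return deliveries

sw = MirroringSwitch(mirror_port=24)   # mirror everything out port 24
print(sw.forward(1, 2, "envelope"))    # [(2, 'envelope'), (24, 'envelope')]
```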

Next, we have user authentication

and there is a number for this one, it's 802.1x.

Now for security purposes,

switches can require users to authenticate themselves

before they get access to the network.

And 802.1x is going to allow us to do just that.

Once you're authenticated, there's a key

that's generated and it will be shared

between the supplicant,

which is the device wanting to access it

like your laptop or your desktop and the switch itself,

which we call the authenticator.

So how this works is shown here on the screen,

and you see the supplicant,

which is PC one, is going to first talk to the switch.

And it's going to ask for permission to join the network.

Then, the switch is going to pass that request through

to the authentication server,

and the authentication server is going to check

the supplicant's credentials

and create a key for it if it's authorized,

then that key is used to encrypt traffic

between the switch and the client.

You can see here with the key distribution

going from the authentication server

to the authenticator, and then the key management

goes from the authenticator, the switch, down to the PC.

At that point, both the switch and the PC have the same key,

and we can create a symmetric encryption tunnel,

which will secure all of our data.

We will talk more about this process in a future lesson

as we dive into the security of 802.1x.
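
As a rough mental model of those 802.1X roles, here is a toy Python sketch: the authenticator (the switch) relays credentials to an authentication server, which derives a session key on success, and both the switch and the client end up holding the same key. The credential store, the names, and the key-derivation scheme are all invented for illustration; real 802.1X uses EAP and does not work like this internally.

```python
# Toy sketch of the 802.1X roles: supplicant -> authenticator -> server.
# On success, a shared session key ends up on both sides of the link.

import hashlib

SERVER_DB = {"alice": "correct-horse"}  # hypothetical credential store

def authentication_server(username, password):
    """Check credentials and derive a session key if they are valid."""
    if SERVER_DB.get(username) != password:
        return None
    return hashlib.sha256(f"{username}:{password}".encode()).hexdigest()

def authenticator(username, password):
    """The switch relays credentials and distributes the key on success."""
    key = authentication_server(username, password)
    if key is None:
        return None, None
    return key, key  # one copy kept by the switch, one sent to the client

switch_key, client_key = authenticator("alice", "correct-horse")
assert switch_key == client_key  # both ends now share a symmetric key
```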

Next, we have management access and authentication.

To configure and manage our switches,

you can do two different things.

You can use SSH to do it and do it remotely.

Or you can use a console port and do it locally.

With SSH or Secure Shell, it's going to operate over port 22.

And it's a remote administration program

that allows you to connect to your switch over your network.

Anywhere I'm sitting on the network,

I can SSH into that switch and never have to get up

from my desk, this way I can go and remotely manage it.

Now, if I want to use a console port instead,

I have to be there locally to plug into it.

I would use an RS-232 serial cable,

which we call a rollover cable, which has one end

as an RJ-45 and the other end as a DB-9.

And I'd be able to plug my laptop into the console port

of a switch, and then when I'm physically connected to it,

I can then go in and be able to access it

and make different connections and configurations.

Now, which one should you use?

Well, this is going to depend

on the security level of your network.

It is more secure to do it locally

than it is to do it through SSH over the network.

But there's actually a third way

that uses the benefits of both.

And this is known as an out of band management network.

Essentially, you create another network

that sits alongside the network

that you use for your data.

And this network is only used to be able

to connect to devices and configure them.

Now you can do this out of band network

by having a separate network configuration

on separate physical devices.

And this way we might have a 24 port switch

that connects to each of the other switches in our network.

And that becomes our out of band network.

We call it out of band because it is out

of the normal band of where we send data.

So we have this management network

and then we have this data network.

Now all of my management devices are on one network

and all of my data transfer is on the other network.

This way you have additional security to make sure

all your configurations aren't touchable by the end users,

but only by your system administrators

who have permission to be on the out of band network.

Now, the next thing we need to think about

is this thing called first-hop redundancy.

Now this has to do with layer three switches and routers.

When we deal with first hop redundancy,

we use protocols like HSRP,

which is the hot standby router protocol.

Essentially, it's going to create

a virtual IP address and a virtual MAC address,

and this in turn creates an active and a standby router.

So in the case that you can see here on the screen,

you'll see I have three routers displayed.

I have an active router, which is the .1 router.

I have a standby router, which is the .2.

And I have a virtual router, which is the .3.

Now in the real world

if I walk over to my networking cabinet,

there's not three routers.

There's only two physical routers standing there,

the active and the standby,

but my configured PC only sees one router.

They see the virtual router, .3.

So when my PC wants to communicate out,

it communicates to the virtual router on .3.

That way, it connects to the virtual router.

And then the virtual router will know,

based on which router's currently up,

the active or standby, which one to send the traffic to.

This is what the HSRP protocol does.

We're going to go into much more depth on first hop redundancy

later on when we get into the router section of this course.

But for right now,

I just wanted to introduce you to the idea,

because if you're dealing with a layer three switch

or a multi-layer switch,

you might have to deal with first hop redundancy.
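
Here is a minimal Python sketch of that first-hop redundancy idea, assuming the .1/.2/.3 addressing from the example: hosts only ever see the virtual gateway address, and the group decides which physical router is actually answering for it at any moment. The class and addresses are illustrative, not a real HSRP implementation.

```python
# Sketch of first-hop redundancy: PCs address the virtual router,
# and whichever physical router is healthy services that address.

class FirstHopGroup:
    VIRTUAL_IP = "192.168.1.3"  # the only gateway the PCs ever see

    def __init__(self):
        self.active_up = True   # health of the active (.1) router

    def forwarding_router(self):
        """Return the physical router currently behind the virtual IP."""
        return "192.168.1.1" if self.active_up else "192.168.1.2"

group = FirstHopGroup()
print(group.forwarding_router())  # the active router handles traffic
group.active_up = False           # simulate the active router failing
print(group.forwarding_router())  # the standby takes over transparently
```

The point of the design is that the PC's gateway setting never changes; only the mapping behind the virtual address does.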

Now, the other thing to know

is that HSRP is not the only first hop redundancy protocol.

There's also the gateway load balancing protocol,

known as GLBP.

There's also the virtual router redundancy protocol or VRRP.

There's common address redundancy protocol or CARP,

but HSRP, the hot standby router protocol,

is the most popular that's used in most networks today.

All of these work pretty much the same way.

And for the exam, you just need to remember

that they're all first hop redundancy protocols.

When we get into routing later,

we're going to talk more in depth about them and how they work,

but for now, that's what you need to know.

The next thing we want to talk about is MAC filtering,

which is a layer two function, which again,

we're dealing with switches, so that's really important.

MAC filtering is the process of allowing or denying traffic

based on a device's MAC address.

And this can be used to help improve security.

It's one of many layers of security we can add,

but honestly, it's really not that strong of one,

but it is one that we do need to talk about for the exam.

Because according to the network plus exam,

you should use it.

Now, how does MAC filtering work?

Well here on the screen,

you'll see I have a wireless access point.

We have a wired desktop, a wireless desktop

and a wireless printer.

If I wanted to make sure the only the wired desktop

could talk to the printer,

I can actually block the wireless desktop

by its MAC address.

We can tell the switch that if traffic comes from MAC address A,

it's allowed, and if it comes from MAC address B,

we can block that traffic.
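
Here's a tiny Python sketch of that allow/deny decision; the MAC addresses and frame format are invented for illustration.

```python
# Sketch of MAC filtering: check each frame's source MAC against
# an allow list before forwarding it toward the printer.

ALLOWED_MACS = {"AA:AA:AA:AA:AA:AA"}   # the wired desktop

def permit(frame):
    """Return True if the frame's source MAC is on the allow list."""
    return frame["src_mac"] in ALLOWED_MACS

wired = {"src_mac": "AA:AA:AA:AA:AA:AA", "dst": "printer"}
wireless = {"src_mac": "BB:BB:BB:BB:BB:BB", "dst": "printer"}

assert permit(wired) is True      # MAC address A is allowed through
assert permit(wireless) is False  # MAC address B is blocked
```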

Next, we have traffic filtering

and traffic filtering is kind of like MAC filtering,

except instead of doing it at the MAC address layer,

we're going to do it at the logical layer

using IP addresses or ports.

Now, this is where we start talking about things

like layer three and layer four.

And so we have to deal with this on a router

or a multi-layer switch.

So for example, if I have PC one,

trying to talk to PC two,

I can block it at the multi-layer switch.

Seeing that anything coming from the address,

192.168.1.100 is not allowed.

I might put it on my blacklist and block it.

Anything coming from 192.168.1.101 is allowed.

And so it's on my white list and I'll add it in.

Or I might do this based on ports.

And I can say anything coming over port 25

is allowed because those are mail servers,

but anything coming from port 53 is not allowed.

And I'm going to block them.

This is the idea of traffic filtering.

I can block it based on an IP address,

or I can block it based off your port address.

And that way, I can do it at layer three or layer four.

If I want to do it at layer two,

I use MAC addresses; at layer three, IP addresses;

at layer four, ports.

Either way I want to do it, it's okay.

I can do this using an access control list,

and we'll talk a lot more about access control lists

when we talk about firewalls

because that is exactly how they do things.
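
The IP- and port-based filtering described above can be sketched as a simple access control list in Python. The rules mirror the examples in the text, and the packet format and rule layout are invented for illustration; real ACLs are evaluated top-down with an implicit deny at the end, which the sketch imitates.

```python
# Sketch of an ACL on a multilayer switch: first matching rule wins,
# and anything that matches no rule is denied by default.

RULES = [
    {"src_ip": "192.168.1.100", "action": "deny"},    # blacklisted host
    {"src_ip": "192.168.1.101", "action": "permit"},  # whitelisted host
    {"dst_port": 25, "action": "permit"},             # mail traffic
    {"dst_port": 53, "action": "deny"},               # blocked port
]

def check(packet):
    for rule in RULES:
        if "src_ip" in rule and rule["src_ip"] != packet["src_ip"]:
            continue
        if "dst_port" in rule and rule["dst_port"] != packet["dst_port"]:
            continue
        return rule["action"]
    return "deny"  # implicit deny at the end of the list

assert check({"src_ip": "192.168.1.100", "dst_port": 80}) == "deny"
assert check({"src_ip": "192.168.1.101", "dst_port": 80}) == "permit"
assert check({"src_ip": "10.0.0.5", "dst_port": 25}) == "permit"
```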

Last, we want to talk about quality of service

and quality of service is going to forward your traffic

based on different priority markings.

We have this switch, this multi-layer switch shown here,

I have three devices that are connected to it.

PC one, PC two, and a phone.

Well, because phones are dealing with UDP voice traffic,

I want to make sure it has a higher priority.

So it gets first in, first out priority.

Now, if I pick up the phone and I start talking,

I want to make sure the packets aren't dropped

so I don't have my voice going in and out as I'm talking.

With PC1 and PC2, I can make those lower priorities

and they'll get a lower level of service.

And that's okay, because if they're using TCP,

they'll just retransmit what's dropped and do it again.

Now, in this example,

you can see PC one has a higher priority than PC two,

but the phone has a higher priority

than both PC one and PC two.

Now later on in the course,

we're going to dive deep into the idea of quality of service.

We'll spend a couple of videos on it in fact,

because it's a really important concept, but for now,

I just want you to understand that you can tell a switch

or a router what is more important

and what should get higher priority,

which one is essentially the VIP.

That's the idea of quality of service.
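
Here's a minimal sketch of that priority-based forwarding using a priority queue; the priority values are invented (lower number means more important), and the frame labels match the example above.

```python
# Sketch of QoS forwarding: frames wait in a priority queue and the
# switch always transmits the highest-priority frame first.

import heapq

queue = []
counter = 0  # tie-breaker so equal priorities keep arrival order

def enqueue(priority, frame):
    global counter
    heapq.heappush(queue, (priority, counter, frame))
    counter += 1

enqueue(3, "PC2 bulk data")    # lowest priority
enqueue(2, "PC1 web traffic")  # middle priority
enqueue(1, "phone voice")      # voice gets serviced first

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # voice drains first, even though it arrived last
```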

Spanning Tree Protocol, or STP,

is an additional ethernet feature that is really important,

and so we've broken it out into this separate lesson.

Now, when you look at the number for it,

it is known as 802.1d, and so I want you to write that down

in your note sheet as well.

802.1d is the Spanning Tree Protocol or STP.

Now what does spanning tree protocol do?

Well, it allows us to have redundant links

between different switches,

and it will prevent loops in our network traffic.

Now, why is it important to prevent these loops?

Well, you may remember,

that we talked about the availability of networks

is measured in nines.

We want to have five nines of availability, 99.999% uptime,

which means that we're only going to get

five minutes of downtime each and every year.

Now, if I want to have redundant network,

I have to be able to have multiple links to create that

and give me that five nines of availability.

Now, there's this thing out there called SPB,

or the shortest path bridging,

and this is used instead of STP

for really large network environments.

Now, we're not going to go in depth into SPB,

because your exam isn't going to dig into it in depth.

If you go on to do the CCNP,

and you start working in higher level networks,

you may dig into SPB there,

but for the Network Plus exam, you really don't need to.

Instead, we are going to dig deep into STP, because STP,

which is our spanning tree protocol,

is what we're going to be using for our smaller networks

that are covered from the Network Plus exam.

Now, let's take a look at a network

without spanning tree protocol and see how it works.

Now, you can see that the MAC address table here

can become corrupted.

Let's say, that I have PC2 trying to send a message to PC1.

You can see that there's a redundant network here.

It can take a path going from switch four, to switch two,

to switch one, over to PC1,

or it can take a path from switch four, to switch three,

to switch one, to PC1.

And that looks great, because we have two different ways,

but, if you remember how MAC address tables work

inside our switches,

you're going to notice that there's a problem here.

When PC2 reaches out to talk to PC1,

switch four is going to learn that the MAC address

for PC2, ending in CC, is coming in from that side.

Now it's going to broadcast that out

to switch three and to switch two, who both learn of that,

and they put that in their MAC address tables

for port 0/2.

Then, they go and tell switch one,

and it's coming from both sides,

so switch one now thinks

that it can rebroadcast that out both sides,

which then feeds back to switch two and switch three,

and this creates a loop known as a switching loop.

Now you can see it here in red,

because as the data starts going back,

these interfaces start figuring out,

hey, how do I get to device CC?

Well, the way I get to device CC, that MAC address,

is that it comes through both interfaces,

and so both of those switches now tell me

that I can go there for CC,

and that means I really don't know which way to go.

Because as a device on a network, I can only go one way,

and I need to choose which way that is.

Now, this switching loop can happen,

and we get what's called a broadcast storm.

This is what's going to happen,

if you don't have spanning tree protocol in your network.

But if you do have spanning tree protocol,

you can actually solve this problem.

So let's talk about how we can get through this.

Now we see this broadcast storm that's happening,

and if this broadcast frame is received by both switches,

they'll start to forward it to each other,

and so one tells it, and the other one tells it back,

and they keep going back and forth.

Think of it like this.

I tell you a secret, and then you tell me that same secret,

and then I tell it to you again,

and you tell it to me again.

And we keep doing this over and over, and each time,

more copies of that secret, in this case, a frame,

are being forwarded back and forth between each other.

It can actually start replicating

and then being forwarded again, and again, and again,

until your entire network is just consumed up

by all of these copies of this ARP packet

that's being sent out,

trying to tell everybody where that device should be.

Now, it just takes this to happen over time

through your entire network, and eventually,

your network will just crash under the weight of this.

So, if your switch starts having this problem,

the only way to fix it,

if you don't have spanning tree protocol involved,

is to actually unplug the switch,

wait about 30 seconds for all that data

to be forgotten and lost, and then plug it back in.

Now, that's not a great way to run a network,

so someone decided we're going to create something electronic

to fix this problem, and that's where STP,

or the spanning tree protocol, gets involved.

Now, the way STP works is that it uses things

called root bridges and non-root bridges.

Now, a root bridge is a switch that's elected

to act as a reference point for the entire spanning tree.

The switch with the lowest bridge ID,

or BID, is going to be elected as our root bridge.

Now, the bridge ID is made up

of a priority value and a MAC address,

with the lowest value being considered the root bridge

inside our network.

Now, if the priorities are all equal,

we're just going to go with the MAC address,

and whichever switch has the lowest assigned MAC address

will become our root bridge.

A non-root bridge is every other switch on the topology,

so one root, everybody else becomes non-root.

Now let's assume here we have switches

one, two, three, and four, again.

How's it going to end up looking

when we start implementing STP?

Well, if I look at switch two and switch three,

their MAC addresses are all twos and all threes respectively,

and both have the same priority,

because they're both using

the default priority value.

Now, who is going to end up being my root bridge?

Well, if all the priorities are the same,

we're going to go with the one that has the lowest MAC address.

So, in this case, it's going to be switch number two,

because it has all twos as its MAC address.

That makes switch number three, one, and four,

all non-root bridges.
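
That election can be sketched in a few lines of Python: compare (priority, MAC address) tuples and take the minimum. The priority values and the non-matching MAC addresses are invented, arranged so that switch two, with the all-twos MAC, wins, as in the example.

```python
# Sketch of root bridge election: the lowest bridge ID, which is the
# (priority, MAC address) pair, wins. Equal priorities fall through
# to the lowest MAC address.

switches = {
    "SW1": (32768, "AA:BB:CC:00:00:01"),  # invented MAC
    "SW2": (32768, "22:22:22:22:22:22"),  # all twos, as in the example
    "SW3": (32768, "33:33:33:33:33:33"),  # all threes
    "SW4": (32768, "AA:BB:CC:00:00:04"),  # invented MAC
}

# Tuple comparison checks priority first, then the MAC string.
root = min(switches, key=lambda name: switches[name])
print(root)  # switch two has the lowest MAC, so it becomes root
```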

Now, when we look at the root bridge,

switch two, in our case,

we also have to look at the concept of a root port,

a designated port, and a non-designated port.

Now, when I talk about a root port,

this has to be assigned on every non-root bridge.

So I talked about switches one, three, four,

were all considered non-root bridges,

so each one of those has to have one port assigned

as its root port.

Now, the port that is closest to the root bridge,

in terms of cost and its number,

is going to be the root port.

If the cost is equal,

and all the cost is determined

based on those cable types we talked about,

then the lowest port number on the switch will be chosen.

The way we determine what the cost is,

is faster cables have a lower cost,

and slower cables have a higher cost,

because we want to put things

on the fastest cable as possible.

So, if I have a cat three cable, a cat five cable,

and a cat seven cable plugged into the switch,

then the port with the cat seven cable

is going to be considered the fastest port,

because it has the fastest type of cable on it,

and therefore, it will be the root port

on this non-root bridge.

Now, if you have all the same type of cable,

all cat five, or all cat six, or all cat seven,

then we're going to choose the lowest port number,

in this case, port number one on the switch.
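
Here's a sketch of that root port selection rule, using the classic STP cost values this lesson mentions (cat three at 100, cat five at 19, cat seven at 2): lowest cost wins, and the lowest port number breaks ties. The port-to-cable mappings are invented for illustration.

```python
# Sketch of root port selection on a non-root bridge: pick the port
# with the lowest cost toward the root, tie-break on port number.

CABLE_COST = {"cat3": 100, "cat5": 19, "cat7": 2}  # classic STP costs

def root_port(ports):
    """ports maps port number -> cable category plugged into it."""
    return min(ports, key=lambda p: (CABLE_COST[ports[p]], p))

# A cat7 link beats cat5 and cat3 regardless of port number.
assert root_port({1: "cat3", 2: "cat5", 3: "cat7"}) == 3
# With identical cabling, the lowest port number wins the tie.
assert root_port({1: "cat5", 2: "cat5", 3: "cat5"}) == 1
```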

Now, for the designated port,

every network segment is going to have

at least one designated port on it.

The port closest to the root bridge, in terms of cost,

will be considered the designated port.

All of the ports on the root bridge

are considered designated ports,

because they all are on the root bridge,

and therefore, they're really, really fast.

Now I'll show you this in a diagram,

so it'll make a little bit more sense here.

You can see here,

the non-designated ports are the ports

that are going to block our traffic for us.

This is the benefit of STP.

This is where your loop free topology

is going to come into play.

So, as we look at this diagram, you can see,

I have a single root port on a non-root bridge.

The non-root bridge was switch number three.

I designated it as purple,

because this is the lowest number port.

Now, it's port 0/1 versus port 0/2,

and it also has the lowest cost,

because the cost of 19 is assigned to anything

that's using fast ethernet or a cat five cable.

Remember, the faster the cable, the lower the cost.

Now, all the other ports on this non-root bridge,

in our case, switch number three,

are going to be considered non-designated.

This means that we're going to make them red,

and if you think of it red, think of it like a stop.

There's no traffic coming through those ports.

Now, when I go to the root bridge,

which was switch number two,

all of those ports are considered designated.

These are all going to be considered blue in color,

as shown in my diagram here,

and when traffic comes in from PC2 to go to PC1,

what is going to happen?

Well, port number 0/2 on switch three is red,

and it's not going to let traffic go through it.

It acts as a stop sign.

Traffic going from switch four, to switch two,

to switch one, and over to PC1, will be able to go through,

based on the way the diagram shows it here.

If it comes all the way around, and it gets to switch four,

to switch two, to switch one, to switch three,

it's going to get stopped at the non-designated port, because again,

that port is not going to allow it

to broadcast back through.

This is what prevents our loop,

and this is what's going to make a C for us in the diagram

instead of a circle.

That's the whole benefit here

of using root and non-root bridges

is that we put blocks in place,

so that we don't have a circle that completes

and allows things to create a broadcast storm.

Now, each port can go through a couple of states

as they do this process.

Non-designated ports are not forwarding traffic

during normal operations,

that's 'cause they're a red stop sign, right.

They receive information as a bridge protocol data unit,

or BPDU, and once they get that information,

they're not going to do anything with it,

and they're not forwarding it,

because again, those are non designated ports.

They're red, they stop information.

Now, if a link in the topology goes down though,

then the non-designated port will detect that failure,

and it can determine whether or not

it needs to transition itself into a forwarding state

and become a designated port or a root port.

As it goes through that forwarding state,

it's going to transition through four different states.

These four states are

blocking, listening, learning, and forwarding.

Now first, it's blocking, and when it's blocking,

this is when it has that big red X on it.

And it's a non-designated port,

and it's going to take any

of those bridge protocol data units,

and it's going to stop them and not forward them through.

Blocking is used at the beginning and on redundant links,

as shown on the diagram we had just a few slides ago.

Then we're going to switch to listening,

and it'll do this by populating the MAC address table

and starting to learn,

but it's not forwarding those frames yet.

Again, here we're creating that C not a circle,

and so we don't have a loop that's happening.

Next, we move from listening to learning.

Now, it's going to start processing

those bridge protocol data units, and when it does that,

the switch is going to determine its role

inside the spanning tree.

It's thinking, do I need to become a root port?

Do I need to become a designated port,

or should I just stay non-designated?

Then it's going to decide

if it needs to go into one of those states,

as either a designated port or a root port.

If it decides it needs to do that,

then it's going to start forwarding those frames

and those protocol data units.

Now, this is called forwarding,

and it starts forwarding those frames

over and over and over again,

and it starts taking over the process

of being the root port.
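
The four states above can be sketched as a tiny state machine; the list and helper function are illustrative, not how a switch actually implements the timers involved.

```python
# Sketch of the STP state sequence a blocked port walks through when
# a failure is detected and it needs to start forwarding.

STP_STATES = ["blocking", "listening", "learning", "forwarding"]

def transition(state):
    """Advance one STP state; a forwarding port stays forwarding."""
    i = STP_STATES.index(state)
    return STP_STATES[min(i + 1, len(STP_STATES) - 1)]

state = "blocking"
history = [state]
while state != "forwarding":
    state = transition(state)
    history.append(state)

print(history)  # blocking, then listening, then learning, then forwarding
```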

Now, in our example,

we have a non-designated port

that is blocking.

We have our designated ports,

which are forwarding things,

so switch three is not sending traffic through.

Now, everything is going to go from switch four, to switch two,

to switch one, to PC1, in our example.

Now, if switch two goes down, what will end up happening is,

switch three will go through those four states,

and it will then be able to start forwarding that traffic.

It goes from blocking, to listening,

to learning, to forwarding,

and it'll take over as the root bridge,

and its ports will become root ports,

and they'll have designated ports on them,

and then they'll be able to keep forwarding on.

Now, all this talk about link cost is really important,

and I kind of glossed over it earlier in the video,

so I want to go a little bit more in depth right now.

The link cost is an association

with the speed of a given link, as I said before,

the lower the link speed,

the higher the cost associated with it.

And so, as you can see,

you might have something like a cat three cable,

which is ethernet, and it's only 10 megabits per second.

Now, because that is a very slow cable,

it's going to have a very high cost,

so we'll give it a cost of 100.

Now, when I go to fast ethernet, which is cat five,

or 100 megabits per second, that's a faster connection,

so my cost goes down.

It goes from 100 down to 19.

Now, you don't necessarily have

to memorize these numbers for cost for your exam,

but you should realize that if you have a lower speed,

you're going to have a higher number.

If you have a higher speed,

you're going to have a lower number.

In fact, there's this thing called Long STP,

that's been adopted recently,

because higher link speeds kept being created,

and we didn't have much room

to make those numbers smaller and smaller.

So, as you can see here,

with a fiber connection or a cat seven connection,

which might be 10 gigabits per second,

we have a cost of two.

If I went to 100 gigabits per second,

I really can't go much less than two,

I might be able to go to one.

And what they ended up doing with this Long STP

was adopting much larger values,

so instead of a cost of 100 for a cat three,

it's something like 2 million for a cat three,

and then we might have more room here at the bottom

for something like a 10 terabit per second connection.

So again, don't worry too much about the STP cost itself

and the numbers associated with it.

If you're dealing with designing a network,

you can always Google the cost table,

and you can have it in your hand

as you're designing the thing.

So, you don't need to memorize these for the exam.

So, for the exam, I want you to remember,

that a lower speed is going to have a higher cost

and a higher speed is going to have a lower cost.

If you remember

that there's that inverse relationship

between speed and cost,

you're going to do well on those questions

that come up on the exam when you're dealing with STP.
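
If you want to see that inverse relationship numerically, here's a sketch assuming the commonly cited long path-cost formula (20,000,000,000 divided by the link speed in kilobits per second), which produces the 2 million figure mentioned for a 10 megabit cat three link; treat the exact formula as illustrative rather than something to memorize.

```python
# Sketch of the inverse speed/cost relationship using the "long"
# path-cost scheme: cost = 20,000,000,000 / speed_in_kbps.

def long_stp_cost(speed_mbps):
    return 20_000_000_000 // (speed_mbps * 1000)

assert long_stp_cost(10) == 2_000_000       # 10 Mb/s cat3 ethernet
assert long_stp_cost(10_000) == 2_000       # 10 Gb/s fiber or cat7
# Higher speed always means lower cost, never the other way around.
assert long_stp_cost(100_000) < long_stp_cost(10_000) < long_stp_cost(10)
```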

The next major concept that we need to cover

is that of a virtual local area network,

also known as a VLAN.

Now, we talked about switch ports all being

on a single broadcast domain.

And to break them up, we have to use a layer three switch

or a router to do that.

Well, VLANs allow you to break out certain ports

to be used for different broadcast domains,

just like you would if you had a virtual router.

Now, before VLANs, we had to use additional routers

and cables and switches to separate out

our different departments and our different functions

and our different subnets.

But with the advent of the VLAN,

you can have this functionality inside a layer three switch

or even some layer two switches.

This allows you to have different logical networks

that share the same physical hardware.

This is going to provide you with a lot of additional security

and efficiency that you don't get with using everything

in a single broadcast domain on a standard layer two switch.
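
Here's a minimal sketch of that separation on a single switch: a broadcast only reaches ports assigned to the same VLAN. The port numbers and VLAN IDs are invented for illustration.

```python
# Sketch of VLAN separation: a broadcast from one port is only
# flooded to the other ports in the same VLAN.

PORT_VLAN = {1: 10, 2: 10, 3: 20, 4: 20}  # IT on VLAN 10, HR on VLAN 20

def broadcast_from(port):
    """Return the other ports that receive a broadcast from this port."""
    vlan = PORT_VLAN[port]
    return [p for p, v in PORT_VLAN.items() if v == vlan and p != port]

assert broadcast_from(1) == [2]   # IT broadcast stays inside VLAN 10
assert broadcast_from(3) == [4]   # HR broadcast stays inside VLAN 20
```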

Now, back before we had VLANs, you had diagrams

that look like this.

Let's say I had the IT department

and the human resource department,

and I wanted to keep them separate for security.

Well, if I wanted to do that,

I had to plug them into different switches,

and then have different routers

and be able to route the traffic between those two networks.

Now, if I had the IT and the HR on floors one and two,

I might have to have switch one and switch three

on the second floor and switch two and switch four

on the bottom floor.

And so, now I have double the equipment

to maintain this logical separation.

And in this case, I also had physical separation too,

because I had four different switches

for those two floors and those two departments.

Now, with virtual local area networks, or VLANs,

I can consolidate all of that into just two switches,

one for the first floor and one for the second floor.

And then, I can logically separate out the traffic

into each of those virtual networks.

Notice that the IT department is cabled into the switch

and logically, it trunks down from switch one

into switch two and then down into our router

and it keeps everything logically separate

as shown by this color scheme.

Now, even though these different switch ports

are in different VLANs,

they're still on the same physical hardware,

and they're riding the exact same cable.

That's why you can see this purple and blue cable

going from the switch down to the router;

it's actually only one cable,

but logically in this diagram, they are going to be

two separate logical cables, right?

But in real life, it is really one cable.

This is the idea of doing VLAN trunking.

And to do this, we use this protocol called VLAN trunking

known as 802.1Q.

So, I want you to write that down in your note sheet,

802.1Q is for VLAN trunking.

Now, this is what happens when you merge

all that data onto a single cable, we call it a trunk.

Now, since we have multiple VLANs,

and they're all going over the same cable,

we have to have a way to identify them.

This again is reducing the amount of physical infrastructure

cables and switches and routers that we need,

while still giving us the logical separation that we desire.

With VLAN trunking using 802.1Q,

this is something you need to be able to recognize

for test day and you want to make sure

that you have it written down

because it's a really important concept.

Now, the way we identify the different VLANs

that are going over this trunk is by using an electronic tag

that is four bytes long and it's called a 4-byte identifier.

Now, there are two pieces to that.

We have the TPID and the TCI.

The TPID is the tag protocol identifier,

and the TCI is the tag control information.

When you have one VLAN and it's left untagged,

that becomes your native VLAN,

also referred to as VLAN zero.

Now, you can see the packets here on the screen

and again, you don't have to memorize

the way these packets are laid out.

This is just a graphical depiction to show you what 802.1Q

actually looks like in the real world.

What you really need to know about VLANs

is that they are great for security,

and if you're using VLAN trunking,

your 802.1Q is your standard for VLANs.
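
To make that 4-byte tag concrete, here's a sketch that packs a simplified 802.1Q tag in Python: the fixed 0x8100 tag protocol identifier followed by the tag control information carrying the priority bits and the 12-bit VLAN ID. The layout is simplified for illustration (the drop-eligible bit is left at zero).

```python
# Sketch of the 4-byte 802.1Q tag: 2 bytes of TPID (0x8100) plus
# 2 bytes of TCI (3 priority bits, 1 DEI bit, 12-bit VLAN ID).

import struct

TPID = 0x8100  # fixed value identifying an 802.1Q-tagged frame

def build_tag(vlan_id, priority=0):
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # priority + VLAN ID
    return struct.pack("!HH", TPID, tci)         # big-endian, 4 bytes

tag = build_tag(vlan_id=10, priority=5)
assert len(tag) == 4                          # the tag is 4 bytes long
assert tag[:2] == b"\x81\x00"                 # starts with the TPID
assert ((tag[2] << 8) | tag[3]) & 0x0FFF == 10  # VLAN ID recovered
```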

Specialized Network Devices.

Now there are many different types

of network devices out there,

and there are lots of them that are outside

of the standard routers, switches, hubs, bridges,

servers, and workstations we've already talked about.

Other devices out there are going to perform

specific functions that improve our usability,

our performance, or our security.

Many of these devices include things like VPN concentrators,

firewalls, proxy servers,

and content engines and switches.

We're going to talk about each of those in this lesson.

The first one we're going to cover is a VPN concentrator,

also known as a VPN head-end.

Now a VPN is a virtual private network,

and it's used to create a secure

virtual tunnel over an untrusted network

like the internet.

If you're at home and you want to be able to dial

into your office,

either over a dial up connection

or over the internet using broadband,

you can do that using a VPN connection,

which creates an encrypted tunnel

so nobody can see what you're doing,

but you get from your home to your office securely

over that public network.

Now the device that terminates this VPN tunnel

is called a VPN concentrator or VPN head-end.

This allows this device to have multiple VPN connections

coming into one location.

Now, if you have a good firewall,

most of them will have this function.

But logically, when it's doing this function,

it's still functioning, not as a firewall,

but as a VPN concentrator.

So keep that in mind for the exam.

The VPN concentrator is this function.

It can be a device or part of another device

like a UTM or a firewall as well.

Now if I have a headquarters

in Washington, DC, for example,

and I have two other branch offices

in Los Angeles and New York,

I can actually tunnel all the traffic

from Los Angeles and New York

back to DC using these tunnels.

In this case, I would use

what's known as a site to site tunnel.

These would allow those locations to connect back

to my headquarters securely through the internet

and save me the cost of having to use leased lines.

Now when you hear the term VPN head-end,

remember this is a specific type of VPN concentrator,

and it's used to terminate IPSec VPN tunnels

within a router or another device.

Again, we're going to talk more about VPNs

and VPN security in a separate lesson,

because they're really important to the security

of our networks and there's a lot to cover there.

The next thing we're going to talk about is a firewall.

Now a firewall is a network security appliance

that's placed at the boundary of your network.

Firewalls can be either software or hardware,

and they come in stateful and stateless methods.

We're going to talk more about firewalls in depth

in their own video, but for now,

I just want you to remember that firewalls

allow traffic to go from inside your network

to outside your network like to the internet

and they can block stuff coming from outside your network,

like the internet, to the inside of your network.

Now on the screen,

I have three different ways to show you

how firewalls will look inside of network diagrams.

The first way is what Cisco likes to use.

They call this a PIX firewall

because that's their brand of firewall

and it almost looks like a diode with a triangle

and a line on it.

A diode is essentially an electrical component

that only lets things go one direction,

which is why they represent a firewall in this way.

Now the next one that some people will use

is just to put a brick wall in their diagrams

and that will represent a firewall.

The third thing we can use is have your firewall

combined with your router and basically make it look like

a router with a brick wall wrapped around it.

These are the three ways you'll see it in network diagrams

when you're looking at firewalls.

All three of these icons are used to demonstrate

that there's a firewall there inside your diagram

and most of the time,

what you're going to see is that brick wall.

That's pretty much the standard these days.

Besides a regular firewall,

we have these things called NGFWs,

or next-generation firewalls.

These can conduct deep packet inspection at layer seven.

A regular firewall is really going to block things

based on your IP address and maybe the port and protocol,

but these next generation firewalls,

they can look through your traffic

to detect and prevent attacks.

They are much more powerful

than your basic stateless firewall

or even your stateful firewalls.

They're continually going to be connected

to cloud resources to get the latest threat information,

making sure they know the latest signatures

to do that deep packet inspection.

Again, we're going to talk a lot more about firewalls

in their own lesson.

Next, we have IDS and IPS,

which is intrusion detection systems

or intrusion prevention systems.

IDSes and IPSes can recognize attacks

through signatures and anomalies.

An IPS can also respond to those attacks.

Now a detection system can only see an attack and log it.

But if you're using a prevention system,

it can actually see it, log it,

and then try to stop it by shutting off ports and protocols.
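If it helps to see that detection-versus-prevention difference side by side, here's a toy sketch in Python. The signatures, log format, and actions are made up for illustration; real systems use large, constantly updated signature databases:

```python
# Toy sketch of the IDS-vs-IPS distinction: both match traffic
# against signatures, but only the IPS takes a blocking action.
# The signatures and log messages here are hypothetical.

SIGNATURES = {"' OR 1=1": "sql-injection", "/etc/passwd": "path-traversal"}

def inspect(payload, mode="ids"):
    """Check a payload against signatures; return (log, forwarded payload)."""
    log = []
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            log.append(f"ALERT: {name}")
            if mode == "ips":
                log.append("ACTION: drop packet")
                return log, None          # IPS drops the traffic
    return log, payload                   # IDS only logs; traffic passes

alerts, passed = inspect("GET /etc/passwd", mode="ids")
# IDS: alert is logged, but the packet still goes through
alerts, passed = inspect("GET /etc/passwd", mode="ips")
# IPS: alert is logged and the packet is dropped (passed is None)
```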

These IDSes and IPSes

can be host based or network based devices,

depending on how you want to set it up in your network.

And they're going to be on one of these two lines

going left and right with a circle through them.

This is considered an IDS or an IPS sensor,

and it's the same diagram that's going to be used

for both of these devices.

The only difference is you'll see an IDS or IPS written

on it to indicate which one it is in the diagram.

Again, we're going to talk more about IDS and IPS in-depth

in their own video because they're really important

to the security of our networks.

Next, we have a proxy server

and this is another type of specialized device

and this one is going to make requests to an external network

on behalf of a client.

Essentially, it's a middleman or a go between.

Now, why would we want to do that?

Well, there are really two functions.

The first is security,

because it can perform content filtering and logging.

On my network, I have a proxy server in my home network.

So if my kids are trying to go online,

it actually goes to the proxy server first.

It checks what's allowable for them

and then it decides whether or not

to let them go out or not.

For instance, if they tried to go to a pornographic website,

it's going to block that.

They try to go to Disney channel,

it's going to allow that.

Now workstation clients are going to be configured

so that all their traffic is going to have to go

through a proxy server in your corporate network.

Here, you can see on the diagram

that if my son wants to make a request from his computer,

it's going to go to the proxy server first.

Then the proxy server is going to check

if it's on the allowable list.

If it is, it goes out to something like disneychannel.com,

gets the information, brings it back to the proxy server

and then the proxy server gives it back to my kid.

That would be function one.

Now the second function of a proxy server

is they can have a cache in there

that can actually store a copy of that information

that was requested by the user.

In my case, I have two kids.

Let's say my son goes to Disney Channel

and then right after,

my other son decides he wants to go to Disney Channel.

Well, the proxy server already made that request

and has it locally.

So once it gave it to my first son,

it can then give it to my other son

without even having to go back out to the internet again

and this saves bandwidth and saves resources and time.
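Here's a little sketch in Python showing both of those proxy functions, filtering and caching, in one place. The blocklist, the URLs, and the fetch function are all hypothetical stand-ins:

```python
# Sketch of a proxy server's two jobs: content filtering and caching.
# The blocklist, fetch function, and URLs are hypothetical.

BLOCKLIST = {"badsite.example"}
cache = {}          # url -> previously fetched content
fetch_count = 0     # counts trips out to the "internet"

def fetch_from_internet(url):
    global fetch_count
    fetch_count += 1
    return f"<content of {url}>"

def proxy_request(url):
    host = url.split("/")[0]
    if host in BLOCKLIST:
        return None                      # content filtering: request denied
    if url not in cache:                 # only go out on a cache miss
        cache[url] = fetch_from_internet(url)
    return cache[url]                    # cache hit: served locally

proxy_request("disneychannel.example/home")   # first kid: cache miss
proxy_request("disneychannel.example/home")   # second kid: served from cache
# fetch_count is still 1 -- the second request never left the network
```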

Proxy servers are really good at that,

but they are not the best.

Instead, we have another device out there

which is called a content engine.

These are dedicated devices that are there

just to do the caching functions of a proxy server.

They're basically more efficient than a proxy server

when it comes to caching and we call them content engines

or caching engines.

Where there's really going to be a big benefit here

is if you have a big headquarters

with a big beefy internet pipe,

but then you have the small branch office

kind of in the middle of nowhere.

It's really expensive for a good internet connection.

If you have a lot of data that goes across a small pipe,

like a VPN or a leased line,

this can actually become a big bottleneck

and slow things down inside of your network.

So what you'd want to do is put a content engine

at the branch office.

This way in the middle of the night,

the headquarters can actually sync up data

and update that content engine and then during the day,

anytime somebody requests that information,

they get it locally from that office

with the content engine

and that way they're getting it over gigabit ethernet

instead of going over a slow dial-up

or leased line connection back to the head office.

Now, this is a very, very useful thing to use

inside a remote branch office.

Otherwise, if you're in a big headquarters

and there's a big branch office that has a big pipe,

you probably don't need a content engine

because you can just go out over the large connection

of the WAN and get that information from the headquarters.

Again, if you're trying to speed up local access,

content engines are really good for that.

Next, we have a content switch.

This is also known as a load balancer.

Now a content switch or load balancer is going to distribute

your incoming requests across various servers

in a server farm.

This is why we call it a load balancer

because they're balancing the load.

Now, why do we need these?

Well, let's take the example of amazon.com.

Do you think amazon.com

can handle all of the traffic

it gets on a daily basis with just one physical server?

Of course not.

They have millions of users accessing their content

all at the same time.

Instead they have server farms with hundreds

or even thousands of servers out there.

And all these have to be able to answer up

for a single domain name, amazon.com.

And that's where the content switch comes in.

When you go to amazon.com,

it actually goes through their router

and to their content switch.

And then it starts handing out those requests

to different servers.

If it's a big task,

it might actually split that up across 20 different servers.

For example, if I have a big task at work to do,

I can actually break that down into pieces

and give it out to 20 different people

to help me get it done.

In this case, I will be acting as the content switch.

I break the load down to small parts and hand it out.

So my boss comes to me and says hey,

here are a hundred things I need done.

I would then break those apart and give four or five

to each person and hand them out

and that way they can start doing the work.

When they're done with it, they hand it back to me.

I consolidate it and then I give it back to my boss.

That's essentially what a content switch is doing.

It's handing out those requests to different people

based on their ability to perform the workload.

In this case, those people are servers.

That's exactly what a load balancer or content switch does.

It's going to send the request and distribute the workload

across all the different servers

in order to provide the best response times

and prevent a single server from becoming too overloaded.
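A simple way to picture that distribution is a round-robin scheme, where each incoming request is handed to the next server in turn. Here's a minimal sketch in Python; the server names are made up, and real load balancers also weigh things like server health and current load:

```python
from itertools import cycle

# Minimal round-robin content switch (load balancer): incoming
# requests are handed out to servers in turn. Server names are
# hypothetical examples.

servers = ["web1", "web2", "web3"]
next_server = cycle(servers)     # endlessly repeats the server list

def balance(request):
    """Assign the request to the next server in the rotation."""
    target = next(next_server)
    return f"{request} -> {target}"

print(balance("GET /cart"))     # GET /cart -> web1
print(balance("GET /search"))   # GET /search -> web2
print(balance("GET /home"))     # GET /home -> web3
print(balance("GET /cart"))     # wraps back around: GET /cart -> web1
```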

Other devices.

In this lesson, we're going to take a quick look

at some devices you may find on your networks.

This includes things like VoIP phones, printers,

access control devices, cameras, HVACs, internet of things,

ICS and SCADA devices.

First we have VoIP phones.

A VoIP phone is a voice over internet protocol phone,

and it's a hardware device that connects to your IP network

in order to make a connection to a call manager

within your network.

Now a call manager,

also known as a unified communications manager

or a unified call manager in Cisco-based networks,

is going to be used to perform the call processing

for hardware and software based IP phones.

Now, if you have an IP phone,

you can call another IP phone directly.

There's no issue with that.

For example, if you have VoIP phones in your office building,

and they're all connected to your local area network,

you could pick up a phone and dial another phone

and talk to your coworkers.

But if you want to be able to talk to your spouse

on their cell phone,

you have to configure your call manager

to route those calls to the public telephone network

by connecting it with a telephone provider.

VoIP systems are very popular these days

because they lower the cost of long distance,

and it allows you to provide encrypted voice lines

within your own companies

if you're doing VoIP to VoIP calls.

Now, VoIP phones often look like a regular phone,

but they may also include a digital display

or a video camera,

so you can make face-to-face calls as well.

The next devices we're going to talk about are printers.

Now, if you've already taken your A+ exam

or you've ever printed out a document,

you already know what a printer is.

Printers can be directly connected to a computer using USB,

or they can be connected to your network

and shared by multiple users.

When it comes to network printers,

they can be wired or wireless,

and you have the ability to configure them

with the right IP address statically or by using DHCP.

This way they can connect to the network

and other users can connect to them as well.

Next, we have physical access control devices,

which includes things like security gates,

and turnstiles, and door locks, and many others.

All of these things are often connected to a network,

and when they're connected to the network,

they're often going to be on their own network

for additional security,

not connected to your corporate network.

Now, if you do connect them to your corporate network,

you need to make sure you place them in their own VLAN.

This will give you additional security controls and safety.

But for the most secure environment,

you really should have them

on their own separate physical network.

Next, we have security cameras.

Often these will be connected to your security network,

just like your physical access control devices,

but in a lot of small office or home office environments,

you may have them connected

to your corporate network as well.

This way they can access the internet

and you don't have a bunch of extra gear to have to pay for.

Again, though, if you're going to do this,

I highly recommend getting some additional security,

like putting them on a separate VLAN

and adding ACLs to protect them.

Traditionally, security cameras

can actually be very insecure devices

just because they're part of the greater internet of things

and it's risky to have them

on your enterprise or corporate networks.

Next, we have heating, ventilation, and air conditioning,

or HVAC.

There are going to be lots of different sensors and controllers

to help work with these things.

These, like security cameras,

should be placed on their own VLAN

and have security protections in place.

These HVAC sensors and controllers are going to be useful

to have on a network,

so you can detect the temperature and humidity of the spaces

that people don't work in daily,

like server rooms and communication closets.

But again,

these devices tend to not have a lot of security built in.

So you need to put additional protections

and defenses in place when you connect them to your network.

Just like security cameras and HVAC sensors,

there is a whole host of other devices

you may find attached to your network,

we call these the internet of things.

These are things like smart TVs, and smartwatches,

smart refrigerators, smart speakers, smart thermostats,

smart doorbells, and many others.

All of these things are part of IoT,

or the internet of things.

Just like I said for HVAC and security cameras,

you should be putting these devices on a corporate network

or enterprise network in their own segmented VLAN

or on a separate network altogether,

to be able to give yourself more protection.

Finally, we have ICS and SCADA.

ICS stands for industrial control system

and SCADA stands for supervisory control

and data acquisition.

When we talk about ICS or industrial control systems,

this is a term

that's used to describe different types of control systems

and the associated instrumentation.

This includes devices, systems, networks, and controls

that are used to operate and automate industrial processes.

Now the specific type is going to depend on your industry,

but normally this includes electronic sensors and equipment

that are built specifically

to have an effect on the physical world.

I mentioned earlier, the term SCADA,

well SCADA is one type of industrial control system or ICS.

For example,

I used to work as a nuclear reactor operator

early in my life.

Now, when I was sitting at the reactor plant control panel,

there were a lot of pressure and sensor gauges

that I would monitor,

and lots of different switches

that I could turn to turn things on, like pumps and heaters.

SCADA systems allowed me to take all that data

and put it in one place.

I could acquire it and transmit the data

from all of these different systems

and put it all into a central panel

so that I, as a reactor operator,

could monitor and control it.

Now, I didn't have to sit inside the nuclear reactor

or next to the reactor to turn on the coolant pump.

Instead, I could be safely away from all that radiation

and be able to control and see everything

using these SCADA systems.

That way, we could also do things like automation.

We could set a range of values

that our programmable logic controller could act on,

and that way, if we need to turn the pump on

at a certain temperature or off at a certain temperature,

we could do that too, to keep the reactor

at the right temperature for safe operations.
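That kind of setpoint automation can be sketched in a few lines of Python. The temperatures here are example values I made up, not real reactor setpoints:

```python
# Sketch of the setpoint automation a programmable logic controller
# (PLC) performs: turn a coolant pump on above one temperature and
# off below another. The setpoints are illustrative examples only.

PUMP_ON_ABOVE = 95.0    # degrees: start cooling above this
PUMP_OFF_BELOW = 85.0   # degrees: stop cooling below this

def pump_control(temperature, pump_running):
    """Return the new pump state for the current temperature reading."""
    if temperature > PUMP_ON_ABOVE:
        return True
    if temperature < PUMP_OFF_BELOW:
        return False
    return pump_running     # inside the band: keep the current state

state = False
for reading in [80.0, 96.0, 90.0, 84.0]:
    state = pump_control(reading, state)
    # 80.0 -> off, 96.0 -> on, 90.0 -> stays on, 84.0 -> off
```

The gap between the on and off setpoints (hysteresis) keeps the pump from rapidly switching on and off when the temperature hovers near a single threshold.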

Now, why is this important to consider ICS and SCADA systems

as a network engineer?

Well, because these things run on networks too,

and you may be asked to work as a network technician

at a reactor plant,

or automobile factory, or on an oil pipeline.

And all of these sensors need to be connected,

and that happens using a network.

Again, like IOT, I highly recommend you have a separate

or segmented network for your ICS and SCADA devices.

The reason for this is that these things are very sensitive

and they need to be protected.

I don't want my reactor plant connected to my email server,

that's a bad day because if somebody has a malicious email

and infects our systems with ransomware,

it can make it so I'm locked out of my ICS and SCADA systems

and won't be able to control the reactor too.

So we don't want that;

you need to have them on a separate network.

All right, for the exam,

I want you to remember,

there are lots of different things

that you might work on as a network technician.

This includes things like VoIP phones, printers,

access control devices, cameras, HVAC,

internet of things, ICS and SCADA devices.

Each of these devices

is going to bring risks to your networks

and to the devices themselves.

So you have to weigh the operations and the security

as you're building security into your networks

and figure out where to place these devices.