
Computer Security Quiz 1

Confidentiality is the concealment of information or resources. The first formal work in computer security was motivated by the military’s attempt to implement controls to enforce a “need to know” principle. A cryptographic key controls access to the unscrambled data, but then the cryptographic key itself becomes another datum to be protected. Confidentiality also applies to the existence of data, which is sometimes more revealing than the data itself. Resource hiding is another important aspect of confidentiality. Sites often wish to conceal their configuration as well as what systems they are using; organizations may not wish others to know about specific equipment (because it could be used without authorization or in inappropriate ways), and a company renting time from a service provider may not want others to know what resources it is using. Access control mechanisms provide these capabilities as well.

Integrity refers to the trustworthiness of data or resources, and it is usually phrased in

terms of preventing improper or unauthorized change. Integrity includes data integrity (the content of the information) and origin integrity (the source of the data, often called authentication). Integrity mechanisms fall into two classes: prevention mechanisms and detection mechanisms. Detection mechanisms do not try to prevent violations of integrity; they simply report that the data’s integrity is no longer trustworthy. Working with integrity is very different from working with confidentiality. With confidentiality, the data is either compromised or it is not, but integrity includes both the correctness and the trustworthiness of the data.

Availability refers to the ability to use the information or resource desired. Availability is an important aspect of reliability as well as of system design, because an unavailable system is at least as bad as no system at all.
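The detection class of integrity mechanisms can be sketched as a checksum comparison. This toy example uses SHA-256 from Python’s standard library; the stored digest and data values are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a SHA-256 digest of the data."""
    return hashlib.sha256(data).hexdigest()

# At protection time, record a digest of the trusted data.
trusted = b"pay Alice $500"
recorded = digest(trusted)

# Later, a detection mechanism recomputes the digest and compares.
# It does not prevent the change; it only reports that the data
# is no longer trustworthy.
tampered = b"pay Mallory $500"
print(digest(trusted) == recorded)   # → True: data unchanged
print(digest(tampered) == recorded)  # → False: integrity violation reported
```

Note that this is pure detection: by the time the mismatch is reported, the modification has already happened.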

Attempts to block availability, called denial of service attacks, can be the most difficult to detect, because the analyst must determine whether the unusual access patterns are attributable to deliberate manipulation of resources or of the environment.

A threat is a potential violation of security. The violation need not actually occur for there to be a threat. The fact that the violation might occur means that those actions that could cause it to occur must be guarded against (or prepared for). Those actions are called attacks. Those who execute such actions, or cause them to be executed, are called attackers.

Threats fall into four broad classes: disclosure, or unauthorized access to information; deception, or acceptance of false data; disruption, or interruption or prevention of correct operation; and usurpation, or unauthorized control of some part of a system.

Snooping, the unauthorized interception of information, is a form of disclosure. Wiretapping, or passive wiretapping, is a form of snooping in which a network is monitored. (It is called “wiretapping” because of the “wires” that compose the network, although the term is used even if no physical wiring is involved.) Confidentiality services counter this threat.

Modification or alteration, an unauthorized change of information, covers three classes of threats. Active wiretapping is a form of modification in which data moving across a network is altered; the term “active” distinguishes it from snooping (“passive” wiretapping). An example is the man-in-the-middle attack, in which an intruder reads messages from the sender and sends (possibly modified) versions to the recipient, in hopes that the recipient and sender will not realize the presence of the intermediary. Integrity services counter this threat.
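As a sketch of how an integrity service counters this threat, the snippet below attaches an HMAC tag to each message using a key shared by sender and recipient. The key and messages are hypothetical, and a real protocol would also need key management and replay protection:

```python
import hmac
import hashlib

SHARED_KEY = b"hypothetical-shared-key"  # known only to sender and recipient

def sign(message: bytes) -> bytes:
    """Sender attaches an HMAC tag computed with the shared key."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recipient recomputes the tag; a mismatch reveals modification."""
    return hmac.compare_digest(sign(message), tag)

# Sender transmits (message, tag).
message = b"transfer $100 to Alice"
tag = sign(message)

# A man-in-the-middle can alter the message in transit, but cannot
# forge a matching tag without the shared key.
altered = b"transfer $100 to Mallory"

print(verify(message, tag))   # → True: intact message accepted
print(verify(altered, tag))   # → False: modification detected
```

The design point is that the tag binds the message to a secret the intermediary does not hold, so alteration becomes detectable even though it cannot be prevented.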

Masquerading or spoofing, an impersonation of one entity by another, is a form of both deception and usurpation. It lures a victim into believing that the entity with which it is communicating is a different entity.

Delegation occurs when one entity authorizes a second entity to perform functions on its behalf. The distinctions between delegation and masquerading are important. If Susan delegates to Thomas the authority to act on her behalf, she is giving permission for him to perform specific actions as though she were performing them herself. All parties are aware of the delegation. Thomas will not pretend to be Susan; rather, he will say, “I am Thomas and I have authority to do this on Susan’s behalf.” If asked, Susan will verify this. On the other hand, in a masquerade, Thomas will pretend to be Susan. No other parties (including Susan) will be aware of the masquerade, and Thomas will say, “I am Susan.”

Repudiation of origin, a false denial that an entity sent (or created) something, is a form of deception. For example, suppose a customer sends a letter to a vendor agreeing to pay a large amount of money for a product. If the customer later denies having sent the letter, the customer has repudiated the origin of the agreement.

Delay, a temporary inhibition of a service, is a form of usurpation, although it can play a supporting role in deception. Typically, delivery of a message or service requires some time t; if an attacker can force the delivery to take more than time t, the attacker has successfully delayed delivery.

Denial of service, a long-term inhibition of service, is a form of usurpation, although it is often used with other mechanisms to deceive. The attacker prevents a server from providing a service. Denial of service or delay may result from direct attacks or from nonsecurity-related problems.

A security policy is a statement of what is, and what is not, allowed.

A security mechanism is a method, tool, or procedure for enforcing a security policy.

Prevention means that an attack will fail. For example, if one attempts to break into a host over the Internet and that host is not connected to the Internet, the attack has been prevented. Typically, prevention involves implementation of mechanisms that users cannot override and that are trusted to be implemented in a correct, unalterable way, so that the attacker cannot defeat the mechanism by changing it.

Detection is most useful when an attack cannot be prevented, but it can also indicate the effectiveness of preventative measures. Detection mechanisms accept that an attack will occur; the goal is to determine that an attack is under way, or has occurred, and report it.

Recovery has two forms. The first is to stop an attack and to assess and repair any damage caused by that attack. As an example, if the attacker deletes a file, one recovery mechanism would be to restore the file from backup tapes. In a second form of recovery, the system continues to function correctly while an attack is under way. This type of recovery is quite difficult to implement because of the complexity of computer systems.

A well-defined exception to the rules provides a “back door” through which the security mechanism (the locks) can be bypassed. The trust resides in the belief that this back door will not be used except as specified by the policy.

Let Q be the set of states that the security policy allows, and let R be the set of states to which the security mechanism restricts the system. The mechanism is secure if R ⊆ Q; it is precise if R = Q; and it is broad if there are states r such that r ∈ R and r ∉ Q.
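These definitions can be checked mechanically by modeling states as elements of Python sets; the state names below are hypothetical:

```python
# Q: states the security policy allows; R: states the mechanism permits.
Q = {"s1", "s2", "s3"}

def classify(R: set, Q: set) -> str:
    """Classify a mechanism by comparing its reachable states R with Q."""
    if R == Q:
        return "precise"   # exactly the allowed states
    if R <= Q:
        return "secure"    # only allowed states, but not all of them
    return "broad"         # some reachable state violates the policy

print(classify({"s1", "s2"}, Q))        # → secure
print(classify({"s1", "s2", "s3"}, Q))  # → precise
print(classify({"s1", "s4"}, Q))        # → broad
```

Note the check for precise must precede the check for secure, since R = Q implies R ⊆ Q.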

Trusting that mechanisms work requires several assumptions.

1. Each mechanism is designed to implement one or more parts of the security policy.
2. The union of the mechanisms implements all aspects of the security policy.
3. The mechanisms are implemented correctly.
4. The mechanisms are installed and administered correctly.

This aspect of trust is called assurance. It is an attempt to provide a basis for bolstering (or substantiating or specifying) how much one can trust a system.

A system is said to satisfy a specification if the specification correctly states how the system will function. A specification is a (formal or informal) statement of the desired functioning of the system. It can be highly mathematical, using any of several languages defined for that purpose.

Given a design, the implementation creates a system that satisfies that design. If the design also satisfies the specifications, then by transitivity the implementation will also satisfy the specifications. A program is correct if its implementation performs as specified.

Because formal proofs of correctness are so time-consuming, a posteriori verification techniques known as testing have become widespread. During testing, the tester executes the program (or portions of it) on data to determine if the output is what it should be and to understand how likely the program is to contain an error.
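A minimal sketch of such a posteriori testing, using a hypothetical absolute-value function as the program under test and plain assertions as the test data:

```python
def absolute(x: int) -> int:
    """Hypothetical program under test: return |x|."""
    return x if x >= 0 else -x

# The tester runs the program on chosen data and checks the output.
# Passing tests raise confidence but do not prove correctness:
# untested inputs may still expose an error.
assert absolute(5) == 5
assert absolute(-5) == 5
assert absolute(0) == 0
print("all tests passed")
```

Unlike a formal proof, this verifies behavior only on the inputs actually exercised.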

If the company is not able to meet its payroll because it does not know whom it is to pay, the company will lose the faith of its employees.

First, risk is a function of environment. Attackers from a foreign country are not a threat to the company when the computer is not connected to the Internet. Second, the risks change with time. If a company’s network is not connected to the Internet, there seems to be no risk of attacks from other hosts on the Internet. Third, many risks are quite remote but still exist. In the modem example, the company has sought to minimize the risk of an Internet connection. Finally, the problem of “analysis paralysis” refers to making risk analyses with no effort to act on those analyses.

Laws restrict the availability and use of technology and affect procedural controls.

The heart of any security system is people.

Society distinguishes between legal and acceptable practices. It may be legal for a company to require all its employees to provide DNA samples for authentication purposes, but it is not socially acceptable.

Security provides no direct financial rewards to the user. It limits losses, but it also requires the expenditure of resources that could be used elsewhere.

Lack of resources is another common problem. Securing a system requires resources as well as people. It requires time to design a configuration that will provide an adequate level of security, to implement the configuration, and to administer the system.

People who have some motive to attack an organization and are not authorized to use that organization’s systems are called outsiders and can pose a serious threat. Experts agree, however, that a far more dangerous threat comes from disgruntled employees and other insiders who are authorized to use the computers. Insiders typically know the organization of the company’s systems and what procedures the operators and users follow, and often know enough passwords to bypass many security controls that would detect an attack launched by an outsider. Insider misuse of authorized privileges is a very difficult problem to solve.

Many successful break-ins have arisen from the art of social engineering. If operators will change passwords based on telephone requests, all an attacker needs to do is to determine the name of someone who uses the computer.

Detection mechanisms may analyze system events (user or system actions) to detect problems or (more commonly) may analyze the data itself to see if required or expected constraints still hold.

When denial of service or delay arises from nonsecurity-related problems, the mechanisms for keeping the resource or data available are working in an environment for which they were not designed. As a result, they will often fail.


Security rests on assumptions specific to the type of security required and the environment in which it is to be employed.


The defining quality of a specification is a statement of what the system is allowed to do or what it is not allowed to do.


The most common problem a security manager faces is the lack of people trained in the area of computer security.

This notion of “trust” is the central notion for computer security. If trust is well placed, any system can be made acceptably secure. If it is misplaced, the system cannot be secure in any sense of the word.
