Disaster Recovery Planning and Contingency Planning – Lecture 1

CLO 1: Need for disaster recovery planning

  • Contingency planning involves being ready for incidents and disasters.
  • Course objective: Analyze the need for disaster recovery planning.
  • Statistical context: an estimated 1% of online users are potential attackers, which at the time implied roughly two and a half million people.
  • Historical example: World Trade Center (WTC) organizations had contingency plans due to the February 1993 attack.
  • Information security encompasses contingency planning to ensure confidentiality, integrity, and availability (CIA) of data.

Chapter Overview

  • Core topics:
    • Information Security
    • Overview of Risk Management
    • Contingency Planning and Its Components
    • Role of Information Security Policy in Developing Contingency Plans

Information Security: CNSS Definition and CIA

  • CNSS information security definition: Protection of information and its critical elements.
    • Includes systems and hardware storing, transmitting information.
  • Information security (InfoSec): Protection of confidentiality, integrity, and availability of information in storage, during processing, and during transmission.
  • CIA triangle is foundational to the CNSS model.

Key Information Security Concepts

  • Threat: An object, person, tool, or other entity posing a potential risk of loss or damage to an asset.
  • Asset: Organizational resource being protected (logical or physical).
  • Attack: Attempt to cause damage to or compromise information or supporting systems; arises from a threat; may be intentional or unintentional.
  • Threat-agent or actor: A specific threat instance that exploits asset vulnerabilities.
  • Vulnerability: Flaw or weakness in system security procedures, design, implementation, or internal controls; can be well-known or latent; can be exploited accidentally or intentionally.
  • Exploit: A technique used by a threat agent to compromise a system or its information through illegal use; an exploit is often targeted at a specific vulnerability.
  • Control/safeguard/countermeasure: Security mechanisms that reduce risk.

Threat Categories and Data (Table 1-1)

  • Threat Category rankings (2010/Prior rankings) as listed:
    • Espionage or trespass – 2010: 1; Prior: 4
    • Software attacks – 2010: 2; Prior: 1
    • Human error or failure – 2010: 3; Prior: 3
    • Theft – 2010: 4; Prior: 3
    • Compromises to intellectual property – 2010: 5; Prior: 9
    • Sabotage or vandalism – 2010: 6; Prior: 5
    • Technical software failures or errors – 2010: 7; Prior: 2
    • Technical hardware failures or errors – 2010: 8; Prior: 6
    • Forces of nature – 2010: 9; Prior: 8
    • Deviations in QoS from service providers – 2010: 10; Prior: 10
    • Technological obsolescence and information extortion are also listed (Tables 1-1 and 1-2).
  • Source: 2003 Study, Communications of the ACM (Table 1-1).

Common Malicious Code and Attack Vectors

  • Common malicious code instances: viruses, worms, Trojan horses, logic bombs, bots, rootkits, back doors, denial-of-service (DoS), distributed DoS (DDoS).
  • Malware threats: vary by propagation method, payload, and infection vector.
  • Viruses: segments of code that perform malicious actions; macro viruses are embedded in automatically executing macro code; boot viruses infect key operating system files located in a computer's boot sector.
  • Worms: replicate themselves without needing another program; can propagate until resources are exhausted.
  • Back doors/trap doors: installed by payload to provide privileged access; allow unauthorized control.
  • Polymorphism: threat changes shape to evade detection.
  • Propagation vectors: social engineering; Trojan horse masquerades as desirable software; use of open network connections, file shares, or software vulnerabilities.
  • Malware hoaxes: false warnings designed to mislead and waste time/resources.

Human Factors, Theft, IP, and Sabotage

  • Human error or failure: acts performed by an authorized user without malicious intent; small mistakes can cause large damage; covers both accidental slips and deliberate, though non-malicious, failure to follow policy.
  • Theft: illegal taking of property (physical, electronic, or intellectual); includes espionage and breach of confidentiality; methods include competitive intelligence and loss of mobile devices; stored information is often more important than the devices themselves.
  • Compromises to Intellectual Property (IP): ownership of ideas and their representations; includes trade secrets, copyrights, trademarks, patents; exfiltration or unauthorized removal; software piracy.
  • Sabotage or vandalism: destroys assets or damages the organization’s image; cyberterrorism; asset or website destruction.

Software and Hardware Failures; Natural Disasters

  • Technical software failures or errors: software with unknown or hidden faults; bugs can be security-related; traps/bugs in code and vulnerability databases like Bugtraq and National Vulnerability Database.
  • Technical hardware failures or errors: equipment flaws; system operates outside expected parameters; errors may be terminal or intermittent.
  • Forces of nature: force majeure or acts of God; earthquakes, floods, hurricanes; high danger with little warning.

Threats to Service and Obsolescence

  • Deviations in QoS by service providers: service interruptions due to storms, illnesses, or unforeseen events.
  • Technological obsolescence: outdated infrastructure leading to unreliable systems; risk to data integrity from attacks.
  • Information extortion: attacker or insider steals data and demands compensation or non-disclosure; common in credit card theft.

Data Classification and Management

  • Data classification schemes: require data to be classified into mutually exclusive categories based on protection needs.
  • Military classification ratings (example): Public → For Official Use Only → Confidential → Secret → Top Secret.
  • Alternative classification scheme:
    • Public: general public dissemination
    • For Official Use: not highly sensitive but not for public release
    • Sensitive: important to business; could cause embarrassment or loss of market share if revealed
    • Classified: utmost security; disclosure could severely impact the organization
  • Personnel security clearances: on a need-to-know basis.
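The clearance ladder above can be modeled as an ordered list, with access granted only when a clearance dominates the data's classification and need-to-know holds. A minimal sketch; the function name and the boolean need-to-know field are assumptions for illustration, not from the source:

```python
# Hypothetical classification check: access requires a clearance at or
# above the data's level, plus need-to-know.
LEVELS = ["Public", "For Official Use Only", "Confidential", "Secret", "Top Secret"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def can_access(clearance, classification, need_to_know):
    """True only if the clearance dominates the label AND need-to-know holds."""
    return need_to_know and RANK[clearance] >= RANK[classification]

print(can_access("Secret", "Confidential", True))    # True: Secret dominates Confidential
print(can_access("Secret", "Top Secret", True))      # False: insufficient clearance
print(can_access("Top Secret", "Public", False))     # False: no need-to-know
```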

Threat Identification and Vulnerability Identification

  • Threat identification: conduct a threat assessment to determine which threats present danger, which threaten assets in the environment, which would cost the most to recover from, and which require the greatest prevention expenditure.
  • Vulnerability identification: review each asset and threat to create a list of vulnerabilities; examine how threats could be perpetrated; note that threats may yield multiple vulnerabilities; involve people with diverse backgrounds in the assessment.
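One way to capture the asset-to-threat-to-vulnerability review is a simple worksheet structure, reflecting the note that one threat may yield multiple vulnerabilities. The assets and entries below are invented placeholders:

```python
# Hypothetical threat/vulnerability worksheet: each asset maps to threats,
# and each threat to the vulnerabilities it could exploit.
tva = {
    "web server": {
        "software attacks": ["unpatched service", "weak admin password"],
        "technical hardware failures": ["single power supply"],
    },
    "customer database": {
        "espionage or trespass": ["excessive account privileges"],
    },
}

def vulnerabilities_for(asset):
    """Flatten all vulnerabilities recorded for one asset."""
    return [v for vulns in tva.get(asset, {}).values() for v in vulns]

print(vulnerabilities_for("web server"))
```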

Risk Assessment and Risk Management

  • Risk assessment: the process of assigning a risk rating or score to each information asset, using factors such as likelihood and impact to determine the relative risk posed by its vulnerabilities; the CVSS calculator is noted as a scoring reference.
  • Likelihood: probability that a specific vulnerability will be successfully attacked.
  • Risk identification, risk assessment, risk management: see Figure 1-2 (Component diagram).
  • Risk management workflow:
    • Identify assets
    • Inventory and classify assets
    • Identify threats & vulnerabilities
    • Assess risk
    • Analyze risk and determine management approach
    • Reduce risk and control it
    • Transfer risk where appropriate
  • Asset identification and value assessment is iterative and requires classification and weighting.

Asset Identification and Value Assessment

  • Asset identification is iterative and involves classifying information assets with respect to security needs.
  • Weighting criteria: establish weights (1-100) and scores (0.1 to 1.0) for each criterion to compute a weighted score.
  • Example criteria (Table 1-3):
    • Criterion 1: Impact on Revenue (weight 30)
    • Criterion 2: Impact on Profitability (weight 40)
    • Criterion 3: Impact on Image (weight 30)
  • Example worksheet (partial):
    • EDI Document Set 1-Logistics BOL to outsourcer (outbound): scores 0.8, 0.9, 0.5; Weighted score 75
    • EDI Document Set 2-Supplier orders (outbound): scores 0.8, 0.9, 0.6; Weighted score 78
    • EDI Document Set 2-Supplier fulfillment advice (inbound): scores 0.4, 0.5, 0.3; Weighted score 41
    • Customer order via SSL (inbound): scores 1.0, 1.0, 1.0; Weighted score 100
    • Customer service request via e-mail (inbound): scores 0.4, 0.4, 0.9; Weighted score 55
  • The table example demonstrates how to compute a composite asset value using weighted factors.
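The Table 1-3 arithmetic can be reproduced directly: each composite value is the sum of criterion weight times score. A minimal sketch of that computation:

```python
# Weighted factor analysis as in Table 1-3: criterion weights sum to 100,
# per-criterion scores range from 0.1 to 1.0, and the composite asset
# value is sum(weight * score).
WEIGHTS = [30, 40, 30]  # Impact on Revenue, Profitability, Image

def weighted_score(scores):
    """Composite asset value from per-criterion scores."""
    return round(sum(w * s for w, s in zip(WEIGHTS, scores)), 1)

print(weighted_score([0.8, 0.9, 0.5]))  # EDI Document Set 1 -> 75.0
print(weighted_score([1.0, 1.0, 1.0]))  # Customer order via SSL -> 100.0
print(weighted_score([0.4, 0.4, 0.9]))  # Customer service e-mail -> 55.0
```

The rounding only guards against floating-point noise; the weights and scores are exactly those from the worksheet above.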

Data Classification and Management (cont’d.)

  • Data classification schemes (continued): ensure the classification aligns with the protection needs.
  • Data classification categories should be mutually exclusive and comprehensive.

Threat Identification and Vulnerability Identification (Chapter recap)

  • Threat assessment questions: which threats present danger; which threats are most dangerous; which threats would cost the most to recover from; which threats require the greatest expenditure to prevent.
  • Vulnerability assessment: map assets to potential threats and identify vulnerabilities; plan to mitigate those vulnerabilities.

Risk Assessment and Risk Control Strategies

  • Risk control strategies can be grouped as:
    • Defense (preferred): prevent exploitation via policy, training, technology.
    • Transference: shift risk to other assets, processes, organizations, or insurance.
    • Mitigation: reduce impact through contingency planning (BIA, IRP, DRP, BCP).
    • Acceptance: do nothing if risk is deemed acceptable after cost-benefit analysis.
    • Termination: remove asset from the environment when risk outweighs asset value or feasibility of protection.
  • Defense through policy, training, and technology is typically required; protection aims to minimize probability of successful exploitation and reduce impact.
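As a rough illustration of the cost-benefit logic behind these strategies, consider the toy decision below; the thresholds and rules are invented for illustration and are not from the source:

```python
# Hypothetical cost-benefit sketch for choosing among the risk control
# strategies above; real decisions weigh many more factors.
def choose_strategy(asset_value, residual_risk, control_cost, insurable):
    if residual_risk <= 0.05:            # risk already acceptable after analysis
        return "acceptance"
    if control_cost > asset_value:       # protection costs more than the asset is worth
        return "termination"
    if insurable and control_cost > 0.5 * asset_value:
        return "transference"            # shift the risk (e.g., insurance, outsourcing)
    return "defense"                     # preferred: prevent exploitation outright

print(choose_strategy(asset_value=100_000, residual_risk=0.30,
                      control_cost=20_000, insurable=True))   # defense
```

Mitigation, which reduces impact through contingency planning (BIA, IRP, DRP, BCP), complements whichever strategy is chosen, so it is not modeled as a branch here.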

Contingency Planning and Its Components

  • Contingency plan: plans to anticipate, react to, and recover from events threatening operations; restores organization to normal operations.
  • Four subordinate functions:
    • Business Impact Analysis (BIA)
    • Incident Response Planning (IRP)
    • Disaster Recovery Planning (DRP)
    • Business Continuity Planning (BCP)

Business Impact Analysis (BIA)

  • BIA involves investigation and assessment of the impact of attacks.
  • Adds detail to the prioritized threat and vulnerability list created in risk management.
  • Provides detailed scenarios of the potential impact of each attack type; the CVSS calculator is again noted as a reference for scoring impact.

Incident Response Plan (IRP)

  • Incident: any clearly identified attack on assets.
  • IRP deals with identification, classification, response, and recovery from an incident.
  • IRP assesses likelihood of imminent damage, informs key decision makers, and enables coordinated action.

Disaster Recovery Plan (DRP)

  • Focus: preparation for and recovery from natural or man-made disasters.
  • Includes: preparations for the recovery process, strategies to limit losses during disaster, and detailed steps after immediate danger.
  • Focus areas: preparation before the incident; actions after the incident.

Business Continuity Plan (BCP) and Business Resumption Plan (BRP)

  • BCP: ensures critical business functions continue at an alternate location after a catastrophic incident.
  • BRP: emerging concept that merges DRP and BCP into a single process.

Contingency Planning Timeline (High-Level)

  • Timeline describes sequencing of IRP, DRP, and BCP during incidents:
    • IRP focuses on immediate response; may move to DRP/BCP if the event escalates.
    • DRP focuses on restoring systems at the original site; BCP runs concurrently with DRP.
    • In major/long-term damage, IRP, DRP, and BCP distinctions and actions are invoked as needed.
  • Contingency planning timelines are illustrated with figures (e.g., Figure 1-5): event occurs → incident detected/response/recovery → operations restored; post-event timelines in hours/days.

Major Steps in Contingency Planning (Figure 1-6)

  • Form the CP team.
  • Conduct the Business Impact Analysis (BIA).
  • Develop subordinate planning policies (IR/DR/BC).
  • Create response strategies (IR/DR/BC).
  • Determine mission/business processes and recovery criticality.
  • Integrate the BIA.
  • Develop subordinate plans (IR/DR/BC).
  • Develop the CP policy statement.
  • Identify resource requirements.
  • Identify recovery priorities for system resources.
  • Identify preventive controls.
  • Ensure plan testing, training, and exercises.
  • Form subordinate planning teams (IR/DR/BC).
  • Organize response teams (IR/DR/BC).
  • Ensure plan maintenance.

Seven Steps in NIST SP 800-34 (Revision 1) – Key Stages

  • The seven steps listed on the slide, in the order shown:
    • Develop IT contingency planning policy statement
    • Identify critical IT resources
    • Identify outage impacts and allowable outage times
    • Develop recovery priorities
    • Develop contingency plan
    • Plan testing, training, and exercises
    • Plan maintenance
  • Note: In SP 800-34, identifying critical IT resources, outage impacts and allowable outage times, and developing recovery priorities together constitute the business impact analysis (BIA); the slide also references policy, analysis, and other preparatory elements of the overarching process.

Role of Information Security Policy in Developing Contingency Plans

  • Policy must enforce information protection requirements before, during, and after an incident.
  • High-quality security programs begin and end with policy.
  • InfoSec is a management problem requiring robust policy and governance.
  • Difficulties in shaping policy include:
    • Must never conflict with laws; must stand up in court if challenged.
    • Must be properly administered and kept up-to-date.

Key Policy Definitions

  • Policy: plan or course of action; conveys instructions from senior management to decision-makers, operators, and implementers; is organizational law; dictates acceptable and unacceptable behavior; defines penalties for violations.
  • Standard: detailed statement of what must be done to comply.
  • Practice: activities performed within the organization to improve security; includes procedures and guidelines that meet standards.
  • Procedure: step-by-step instructions implementing a policy; more granular than standards.
  • Guidelines: recommended practices; not mandatory but advised.
  • De jure standard: a standard formally published or mandated by an official body; de facto standard: an informal standard accepted through widespread practice.
  • Policies are sanctioned by senior management; standards are built on policy; practices/procedures/guidelines specify how to meet standards.

Policy, Standards, Practices, Procedures, and Guidelines (Illustration)

  • Diagrammatic relationship (Figure 1-9):
    • Policies drive Standards, which in turn shape Practices, Procedures, and Guidelines.

Examples and Table Details

  • Example policy and procedure:
    • Policy: Accept all customer-returned merchandise.
    • Procedure: Follow all steps for completing merchandise return documentation.
    • Rule: Managers must approve all refunds over $50.00.
  • Policy definitions (continued):
    • Mission: Written statement of the organization’s purpose.
    • Vision: Written statement about organization’s goals.
    • Strategic planning: Process of moving toward the vision.
    • Information security policy (InfoSec policy): Provides rules for protecting information assets.
    • Types of policies: Enterprise Information Security Policy (EISP), Issue-Specific Security Policy (ISSP), Systems-Specific Security Policy (SysSP).

Enterprise Information Security Policy (EISP)

  • EISP is executive-level and supports mission, vision, and direction of the organization.
  • It sets strategic direction, scope, and tone for security efforts.
  • Contains requirements to be met; defines purpose, scope, constraints, and applicability.
  • Assigns responsibilities and addresses legal compliance.

Issue-Specific Security Policy (ISSP)

  • ISSP addresses specific areas of technology.
  • Approaches to ISSPs:
    • Independent ISSP documents tailored to specific issues.
    • A single comprehensive ISSP document covering all issues.
    • Other approaches.
  • Key components (Table 1-5):
    1) Statement of policy (scope, applicability, responsibilities, technology addressed)
    2) Authorized access and usage of equipment (user access, fair and responsible use, privacy)
    3) Prohibited usage of equipment (disruptive use, criminal use, harassment, IP issues, etc.)
    4) Systems management (management of stored materials, monitoring, antivirus, physical security, encryption)
    5) Violations of policy (reporting, penalties)
    6) Policy review and modification (review schedule)
    7) Limitations of liability (disclaimers)

Systems-Specific Security Policy (SysSP)

  • SysSPs are standards and procedures used when configuring or maintaining systems.
  • Access Control Lists (ACLs) govern rights and privileges for users on particular systems.
  • ACLs translate into configuration rules; regulate who can do what, when, and where.
  • Rule policies are more specific to system operation than ACLs and may or may not deal with users directly.
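The "who can do what, when, and where" idea behind ACLs can be sketched as entries checked per request. The fields and sample entries below are invented for illustration:

```python
# Hypothetical ACL sketch: each entry lists subject, allowed actions,
# permitted hours, and originating network ("who, what, when, where").
from datetime import time

ACL = [
    {"user": "alice", "actions": {"read", "write"},
     "hours": (time(8), time(18)), "network": "10.0.0.0/8"},
    {"user": "bob", "actions": {"read"},
     "hours": (time(0), time(23, 59)), "network": "10.0.0.0/8"},
]

def permitted(user, action, now, network):
    """Grant the request only if some ACL entry matches on all four fields."""
    for entry in ACL:
        if (entry["user"] == user and action in entry["actions"]
                and entry["hours"][0] <= now <= entry["hours"][1]
                and entry["network"] == network):
            return True
    return False

print(permitted("alice", "write", time(9), "10.0.0.0/8"))   # True
print(permitted("bob", "write", time(9), "10.0.0.0/8"))     # False: bob is read-only
```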

Policy Management

  • Policies are dynamic: they constantly change and grow.
  • Effective policy management requires:
    • An individual responsible for creation, revision, distribution, and storage
    • A schedule of reviews
    • A mechanism for recommending revisions
    • Policy revision dates and possibly sunset dates
    • Policy management software (preferred but optional)
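The review-schedule and sunset-date requirements lend themselves to a simple date check. A minimal sketch; the field names and sample policies are assumptions for illustration:

```python
# Hypothetical sketch: flag policies that are past their scheduled review
# or past a sunset date, per the management requirements above.
from datetime import date

policies = [
    {"name": "acceptable use", "next_review": date(2024, 1, 1), "sunset": None},
    {"name": "modem access", "next_review": date(2025, 6, 1), "sunset": date(2024, 12, 31)},
]

def needs_attention(policy, today):
    """True when a policy is overdue for review or past its sunset date."""
    overdue = today > policy["next_review"]
    expired = policy["sunset"] is not None and today > policy["sunset"]
    return overdue or expired

today = date(2025, 3, 1)
print([p["name"] for p in policies if needs_attention(p, today)])
```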

Summary (Key Takeaways)

  • Information security protects information and its critical elements; CIA framework underpins risk and protection strategies.
  • Threats, assets, vulnerabilities, and risks are core concepts in risk management.
  • Risk management involves identifying vulnerabilities, assessing risk, and applying controls to reduce risk, including transfer, mitigation, acceptance, or termination strategies.
  • Contingency planning comprises BIA, IRP, DRP, and BCP to prepare for, respond to, recover from, and continue operations after incidents.
  • Policy plays a central role in shaping security posture, driving standards, and guiding procedures; enterprise, issue-specific, and systems-specific policies each serve distinct purposes.
  • The NIST SP 800-34 framework provides a structured approach to contingency planning, including policy development, risk assessment, preventive controls, and testing/maintenance.
  • Real-world relevance: contingency planning is essential for resilience in organizations facing cyber threats, natural disasters, and operational disruptions.

Equations and LaTeX-formatted content

  • Risk as a function of impact and likelihood:
    \text{Risk} = \text{Impact} \times \text{Likelihood}
  • Alternative risk formulation (illustrative):
    \text{Risk} = (\text{Likelihood} \times \text{Value}) - \%\ \text{of risk currently controlled} + \text{uncertainty of assumptions}
  • Weighted factor analysis (example from Table 1-3):
    • Criterion weights: Impact on Revenue = 30; Impact on Profitability = 40; Impact on Image = 30
    • Weighted scores for assets (sample entries):
      • EDI Document Set 1-Logistics BOL to outsourcer (outbound): scores 0.8, 0.9, 0.5; weighted score = 75
      • EDI Document Set 2-Supplier orders (outbound): scores 0.8, 0.9, 0.6; weighted score = 78
      • EDI Document Set 2-Supplier fulfillment advice (inbound): scores 0.4, 0.5, 0.3; weighted score = 41
      • Customer order via SSL (inbound): scores 1.0, 1.0, 1.0; weighted score = 100
      • Customer service request via e-mail (inbound): scores 0.4, 0.4, 0.9; weighted score = 55
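The alternative risk formulation can be exercised with a quick numeric sketch; all input values below are invented, with the controlled and uncertainty terms read as percentages of the base risk:

```python
# Hypothetical numbers plugged into the alternative risk formulation:
# risk = (likelihood x value) - risk already controlled + uncertainty,
# where the last two terms are percentages of the base risk.
def risk(likelihood, value, pct_controlled, pct_uncertainty):
    base = likelihood * value
    return base - base * pct_controlled + base * pct_uncertainty

# Asset valued at 50, certain attack (likelihood 1.0), half the risk
# already controlled, 10% uncertainty in the assumptions:
print(risk(1.0, 50, 0.50, 0.10))  # 50 - 25 + 5 = 30.0
```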

Figures and Tables referenced (described in notes)

  • Figure 1-2: Components of risk management (Identify, Assess, Analyze, Risk Management, Reduce, Control, Transfer).
  • Figure 1-5: Contingency planning timeline – IRP, DRP, BCP sequencing and concurrent operations.
  • Figure 1-6: Major steps in contingency planning (CP) lifecycle — CP team formation, BIA, subordinate plans, policy, resource identification, testing, maintenance.
  • Figure 1-7: Stages of contingency planning per NIST SP 800-34 (policy, analysis, identify, contingency planning, preventive controls, recovery strategies, develop contingency plan, testing/exercises, maintenance).

End of notes