Software development notes

SDLC

  • The Software Development Life Cycle (SDLC) is the structured process of creating software or a system using defined models and methodologies.

Different Phases of SDLC

  • Defining the Problem Phase
  • Planning Phase
  • Feasibility Study Phase
  • Analysis Phase
  • Requirement Engineering Phase
  • Design Phase
  • Development/Coding Phase
  • Testing/Verification Phase
  • Deployment/Implementation Phase
  • Documentation Phase
  • Maintenance/Support Phase

Defining the Problem Phase

  • The problem to be solved or system to be developed is clearly defined.
  • All requirements are documented and approved by the customer or company.

Example

  • Students' Examination System Development
    Defining the problem:

  • A Students' Examination System needs to be developed that covers all aspects, from conducting examinations to generating students' results.

Planning Phase

  • The project's goal is identified.
  • The necessary requirements for product development are assessed.
  • A thorough evaluation of resources, including personnel and costs, is conducted, accompanied by the conceptualization of the new product.
  • The gathered information undergoes analysis to explore potential alternative solutions. If no feasible alternatives are found, the data is organized into a comprehensive project plan, which is then presented to management for approval.

Example

  • In the Students' Examination System Development project, a plan is made to set the ultimate goals, and an estimate of resources, such as personnel and costs, is prepared.

Feasibility Study

  • Feasibility study is the analysis and evaluation of a proposed system to determine whether it is technically, financially/economically, legally, and operationally feasible within the estimated cost and time.

Different Feasibility Studies

  • Technical Feasibility
  • Economic Feasibility
  • Operational Feasibility
  • Legal Feasibility
  • Schedule Feasibility

Technical Feasibility

  • Assesses the practicality of implementing a proposed project from a technological standpoint.
  • Evaluates whether the necessary technology (hardware, software), tools, and resources are available or can be developed to support the system.

Example

  • Consider the Students' Examination System Development project. The developing company will evaluate whether the existing infrastructure (hardware/software) can support the proposed system.

Economic Feasibility

  • Evaluates the financial viability of a proposed system by comparing its costs and benefits.

Operational Feasibility

  • Assesses the extent to which the proposed system aligns with the organization's operational processes and goals.

Legal Feasibility

  • Evaluates whether the proposed system complies with applicable laws, regulations, and standards.

Schedule Feasibility

  • Assesses whether a system can be completed within a specified timeframe.

Analysis Phase

  • During the analysis phase, the project team determines the end-user requirements.
  • The team is assisted by client focus groups, which explain their needs and expectations for the new system and how it should perform.
  • The project lead must decide whether the project should proceed with the available resources.
  • Analysis also looks at the existing system to see what and how it is doing its job.

Requirement Engineering Phase

  • Focuses on gathering, analyzing, documenting, and managing requirements for the development of the proposed system.
  • Ensures that the software meets the needs and expectations of stakeholders (e.g., End users).

Steps of Requirement Engineering

  • Requirement gathering
  • Requirement validation
  • Requirements management

Requirement Gathering

  • Aims to identify and document the needs and expectations of stakeholders.
    Various techniques are employed for this purpose.

Requirement Gathering Types

  • Interviews
  • Surveys and Questionnaires
  • Observation
  • Document Analysis

Interviews

  • It involves direct conversations with stakeholders to gather information about their needs, expectations, and preferences.

Surveys and Questionnaires

  • This method involves distributing surveys or questionnaires to collect information from a large number of stakeholders.

Observation

  • This technique involves observing users in their natural work environment to understand how they currently perform tasks and identify areas for improvement.

Document Analysis

  • This approach includes reviewing existing documentation, reports, and manuals to extract relevant information about the current system or processes.

Requirements Validation

  • Focuses on scrutinizing the gathered requirements to ensure they align with the stakeholders' intentions.
  • Requirement Verification: Ensures that the product is built correctly according to the specified requirements and design.
  • Requirement Validation: Ensures that the final product meets the user's needs and intended use.

Difference between Verification and Validation

Requirement Verification

  • Purpose: Ensures that the product is built correctly according to the specified requirements and design.
  • Focus: Checks for errors and compliance during development.
  • Question: "Are we building the product right?"
  • Methods: Reviews, inspections, and testing against specifications.

Requirement Validation

  • Purpose: Ensures that the final product meets the user's needs and intended use.
  • Focus: Confirms that the right product is built.
  • Question: "Are we building the right product?"
  • Methods: User acceptance testing and real-world evaluations.
  • Verification is about building the product correctly, while validation is about building the correct product.

Requirements Management

  • A continual process aimed at guaranteeing that the software consistently fulfills the expectations of both the acquirer and users.
  • It involves the collection of new requirements that may emerge due to evolving expectations, changing regulations, or other sources of modification.

Design Phase

  • The design phase is a crucial part of the software development lifecycle (SDLC) as it serves as a bridge between the requirements gathering phase and the actual coding of the software.
  • During this phase, the focus is on creating a blueprint or model of the software that guides developers in implementing the system.

Structures used in the design phase

Algorithms

  • Algorithms are precise and systematic procedures designed to guide the step-by-step solution of a problem. They provide a structured and detailed set of instructions for solving a particular problem or performing a specific task.
Example
  • In the Students' Examination System Development project, for example, an algorithm is needed that finds a student's result from percentage marks.
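Such an algorithm can be sketched in code. The pass mark and grade boundaries below are illustrative assumptions, not taken from the original project specification:

```python
def student_result(percentage):
    """Return a (result, grade) pair for a percentage score.

    The grade boundaries are assumptions for illustration only.
    """
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if percentage >= 80:
        return ("Pass", "A")
    elif percentage >= 70:
        return ("Pass", "B")
    elif percentage >= 60:
        return ("Pass", "C")
    elif percentage >= 50:
        return ("Pass", "D")
    else:
        return ("Fail", "F")
```

A flowchart of the same logic would show one decision diamond per threshold, connected top to bottom.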

Flowcharts

  • A flowchart is a diagrammatic representation used to illustrate an algorithm or a process. It visually presents the sequence of steps in the algorithm through special shapes (symbols) and connects them with arrows to depict their sequence.

UML

  • Unified Modeling Language (UML) is a standardized visual modeling language widely used in software engineering and system design.
  • It plays a crucial role in the Software Development Life Cycle (SDLC) by providing a common notation that allows developers, analysts, and stakeholders to communicate and visualize the different aspects of a system.

Coding Phase

  • In the development/coding phase, developers translate the plans formulated in the design phase into a working system. They:
    • Develop the database structures.
    • Code the data flow processes; the actual coding is carried out using programming languages, a process commonly referred to as computer programming.
    • Design the tangible user interface screens.
    • Prepare test data.

Testing/Verification Phase

  • All aspects of the system are tested for functionality and performance.
  • Programming modules are executed to detect errors, commonly known as bugs.
  • Software is aligned with the required specifications.
  • It involves checking items for consistency by comparing results against predetermined requirements.

Types of Testing

Black Box Testing

  • Black Box Testing is a software testing method where the internal workings or logic of a system are not known to the tester. The focus is on evaluating the system's outputs based on specified inputs without considering its internal code structure.
Example
  • Login Functionality of a Web Application
    Scenario: Testing the login functionality of a web application.
    Tester's Perspective:

  • The tester doesn’t have access to the source code or knowledge of the internal implementation details.
    Test Cases:

  • Input: Valid username and password, Expected Output: Successful login.

  • Input: Incorrect password, Expected Output: Login failure.

  • Input: Invalid username, Expected Output: Login failure.

  • Input: Empty username and password, Expected Output: Login failure.
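The four test cases above can be expressed as assertions against the system under test. The `login` function here is a hypothetical stand-in for the web application's backend; in real black box testing the tester would drive the UI or API without ever seeing this code:

```python
# Hypothetical backend standing in for the real application (assumed
# names and credentials, for illustration only).
VALID_USERS = {"alice": "s3cret!"}

def login(username, password):
    # Reject empty usernames, then check the stored credential.
    return bool(username) and VALID_USERS.get(username) == password

# The four black-box test cases from the list above:
assert login("alice", "s3cret!") is True      # valid username and password
assert login("alice", "wrong") is False       # incorrect password
assert login("mallory", "s3cret!") is False   # invalid username
assert login("", "") is False                 # empty username and password
```

Note that the tests exercise only inputs and outputs; nothing about the dictionary lookup inside `login` is assumed by the test cases themselves.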

White Box Testing

  • White Box Testing is a testing approach where the tester has knowledge of the internal code, logic, and architecture of the system being tested
Example

In the Students' Examination System Development project, a programming module is tested for errors using white box testing.

Scenario: Testing a specific function within the application; here, the calculation of a student's percentage marks and result is tested.
Tester's Perspective:

  • The tester has access to the source code and understands the internal logic of the function being tested.
    Test Cases:

  • Test each statement within the program code.

  • Verify that variables are correctly initialized and updated.

  • Check for the errors and debug, if needed.

  • Evaluate the code's performance under various conditions by providing values.
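A white box tester, knowing the internal logic, chooses inputs that execute every statement and branch, including the boundary values. The function below is an assumed, simplified version of the result-calculation module (the 50% pass mark is an illustrative assumption):

```python
def result_from_percentage(percentage):
    # Internal logic known to the tester: a single threshold check.
    # The 50% pass mark is an assumption for illustration.
    if percentage >= 50:
        return "Pass"
    return "Fail"

# White-box cases chosen from knowledge of the code's branches:
assert result_from_percentage(75) == "Pass"   # true branch
assert result_from_percentage(30) == "Fail"   # false branch
assert result_from_percentage(50) == "Pass"   # boundary value itself
assert result_from_percentage(49) == "Fail"   # just below the boundary
```

The boundary cases (50 and 49) are exactly the kind of test a black box tester might miss but a white box tester selects deliberately, because the `>=` comparison is visible in the code.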

Deployment/Implementation Phase

  • Involves a series of activities aimed at making the software/system accessible for use.

Main Activities

  • Installation and activation of the hardware and software.
  • In some cases, the users and computer operations personnel are trained on the developed software system.
  • Conversion: The process of changing from the old system to the new one is called conversion.

Deployment/Implementation Methods

  • Direct
  • Parallel
  • Phased
  • Pilot

Direct

  • In this method, the old system is entirely replaced by the new system in a single step. The transition is abrupt, and once implemented, the old system becomes obsolete.

Parallel

  • The parallel method involves running both the old and new systems concurrently for a certain period. This approach allows for the identification and rectification of major issues with the new system without risking data loss.

Phased

  • The phased implementation method facilitates a gradual transition from the old system to the new one, introducing the new system in stages or modules.

Pilot

  • In the Pilot method, the new system is initially deployed for a small user group. These users engage with, assess, and provide feedback on the new system. Once the system is deemed satisfactory, it is then rolled out for use by all users.

Software Development Models

  • Methodologies or processes used to structure, plan, and control the development of a software system.

Waterfall Model

  • A linear and sequential approach where each phase must be completed before moving on to the next.
  • Phases include requirements, design, implementation, testing, deployment, and maintenance.
  • Suitable for projects with well-defined requirements but inflexible to changes.

Phases of the Waterfall Model

  • Requirements
  • Design
  • Development
  • Testing
  • Deployment
  • Maintenance

Types of Maintenance

  • Corrective Maintenance
  • Perfective Maintenance

Corrective Maintenance

  • Involves correcting errors left undiscovered during the development and testing stages.

Perfective Maintenance

  • Entails enhancing the functionality of the software product as and when required, keeping in mind future trends and customer demand.

Advantages of the Waterfall Model

  • Simplicity and Ease of Use
  • Clear Structure
  • Well-Documented Process
  • Defined Requirements
  • Stable and Predictable
  • Easy to Manage
  • Quality Control
  • Suitable for Smaller Projects
  • Minimal Customer Involvement

Disadvantages of the Waterfall Model

  • Inflexibility
  • Late Testing
  • Poor Adaptability to Changes
  • Assumes Stable Requirements
  • Lack of Customer Involvement
  • High Risk and Uncertainty
  • Not Suitable for Complex Projects
  • Delayed Deliverables

Agile Model

  • The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development.
  • Agile methods break the work into small iterations, or increments, and avoid extensive long-term planning.
  • The project scope and requirements are laid down at the beginning of the development process.
  • Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts from one to four weeks.
  • The division of the entire project into smaller parts helps to minimize the project risk and reduce the overall project delivery time requirements.

Phases of the Agile Model

  • Requirement gathering
  • Planning
  • Design
  • Implementation
  • Testing
  • Deployment
  • Maintenance

Advantages of the Agile Model

  • Flexibility and Adaptability
  • Customer Collaboration
  • Faster Delivery and Time-to-Market
  • Improved Quality
  • Enhanced Team Collaboration and Communication
  • Higher Customer Satisfaction

Disadvantages of the Agile Model

  • Less Predictability
  • Requires Active User Involvement
  • Potential for Scope Creep
  • Requires Experienced Team Members
  • Documentation Can Be Neglected
  • Can Be Resource-Intensive
  • The Agile model offers significant benefits, particularly in environments where requirements are likely to change, and quick delivery is essential.
  • However, it also presents challenges, especially regarding predictability and resource management.
  • Agile is most effective when teams are experienced and stakeholders are actively engaged throughout the development process.

Network topologies

  • Network topology is the systematic arrangement of computers and other devices, called "nodes", in a network. In a network topology, all nodes are physically or logically arranged in relation to each other.

Bus Topology

  • Bus topology is a network topology in which devices are connected to one cable or line running through the entire network.
  • Ethernet is commonly used in bus topologies.
  • Ethernet operates using the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol to manage access to the shared communication medium.
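The CSMA/CD procedure (sense the carrier, transmit, back off on collision) can be sketched as pseudocode-style Python. This is an illustrative simulation, not a real Ethernet driver; `medium_busy` and `detect_collision` are hypothetical callbacks standing in for the shared cable's state:

```python
import random

def csma_cd_send(medium_busy, detect_collision, max_attempts=16):
    """Simplified CSMA/CD sender (illustrative sketch only)."""
    for attempt in range(max_attempts):
        if medium_busy():
            continue               # 1. Carrier sense: wait for a quiet medium
        if not detect_collision():
            return attempt + 1     # 2. Transmitted without a collision
        # 3. Collision detected: binary exponential backoff, as in Ethernet.
        # A real NIC would wait this many slot times before retrying.
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
    return None                    # gave up after max_attempts
```

With a quiet medium and no collisions, the frame goes out on the first attempt; under persistent collisions, the random backoff window doubles each round until the sender gives up.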
Advantages of Bus Topology:
  • The main advantage of bus topology is that it is easy to install and maintain.
  • Bus topology is very cost-effective because all the nodes are connected to one cable.
  • Data transmission in bus topology is simple and direct, since all nodes communicate over a single shared cable.
  • Bus topology is relatively easy to expand by simply connecting additional nodes to the existing cable.
Disadvantages of Bus Topology:
  • In bus topology all nodes share the same bandwidth. This can lead to network congestion and slow down transmission speeds.
  • In this topology if one node goes down, it can cause a network failure because there is only one cable. In addition, the entire network will be affected if the main cable is damaged or cut.
Applications of Bus Topology:
  • Small office networks and home networks, due to its low cost and ease of use.

Star Topology

  • In a star topology, every device is linked to a central hub or switch.
  • In a star topology, devices are typically categorized as clients or end-user devices.
Advantages of Star Topology:
  • Star topology is relatively easy to install, as all nodes are connected directly to the hub or switch.
  • Star topology is reliable at the node level: if one node or its cable fails, the rest of the network remains operational. (The central hub or switch, however, is a single point of failure.)
  • Star topology has a high-performance rate due to its dedicated links between nodes and the hub, making it ideal for applications requiring fast data transfer.
Disadvantages of Star Topology:
  • Star topology can be more expensive than other types of networks because of the additional cost of hubs or switches required to connect all devices together.
  • If the hub or switch fails, the entire network will be affected.
Applications of Star Topology:
  • In large office buildings and campus networks.

Mesh Topology

  • Mesh topology is a network configuration where every node establishes connections with all other nodes within the network.
Advantages of Mesh Topology:
  • Mesh topology provides a high level of redundancy as each node is connected to all other nodes in the network. This ensures continuous data transmission even if one or more nodes experience failure.
  • Mesh topology is very reliable due to its redundant connections. If one node experiences a failure, the remaining network nodes continue to operate seamlessly.
  • Mesh topology demonstrates high flexibility and can be easily scaled to accommodate the requirements of larger networks.
Disadvantages of Mesh Topology:
  • The implementation of mesh topology can be costly due to the requirement for multiple links between nodes, increasing expenses related to hardware and installation.
  • Setting up a mesh topology network can be complex due to the many connections required between nodes.
Applications of Mesh Topology:
  • In corporate networks and military applications, wireless applications

Tree Topology

  • Tree topology (or hierarchical topology) is a network topology that utilizes a hierarchical or tree-like layout of interconnected nodes. It is similar to the star topology but with multiple hubs or switches instead of one, allowing for more efficient data transmission and scalability.
Advantages of Tree Topology:
  • Tree topology is more scalable than other types of networks as it allows for the addition and removal of nodes without disrupting the entire network. This makes it ideal for those networks that require frequent changes or expansion.
  • Tree topology provides a high level of reliability due to its redundant connections between nodes. If one node fails, the rest of the network will still be operational.
  • Tree topology is cost-effective as it eliminates the need for complex wiring and allows for more efficient data transmission.
Disadvantages of Tree Topology:
  • Installing a tree topology network can be complex due to the multiple connections required between nodes.
  • Troubleshooting a tree topology network can be difficult as there are multiple paths for data transmission.
Applications of Tree Topology:
  • Large corporate environments where departments or divisions can be organized in a hierarchical fashion,

Ring Topology

  • Ring topology is a network topology in which all devices are connected to one another in a circular loop. Data travels around the ring in one direction, passing through each device until it reaches its destination.
  • Ring topology is often used in Fiber Distributed Data Interface (FDDI) networks, which employ a dual ring.
Working of Ring Topology
  • In a Ring Topology any node gets access to the network by passing a special data packet, known as a "token," around the network. Only the device holding the token is allowed to transmit data. The token circulates continuously around the ring.
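Token passing can be illustrated with a small simulation. This is a toy sketch under assumed node names; real token-ring protocols add frame formats, token regeneration, and fault handling:

```python
def token_ring_deliver(nodes, src, dst):
    """Toy token-ring simulation: the token circulates in one direction
    until it reaches the sender, which then transmits; the frame passes
    node-to-node until it reaches the destination. Returns the list of
    nodes the frame travels through."""
    n = len(nodes)
    token_at = 0
    # Pass the token around the ring until the sender holds it.
    while nodes[token_at] != src:
        token_at = (token_at + 1) % n
    # The token holder transmits; the frame hops around the ring.
    path = []
    hop = token_at
    while nodes[hop] != dst:
        hop = (hop + 1) % n
        path.append(nodes[hop])
    return path
```

For a ring A → B → C → D, a frame from A to C passes through B and then arrives at C, always travelling in the ring's single direction.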
Dual Ring topology
  • In a dual ring topology, as used in FDDI networks, two separate rings are connected to form one large loop. This provides greater security and efficiency, as data can travel in both directions around the loop, clockwise or counterclockwise.
Advantages of Ring Topology:
  • Ring topology, particularly in a dual-ring configuration, provides reliability because data can take an alternate route in case of failure.
  • Ring topology is cost-effective as it eliminates the need for complex wiring and allows for more efficient data transmission.
Disadvantages of Ring Topology:
  • Ring topology faces the challenge of limited bandwidth since each device on the ring must share the same communication path. This sharing can result in bottlenecks and affect the data transmission speed.
  • Ring topology relies on a continuous circular connection between all nodes. If this connection is disrupted or broken at any point, the entire network may become unavailable.
  • Troubleshooting a ring topology network can be difficult and challenging.
Applications of Ring Topology:
  • Used in Local Area Networks (LANs), Dual ring topology is also used in fiber-optic networks.

Hybrid Topology

  • Hybrid Topology integrates different network configurations to establish a more efficient and dependable network.
Advantages of Hybrid Topology:
  • Hybrid topology is flexible and easily adaptable to changes in the network.
  • With multiple pathways for data transmission, hybrid topology offers higher reliability.
  • The configuration of hybrid topology facilitates high scalability, accommodating modifications or expansions in the network.
  • By merging two or more basic networks, hybrid topology reduces the overall cost of a network by reducing the amount of hardware and wiring needed.
  • Hybrid topology enhances security by providing multiple layers of protection, making unauthorized access to the network more challenging.
Disadvantages of Hybrid Topology:
  • Hybrid topology can be more complex to set up and maintain, as it requires the merging of different types of topologies.
  • Troubleshooting a hybrid topology network is more challenging as there are multiple pathways for data transmission.
Applications of Hybrid Topology
  • Used in large-scale applications and industrial applications

Comparison of all topologies w.r.t scalability and reliability

Topology | Scalability | Reliability
---------|-------------|------------
Bus    | Low - difficult to expand; adding nodes can affect performance | Low - failure of the central bus can bring down the entire network
Star   | High - easily scalable by adding more hubs or switches | High - failure of a single cable or node doesn't affect others; hub failure affects the entire network
Ring   | Moderate - adding nodes requires reconfiguration; larger rings can experience higher latency | Moderate - failure of a single node or cable can disrupt the entire network unless it's a dual-ring setup
Mesh   | High - nodes can be added without significantly affecting the network; full mesh is more scalable | Very High - redundant paths provide high fault tolerance; failure of a single connection does not disrupt the network
Tree   | High - scalable by adding more branches and nodes; hierarchical structure supports growth | Moderate - failure in one branch can affect all nodes within that branch, but not the entire network
Hybrid | High - combines aspects of other topologies to enhance scalability; highly adaptable | High - reliability varies depending on the combined topologies; can be designed for high fault tolerance

Explanation

  • Scalability: Refers to the ease with which a network can grow and accommodate additional nodes or resources.
    • High Scalability: Topologies like Star, Mesh, and Hybrid generally support easier expansion.
    • Moderate Scalability: Topologies like Ring and Tree are more complex to scale but can be managed with proper planning.
    • Low Scalability: Bus topology is challenging to expand without affecting performance.
  • Reliability: Refers to the network's ability to continue operating despite failures or faults.
    • Very High Reliability: Mesh topology provides multiple paths for data, ensuring that the network remains operational even if multiple connections fail.
    • High Reliability: Star and Hybrid topologies can be designed with redundant components and fault tolerance.
    • Moderate Reliability: Ring and Tree topologies have some inherent redundancy but may be affected by certain types of failures.
    • Low Reliability: Bus topology is susceptible to network-wide failures if the central bus fails.

Cloud Computing

  • Cloud computing is a technology model that provides access to computing resources and services over the internet, rather than owning and maintaining physical hardware and infrastructure.
  • In a cloud computing environment, users can access and use computing resources such as servers, storage, databases, networking, software, and analytics through the internet.

Key characteristics of cloud computing include:

  • On-Demand Self-Service
  • Broad Network Access
  • Resource Pooling
  • Rapid Elasticity
  • Measured Service
  • Security

Models of cloud computing

  • Cloud computing is typically categorized into three service models and four deployment models:

Service Models:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Deployment Models:

  • Public Cloud
  • Private Cloud
  • Hybrid Cloud
  • Community Cloud
Examples of Cloud Computing Services:
  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud Platform (GCP)
  • IBM Cloud

Scalability and Reliability in Cloud computing

  • Scalability in cloud computing refers to the ability of a cloud system or service to handle increasing workloads, either by expanding resources (scaling out) or upgrading existing ones (scaling up).

Types of scalability in cloud computing

  • Horizontal Scalability (Scaling Out)
  • Vertical Scalability (Scaling Up)
Horizontal Scalability
  • Means increasing the number of servers that run the application and distributing the workload among them.
Vertical Scalability
  • Means increasing the capacity of a single server by adding more resources, such as CPU, RAM, disk space, or network bandwidth.
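Horizontal scaling can be illustrated with a sketch: servers are added to a pool and incoming requests are spread across them. The server names and the round-robin dispatch policy are illustrative assumptions; real cloud platforms use managed load balancers and autoscaling groups:

```python
import itertools

class HorizontalScaler:
    """Sketch of scaling out: add servers and spread requests round-robin."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = itertools.cycle(self.servers)  # round-robin iterator

    def scale_out(self, server):
        """Add one more server to the pool (horizontal scaling)."""
        self.servers.append(server)
        self._rr = itertools.cycle(self.servers)

    def dispatch(self):
        """Pick the next server to handle an incoming request."""
        return next(self._rr)
```

Vertical scaling, by contrast, would change nothing in this picture except the capacity of each individual server.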

Reliability in Cloud Computing

  • Relates to the ability of a cloud service or infrastructure to consistently deliver its intended functionality and maintain uptime, often referred to as "high availability."
  • Reliability ensures that applications and services hosted in the cloud are accessible and perform as expected, minimizing downtime and disruptions.

Cyber Security

  • Cyber security is the protection of internet-connected systems such as computers, servers, mobile devices, electronic systems, networks, and data from malicious attacks. It is also known as information technology security or electronic information security.

Importance of Cybersecurity

  • Cyber security is vital for any organization: attackers target both small and large companies to obtain their essential documents and information.
  • As more and more data is stored and transmitted electronically, the risk of cyber-attacks has also increased.

Points that elaborate the need for cybersecurity.

  • Protecting Sensitive Data
  • Prevention of Cyber Attacks
  • Safeguarding Critical Infrastructure
  • Maintaining Business Continuity
  • Compliance with Regulations
  • Protecting National Security
  • Preserving Privacy

Cyber security threats

  • Malware (Malicious Software)
  • Phishing
  • Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks
  • Ransomware
  • Insider Threats
  • Cloud security threats

Malware (Malicious Software)

  • Malware is a broad category of software specifically designed to harm or exploit computer systems, steal data, or gain unauthorized access. Malware can take many forms and often disguises itself as legitimate software.
Types of malware
  • Viruses
  • Worms
  • Trojans
  • Ransomware
  • Spyware

Phishing

  • Phishing is a social engineering attack where cybercriminals impersonate trusted entities to deceive users into revealing sensitive information, such as login credentials, credit card details, or personal information.
Types of phishing
  • Spear Phishing
  • Email Phishing

Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks

  • DoS and DDoS attacks aim to overwhelm a target system or network with an excessive amount of traffic, rendering it inaccessible to legitimate users.
    • DoS: A single attacker floods the target with traffic, typically from a single source.
    • DDoS: Multiple compromised devices coordinate to flood the target, making it more challenging to mitigate.

Ransomware

  • Ransomware encrypts a victim's data and demands a ransom for the decryption key. Payment does not guarantee data recovery, and victims may lose access to critical information. Ransomware can lead to data loss, financial losses (including ransom payments), and operational disruptions.

Insider Threats

  • Insider threats involve individuals within an organization (employees, contractors, or business partners) who misuse their access to systems and data for malicious purposes, for example sharing sensitive information with external parties. This can lead to data breaches, financial losses, and reputational damage.

Cloud security threats

  • Cloud security threats are potential risks and vulnerabilities that can compromise the security of data, applications, and infrastructure hosted in cloud environments. As organizations increasingly adopt cloud computing services, it is important to be aware of these threats and take measures to mitigate them.

Methods and techniques for Protection against Cyber-Threats

Use strong passwords

  • A strong password typically possesses the following characteristics.
    Length: A strong password should be long, usually at least 12 characters. Longer passwords are harder to crack.
    Complexity: It should contain a mix of uppercase and lowercase letters, numbers, and special characters (e.g., !, @, #, $, %).
    Unpredictability: Avoid using easily guessable information like names, birthdays, or common phrases.
    Uniqueness: Use different passwords for different accounts. Reusing passwords increases the risk if one account is compromised.
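The length and complexity characteristics above can be checked programmatically. This is a minimal sketch: uniqueness and unpredictability cannot be verified from the string alone, so only length and character mix are enforced here:

```python
import string

def is_strong_password(password):
    """Check the length and complexity characteristics listed above.

    Uniqueness across accounts and unpredictability cannot be judged
    from the string itself, so this sketch does not cover them.
    """
    return (
        len(password) >= 12                                   # length
        and any(c.islower() for c in password)                # lowercase
        and any(c.isupper() for c in password)                # uppercase
        and any(c.isdigit() for c in password)                # number
        and any(c in string.punctuation for c in password)    # special char
    )
```

A real system would pair a check like this with a breached-password list and rate limiting rather than rely on composition rules alone.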

Keep your software up to date

2FA (Two-Factor Authentication)

  • Authentication is the process of verifying the identity of a user or system trying to access a resource; it ensures that the entity is who it claims to be. 2FA is an authentication method that requires users to provide two different forms of verification before granting access. Typically, it combines something the user knows (a password) with something they have, e.g., an OTP (one-time password) from a mobile app.
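The one-time passwords used in 2FA apps are typically generated with HOTP (RFC 4226); TOTP, the time-based variant most authenticator apps use, is HOTP with the counter derived from the current time. The sketch below follows the RFC 4226 construction for illustration and is not a vetted security implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password in the style of RFC 4226 (HOTP).

    Illustrative sketch; use an audited library in production.
    """
    msg = struct.pack(">Q", counter)                    # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the user's device share the secret; both compute the same code independently, so the code proves possession of the device without transmitting the secret.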

Be wary of suspicious emails

  • Be cautious of unsolicited emails, particularly those that ask for personal or financial information or contain suspicious links or attachments.

Educate yourself

  • Stay informed about the latest cybersecurity threats and best practices by reading cybersecurity blogs and attending cybersecurity training programs.

Firewalls

  • Firewalls are network security devices or software that monitor and control incoming and outgoing network traffic. They establish a barrier between a trusted internal network and untrusted external networks (e.g., the internet) to filter and block malicious traffic.
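The filtering a firewall performs can be sketched as first-match rule evaluation. The rule format, field names, and rule set below are illustrative assumptions, not the syntax of any real firewall product:

```python
def packet_allowed(packet, rules):
    """First-match packet filter sketch.

    Each rule is (action, protocol, port); "*" matches anything.
    Packets matching no rule are denied (default deny).
    """
    for action, protocol, port in rules:
        if protocol in ("*", packet["protocol"]) and port in ("*", packet["port"]):
            return action == "allow"
    return False  # default deny

# Example rule set (assumed for illustration):
rules = [
    ("allow", "tcp", 443),   # permit HTTPS
    ("allow", "tcp", 22),    # permit SSH
    ("deny",  "*",   "*"),   # block everything else
]
```

Real firewalls match on many more fields (source/destination address, interface, connection state), but the first-match, default-deny structure is the same idea.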

Antivirus and Anti-malware Software

  • Antivirus and anti-malware software are designed to detect, quarantine, and remove malicious software, such as viruses, Trojans, and spyware, from a system.

Encryption

  • Encryption transforms data (plaintext) into a coded format (cipher text) that can only be deciphered with the appropriate decryption key. It ensures the confidentiality and integrity of sensitive data.

Backup and Disaster Recovery

  • Regular data backups and disaster recovery plans ensure that in case of a cyber attack or data breach, critical data can be restored, minimizing downtime and data loss.

Cryptography

  • Cryptography is the science of securing information by transforming it into an unreadable format using mathematical algorithms. It ensures data confidentiality, integrity, and authentication.

Data encryption

  • Data encryption is a process of converting plaintext data into an unreadable format called cipher text using encryption algorithms and keys. The primary purpose of data encryption is to protect sensitive information from unauthorized access, ensuring that even if someone gains access to the encrypted data, they cannot decipher it without the appropriate decryption key.

How data encryption works

  • The encryption process involves applying the encryption algorithm to the plaintext using the encryption key to produce the cipher text. This cipher text is a scrambled and unreadable version of the original data. To decrypt the data back into its original plaintext form, the recipient needs to possess the decryption key, which is used with the decryption algorithm to reverse the process.
  • Plaintext: This is the original, human-readable form of the data that you want to protect. It can be any type of digital information, such as text, files, or communication messages.
  • Encryption Algorithm: An encryption algorithm is a mathematical formula or process that transforms plaintext into cipher text. Different encryption algorithms have varying levels of complexity and security.
  • Encryption Key: A key is a piece of information used by the encryption algorithm to control the transformation of plaintext into cipher text and vice versa.
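The plaintext/algorithm/key relationship above can be shown with a deliberately simple cipher. This repeating-XOR sketch is insecure and for illustration only (real systems use algorithms such as AES); it shows how the same key turns plaintext into cipher text and back:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    XOR is its own inverse, so one function both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"Exam results are confidential"
key = b"secret-key"

ciphertext = xor_cipher(plaintext, key)   # scrambled, unreadable without the key
recovered = xor_cipher(ciphertext, key)   # the same key reverses the process
assert recovered == plaintext
```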

Cryptography with encryption

  • Cryptography and encryption are related concepts in the field of information security, but they are not synonymous.
  • Cryptography is the science of concealing messages with secret codes; encryption is the process of actually encrypting and decrypting data.
  • The first is the study of methods for keeping a message secret between two parties (such as symmetric and asymmetric key schemes); the second is one technique that study produces.

Types of encryption algorithm

  • Symmetric Encryption
  • Asymmetric Encryption

Symmetric Encryption

  • A fundamental data protection technique that relies on a single cryptographic key for both encrypting plaintext and decrypting cipher text.

Asymmetric Encryption

  • Uses two separate keys: a public key and a private key. Typically the public key is used to encrypt the data, while the private key is required to decrypt it. The private key is given only to users with authorized access. As a result, asymmetric encryption solves the key-distribution problem, but it is computationally more costly.
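The public/private key idea can be demonstrated with a toy RSA example using tiny primes (illustration only; real RSA uses primes of roughly 1024+ bits plus padding schemes):

```python
# Key generation (the primes p, q stay secret)
p, q = 61, 53
n = p * q                   # modulus, shared by both keys
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e (Py 3.8+)

message = 65                            # any number smaller than n
ciphertext = pow(message, e, n)         # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)       # only the holder of d can decrypt
assert recovered == message
```

Anyone may hold the public pair (e, n) and encrypt, but without the private exponent d, recovering the message requires factoring n, which is infeasible at real key sizes.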

Comparison between symmetric and asymmetric encryption

| Symmetric Encryption | Asymmetric Encryption |
| --- | --- |
| Uses a single shared key for both encryption and decryption. | Uses a pair of public and private keys for encryption and decryption. |
| The same key is used by both the sender and the receiver. | The public key is used for encryption, and the private key is used for decryption. |
| Well-suited for encrypting large amounts of data. | Generally slower; suitable for smaller amounts of data or secure key exchange. |
| Requires a safe channel to transmit the secret key for key distribution. | Enables secure key exchange over insecure channels without requiring a pre-established secure channel. |
| Computationally less complex compared to asymmetric encryption. | Involves more complicated mathematical operations, making it computationally more complex. |
| Provides better performance in terms of speed and efficiency. | Slower due to the complexity of the mathematical operations involved. |
| Suitable for scenarios where a secure key-distribution channel is established. | More suited for scenarios where secure key exchange over insecure channels is required. |
| Often used in situations where performance and efficiency are critical. | Commonly used for secure data transmission, digital signatures, and securing communication channels. |