Week 3, Lecture 3: Number Systems
Module Title: CSEC1001 Foundation of Computing and Cyber Security - Mathematics for Computing
Focus: Understanding number systems and their applications in computing.
Computing is deeply rooted in mathematical principles.
Mathematics provides a systematic framework for:
Modeling systems
Reasoning about systems
It acts as a bridge between theoretical concepts and practical applications.
Decimal (Base 10): Uses digits 0-9.
Binary (Base 2): Uses digits 0 and 1.
Octal (Base 8): Uses digits 0-7.
Hexadecimal (Base 16): Uses digits 0-9 and letters A-F.
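As a quick illustration (a minimal Python sketch, not part of the original notes), the same value can be written in each of the four bases:

```python
# The value 43 shown in each base using Python's built-in formatters.
n = 43
print(bin(n))  # 0b101011  (binary, base 2)
print(oct(n))  # 0o53      (octal, base 8)
print(n)       # 43        (decimal, base 10)
print(hex(n))  # 0x2b      (hexadecimal, base 16)
```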
Decimal to Binary:
Successive division by 2, recording remainders.
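A minimal sketch of the successive-division method (the function name is illustrative, not from the notes):

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated
    division by 2, collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next bit
        n //= 2
    return "".join(reversed(bits))  # remainders are read bottom-up

print(decimal_to_binary(43))  # 101011
```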
Binary to Decimal:
Weighted multiplication of binary digits based on their positions.
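The weighted-sum method as a short Python sketch (again, an illustration rather than course code):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each digit multiplied by its positional weight (a power of 2)."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("101011"))  # 43 = 32 + 8 + 2 + 1
```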
Hexadecimal and Octal Conversions:
Group binary digits into sets (4 for hex, 3 for octal).
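The grouping rule for hexadecimal, sketched in Python (the helper and its name are ours):

```python
def binary_to_hex(bits: str) -> str:
    """Pad to a multiple of 4 bits, then map each 4-bit group to a hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad on the left with zeros
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

print(binary_to_hex("101011"))  # 2B  (0010 1011 -> 2, B)
```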
Bit: Smallest unit of data (0 or 1).
Nibble: 4 bits.
Byte: 8 bits (can represent 256 values).
Word: Typically 16 bits (2 bytes).
Double Word: Typically 32 bits (4 bytes).
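Since n bits can represent 2^n distinct values, the sizes above give the following counts (a small Python check, assuming the "typical" widths listed):

```python
# Distinct values representable by each unit: 2 ** (number of bits).
for name, bits in [("nibble", 4), ("byte", 8), ("word", 16), ("double word", 32)]:
    print(f"{name}: {bits} bits -> {2 ** bits:,} values")
# byte: 8 bits -> 256 values, matching the note above
```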
Integer representations in computing fall into two broad categories:
Unsigned: Only non-negative values (zero and positive).
Signed: Positive and negative values.
Sign-Magnitude:
Uses an extra bit to indicate the sign (0 = positive, 1 = negative).
For example, in 5-bit sign-magnitude, decimal -5 is represented as 10101: a sign bit of 1 followed by the magnitude 0101 (5 in binary).
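A sketch of sign-magnitude encoding, assuming a 5-bit width as in the example (the function is illustrative):

```python
def sign_magnitude(value: int, bits: int = 5) -> str:
    """Sign bit (0 = positive, 1 = negative) followed by the magnitude."""
    sign = "1" if value < 0 else "0"
    magnitude = format(abs(value), f"0{bits - 1}b")  # remaining bits hold |value|
    return sign + magnitude

print(sign_magnitude(-5))  # 10101: sign bit 1, magnitude 0101
print(sign_magnitude(5))   # 00101
```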
Two’s Complement:
The most common representation for signed integers in computers.
To negate a number, invert all of its bits and add 1.
Example: to represent -5 in 4 bits, write 5 in binary (0101), invert the bits (1010), and add 1 to get 1011.
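The invert-and-add-1 rule, sketched in Python for a 4-bit width (masking with 2**bits - 1 keeps only the low bits, which produces the same bit pattern):

```python
def twos_complement(value: int, bits: int = 4) -> str:
    """Two's-complement encoding of a signed integer at the given width."""
    return format(value & (2 ** bits - 1), f"0{bits}b")  # mask to `bits` bits

print(twos_complement(5))   # 0101
print(twos_complement(-5))  # 1011: invert 0101 -> 1010, then add 1 -> 1011
```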
ASCII uses 7 bits to represent up to 128 characters, allowing for simple text representation.
Characters are mapped to numerical values, enabling computers to process text.
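A quick demonstration of the character-to-number mapping (Python's ord gives the ASCII code for these characters):

```python
# Each character maps to a numeric code; ASCII codes fit in 7 bits (0-127).
for ch in "Hi!":
    print(ch, ord(ch), format(ord(ch), "07b"))
# H 72 1001000
# i 105 1101001
# ! 33 0100001
```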
Binary representations can be long and hard to read, making it difficult to judge a value's magnitude at a glance.
Conversions to octal or hexadecimal can simplify expressions:
Example: Binary 101011 becomes octal 53 and hexadecimal 2B.
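This example can be checked directly in Python:

```python
n = int("101011", 2)   # parse the binary string
print(oct(n), hex(n))  # 0o53 0x2b -> octal 53, hexadecimal 2B
```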
Conversions are essential for understanding how computers process and store information.
Finally, always assume two’s complement for signed numbers unless specified otherwise.