Primitive Data Type
A basic data type provided by a programming language, such as integers, floats, characters, and booleans, that is not composed of other data types.
Denary
A base-10 number system used for representing integers and real numbers, consisting of digits from 0 to 9.
Binary
A base-2 number system used for representing data in computing, consisting of only two digits: 0 and 1.
Hexadecimal
A base-16 number system used for representing integers, consisting of digits from 0 to 9 and letters A to F.
Denary to binary conversion
The process of changing a number from the denary (base-10) system to the binary (base-2) system, involving dividing the number by 2 and recording the remainders.
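The repeated-division method described above can be sketched in Python (function name is illustrative):

```python
def denary_to_binary(n):
    """Convert a non-negative denary integer to a binary string
    by repeatedly dividing by 2 and recording the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder is the next bit
        n //= 2
    return "".join(reversed(bits))  # remainders are read in reverse order

print(denary_to_binary(13))  # 1101
```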
Binary to Denary conversion
The process of converting a binary number, which consists of only 0s and 1s, into its equivalent decimal (denary) value in base-10.
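A minimal Python sketch of the positional method, accumulating each digit times its place value:

```python
def binary_to_denary(bits):
    """Work left to right: each step doubles the running total
    (shifting place values up) and adds the next binary digit."""
    total = 0
    for digit in bits:
        total = total * 2 + int(digit)
    return total

print(binary_to_denary("1101"))  # 13
```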
Binary to Hexadecimal conversion
The process of converting a binary number into its equivalent hexadecimal value, often grouping binary digits into sets of four.
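The grouping-into-fours technique can be sketched as follows (padding on the left so the length is a multiple of four):

```python
def binary_to_hex(bits):
    """Group binary digits into nibbles (sets of four) from the right,
    then map each nibble to one hexadecimal digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # left-pad to a multiple of 4
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

print(binary_to_hex("10111110"))  # BE
```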
Hexadecimal to binary conversion
The process of converting a hexadecimal number, which uses the digits 0-9 and letters A-F, into its equivalent binary value by translating each hexadecimal digit into a 4-bit binary representation.
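The per-digit translation is direct to sketch in Python, since each hex digit maps to exactly one 4-bit pattern:

```python
def hex_to_binary(hex_str):
    """Translate each hexadecimal digit into its 4-bit binary pattern."""
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("2F"))  # 00101111
```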
Hexadecimal to Denary conversion
The process of converting a hexadecimal number into its equivalent decimal (denary) value in base-10, using the positional values of each digit.
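A sketch of the positional-value method, mirroring the binary-to-denary one but in base 16:

```python
def hex_to_denary(hex_str):
    """Each step multiplies the running total by 16 (shifting place
    values up) and adds the value of the next hex digit."""
    digits = "0123456789ABCDEF"
    total = 0
    for d in hex_str.upper():
        total = total * 16 + digits.index(d)
    return total

print(hex_to_denary("2F"))  # 47
```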
Denary to hexadecimal conversion
The process of converting a decimal (denary) number into its equivalent hexadecimal value, typically by repeatedly dividing by 16 and recording the remainders.
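The repeated-division-by-16 method can be sketched like the binary version, with remainders mapped to hex digits:

```python
def denary_to_hex(n):
    """Repeatedly divide by 16, recording each remainder as a hex digit;
    the remainders are read in reverse order."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(digits[n % 16])
        n //= 16
    return "".join(reversed(out))

print(denary_to_hex(47))  # 2F
```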
Reasons for binary
Binary is the fundamental language of computers, allowing for efficient data processing and storage. It simplifies electronic circuitry, as it only requires two states: on and off.
Reasons for hexadecimal
Hexadecimal is a base-16 numbering system that is more compact than binary, making it easier for humans to read and write large binary values. It is commonly used in computing to represent memory addresses and color codes.
Kilobyte
A unit of digital information generally considered to be equal to 1000 bytes, commonly used to measure file sizes and memory capacity. Previously used to represent 1024 bytes (now called a kibibyte).
Kibibyte
A unit of digital information equal to 1024 binary bytes, used to provide a more precise measurement than kilobyte.
ASCII
A character encoding standard for electronic communication that represents text in computers and other devices. ASCII assigns a numerical value to each character, allowing for the representation of letters, digits, and symbols.
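Python's built-in `ord` and `chr` expose this character-to-number mapping directly (the values shown hold for standard ASCII):

```python
print(ord("A"))  # 65: the ASCII code assigned to 'A'
print(chr(97))   # 'a': code 97 mapped back to its character
```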
Unicode
A character encoding standard that supports a wide range of characters from various languages and scripts, allowing for international text representation.
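Python strings are Unicode, so code points beyond ASCII work with the same built-ins; encoding shows that one character may occupy several bytes:

```python
print(ord("€"))             # 8364: the code point U+20AC
print(chr(0x03B1))          # 'α': U+03B1, Greek small letter alpha
print("α".encode("utf-8"))  # b'\xce\xb1': two bytes in UTF-8
```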
Character set
A collection of characters that can be used in text files or communications, defining how each character is represented in digital form.
Binary addition
The process of adding binary numbers, where each digit is either 0 or 1, carrying over a value whenever a column sum reaches 2, the base.
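The column-by-column method with carries can be sketched in Python as:

```python
def binary_add(a, b):
    """Add two binary strings column by column from the right,
    carrying whenever a column sum reaches 2 (the base)."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        out.append(str(s % 2))   # digit written in this column
        carry = s // 2           # carry into the next column
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(binary_add("0110", "0111"))  # 1101 (6 + 7 = 13)
```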
Overflow
A condition in binary addition where the result exceeds the maximum value that can be represented with a given number of bits, leading to incorrect results.
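Overflow in a fixed 8-bit register can be demonstrated with a small sketch (names are illustrative):

```python
def add_8bit(a, b):
    """Add two 8-bit unsigned values; if the true sum needs a 9th bit,
    overflow has occurred and only the low 8 bits are kept."""
    result = a + b
    overflow = result > 0xFF       # does the sum exceed 11111111?
    return result & 0xFF, overflow

print(add_8bit(200, 100))  # (44, True): 300 does not fit in 8 bits
```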
Most significant bit
The bit in a binary number that holds the highest place value, typically located at the leftmost position, indicating the number's sign in signed representations.
Sign and Magnitude
A method of representing signed numbers in binary, where the most significant bit indicates the sign (0 for positive, 1 for negative) and the remaining bits represent the magnitude of the number.
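A sketch of building the sign and magnitude pattern for a given width (8 bits assumed here):

```python
def to_sign_magnitude(n, bits=8):
    """MSB is the sign (0 for positive, 1 for negative);
    the remaining bits hold the magnitude |n|."""
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), f"0{bits - 1}b")

print(to_sign_magnitude(5))   # 00000101
print(to_sign_magnitude(-5))  # 10000101
```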
Two’s complement
A method for representing signed integers in binary in which the most significant bit carries a negative place value. A number is negated by inverting all of its bits and adding one to the least significant bit.
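Both directions of the representation can be sketched in a few lines (8-bit width assumed):

```python
def to_twos_complement(n, bits=8):
    """Encode a signed integer; taking n modulo 2**bits is equivalent
    to inverting all bits and adding one for negative values."""
    return format(n % (1 << bits), f"0{bits}b")

def from_twos_complement(s):
    """Decode: the MSB has place value -2**(len-1), so if it is set,
    subtract 2**len from the unsigned reading."""
    value = int(s, 2)
    if s[0] == "1":
        value -= 1 << len(s)
    return value

print(to_twos_complement(-5))          # 11111011
print(from_twos_complement("11111011"))  # -5
```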
Binary subtraction
A method of subtracting binary numbers, either by borrowing from higher-order bits (as in decimal subtraction) or by adding the two's complement of the number being subtracted.
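The add-the-two's-complement approach can be sketched as follows, discarding any carry out of the top bit:

```python
def binary_subtract(a, b, bits=8):
    """Compute a - b by adding the two's complement of b;
    the final mask discards any carry out of the top bit."""
    neg_b = ((~b) + 1) & ((1 << bits) - 1)  # two's complement of b
    return (a + neg_b) & ((1 << bits) - 1)

print(binary_subtract(13, 6))  # 7
```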
Fixed point number
A method of representing real numbers in binary where a fixed number of digits are allocated for the integer and fractional parts, allowing for precise representation of decimal values.
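Interpreting a fixed point bit pattern amounts to an unsigned read followed by a divide by 2 to the power of the number of fractional bits; a sketch:

```python
def fixed_point_value(bits, frac_bits):
    """Interpret a bit string with frac_bits digits after an implied
    binary point, e.g. '011010' with 2 fractional bits is 0110.10."""
    return int(bits, 2) / (1 << frac_bits)

print(fixed_point_value("011010", 2))  # 6.5 (0110.10 = 4 + 2 + 0.5)
```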
Range
The set of possible values that a fixed point number can represent, determined by the number of bits allocated for the integer and fractional parts.
Precision
The degree of accuracy in representing a number, indicating how many digits are used to express a value in a data type.
Floating point number
A method of representing real numbers in binary using a mantissa and an exponent, which allows the position of the binary point to vary ("float") and so represents a much wider range of values than fixed point numbers.
Mantissa
The part of a floating point number that contains its significant digits, representing the precision of the number. It is combined with the exponent to determine the value of the number.
Exponent
The part of a floating point number that indicates the power of the base (usually 2) by which the mantissa is multiplied, determining the overall scale of the number.
Normalisation
The process of adjusting the representation of a floating point number so that the mantissa lies within a set range (for example, beginning 01 for positive or 10 for negative values in two's complement form), which gives each value a unique representation and maximises precision.
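A decoding sketch for the mantissa-and-exponent scheme described above. This follows one common exam-board convention (two's complement mantissa with the binary point immediately after the sign bit, two's complement exponent), not IEEE 754:

```python
def float_value(mantissa_bits, exponent_bits):
    """Decode value = mantissa * 2**exponent, where the mantissa is a
    two's complement fraction with the binary point after the sign bit."""
    def twos(s):
        v = int(s, 2)
        return v - (1 << len(s)) if s[0] == "1" else v
    mantissa = twos(mantissa_bits) / (1 << (len(mantissa_bits) - 1))
    return mantissa * (2 ** twos(exponent_bits))

print(float_value("01101", "0011"))  # 6.5: 0.1101 * 2^3 = 110.1
```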
Size of mantissa
The number of bits allocated to the mantissa in a floating point representation, which determines the precision of the values that can be represented.
Size of exponent
The number of bits allocated to the exponent in a floating point representation, which determines the range of values that can be represented.
Underflow
A condition that occurs when a number is too small to be represented in the floating point format, leading to a loss of precision or a value of zero.
Logical shift
A bitwise operation that shifts the bits of a binary number to the left or right, filling the vacated bits with zeros. This operation is commonly used in computer programming for efficient data manipulation.
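Both directions can be sketched with Python's shift operators, masking to a fixed width so bits shifted out are lost (8 bits assumed):

```python
def logical_shift_left(value, n, bits=8):
    """Shift left, filling the vacated low bits with zeros;
    bits shifted past the register width are discarded."""
    return (value << n) & ((1 << bits) - 1)

def logical_shift_right(value, n):
    """Shift right, filling the vacated high bits with zeros."""
    return value >> n

print(format(logical_shift_left(0b1011, 1), "08b"))  # 00010110
print(format(logical_shift_right(0b1011, 1), "04b")) # 0101
```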
Arithmetic shift
A bitwise operation that shifts the bits of a binary number to the left or right; on a right shift the sign bit is copied into the vacated positions, preserving the sign of two's complement numbers. This operation is used to multiply or divide signed integers by powers of two.
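An arithmetic right shift on an unsigned bit pattern of a stated width can be sketched by replicating the sign bit into the vacated positions:

```python
def arithmetic_shift_right(value, n, bits=8):
    """Shift right while copying the sign bit into the vacated high
    positions, so a two's complement value keeps its sign
    (value is given as an unsigned bit pattern of the stated width)."""
    sign = value >> (bits - 1)          # extract the sign bit
    value >>= n
    if sign:
        value |= ((1 << n) - 1) << (bits - n)  # replicate the sign bit
    return value

# 11111010 (-6) shifted right by 1 gives 11111101 (-3): halving
print(format(arithmetic_shift_right(0b11111010, 1), "08b"))  # 11111101
```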
Circular shift
A bitwise operation that rotates the bits of a binary number, so that bits shifted off one end re-enter at the other end. This operation is useful in certain algorithms for data processing.
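A left rotation within a fixed width can be sketched by combining a left shift with the bits that fall off the top (8 bits assumed):

```python
def circular_shift_left(value, n, bits=8):
    """Rotate left: bits shifted off the high end re-enter at the low end."""
    n %= bits
    mask = (1 << bits) - 1
    return ((value << n) | (value >> (bits - n))) & mask

print(format(circular_shift_left(0b10000001, 1), "08b"))  # 00000011
```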
Logical instructions
Operations that apply Boolean logic to binary data bit by bit, including AND, OR, NOT, and XOR, used in condition testing and bit manipulation.
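Python's bitwise operators apply each of these logical instructions across whole bit patterns at once:

```python
a, b = 0b1100, 0b1010
print(format(a & b, "04b"))          # AND -> 1000
print(format(a | b, "04b"))          # OR  -> 1110
print(format(a ^ b, "04b"))          # XOR -> 0110
print(format(~a & 0b1111, "04b"))    # NOT within 4 bits -> 0011
```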
Mask
A technique used in bit manipulation to isolate or modify specific bits in a binary number by applying a bitwise operation with another binary number.
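Common masking idioms, sketched as small helpers (function names are illustrative):

```python
def extract_low_nibble(value):
    """AND with the mask 00001111 isolates the low four bits."""
    return value & 0b00001111

def set_bit(value, position):
    """OR with a single-bit mask forces that bit to 1."""
    return value | (1 << position)

def clear_bit(value, position):
    """AND with the inverted single-bit mask forces that bit to 0."""
    return value & ~(1 << position)

print(format(extract_low_nibble(0b10110110), "04b"))  # 0110
print(format(set_bit(0b1000, 1), "04b"))              # 1010
print(format(clear_bit(0b1010, 3), "04b"))            # 0010
```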