address space
The maximum amount of memory that a computer can address; determined by the number of bits in the address field
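The relationship between address-field width and address space can be sketched as a one-line calculation; the bit widths below are illustrative, not tied to any particular machine:

```python
# An n-bit address field can name 2**n distinct memory cells.
def address_space_size(address_bits: int) -> int:
    """Number of distinct addresses expressible in address_bits bits."""
    return 2 ** address_bits

print(address_space_size(16))  # 65536 cells
print(address_space_size(32))  # 4294967296 cells
```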
Arithmetic Logic Unit
The computer subsystem that performs mathematical and logical operations such as addition, subtraction, and comparison for equality
bus
(1) a path for electrical signals; (2) LAN topology in which all nodes are connected to a single shared communication line
cache hit rate
The percentage of the time that the information needed is in cache memory
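The hit rate feeds directly into the standard average-access-time calculation: a weighted average of the fast cache time and the slower main-memory time. The timings below are illustrative:

```python
def effective_access_time(hit_rate, cache_ns, memory_ns):
    """Average memory access time given the fraction of accesses served by cache."""
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

# 95% hit rate, 2 ns cache, 20 ns main memory
print(effective_access_time(0.95, 2, 20))  # 2.9 ns on average
```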
cache memory
A high-speed memory unit that keeps a copy of memory cells with a high likelihood of access in the near future
Central Processing Unit
The part of the computer consisting of the control unit and the ALU
CISC machine
Complex Instruction Set Computer; a machine that has a very large and complex instruction set
cluster computing
A parallel processing model in which independent systems, such as mainframes, desktops, or laptops, are interconnected by a local area network (LAN), such as Ethernet, or a wide area network (WAN), such as the Internet; also called MIMD parallel processing
Computer Organization
The branch of computer science that studies computers in terms of their major functional units and how they work
control unit
The computer subsystem that fetches and executes instructions stored in the memory of the computer
data path
The ALU circuits, registers, and interconnections between components
destructive store
When you store a new value in a memory cell and destroy its previous contents
direct access storage devices
A mass storage device in which each unit of information is associated with a unique address, but the time to access each piece of information may not be the same
fetch/store controller
The component that determines whether a value will be placed into memory or copied from memory
functional units
Subunits of a computer that perform tasks such as instruction processing, information storage, computation, and data transfer
grid computing
A MIMD model in which the individual processors can be computer systems belonging to a wide range of groups or individuals
Hierarchy of Abstractions
A series of abstractions, each one more detailed and each one showing lower level components of a system
I/O controller
A special-purpose device that controls the operations of an input/output device
input/output
The devices that allow a computer system to communicate and interact with the outside world, as well as to store information
instruction register
The register that holds a copy of the instruction to be executed
instruction set
The set of all operations that can be executed by a processor
interrupt signal
A signal sent by an I/O controller to the CPU to indicate that the controller has completed an I/O operation
latency
The time required to rotate the disk to the beginning of the desired sector
level of abstraction
An alternate perspective or a different way to view a system
machine language
The programming language that a processor is able to directly understand and execute; written in binary
mass storage systems
Systems or devices where information is kept for long periods of time and not lost when the computer is not being used
memory
The functional unit of a computer that stores and retrieves the instructions and data being executed
memory access time
The time it takes to fetch or store the contents of a single memory cell
memory address
The unique numeric identifier for a memory cell
Memory Address Register
The memory register that holds the address of the cell to be fetched from or stored into
memory cell
The minimum unit of memory access
Memory Data Register
The memory register that holds the data value to be stored or the data value that was just fetched
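The roles of the MAR and MDR in fetch and store operations can be sketched as a toy model; the register names follow the glossary, while the eight-cell memory and the values are illustrative:

```python
# Toy model of memory access through the MAR and MDR.
memory = [0] * 8
MAR = 0  # holds the address of the cell to fetch from or store into
MDR = 0  # holds the value just fetched or the value to be stored

def store(address, value):
    global MAR, MDR
    MAR, MDR = address, value
    memory[MAR] = MDR          # destructive store: old contents are overwritten

def fetch(address):
    global MAR, MDR
    MAR = address
    MDR = memory[MAR]          # non-destructive fetch: the cell keeps its value
    return MDR

store(3, 42)
print(fetch(3))   # 42
print(memory[3])  # still 42 after the fetch
```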
memory width
The number of bits in a single memory cell
MIMD parallel processing
Multiple instruction stream/multiple data stream; a parallel processing model in which multiple processors all work independently on their own program to solve a single problem; also called cluster computing
nanosecond
One billionth of a second
non-destructive fetch
When you access the contents of a memory cell and copy it, but do not destroy it
Non-Von Neumann architecture
Computer designs based on models other than the standard Von Neumann architecture
nonvolatile memory
Memory that does not lose its contents even when the power is turned off
parallel algorithms
Algorithms that exploit the presence of multiple processors to solve a single problem
parallel processing
Building computers with two or more processors that work in parallel
principle of locality
When you access a memory cell, it is likely that you will also access memory cells nearby very soon
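Spatial locality is why caches load a block of neighboring cells rather than a single cell. A small sketch, with an invented block size and access pattern, shows how a sequential traversal turns most accesses into cache hits:

```python
# Sequential array processing exhibits spatial locality: each access touches
# the cell right after the previous one, so a cache that loads a block of
# neighboring cells scores many hits.
data = list(range(16))
block_size = 4                    # illustrative cache-block size
blocks_loaded = set()
hits = 0
for i in range(len(data)):
    block = i // block_size
    if block in blocks_loaded:
        hits += 1                 # the neighbor is already in the cache
    else:
        blocks_loaded.add(block)  # cache miss: load the whole block

print(f"hit rate: {hits / len(data):.2f}")  # 12 hits out of 16 accesses = 0.75
```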
processor
A system that is composed of the ALU together with the control unit
program counter
A register that holds the address of the next instruction to be executed
quantum computing
A field of computer design based on the principles of quantum mechanics, in which a single bit of information can represent not just a 0 or a 1 but both states at the same time
Random Access Memory
A memory structure in which each cell has an address and it takes the same amount of time to fetch or store any cell
read-only memory
A memory structure that can only be accessed, not written into or changed
register
A special, high-speed storage cell
RISC machine
Reduced Instruction Set Computer; a machine that has a very small and simple instruction set, but where each instruction is highly optimized and executes very quickly
sector
A disk storage unit containing an address, a data block, and a fixed number of bytes; sectors are arranged in concentric tracks on a disk
seek time
The time required to move the read/write head to the correct track
sequential access storage device
A mass storage device in which information is located by sequentially searching all the information that is stored
sequential execution
One instruction at a time is fetched from memory to the control unit, where it is decoded and executed
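This fetch-decode-execute cycle can be sketched as a small interpreter loop. The two-field instruction format and the opcodes below are made up for illustration, not a real instruction set:

```python
# Minimal sketch of the Von Neumann fetch-decode-execute cycle.
# Program and data share one memory (the stored program concept).
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", 0),
          10: 5, 11: 7}
pc = 0            # program counter: address of the next instruction
acc = 0           # accumulator register

while True:
    instruction = memory[pc]       # fetch
    pc += 1                        # advance to the next instruction
    opcode, operand = instruction  # decode
    if opcode == "LOAD":           # execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "HALT":
        break

print(acc)  # 12
```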
SIMD parallel processing
Single instruction stream/multiple data stream; a parallel processing model in which multiple processors all execute the same instruction on their own local data
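A sketch of the SIMD idea, with plain Python standing in for parallel hardware lanes; the data values are illustrative:

```python
# One instruction stream ("add"), multiple data streams: the same operation
# is applied to every element, with each index acting as a separate lane.
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
result = [x + y for x, y in zip(a, b)]
print(result)  # [11, 22, 33, 44]
```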
stored program
The instructions to be executed by the computer are represented as binary values and stored in memory
track
A single concentric circle of information on a disk
transfer time
The time required to read the desired sector into main memory
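The three disk-timing terms in this glossary (seek time, latency, and transfer time) add up to the total time to read a sector. The millisecond figures below are illustrative:

```python
def disk_access_time_ms(seek_ms, rotational_latency_ms, transfer_ms):
    """Total time to read one sector: seek + latency + transfer."""
    return seek_ms + rotational_latency_ms + transfer_ms

print(disk_access_time_ms(8.0, 4.0, 0.5))  # 12.5 ms
```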
vector
An ordered collection of values
volatile memory
Memory that loses its contents when the power is turned off
Von Neumann Architecture
The computational model designed by John Von Neumann and first implemented in the EDSAC computer of 1949; the structure and organization of virtually all modern computers
Von Neumann bottleneck
The inability of sequential, one-at-a-time processors to handle extremely large problems in a reasonable time scale