Abacus
The earliest calculating tool, invented thousands of years ago in China; it is still used in some parts of Asia.
Blaise Pascal and Gottfried Leibniz
Two 17th-century mathematicians who built mechanical calculators that could perform the four basic arithmetic functions of addition, subtraction, multiplication, and division.
Charles Babbage
Designed an analytical engine that performed general calculations automatically in 1842.
Herman Hollerith
designed a tabulating machine to record census data in 1890.
John Atanasoff and Clifford Berry
designed and built the first electronic digital computer in 1939.
Colossus
Built by the British in December 1943; the first fully operational working computer, designed to crack encrypted German military codes.
Automatic Sequence Controlled Calculator
(ASCC) / Mark I
The first general-purpose modern computer, developed in 1944 at Harvard University.
ENIAC (Electronic Numerical Integrator and Calculator)
The first general-purpose electronic computer, developed in 1946 at the University of Pennsylvania by J. Presper Eckert and John Mauchly.
transistor
an electronic switch that alternately allows or does not allow electronic signals to pass
William Shockley
developed the transistor in 1948 at the Bell Telephone Laboratories.
Transistor
It made possible the development of the “stored program” computers.
UNIVAC (UNIVersal Automatic Computer)
The first commercially successful general-purpose, stored-program electronic digital computer, introduced in 1951 by Eckert and Mauchly of the Sperry-Rand Corporation.
First-generation computers
vacuum tube devices (1939–1958)
Second-generation computers
individually packaged transistors in 1958.
Third-generation computers
Integrated circuits (ICs), introduced in 1964, which consist of many transistors and other electronic elements fused onto a chip—a tiny piece of semiconductor material, usually silicon.
microprocessor
developed in 1971 by Ted Hoff of Intel Corporation
fourth generation of computers
An extension of the third generation, introduced in 1975, that incorporated large-scale integration (LSI), later replaced by very large-scale integration (VLSI), which places millions of circuit elements on a chip measuring less than 1 cm.
computer
refers to any general-purpose, stored-program electronic digital computer
Analog
refers to a continuously varying quantity; a digital system uses only two values that vary discretely through coding.
Principal parts of computer
hardware and software
hardware
everything about the computer that is visible—the physical components of the system that include the various input and output devices
Hardware
Performs operations that include input, processing, memory, storage, output, and communications.
software
the computer programs that tell the hardware what to do and how to store and manipulate data.
Computer Language
Used to give a computer instructions on how to store and manipulate data.
binary system
All computer languages translate what the user inputs into a series of two digits, 0 and 1.
decimal system
The system uses 10 digits (0–9).
Latin for “finger” or toe
digit
A timeline showing the evolution of today’s computer.
Power of 2 notation
Is used in radiologic imaging to describe image size, image dynamic range (shades of gray), and image storage capacity.
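Power-of-2 notation can be checked with a short Python sketch (illustrative only, not from the source); these are the powers most often quoted for matrix sizes and gray-scale depth:

```python
# Powers of 2 commonly cited in radiologic imaging:
# matrix sizes (2^8 to 2^12) and gray-scale depth (2^8 = 256 shades).
for n in (8, 10, 11, 12):
    print(f"2^{n} = {2 ** n}")
# 2^8 = 256, 2^10 = 1024, 2^11 = 2048, 2^12 = 4096
```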
Digital images
made of discrete picture elements, pixels, arranged in a matrix.
computed tomography (CT) and magnetic resonance imaging (MRI) images measurement
256 × 256 (2⁸) to 1024 × 1024 (2¹⁰)
digital fluoroscopy image measurement
1024 × 1024 matrix
digital radiography and digital mammography Matrix sizes
2048 × 2048 (2¹¹) and 4096 × 4096 (2¹²)
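The matrix sizes above determine storage requirements. A minimal sketch of the arithmetic; the 16-bit pixel depth is an assumption for illustration, not a value from the source:

```python
def image_size_bytes(rows, cols, bits_per_pixel):
    """Uncompressed image size: number of pixels times bytes per pixel."""
    return rows * cols * (bits_per_pixel // 8)

# A hypothetical 2048 x 2048 digital radiograph at 16 bits (2 bytes) per pixel:
size = image_size_bytes(2048, 2048, 16)
print(size)                          # 8388608 bytes
print(size // (1024 * 1024), "MB")   # 8 MB (using 1 MB = 1024 * 1024 bytes)
```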
bit
a single binary digit, 0 or 1
bytes
Bits grouped into bunches of eight
binary digits
Used to encode, that is, to translate from ordinary characters to computer-compatible characters.
One kilobyte (kB)
equal to 1024 bytes
kilo in bytes
represents 2¹⁰ or 1024
The computers typically used in radiology departments have capacities measured in gigabytes (GB).
How many bits can be stored on a 64-kB chip?
524,288 bits
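The arithmetic behind this answer, sketched in Python:

```python
# 1 kB = 1024 bytes (2^10) and 1 byte = 8 bits,
# so a 64-kB chip stores 64 * 1024 * 8 bits.
bits = 64 * 1024 * 8
print(bits)  # 524288
```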
word
Two bytes, equal to 16 bits.
“nibble”
half a byte equal to 4 bits
“chomp”
two words equal to 32 bits
computer program
The sequence of instructions developed by a software programmer. Computer programs fall into two classifications: systems software and application programs.
Systems software
programs that make it easy for the user to operate a computer to its best advantage.
Application programs
written in a higher level language expressly to carry out some user function.
Computer programs
the software of the computer.
operating system
series of instructions that organizes the course of data through the computer to the solution of a particular problem. It makes the computer’s resources available to application programs.
assembler
computer program that recognizes symbolic instructions such as “subtract (SUB),” “load (LD),” and “print (PT)” and translates them into the corresponding binary code
Assembly
the translation of a program written in symbolic, machine-oriented instructions into machine language instructions
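A toy sketch of what an assembler does; the mnemonics follow the examples above, but the 4-bit opcodes are invented for illustration and do not correspond to any real instruction set:

```python
# Invented opcode table -- real instruction sets differ.
OPCODES = {"LD": 0b0001, "SUB": 0b0010, "PT": 0b0011}

def assemble(program):
    """Translate symbolic mnemonics into binary opcode strings."""
    return [format(OPCODES[mnemonic], "04b") for mnemonic in program]

print(assemble(["LD", "SUB", "PT"]))  # ['0001', '0010', '0011']
```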
Compilers and interpreters
are computer programs that translate an application program from its high-level language, such as Java, BASIC, C++, or Pascal, into a form that is suitable for the assembler or into a form that is accepted directly by the computer
Interpreters
make program development easier because they are interactive.
Compiled programs
run faster because they create a separate machine language program.
applications programs
Computer programs that are written by a computer manufacturer, by a software manufacturer, or by the users themselves to guide the computer to perform a specific task
Application programs
allow users to print mailing lists, complete income tax forms, evaluate financial statements, or reconstruct images from x-ray transmission patterns.
Application programs
They are written in one of many high-level computer languages and then are translated through an interpreter or a compiler into a corresponding machine language program that subsequently is executed by the computer
The sequence of software manipulations required to complete an operation.
bootstrap
A program capable of transferring other necessary programs off the disc and into the computer memory; it loads the operating system into primary memory, which in turn controls all subsequent operations.
hexadecimal number system
used by assembly level applications
assembly language
acts as a midpoint between the computer’s binary system and the user’s human language instructions.
hexadecimal numbers
Each of these symbols is used to represent a binary number or, more specifically, a set of four bits.
In hexadecimal notation, a byte is represented by two hexadecimal numbers, which means each symbol is equal to 4 bits.
The set of hexadecimal numbers corresponds to the binary numbers for 0 to 15
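The hex-to-binary correspondence can be verified with Python's formatting helpers (a sketch, not from the source):

```python
# Each hexadecimal digit maps to one 4-bit group (a nibble),
# so one byte is written as two hex digits.
for value in range(16):
    print(format(value, "X"), "=", format(value, "04b"))

byte = 0b10101111           # high nibble 1010 -> A, low nibble 1111 -> F
print(format(byte, "02X"))  # AF
```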
FORTRAN (FORmula TRANslation)
The oldest language for scientific, engineering, and mathematical problems, developed in 1956 by IBM.
algorithms
The formulas and equations in terms of which such problems can be expressed.
BASIC (Beginners All-purpose Symbolic Instruction Code)
Developed at Dartmouth College in 1964 as a first language for students
BASIC
contains a powerful arithmetic facility, several editing features, a library of common mathematical functions, and simple input and output procedures.
COBOL (COmmon Business Oriented Language)
One high-level, procedure-oriented language designed for coding business data processing problems. Provides extensive file-handling, editing, and report generating capabilities for the user
Pascal
A high-level, general-purpose programming language developed in 1971 by Niklaus Wirth of the Federal Institute of Technology in Zürich, Switzerland. A general-purpose programming language is one that can be put to many different applications.
C
considered by many to be the first modern “programmer’s language.” It was designed, implemented, and developed by real working programmers and reflects the way they approached the job of programming.
C++
Developed by Bjarne Stroustrup in 1980 in response to the need to manage greater complexity; contains the entire C language, as well as many additions designed to support object-oriented programming (OOP).
OOP
a method of dividing up parts of the program into groups, or objects, with related data and applications, in the same way that a book is broken into chapters and subheadings to make it more readable.
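The book/chapter analogy above can be made concrete with a minimal OOP sketch (class and attribute names are hypothetical):

```python
class Chapter:
    """Groups related data (attributes) with behavior (methods) in one object."""

    def __init__(self, title, pages):
        self.title = title
        self.pages = pages

    def summary(self):
        # A method operates on the data stored in the same object.
        return f"{self.title} ({self.pages} pages)"

print(Chapter("Digital Imaging", 24).summary())  # Digital Imaging (24 pages)
```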
Visual programming languages
designed specifically for the creation
of Windows applications.
macros
used to carry out user-defined functions or a series of functions in the application; the user can create a command to manipulate a series of data by performing a specific series of steps.
recording
process of designing a macro
LOGO
language that was designed for children
ADA
official language approved by the U.S. Department of Defense for software development. It is used principally for military applications and artificial intelligence.
Java
developed in 1995, became very useful in web application programming as well as application software
HTML (HyperText Markup Language)
predominant language used to format web pages.
central processing unit (CPU)
primary element that allows the computer to manipulate data and carry out software instructions
microprocessor
In microcomputers, the microprocessor is the CPU.
Computer's CPU components
A control unit and an arithmetic/logic unit (ALU)
bus
an electrical conductor that connects the two CPU components with all other computer components
processor
The electronic circuitry that does the actual computations and the memory that supports it
Main memory
the working storage of a computer.
RAM
The contents are temporary; capacity usually is expressed as megabytes (MB), gigabytes (GB), or terabytes (TB), referring to millions, billions, or trillions of characters stored.
two types of RAM
dynamic RAM (DRAM) and static RAM (SRAM)
DRAM
Must be refreshed constantly while powered; the memory is not saved after turning off the power.
SRAM
does not need constant refreshing and is faster than DRAM, although, like DRAM, it loses its contents when power is removed
registers
Special high-speed circuitry areas found in the control unit and the ALU; they hold information that will be used immediately.
Read-only memory (ROM)
contains information supplied by the manufacturer, called firmware, which cannot be written over or erased.
PROM, EPROM, and EEPROM
Three variations of ROM chips used in special
situations
PROM (programmable read-only memory)
blank chips that a user, with special equipment, can write programs to. After the program is written, it cannot be erased.
EPROM (erasable programmable read-only memory)
contents are erasable with the use of a special device that exposes the chip to ultraviolet light.
EEPROM (electronically erasable programmable read-only memory)
can be reprogrammed with the use of special electrical impulses.
motherboard or system board
the main circuit board in a system unit. This board contains the microprocessor, any coprocessor chips, RAM chips, ROM chips, other types of memory, and expansion slots, which allow additional circuit boards to be added.
Storage
archival form of memory
terminal
input/output device that uses a keyboard for input and a display screen for output.
dumb terminal
cannot do any processing on its own; it is used only to input data or receive data from a main or host computer
intelligent terminal
has built-in processing capability and RAM but does not have its own storage capacity.