Computer Abstractions and Technology

81 Terms

1
New cards

personal computer

general purpose, variety of software, subject to cost/performance tradeoff

2
New cards

server computer

network based; high capacity, performance, and reliability; range from small servers to building-sized installations

3
New cards

supercomputer

type of server, high-end calculations, highest capability

4
New cards

embedded computer

hidden as components of systems, subject to power/performance/cost constraints

5
New cards

Personal Mobile Device (PMD)

battery operated, connects to internet, $100s

6
New cards

cloud computing

warehouse scale computers, software as a service, portion runs on PMD, portion on Cloud

7
New cards

Algorithm

determines the number of operations executed

8
New cards

programming language, compiler, architecture

determine number of machine instructions executed per operation

9
New cards

processor and memory system

determine how fast instructions are executed

10
New cards

I/O system

determines how fast I/O operations are executed

11
New cards

Seven Great Ideas

abstraction, common case fast, parallelism, pipelining, prediction, hierarchy, dependability

12
New cards

abstraction

hiding lower-level detail behind simpler interfaces to manage the complexity of a design

13
New cards

common case fast

optimization of frequent operations

14
New cards

parallelism

performing multiple tasks simultaneously to increase speed

15
New cards

pipelining

overlapping the execution of instructions to reduce the time it takes to complete them

16
New cards

prediction

making educated guesses about future events to avoid waiting

17
New cards

hierarchy of memories

organizing memory into different levels based on speed and cost

18
New cards

dependability

using multiple components to increase reliability

19
New cards

application software

written in high level language

20
New cards

system software

compiler and operating system

21
New cards

compiler

translates HLL code to machine code

22
New cards

operating system

service code - handles input/output, manages memory and storage, schedules tasks, and shares resources

23
New cards

hardware

processor, memory, I/O controllers

24
New cards

High-level language code

level of abstraction closer to problem domain, provides productivity and portability

25
New cards

assembly language code

textual representation of instructions

26
New cards

hardware representation of code

binary digits (bits), encoded instructions and data

27
New cards

components of a computer

the same basic components appear in all kinds of computer: desktop, server, embedded

28
New cards

Input/Output includes

user-interface devices, storage devices, network adapters

29
New cards

user interface devices

display, keyboard, mouse

30
New cards

storage devices

hard disk, CD/DVD, flash

31
New cards

network adapters

for communicating with other computers

32
New cards

Touchscreen

PostPC device, supersedes keyboard/mouse, resistive and capacitive types

33
New cards

LCD screen

picture elements (pixels), mirrors content of frame buffer memory

34
New cards

Inside the Processor (CPU)

Datapath, Control, Cache memory

35
New cards

datapath

performs operations on data

36
New cards

control

sequences datapath, memory

37
New cards

cache memory

small fast SRAM memory for immediate access to data

38
New cards

instruction set architecture (ISA)

hardware/software interface

39
New cards

application binary interface

the ISA plus system software interface

40
New cards

implementation

the details underlying an interface

41
New cards

volatile main memory

loses instructions and data when powered off

42
New cards

non-volatile secondary memory

magnetic disk, flash memory, optical disk (CDROM, DVD)

43
New cards

networks

communication, resource sharing, nonlocal access

44
New cards

local area network (LAN)

ethernet

45
New cards

wide area network (WAN)

internet

46
New cards

wireless network

wifi, bluetooth

47
New cards

electronics technology evolves

increased capacity and performance, reduced cost

48
New cards

can transform silicon to act as

conductor, insulator, switch

49
New cards

yield

proportion of working dies per wafer

50
New cards

die

single integrated circuit (IC)

51
New cards

integrated circuit cost

nonlinear relation to area and defect rate

52
New cards

fewer dies per wafer

as die area increases, you get fewer chips from a singular wafer

53
New cards

lower yield

as die area increases, the likelihood of a defect on the die increases, so yield decreases
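For reference, these last few cards are usually summarized with the textbook-style cost model below (the notation is the standard one and is not taken from the cards themselves):

\[ \text{Cost per die} = \frac{\text{Cost per wafer}}{\text{Dies per wafer} \times \text{Yield}}, \qquad \text{Dies per wafer} \approx \frac{\text{Wafer area}}{\text{Die area}} \]

\[ \text{Yield} = \frac{1}{\left(1 + \text{Defects per area} \times \text{Die area}/2\right)^{2}} \]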

54
New cards

response time

how long it takes to do a task

55
New cards

throughput

total work done per unit time

56
New cards

elapsed time

total response time, determines system performance

57
New cards

aspects of response time

processing, I/O, OS overhead, idle time

58
New cards

CPU Time

time spent processing a given job, comprises user and system CPU time

59
New cards

CPU clocking

operation of digital hardware governed by constant rate clock

60
New cards

cycle time

duration of a clock cycle

61
New cards

clock rate

clock frequency
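These two cards are reciprocals of each other:

\[ \text{Clock Rate} = \frac{1}{\text{Clock Cycle Time}} \]

For example (illustrative numbers), a 250 ps cycle time corresponds to a 4 GHz clock rate.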

62
New cards

CPU performance improved by

reducing the number of clock cycles or increasing the clock rate (CPU time = clock cycles / clock rate)
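Written out, the relationship in the parentheses is (standard notation, not part of the original card):

\[ \text{CPU Time} = \text{Clock Cycles} \times \text{Clock Cycle Time} = \frac{\text{Clock Cycles}}{\text{Clock Rate}} \]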

63
New cards

instruction count

determined by program, ISA, and compiler

64
New cards

average cycles per instruction (CPI)

determined by CPU hardware; when different instructions take different numbers of cycles, use the average CPI weighted by the instruction mix
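Combined with the instruction-count card, this gives the usual performance equation (standard textbook form rather than anything stated on the card):

\[ \text{CPU Time} = \text{Instruction Count} \times \text{CPI} \times \text{Clock Cycle Time} = \frac{\text{Instruction Count} \times \text{CPI}}{\text{Clock Rate}} \]

As an illustrative example, 10^9 instructions at an average CPI of 2 on a 2 GHz clock take (10^9 × 2) / (2 × 10^9) = 1 s.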

65
New cards

performance depends on

algorithm, programming language, compiler, instruction set architecture

66
New cards

power

capacitive load × voltage² × frequency
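In symbols, with an illustrative scaling example (the numbers are made up, not from the card):

\[ P_{\text{dynamic}} = \text{Capacitive Load} \times \text{Voltage}^{2} \times \text{Frequency} \]

If a newer design has 0.85× the capacitive load, 0.85× the voltage, and 0.85× the frequency, its dynamic power is about 0.85⁴ ≈ 0.52 of the old design's.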

67
New cards

the power wall

cannot reduce voltage further, cannot remove more heat

68
New cards

uniprocessor performance

constrained by power, instruction-level parallelism, memory latency

69
New cards

multicore microprocessor

has more than one processor per chip, requires parallel programming

70
New cards

thread-based parallelism

divide tasks into multiple threads that can run on separate cores

71
New cards

data parallelism

workloads are split across cores to process data in chunks - GPU computing
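A minimal C sketch of what the last two cards describe: an array sum split into chunks across POSIX threads. The thread count, array size, and all identifiers are illustrative assumptions, not anything taken from the cards.

```c
/* Illustrative only: data parallelism via threads (an array sum split into chunks). */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];

struct chunk { int lo, hi; double partial; };

/* Each thread sums its own chunk of the array. */
static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    c->partial = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->partial += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;                      /* known values so the result is checkable */

    int per = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].lo = t * per;
        chunks[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * per;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].partial;         /* combine the partial results */
    }
    printf("sum = %.1f\n", total);          /* expect 1000000.0 */
    return 0;
}
```

Compile with something like `cc -O2 -pthread sum.c` (the file name is an assumption).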

72
New cards

task parallelism

tasks are assigned to different cores - thread for UI, thread for computations

73
New cards

Spectre vulnerability

exploits the speculative execution capabilities of modern CPUs to leak sensitive data

74
New cards

instruction level parallelism (ILP)

executing multiple instructions within a single processor core using pipelining, superscalar execution, out-of-order execution

75
New cards

SPEC Power Benchmark

industry-standard tool designed to evaluate the energy efficiency of server-class computers

76
New cards

SPECpower_ssj2008

most recognized version of SPEC Power Benchmark

77
New cards

Amdahl’s Law

pitfall of improving one aspect of a computer and expecting a proportional improvement in overall performance; the overall speedup is limited by the fraction of time the improved part is actually used
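The quantitative statement behind this pitfall is (standard form of Amdahl's Law, with illustrative numbers in the example):

\[ T_{\text{improved}} = \frac{T_{\text{affected}}}{\text{improvement factor}} + T_{\text{unaffected}} \]

For example, if multiplies account for 80 s of a 100 s program and are made 4× faster, the new time is 80/4 + 20 = 40 s, an overall speedup of 2.5× rather than 4×.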

78
New cards

MIPS pitfall - Millions of Instructions Per Second

doesn't account for differences in ISAs between computers or for differences in complexity between instructions
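For reference, the metric behind the pitfall is (standard definition, not stated on the card):

\[ \text{MIPS} = \frac{\text{Instruction Count}}{\text{Execution Time} \times 10^{6}} = \frac{\text{Clock Rate}}{\text{CPI} \times 10^{6}} \]

A machine with simpler instructions can report a higher MIPS figure while needing more instructions, so the program may still run more slowly.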

79
New cards

process

an independent program in execution with its own memory space

80
New cards

thread

a lightweight unit of execution within a process, sharing the process’ resources

81
New cards

multi-process architecture

create separate processes for different tasks