Chapter 16

78 Terms

1
New cards

what happens when the computer is turned on

The BIOS (stored in ROM) starts off the bootstrap program

2
New cards

what is the bootstrap program

it loads part of the OS into main memory (RAM) from the HDD or SSD; the OS then takes control of the whole computer system and sets it running

3
New cards

how does a tablet/ phone’s main memory work

it’s supplied by flash memory and is split into two parts: one holds the OS and is read-only, and the other is where apps are stored. The RAM is where apps are executed and where data currently in use is stored

4
New cards

resource management is split into what three areas

the CPU, memory, and the input/output (I/O) system

5
New cards

resource management of the CPU involves what

scheduling, which ensures CPU time and resources are used efficiently 

6
New cards

for I/O operations what does the OS have to deal with 

I/O operations initiated by the user, or requests for resources made while software is running

7
New cards

what is the DMA controller 

the direct memory access controller is needed to allow hardware to access main memory independently of the CPU

8
New cards

how does the DMA controller work

it initiates data transfers, and while a transfer takes place the CPU carries out other tasks. Once the transfer is complete, the DMA controller sends an interrupt signal to the CPU. This allows the CPU to be fully utilised while slower I/O operations take place.

9
New cards

what is the kernel 

software part of the OS that manages the CPU and other hardware. It handles communication between hardware, software, and memory

10
New cards

what is the kernel responsible for

process, device and memory management, interrupt handling, and I/O file communication

11
New cards

what’s one of the most important tasks of an OS and how can this be achieved 

hiding the complexities of the hardware from users. This can be done by using a GUI, device drivers, running background utilities, and simplifying the saving/retrieving of data from memory and storage devices

12
New cards

what are some roles process management can carry out

multitasking, process states, scheduling routine algorithms and interrupt handling

13
New cards

what is multitasking

it allows computers to run multiple processes (tasks) apparently simultaneously. Scheduling ensures multitasking operates properly, and the kernel overlaps the execution of processes based on the scheduling algorithm used

14
New cards

what is a process

a program that has started to be executed. Each consumes system storage and other resources, and as more tasks are started, the system may slow or begin to run out of storage

15
New cards

what’s one way scheduling improves multitasking

it can ensure processes don’t clash by deciding which processes should be carried out. They’re swapped out so fast it looks like they’re running at the same time 

16
New cards

how does multitasking benefit computers 

it makes sure resources are best used by monitoring the state of a process. They can be running, ready, or blocked.

17
New cards

what are the two types of multitasking operating systems 

preemptive and non-preemptive 

18
New cards

what’s a preemptive system

a more flexible way of scheduling: a process can be interrupted while running and is only assigned resources for a limited time. It may involve prioritising processes, so lower-priority processes may lack the resources they need

19
New cards

what is a non-preemptive system

it’s a more rigid form of scheduling. Processes are placed in a queue and served first in, first out, but if a process with a long burst time is running, other processes may be deprived of resources. Processes cannot be interrupted; they must finish or switch to a waiting state, and allocated resources stay with a process until its burst time is completed or it switches to a waiting state

20
New cards

what is low level scheduling

it decides which process in the ready state can now be put into the running state based on its priorities. It ensures response time is good and the system remains stable 

21
New cards

what is a process control block (PCB)

a data structure which contains all the data needed for a process to run; it is created in memory ready to receive data when the process is executed

22
New cards

what does the PCB store

the current process state (e.g. running), process privileges (what resources it can access), its priority, the process ID, and the CPU time needed to complete the process
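
To make these fields concrete, here is a minimal sketch of a PCB as a Python data class; the field names and the ProcessState values are illustrative assumptions rather than a real OS structure.

    from dataclasses import dataclass
    from enum import Enum

    class ProcessState(Enum):           # the three states a process can be in
        READY = "ready"
        RUNNING = "running"
        BLOCKED = "blocked"

    @dataclass
    class ProcessControlBlock:
        pid: int                        # process ID
        state: ProcessState             # current process state (e.g. running)
        priority: int                   # scheduling priority
        privileges: list                # resources the process may access
        cpu_time_needed: float          # CPU time needed to complete the process

    # example: a PCB created when a new process arrives in memory
    pcb = ProcessControlBlock(pid=42, state=ProcessState.READY, priority=5,
                              privileges=["disk", "network"], cpu_time_needed=12.5)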

23
New cards

what happens when a process is in memory 

once in memory, the state of a process can change 

24
New cards

when is a process changed to the ready state

when a process just arrives in memory & a PCB is made or when a process in the running state is halted by an interrupt

25
New cards

when does a process change into the running state

when a process in the ready state is given access to the CPU by the dispatcher it’s changed to the running state 

26
New cards

when is a process put in a blocked state 

when a process in the running state cannot progress until some event has occurred, it’s changed to the blocked state

27
New cards

what are the scheduling routine algorithms

first come first serve (FCFS), shortest job first (SJF), shortest remaining time first (SRTF) and round robin

28
New cards

what is first come first serve

the data added to a queue first is the data that leaves the queue first 

29
New cards

what are the similarities vs differences of SJF and SRTF

they’re both used when wanting to minimise process waiting time, but SJF is non-preemptive while SRTF is preemptive

30
New cards

how does SJF work

the process requiring the least CPU time is executed first 
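
A minimal sketch of the idea, using hypothetical burst times (not from the source): the same set of processes is scheduled in arrival (FCFS) order and in SJF order, and the average waiting times are compared.

    def average_waiting_time(burst_times):
        # each process waits for the total burst time of the processes run before it
        waiting, elapsed = 0, 0
        for burst in burst_times:
            waiting += elapsed
            elapsed += burst
        return waiting / len(burst_times)

    bursts = [7, 3, 1, 4]                           # hypothetical CPU burst times
    print(average_waiting_time(bursts))             # FCFS (arrival order): 7.0
    print(average_waiting_time(sorted(bursts)))     # SJF (shortest first): 3.25

Running the shortest bursts first gives the lower average waiting time, which is why SJF is used when waiting time is to be minimised.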

31
New cards

how does SRTF work

processes are placed in the ready queue as they arrive, but when a process with a shorter burst time arrives, the existing process is removed and the shorter process is executed first

32
New cards

how does a round robin work

a fixed time slice (a quantum) is given to each process; once a process has used up its time slice it’s removed and placed at the back of the ready queue (or in the blocked queue if it is waiting for an event), and another process from the ready queue is executed in its own time slice. Context switching saves the state of the pre-empted process

33
New cards

what happens if a process finishes early in a round robin

if a process completes before the end of its time slice, the next process is brought in for its time slice
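
A minimal round-robin sketch, assuming each process is a (name, remaining burst time) pair and a hypothetical quantum of 2 time units; a process that finishes early simply leaves the queue and the next one is brought in.

    from collections import deque

    def round_robin(processes, quantum=2):
        # processes: list of (name, burst_time); returns the sequence of time slices run
        ready = deque(processes)
        trace = []
        while ready:
            name, remaining = ready.popleft()        # dispatcher takes the front process
            run = min(quantum, remaining)            # run for at most one time slice
            trace.append((name, run))
            remaining -= run
            if remaining > 0:                        # slice used up: back of the ready queue
                ready.append((name, remaining))      # (context switch saves its state)
        return trace

    print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)]))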

34
New cards

which interrupts trigger kernel mode

if a device interrupt, an exception, or a trap/software interrupt signal is sent, the system enters kernel mode

35
New cards

what does the kernel do when an interrupt is received 

it consults the interrupt dispatch table (IDT), which gives the address of the low-level routine that handles the event received. The kernel saves the state of the interrupted process on the kernel stack, and that state is restored once the interrupt has been serviced.

36
New cards

what is an interrupt dispatch table

a table that acts like a key that the CPU uses to find the appropriate interrupt routine

37
New cards

how are interrupts prioritised

using interrupt priority levels (IPLs, numbered 0-31). A running process is only suspended if the incoming interrupt’s IPL is higher than the process’s current IPL; otherwise the interrupt is saved in the interrupt register and handled when the IPL falls to the appropriate value

38
New cards

how do processes affect the cpu and how does memory management overcome it 

main memory may become fragmented, so memory management uses optimisation to determine which processes should be in memory and where they should be stored

39
New cards

how do processes and memory link

when a process starts up, it is allocated memory by memory management, and once it has completed, the OS deallocates its memory space

40
New cards

what is single allocation 

when all memory is allocated to a single application. This leads to inefficient use of main memory

41
New cards

what is paged memory (aka paging)

the OS retrieves pages from secondary storage as they are needed. Paging uses secondary storage to allow programs to exceed the size of available physical memory

42
New cards

what are pages in paging and what is used for physical memory 

pages are same-sized/ fixed blocks of data used for logical memory. Physical memory blocks are called frames

43
New cards

how does paging work

when a process is executed, its pages are loaded from logical memory into frames in physical memory. Page numbers are used as an index into a page table; each process has its own page table, which maps logical addresses to physical ones.
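
A minimal sketch of the logical-to-physical translation, assuming a hypothetical page size of 1024 bytes and a plain dictionary standing in for the per-process page table.

    PAGE_SIZE = 1024                                 # hypothetical fixed page/frame size

    # hypothetical page table for one process: page number -> frame number
    page_table = {0: 5, 1: 2, 2: 7}

    def logical_to_physical(logical_address):
        page_number = logical_address // PAGE_SIZE   # index into the page table
        offset = logical_address % PAGE_SIZE         # position within the page
        frame_number = page_table[page_number]       # frame holding that page
        return frame_number * PAGE_SIZE + offset

    print(logical_to_physical(1500))                 # page 1, offset 476 -> 2524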

44
New cards

what are some things a page table may show

page number, page frame address, time of entry, and flag status

45
New cards

what is segmentation / segmented memory

when main memory is broken up into variable-sized memory blocks called segments. For execution, segments from logical memory are loaded into physical memory

46
New cards

how does segmentation use addresses

the address of a segment in physical memory is found through the segment number supplied by the user and the offset value (which decides the segment size) 

47
New cards

how are segment numbers used

as an index in a segment map table
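
A sketch of segment address translation under similar assumptions: a hypothetical segment map table holds each segment's base address and size, and is indexed by the segment number supplied along with the offset.

    # hypothetical segment map table: segment number -> (base address, segment size)
    segment_table = {0: (1000, 400), 1: (2200, 150), 2: (4000, 900)}

    def segment_to_physical(segment_number, offset):
        base, size = segment_table[segment_number]
        if offset >= size:                  # the offset must fall inside the segment
            raise ValueError("offset outside segment")
        return base + offset

    print(segment_to_physical(1, 100))      # 2200 + 100 = 2300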

48
New cards

whats the main difference between segmentation and paging

paging uses fixed-size memory blocks whereas segmentation uses variable-sized blocks. Pages are normally smaller than segments

49
New cards

what are some features of paging

because blocks are a fixed size, not all of a block is used, leading to internal fragmentation; the user provides a single value (the address) and the hardware decides the page size; the process is invisible to the user; procedures and data cannot be separated; and it uses static linking and dynamic loading

50
New cards

what are some features of segmentation

its variable block size reduces the risk of internal fragmentation but increases the risk of external fragmentation; the user supplies the address in two values; the process is visible to the user; procedures and data can be separated; and it uses dynamic linking and loading

51
New cards

what’s a big problem that comes with memory management

processes running out of RAM; this happens when the available amount of RAM is exceeded because multiple processes are running

52
New cards

how is the lack of RAM problem solved

through virtual memory. Pages that are needed are transferred in from the HDD to RAM, and pages not currently in use are transferred out of RAM to the HDD. This is called swapping and gives the illusion that there is still memory available

53
New cards

what are some advantages of virtual memory

large programs can be run when large amounts of memory aren’t available; only part of a program needs to be in memory; it leads to more efficient multi-programming; memory isn’t wasted on data that isn’t being used; it eliminates external fragmentation and reduces internal fragmentation; and it removes the need to buy and install more expensive RAM

54
New cards

what are some disadvantages of virtual memory

it can lead to disk thrashing: as main memory fills up, more data has to be swapped in and out, which can cause the HDD to fail prematurely. The system can reach the thrash point, where more time is spent moving pages than processing them, so processing speed decreases

55
New cards

what is disk thrashing and how can it be solved or helped

It’s a high rate of hard disk read/write head movements. Installing more RAM, reducing the number of programs running and reducing the size of a swap file are all things that can help  

56
New cards

what can be said about page replacement

It occurs when a requested page is not in memory, so a page fault occurs and the OS replaces one of the existing pages with the new page

57
New cards

what is a page fault

a type of interrupt raised by the hardware

58
New cards

what are the page replacements algorithms and what’s something they have in common 

first in first out (FIFO), optimal page replacement (OPR), least recently used (LRU) and clock page/second-chance page replacement. They all choose which page is to be replaced, with the same goal of minimising the number of page faults

59
New cards

what is first in first out (page replacement)

it keeps track of all pages in memory using a queue structure. The oldest is at front and is the first to be removed 

60
New cards

what problem does FIFO suffer from

Belady’s anomaly: increasing the number of page frames can result in more page faults. Ideally, increasing the number of frames should decrease the number of faults
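
A minimal FIFO page-replacement sketch that counts page faults; running it on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5 illustrates Belady's anomaly (9 faults with 3 frames, 10 with 4).

    from collections import deque

    def fifo_page_faults(reference_string, frame_count):
        frames, queue, faults = set(), deque(), 0
        for page in reference_string:
            if page not in frames:                  # page fault: page not in memory
                faults += 1
                if len(frames) == frame_count:      # memory full: evict the oldest page
                    frames.discard(queue.popleft())
                frames.add(page)
                queue.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_page_faults(refs, 3))    # 9 faults
    print(fifo_page_faults(refs, 4))    # 10 faults: more frames, more faults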

61
New cards

what is optimal page replacement

it replaces the page least likely to be used in the future, but it’s impossible to implement because at the time of a page fault the OS has no way of knowing when each page will next be needed. It’s mostly used for comparison studies and produces the fewest page faults

62
New cards

what is LRU

the page that hasn’t been used for the longest time is replaced. A linked list of all pages in memory (most recently used at the front) must be maintained to carry out this method
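
A minimal LRU sketch; an OrderedDict stands in for the linked list described above, keeping the most recently used page at one end and evicting from the other.

    from collections import OrderedDict

    def lru_page_faults(reference_string, frame_count):
        frames = OrderedDict()                  # ordered from least to most recently used
        faults = 0
        for page in reference_string:
            if page in frames:
                frames.move_to_end(page)        # mark as most recently used
            else:
                faults += 1                     # page fault
                if len(frames) == frame_count:
                    frames.popitem(last=False)  # evict the least recently used page
                frames[page] = True
        return faults

    print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 10 faults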

63
New cards

what is clock page replacement/ second-chance page replacement 

clock page replacement uses a circular queue structure with a single pointer serving as both head and tail. When a page fault occurs, the page pointed to is inspected: if its R (referenced) flag is 0, the page is removed and the new page is inserted; if R = 1, the flag is cleared (the second chance) and the next page is inspected, until a page with R = 0 is found
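
A sketch of a single clock-replacement step, assuming one R flag per frame: the pointer moves around the circular list, clearing R flags as it goes, until it finds a page with R = 0 to replace.

    def clock_replace(frames, r_flags, pointer, new_page):
        # frames: pages in memory; r_flags: referenced (R) bit for each frame
        while True:
            if r_flags[pointer] == 0:               # R = 0: replace this page
                frames[pointer] = new_page
                r_flags[pointer] = 1                # the new page counts as referenced
                return (pointer + 1) % len(frames)  # pointer position for next time
            r_flags[pointer] = 0                    # R = 1: clear it (second chance)
            pointer = (pointer + 1) % len(frames)

    frames, r_flags = [3, 7, 9], [1, 0, 1]
    pointer = clock_replace(frames, r_flags, 0, 5)
    print(frames, pointer)                          # page 7 replaced by 5; pointer now 2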

64
New cards

how is page replacement done

by swapping pages between the storage disk and memory. If the page to be removed has been modified while in memory, it must be written back to the disk; if not, no write-back is needed

65
New cards

process management vs memory management 

process management decides which processes will be executed in which order, while memory management decides where in memory data will be stored and how much 

66
New cards

whats something both a compiler and interpreter construct

a symbol table. Interpreters also allocate space for the data used by the program. An interpreter checks each statement and outputs any errors, which can be corrected before the statement is executed; control then returns to the interpreter so the next statement can be checked

67
New cards

what are four stages of compiling a program

lexical analysis, syntax analysis, code generation, and optimisation

68
New cards

what is lexical analysis

where unnecessary characters are removed and keywords, constants and identifiers are replaced by tokens. A keyword or symbol table holds all the tokens for the words and symbols in a programming language. Every program uses the same keyword table, but variables can be added to the symbol table.
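
A minimal sketch of the idea, assuming a tiny hypothetical keyword table and treating anything unrecognised as an identifier to be added to the symbol table; real lexical analysers are far more thorough.

    # hypothetical keyword and operator tables shared by every program
    KEYWORDS = {"IF": 1, "THEN": 2, "ELSE": 3, "PRINT": 4}
    OPERATORS = {"=": 10, "+": 11, ">": 12}

    def lexical_analysis(source):
        symbol_table, tokens = {}, []
        for word in source.split():                     # strip unnecessary whitespace
            if word.upper() in KEYWORDS:
                tokens.append(("KEYWORD", KEYWORDS[word.upper()]))
            elif word in OPERATORS:
                tokens.append(("OPERATOR", OPERATORS[word]))
            elif word.isdigit():
                tokens.append(("CONSTANT", int(word)))
            else:                                       # identifier: add to symbol table
                symbol_table.setdefault(word, 100 + len(symbol_table))
                tokens.append(("IDENTIFIER", symbol_table[word]))
        return tokens, symbol_table

    print(lexical_analysis("IF total > 10 THEN PRINT total"))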

69
New cards

what is syntax analysis

tokens are checked and parsed to ensure the correct syntax has been used; if not, error messages are produced. Only when the entire program has been analysed are all the errors reported. Programmers then have to go back and edit the source code and re-compile it

70
New cards

what is code generation 

the stage where object code is generated, it must be syntactically correct and is no longer readable to humans

71
New cards

what’s an advantage of object code

it can be executed by the CPU, or an intermediate form that is only converted when the program is loaded, allowing for greater flexibility

72
New cards

what is optimisation

it tries to create the most efficient program, one that performs its tasks with minimal resources: storage space, memory and CPU time. Some optimisation can be done after syntax analysis or as part of code generation. Not every compiler can optimise the object code produced

73
New cards

what is a syntax diagram

it graphically shows the syntax rules a compiler needs to check are being followed

74
New cards

what’s BNF

Backus-Naur form (BNF) is a meta-language that uses a set of symbols to describe the syntax rules of a programming language. It can also use recursive definitions, where an item’s definition refers to itself

75
New cards

what are some BNF notation and what do they mean

< > encloses an item, ::= separates an item from its definition, | between items indicates a choice, and ; is put at the end of a rule
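
As an illustration of the notation and of a recursive definition, take the hypothetical rules <digit> ::= 0|1|2|3|4|5|6|7|8|9 ; and <integer> ::= <digit>|<digit><integer> ; the sketch below simply mirrors them as a recursive check.

    def is_digit(s):
        # <digit> ::= 0|1|2|3|4|5|6|7|8|9 ;
        return len(s) == 1 and s in "0123456789"

    def is_integer(s):
        # <integer> ::= <digit>|<digit><integer> ;   (a recursive definition)
        if is_digit(s):
            return True
        return len(s) > 1 and is_digit(s[0]) and is_integer(s[1:])

    print(is_integer("2705"))   # True
    print(is_integer("27a5"))   # False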

76
New cards

what is RPN

reverse polish notation is a way of representing an arithmetical or logic expression without the use of brackets or special punctuation. It uses postfix notation, where the operator is placed after the variable it acts on

77
New cards

why do compilers use RPN

because any expression can be processed from left to right without any backtracking

78
New cards

how does RPN use a stack

operands are pushed onto the stack from left to right; when an operator is encountered, the top two values are popped, the operation is applied, and the result is pushed back onto the stack. This is repeated until the end of the expression is reached and a single value remains
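
A minimal sketch of stack-based RPN evaluation, assuming tokens are separated by spaces and only the four basic arithmetic operators are supported.

    def evaluate_rpn(expression):
        stack = []
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        for token in expression.split():            # read the expression left to right
            if token in ops:
                b = stack.pop()                     # the top two values are operated on...
                a = stack.pop()
                stack.append(ops[token](a, b))      # ...and the result is pushed back
            else:
                stack.append(float(token))          # operands are pushed onto the stack
        return stack.pop()                          # a single value remains at the end

    print(evaluate_rpn("3 4 + 5 *"))   # the RPN form of (3 + 4) * 5, giving 35.0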