OCR A Level Computer Science Definitions

79 Terms

1
New cards

1.1.1 Program Counter (PC)

A register used to hold the address of the next instruction to be executed. It is possible to switch tasks that the CPU is carrying out by changing the address held in the PC.

2
New cards

1.1.1 Data Bus

The connection between the CPU and RAM - literally a collection of wires. Used to send data between CPU and RAM.

3
New cards

1.1.1 Control Bus

Used to connect the various components in the CPU and also to RAM. Used to send control signals to co-ordinate the timing/workings of the CPU and also to tell RAM whether a read/write operation is needed.

4
New cards

1.1.1 Address Bus

Used to connect the CPU and RAM so that addresses which need to be read from or written to can be sent to the memory controller.

5
New cards

1.1.1 Control Unit

Used to coordinate the actions of the CPU during the FDE cycle. Sends signals down the control bus to components.

6
New cards

1.1.1 FDE Cycle

The purpose of the CPU - the Fetch, Decode, Execute cycle. Happens continuously whilst power is on. The number of FDE cycles per second is called the "clock speed" and is measured in GHz (gigahertz).

7
New cards

1.1.1 Performance - Speed

The number of FDE cycles a CPU can carry out per second. Has a direct impact on the speed at which program instructions can be executed. Measured in GHz - billions of cycles per second.

8
New cards

1.1.1 Performance-Cores

A core is a discrete processing unit inside a CPU - the more cores a CPU has, the more instructions can be executed simultaneously. This has a direct impact on the multitasking ability of a CPU.

9
New cards

1.1.1 Performance-Cache

Cache is a small amount of memory inside a CPU. Usually comes in different "levels" which differ in speed and size. L1 cache is the fastest and smallest, L3 is the slowest and largest. All are much, much faster than accessing RAM and therefore it is used to hold frequently accessed instructions to speed up program execution. Uses predictive algorithms to pre-fetch what the CPU thinks it will need to execute next.

10
New cards

1.1.1 Pipelining

Pipelining is a method of processing instructions more efficiently. Instructions are ordered by the CPU in such a way as to avoid idle cycles where nothing is being executed. In simple terms, the CPU executes one instruction, whilst decoding the second, whilst fetching a third.

11
New cards

1.1.1 Von Neumann and Harvard Architecture

The Von Neumann architecture is the basic concept on which all modern computers are based. It loosely consists of input, output, a memory store to hold programs and a CPU which has an ALU, Control Unit and Registers. One common criticism of it is the fact that instructions and data are stored in the same memory space. The Harvard architecture seeks to remove that bottleneck by storing instructions separately to data.

12
New cards

1.1.2 CISC vs RISC processors.

CISC - Complex Instruction Set, RISC - Reduced Instruction Set. CISC processors are those made by AMD and Intel; they have large instruction sets designed to make them easier to program for. However, there are downsides: complex instructions take more cycles to execute, and the CPUs are usually more power hungry and produce more heat. RISC CPUs are usually found in phones, such as those made by Apple, ARM, Qualcomm (Snapdragon) etc. They have the advantage that each instruction usually takes one cycle to execute, and they consume low amounts of power and produce less heat. However, they have the disadvantage of fewer instructions, meaning more may be needed to carry out tasks that would take fewer instructions on a CISC processor.

13
New cards

1.1.2 GPUs and their uses

Graphics Processors are highly optimised to perform calculations in parallel on large amounts of data. Other than making them ideal for generating graphics, this also makes them suitable for scientific research, medical research, cryptography and cryptocurrency applications

14
New cards

1.1.2 Parallel Systems

Parallel systems are those that are optimised for running multiple tasks simultaneously.

15
New cards

1.1.1 Register

A small, fixed width piece of memory inside the CPU. Usually between 16-128 bits wide. Registers are used to hold data or instructions as they are being worked on. Registers can also be used to represent the status of various CPU components.

16
New cards

1.1.1 Accumulator (ACC)

A general purpose register, usually used to hold the result of the last calculation or instruction carried out.

17
New cards

1.1.1 Memory Address Register (MAR)

Used to hold the address in memory which will be accessed next.

18
New cards

1.1.1 Memory Data Register (MDR)

Used to hold the data which has just been fetched from RAM or is about to be written to RAM.

19
New cards

1.1.1 Current Instruction Register (CIR)

Where the instruction about to be run is decoded - held in the form Operator, Operand.

20
New cards

1.1.3 Uses of Magnetic Storage

Magnetic storage includes hard drives and backup tapes. Hard drives are largely being phased out in favour of solid state drives, even in server-based systems, as SSD longevity and capacity increase while costs fall.

21
New cards

1.1.3 Uses of Flash storage

Flash storage is any non-volatile storage which uses flash memory chips to store data. Has huge advantages over other methods of storage including physical size (tiny), density of storage, low power requirements and robustness.

22
New cards

1.1.3 Uses of Optical storage

Optical storage is largely relegated to games consoles and TV-connected media players. A range of discs which are read via a laser can be used to store data. Examples are DVD, CD and Blu-Ray. Blu-Ray is the most common standard for data storage due to its capacity of between 25-50 GB.

23
New cards

1.1.3 RAM vs ROM

RAM = Random Access Memory. Used to hold open, running programs and data. Has to be managed by the operating system to avoid programs overwriting each other or, worse, accessing the data of another program. ROM = Read Only Memory. Used to hold program code in a non-volatile flash memory chip. Usually refers to the BIOS/UEFI, which is the code used to boot up a system at power on.

24
New cards

1.1.3 Virtual Storage

The concept of expanding the amount of RAM available for open, running programs. The operating system uses a section of secondary storage and makes it "appear" as extra memory. The OS manages the movement of pages of data in and out of physical memory as and when necessary.

25
New cards

1.1.3 function and purpose of operating systems

The operating system is software which controls all hardware and software in the system. Consists of a user interface, kernel and drivers. Operating systems provide a platform for all software to run on - what this basically means is it provides a core set of services so you don't have to write code in every single program for simple things like controlling mouse movement, printing, saving and so forth. The OS will also control the resources available in a "fair" way to prevent programs monopolising the CPU, RAM or Storage.

26
New cards

1.1.3 Memory Management

Memory management is a key role of the OS. RAM is split into "pages" and these are allocated to programs. Pages provide flexibility and efficiency because they can be moved around and re-organised as necessary. The OS allocates RAM to programs and ensures programs cannot overwrite each other or access memory that does not belong to them. It also includes the handling of virtual memory where necessary. Without memory management, an OS could not offer multitasking or multiprogramming.

27
New cards

1.1.3 Types of OS Memory Management

1.Paging
2.Segmentation
3.Virtual memory

28
New cards

1.1.3 Interrupts, the role of interrupts and Interrupt Service Routines (ISR)

Interrupts are a key hardware feature of all CPUs. An interrupt is a request for CPU time - in other words, something has happened and it requires CPU execution time to process. Interrupts can be "requested" by hardware devices or software. The CPU recognises that an interrupt has occurred, but it is not the job of the CPU to decide when an interrupt should be serviced - this is controlled by an algorithm in the OS.

29
New cards

1.2.1 Scheduling

Scheduling is the algorithm used by the OS to decide when a program should be executed by the CPU. The scheduling algorithm must perform a careful balancing act to ensure all processes receive attention from the CPU, that programs receive adequate execution time so that they appear to run smoothly and finally, must not allow a process to prevent others from executing.

30
New cards

1.2.1 Operating Systems

Operating systems are usually optimised/specialised to handle a certain type of environment. Most devices we use run single user operating systems, designed to give full access to all system resources to one user and their requirements. However, in a networked or distributed environment, the OS needs to handle the resources of multiple machines and users to ensure that performance is maintained for all users.

31
New cards

1.2.1 BIOS

Basic Input Output System. More modern systems make use of UEFI - Unified Extensible Firmware Interface. Both exist to boot a PC; the code is stored in ROM on the motherboard and called by the CPU at power on. The BIOS is designed to detect hardware, perform basic self-tests and then find a device to boot from, such as an SSD/HDD.

32
New cards

1.2.1 Device drivers

A device driver is a piece of software which allows hardware to communicate with the operating system. An OS provides "generic" services and hardware calls, it is up to the manufacturers of specific devices to write the code which will allow their hardware to interact with an operating system, taking care of all the specifics of how it works. This is essential as it simply isn't possible for OS developers to write code for every possible device that may be connected to a system, it also enables future devices to work with operating systems without the need for them to be updated.

33
New cards

1.2.1 Virtual Machines

Virtual machines are software which emulates (pretends to be) all of the hardware in a PC system. They are extremely useful in server environments as they allow "machines" to be more easily managed. For example, one physical server may run many virtual machines - a much more efficient use of hardware which ensures that resource use is maximised. VMs can also be backed up, moved and restored as necessary, as well as have their states saved - making it possible to recover to an earlier known-good point in the case of failure.

34
New cards

1.2.2 Utilities

Small programs which usually run in the background. They normally have one specific purpose and are often used for the maintenance of systems. For example, a backup utility would run regularly and save changes to the backup device specified. Other examples include anti-virus, compression, encryption and file management.

35
New cards

1.2.2 Open vs Proprietary source

Open source is where the code for a program is made freely available. Usually it can be used, re-used, changed and re-released as necessary. It can be excellent for the rapid development of new systems and can reduce workload because parts of the code will be pre-written. The only downside is there is no guarantee as to the quality of the code, how robust it will be and whether it will be supported in future. Closed source or "proprietary" code is usually the property of a company or individual that has chosen not to share the code with anyone outside their organisation. This is usually done to protect the intellectual property of the developer and enable them to make money from their work/creations.

36
New cards

1.2.2 Translators/Interpreters

Tools used to turn high level code into machine code. Remember that the CPU cannot understand anything other than binary strings which relate to an instruction in the instruction set. Interpreters are usually used to provide cross platform compatibility for code - think JavaScript on the web. An interpreter is installed on the local machine which takes the code and converts it as it is executed, producing machine specific code as it does so. This has the huge advantage of "write once, run anywhere" but has the obvious overhead of several different interpreters needing to be created.

37
New cards

1.2.2 Compilers

Compilers take high level code and convert to an executable in one go. They link/load other libraries as necessary and build this code into the executable automatically. Compiled programs usually run more quickly than interpreted and can be easily distributed between similar systems.

38
New cards

1.2.2 Assemblers

Assemblers are specific to assembly language and were one of the first types of translator to be developed. They allow the quick conversion of assembly into machine code, usually producing an executable. An assembler will take care of some memory management, the automatic calculation of addresses and simple variable control, but they offer very little support to developers beyond that.

39
New cards

1.2.2 Stages of Compilation

1. Lexical analysis removes unnecessary information such as comments and spaces, and outputs a token stream which represents the code written.
2. Syntax analysis takes the token stream, checks it is valid and outputs an abstract syntax tree.
3. Code generation is the complex part which parses the syntax tree, generates appropriate machine code for each part, links in external libraries and produces the executable.
4. Optimisation is carried out by the compiler to increase execution speed and reduce executable size. For example, it may re-order instructions into a sequence it thinks is more likely to happen.

40
New cards

1.2.2 Libraries

Libraries contain code which is likely to be used over and over again in programs, so instead of re-inventing the wheel, we put it in a library and simply import it into our code and reference it. The problem with this is that executables can rapidly grow in size, so we mostly use dynamic linking of libraries (DLLs in Windows), which means that the code can reference a library but it is only loaded into memory when it is required, saving space. Static libraries, such as those which may be created by a developer themselves, are included in the executable. This has one advantage - it is guaranteed that the library is available. Dynamic libraries may not be present for some reason, or may have been updated and now be incompatible.

41
New cards

1.2.3 Types of Lifecycles

1.Waterfall
2.Spiral
3.Agile

42
New cards

1.2.4 Programming Paradigms

Paradigms are the concepts on which a language is built - essentially, "different ways of doing the same thing." In programming, it is the acknowledgement that different languages may achieve the same things in different ways, and that there are subtle advantages and disadvantages to this. MAIN POINT: different languages have different specialisations, and just because something is possible in a language doesn't mean it is optimal.

43
New cards

1.2.4 Procedural Languages

Procedural languages are those which are designed to create programs which are structured in small "blocks" of code called procedures or functions. A procedure is simply a block of code, with a name, which can be "called" from anywhere within the code. They have the obvious advantage of simplicity of design and the fact that these code blocks can be reused. Procedural languages are intrinsically linked to the concept of stacks - each time a procedure is called, the current state of the program must be pushed to the stack. There are hardware and software implications of this.

44
New cards

1.2.4 Assembly language

Assembly language consists of short three or four letter mnemonics which directly represent machine code CPU instructions. It exists purely to make machine code programming human accessible: instead of writing programs in pure binary, we can write in assembly language and then use an assembler to convert the code into binary. It was one of the earliest types of programming language. LMC (Little Man Computer) is a "learning language" designed to introduce you to assembly through around 10 simple commands; it is essential that you practise it for your exam.

45
New cards

1.2.4 Modes of addressing memory

When issuing an assembly language instruction, it usually comes in the form Operator - Operand, or more simply Instruction, Data. The data in an instruction can be a number, an address or an offset. Which one is determined by its "addressing mode", which is represented by a symbol:
Immediate: interpret the operand as a number.
Direct: interpret the operand as an address.
Indirect: an address which, when visited in memory, holds the address we actually want.
Indexed: a base address to which we add an offset in order to find the address we want.
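The four addressing modes can be sketched in a few lines of Python. This is a hypothetical illustration only - the memory contents, helper names and index register value are all invented, not part of any real instruction set:

```python
# A Python list standing in for RAM. Contents are invented for illustration.
memory = [1, 20, 30, 40, 50]

def immediate(operand):           # the operand IS the value
    return operand

def direct(operand):              # the operand is an address
    return memory[operand]

def indirect(operand):            # the operand is the address of an address
    return memory[memory[operand]]

def indexed(operand, index_reg):  # base address plus an offset from a register
    return memory[operand + index_reg]

print(immediate(3))     # 3  - the number itself
print(direct(3))        # 40 - the contents of address 3
print(indirect(0))      # 20 - address 0 holds 1, so we fetch address 1
print(indexed(1, 2))    # 40 - the contents of address 1 + 2
```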

46
New cards

1.2.4 Object-oriented languages

Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data and code: data in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods).

47
New cards

1.2.4 Key concepts of Object oriented programming languages

1.Classes
2.Objects
3.Methods
4.Attributes
5.Inheritance
6.Encapsulation
7.Polymorphism

48
New cards

1.3.1 Lossy vs Lossless compression

Lossy compression reduces the size of files by removing unnecessary information, or by reducing quality in a way which is either "acceptable" or likely to not be noticed by the user. In music, for example, MP3 removes frequencies that the human ear cannot resolve. This has the obvious advantage of large reductions in file size, but the disadvantage that we can never restore the original quality.
Lossless compression reduces the file size without losing any quality or the ability to reproduce the original file. Common techniques include dictionary and run length encoding, which reduce data by removing the need to represent repeated data explicitly.

49
New cards

1.3.1 Run length encoding and dictionary coding for lossless compression.

Run length encoding - anywhere in a file where there is repeated data (colour in an image, words or letters in a text file), store only the colour and number of pixels, or store only the letter and the number of repetitions. This reduces the amount of data stored. Dictionary encoding - commonly used to compress text. Represents each new word as a "token" in a lookup table. The document is then reduced to a collection of numbers. This reduces overall file size due to only storing each word once.
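Run length encoding is simple enough to sketch directly. The functions below are an assumed illustration (not any exam board's reference code): repeated characters are stored once with a count, and decoding expands them again with no loss:

```python
# Minimal run length encoding sketch: store (value, repeat-count) pairs.
def rle_encode(text):
    result = []
    i = 0
    while i < len(text):
        run = 1
        while i + run < len(text) and text[i + run] == text[i]:
            run += 1                 # count how far the repeated value extends
        result.append((text[i], run))
        i += run
    return result

def rle_decode(pairs):
    # Lossless: expanding the pairs reproduces the original exactly.
    return "".join(ch * n for ch, n in pairs)

encoded = rle_encode("AAAABBBCCD")
print(encoded)   # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == "AAAABBBCCD"
```

Note that RLE only helps when the data actually contains runs; text with no repetition can get larger, which is why dictionary encoding is preferred for prose.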

50
New cards

1.3.1 Symmetric vs Asymmetric encryption

Symmetric encryption uses a single shared key: the process used to encrypt is reversed with the same key to decrypt. Its weakness is that the key must first be exchanged securely - if it is intercepted, the data is compromised. Asymmetric encryption uses the concept of public and private keys to encode and decode data. It relies on an incredibly complex mathematical discovery: it is possible to create "one way" algorithms where the method used to encode cannot simply be reversed to return the original data. This is further helped by the fact that "key pairs" mean one key can be published and used to encrypt, and then a separate, secret key can be used to decrypt. This is essential for secure transmission of data.

51
New cards

1.3.1 Hashing

Hashing is the process of calculating a fixed length value from another value, such as a string. A value is passed into a Hash Function and the function returns a Hash Value.
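Python's standard library demonstrates this directly. SHA-256 always produces a 256-bit (64 hex character) hash value, however long the input, and a tiny change to the input produces a completely different digest:

```python
import hashlib

# Hash two nearly identical inputs. The digest length is fixed at 64 hex
# characters regardless of input size; the digests themselves differ entirely.
h1 = hashlib.sha256(b"hello").hexdigest()
h2 = hashlib.sha256(b"hello!").hexdigest()

print(len(h1))       # 64
print(h1 == h2)      # False
print(h1[:8])        # 2cf24dba - first bytes of the well-known SHA-256 of "hello"
```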

52
New cards

1.3.1 Applications of Hashing

File hashing - a quick and convenient way of ensuring two copies of a file are indeed identical and have not been corrupted or tampered with. File systems - a quick method of locating a file by generating a hash which points to its location.

53
New cards

1.3.2 Database (Definition)

A database is an organised collection of information, usually manipulated using SQL.

54
New cards

1.3.1 Primary key

A unique identifier for a record in a database table

55
New cards

1.3.1 Foreign key

A primary key from another table acting as a link.

56
New cards

1.3.2 Types of Databases

Flat file: The simplest type of database - all data is kept in one table. Has significant disadvantages such as repeated data (redundancy) and difficulty keeping data accurate/up to date.
Relational: Stores information in tabular form, with rows and columns representing different data attributes and the various relationships between the data values (e.g. customer, appointment etc). Tables are linked by primary/foreign keys, which helps to reduce redundancy.

57
New cards

1.3.2 Normalisation (of Databases)

Normalisation: The process of taking a database design and turning it into a relational design. Normalisation removes repetition, separates data into tables and ensures that there are no many-to-many relationships. There are 3 normal forms that we care about at A Level.
Entity: An object in a database, e.g. "Vehicle", "Customer." Always singular.
Entity relationship modelling: A method of displaying the relationships between tables in a database. Relationships can be 1:1, 1:Many or Many:Many. A many-to-many relationship breaks normalisation rules and indicates a link table must be created.

58
New cards

1.3.2 Normalisation (to 3NF)

1NF: A table is in First Normal Form (1NF) if there are no repeating groups. In other words, each column must contain only a single value and each row must have an item in every column. This can usually be achieved by splitting the data into two tables, separating the repeated data into its own group.
2NF: To move to 2NF, any partial dependencies must be removed - no attribute should depend on only part of a composite primary key. This removes many-to-many relationships and repeated data.
3NF: Third Normal Form removes "transitive dependency." In plain English, this means that the data in each table should depend only on the primary key. If it doesn't, it needs to be in another table.

59
New cards

1.3.2 SQL (Structured Query Language)

A programming language for storing and processing information in a relational database.
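A short sketch of the core SELECT ... FROM ... WHERE pattern, run through Python's built-in sqlite3 module. The Customer table and its rows are invented purely for illustration:

```python
import sqlite3

# An in-memory database with a hypothetical Customer table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (id INTEGER PRIMARY KEY, name TEXT, town TEXT)")
conn.executemany("INSERT INTO Customer VALUES (?, ?, ?)",
                 [(1, "Ada", "Leeds"), (2, "Bob", "York"), (3, "Cy", "Leeds")])

# SELECT chooses columns, FROM names the table, WHERE filters the rows.
rows = conn.execute(
    "SELECT name FROM Customer WHERE town = 'Leeds' ORDER BY name"
).fetchall()
print(rows)   # [('Ada',), ('Cy',)]
```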

60
New cards

1.3.2 Referential integrity

Referential integrity refers to the accuracy and consistency of data within a relationship. In relationships, data is linked between two or more tables. This is achieved by having the foreign key (in the associated table) reference a primary key value (in the primary - or parent - table). Because of this, we need to ensure that data on both sides of the relationship remain intact. So, referential integrity requires that, whenever a foreign key value is used it must reference a valid, existing primary key in the parent table.

61
New cards

1.3.1 Transaction processing - ACID

Atomicity: All changes to data are performed as if they are a single operation - either all of the changes are performed, or none of them are. For example, in an application that transfers funds from one account to another, atomicity ensures that if a debit is made successfully from one account, the corresponding credit is made to the other.
Consistency: Data is in a consistent state when a transaction starts and when it ends. In the funds transfer example, consistency ensures that the total value of funds across both accounts is the same at the start and end of each transaction.
Isolation: The intermediate state of a transaction is invisible to other transactions. As a result, transactions that run concurrently appear to be serialised (happen one at a time in sequence). In the funds transfer example, isolation ensures that another transaction sees the transferred funds in one account or the other, but not in both, nor in neither.
Durability: After a transaction successfully completes, changes to data persist and are not undone, even in the event of a system failure. In the funds transfer example, durability ensures that the changes made to each account will not be reversed.

62
New cards

1.3.3 Network protocols

Network protocols are sets of established rules that dictate how to format, transmit and receive data so computer network devices -- from servers and routers to endpoints -- can communicate regardless of the differences in their underlying infrastructures, designs or standards.
Support for network protocols can be built into software, hardware or both.

63
New cards

1.3.3 Types of Network

1)LAN: A Local area network, usually confined to one small geographic location (e.g. a house or office). A LAN is an internal network, all devices are usually connected using switches and use dedicated equipment, meaning bandwidth is not shared with any external device or organisation.
2)WAN: A Wide Area Network, covers a large geographic area and usually makes use of shared telecommunications equipment meaning bandwidth is shared between multiple users.
3)DNS: Domain Name System - a hierarchy of public servers which are responsible for converting URLs into IP addresses.
4)PAN: A personal area network (PAN) connects electronic devices within a user's immediate area.

64
New cards

1.3.3 Protocols

Protocol Layers: Protocols are complicated and therefore are usually split into different layers. Each layer will have distinct functionality and will be self contained. A Layer expects certain specific inputs and will provide specific outputs to the next layer. The advantage of layers is that they can be implemented in hardware, software or split between both. This can have significant cost savings when developing a system, or could provide significant performance gains if implemented purely in hardware. A further advantage is that each layer can be implemented, updated or developed separately without affecting the others.

65
New cards

1.3.3 Packets

Packets are small, equally sized units of data designed to improve the reliability of data transmission.

66
New cards

1.3.3 Packet Switching

If a packet is lost on a network, the time taken to re-send it is far less significant than if an entire piece of data needs to be sent again. Furthermore, packets can be sent through different paths on a mesh network like the internet meaning they can take the most efficient/fastest path to their destination.

67
New cards

1.3.3 Network Security

Firewall: A firewall is a computer network security system that restricts internet traffic in to, out of, or within a private network.
Proxy: a system or router that provides a gateway between users and the internet. Therefore, it helps prevent cyber attackers from entering a private network.
Encryption: The process of disguising a message so that it cannot be understood by anyone but its intended recipient. Encryption requires the use of a key. The key is secret as to how the message has been disguised.
Network: Two or more computers - or other electronic devices - that are connected together for the purpose of communication.

68
New cards

1.3.3 Server models

Client Server: clients request services (files, web pages, logins) from one or more central servers, which hold the data and control security, backups and access rights.
Peer to peer: all machines have equal status - each can act as both client and server, sharing resources directly with the others without a central server.

69
New cards

1.3.3 Search Engine Indexing

The process of finding, exploring and storing data regarding web pages is called "indexing." To index pages, a search engine will use a program called a "crawler." A crawler is simply a program which visits websites, finds all the hyperlinks on that page and then visits those. It is obvious that this process rapidly grows in complexity and time requirements. As it visits each page it will store information it finds (meta data) in the index.

70
New cards

1.3.3 Metadata

Metadata is information that describes and explains data.

71
New cards

1.3.3 PageRank algorithm

PageRank is specific to Google. The initial idea came from a simple principle: the more times a website is linked to, the more important it must be. Strangely, this concept had not been explored before Larry Page and Sergey Brin attempted it, and it obviously turned out quite well. The principle was expanded to add weighting for the quality of the website linking to a site and other factors that could affect its "reputation." All of these factors are boiled down to return an indication of where each page should rank when searched for.
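The core idea can be sketched as an iterative calculation on a toy link graph. The three-page graph and the damping factor of 0.85 below are illustrative assumptions, not Google's actual data or parameters:

```python
# Toy PageRank: each page shares its rank equally among the pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
d = 0.85                                   # damping factor (assumed, commonly 0.85)
rank = {page: 1 / len(links) for page in links}

for _ in range(50):                        # iterate until the ranks settle
    new = {}
    for page in links:
        incoming = sum(rank[p] / len(outs)
                       for p, outs in links.items() if page in outs)
        new[page] = (1 - d) / len(links) + d * incoming
    rank = new

# C is linked to by both A and B, so it ends up with the highest rank.
print(max(rank, key=rank.get))   # C
```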

72
New cards

1.4.1 The Main Data Types (Ranked by Memory Usage)

Data types have a significant effect on the compilation of a program. Different data types require different amounts of memory and different methods of processing. Using an incorrect data type, e.g. integer when you need real, can cause significant logic errors in a program. Typical sizes (exact values vary by language):
1.Character (1 byte)
2.Boolean (often stored as 1-2 bytes, despite needing only 1 bit)
3.Integer (4 bytes)
4.Float/real (4 bytes)
5.String (varies with length - roughly 1-4 bytes per character plus overhead)

73
New cards

1.4.1 Two's Complement

The most common method of representing signed integers on computers, and more generally, fixed point binary values.

74
New cards

1.4.1 Two's Complement method

To find the two's complement of a binary number, take the one's complement (flip every bit) and then add 1 to the least significant bit (LSB).
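The method can be checked in a few lines of Python, here fixed at 8 bits (the width is an assumption for the example):

```python
# Two's complement in 8 bits: invert every bit, then add 1 to the LSB.
def twos_complement(value, bits=8):
    flipped = value ^ ((1 << bits) - 1)    # one's complement: flip all 8 bits
    return (flipped + 1) % (1 << bits)     # add 1, staying within 8 bits

print(format(twos_complement(5), "08b"))    # 11111011, the 8-bit pattern for -5
print(twos_complement(twos_complement(5)))  # 5 - negating twice restores the value
```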

75
New cards

1.4.1 Bitwise manipulation

Shifting allows us to quickly perform integer multiplication and division. Logical shifts ignore the sign bit and simply insert zeroes as necessary; arithmetic shifts preserve the sign bit.
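A quick demonstration of shifts as multiplication and division. Note that Python's `>>` on a negative integer behaves like an arithmetic shift, preserving the sign:

```python
x = 12           # binary 00001100

print(x << 1)    # 24 (00011000) - shifting left one place multiplies by 2
print(x >> 2)    # 3  (00000011) - shifting right two places divides by 4

# Python's right shift on a negative number acts arithmetically:
# the sign is preserved rather than zeroes being inserted.
print(-12 >> 2)  # -3
```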

76
New cards

1.4.1 Masking

A method of applying a Boolean expression, usually to a register, in order to explore the state of individual bits, rather than changing the entire value stored. AND masks are used to check whether a bit is on or off, OR is used to set a bit and XOR masks are often used as a form of encoding/encryption.
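The three mask types can be shown on a small example register. The status value and key below are invented for illustration:

```python
status = 0b0101                # a hypothetical 4-bit status register

print(bool(status & 0b0100))   # True   - AND mask: test whether bit 2 is on
print(bin(status | 0b0010))    # 0b111  - OR mask: set bit 1 without touching the rest
print(bin(status ^ 0b1111))    # 0b1010 - XOR mask: flip every bit

# XOR applied twice with the same key restores the original value,
# which is why XOR masks can act as a simple form of encoding.
key = 0b1011
assert (status ^ key) ^ key == status
```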

77
New cards

1.4.1 ASCII vs UNICODE

ASCII is traditionally a 7-bit standard, which can also be represented as 8 bits per character. Obviously, the more bits per character a standard has, the more symbols it can represent; the storage implication is that each character takes up more space. ASCII is limited to 128 characters, whereas UNICODE (depending on which encoding is used, typically 16 or 32 bits per character) can store every printable character in the known world. Even languages that are "extinct" are available in Unicode.
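Python's `ord`, `chr` and `encode` make the difference concrete: an ASCII character fits in one byte under UTF-8, while a character outside ASCII needs several:

```python
print(ord("A"))                   # 65 - the ASCII/Unicode code point for "A"
print(chr(65))                    # A  - back from code point to character
print(len("A".encode("utf-8")))   # 1  - ASCII characters need only 1 byte in UTF-8
print(len("€".encode("utf-8")))   # 3  - a non-ASCII character needs more bytes
```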

78
New cards

1.4.2 Arrays vs Tuples vs Records vs Lists

A)Arrays: A data structure which acts like a table. Each element is accessed via an indexing system, usually zero based. Arrays are fixed size and hold multiple values of the same data type; they cannot hold values of differing types.
B)Record: A custom data structure which holds values of differing data types. For example a record "person" may contain fields such as "name", "age", "height" and so forth. Variables can then be declared as type "person" and the fields accessed through the object.property method
C) Lists: A dynamic data structure which can grow and shrink with the requirements of the program. Each node in a list holds a piece of data and a pointer to the next piece of data in the list.
D)Tuples: A data type which holds an ordered collection of data that cannot be changed once created. Accessed in much the same way as an array.
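Rough Python analogues of the four structures (with the caveat that Python's built-in list is dynamic, so it stands in for both "array" and "list", and a dataclass stands in for a record; the names and values are invented):

```python
from dataclasses import dataclass

scores = [71, 64, 88]    # list: zero-indexed, and can grow or shrink
scores.append(90)

point = (3, 4)           # tuple: ordered and immutable
# point[0] = 5           # would raise a TypeError - tuples cannot be changed

@dataclass
class Person:            # record: named fields of differing data types
    name: str
    age: int

p = Person("Ada", 36)
# Fields are accessed through the object.property method.
print(p.name, scores[0], point[1])   # Ada 71 4
```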

79
New cards

1.4.2 Name structures to store data:

1.Linked list: a data structure consisting of nodes. Each node is a pair of "data" and "link" entities. The link points to the next node in the list. By manipulating the links we can grow, shrink, re-arrange and otherwise manipulate the data stored within. Dynamic.
2.Graph: A data structure similar in nature to a linked list in that it consists of connected nodes. However, in a graph each node can be connected to multiple other nodes.
3.Stack: A FILO (First in, last out) data structure which is used in countless computing applications. Data is either "pushed" on to the stack or "popped" off it. Elements cannot be accessed at random and must be stored and retrieved one at a time. Often used in programming (call stacks), operating systems, interrupts and so forth.
4.Queue: A First in, First Out (FIFO) data structure. Elements in the queue are removed in the order they were added.
5.Tree: A data structure consisting of nodes. Each node contains data and a number of pointers. The tree is organised in a "root" and "leaf" structure. There is one root node at the top of the tree, this may have zero or more "leaf" nodes coming off it. In turn, those leaf nodes may be connected to layers below. The only difference between a tree and binary tree is that a binary tree node may only have a maximum of two child nodes.
6.Hash Table: Basically a table of pointers to data. Each row of the table is accessed by hashing the input to discover the row to be accessed.
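The push/pop and FIFO behaviour of stacks and queues can be seen directly in Python, using a list as a stack and `collections.deque` as a queue (the element values are arbitrary):

```python
from collections import deque

# Stack: push with append, pop from the same end (last in, first out).
stack = []
stack.append("a")
stack.append("b")
stack.append("c")
print(stack.pop())        # c - the most recently pushed element comes off first

# Queue: append at one end, remove from the other (first in, first out).
queue = deque()
queue.append("a")
queue.append("b")
queue.append("c")
print(queue.popleft())    # a - elements leave in the order they were added
```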