APCSP Test 2_2

Big Ideas 3.12-3.18 and 4, Project STEM 4.8-4.13 and Unit 5

3.12 Calling Procedures

A procedure is a named group of programming instructions that defines a specific task. Procedures may have parameters, allowing them to accept input values, and can return values based on the execution of their instructions. The key distinction is that a procedure returns a value only if it is explicitly designed to do so.

Key Aspects of Procedures:

  • Parameters: Parameters are variables that allow passing information to procedures. They enable procedures to operate on different data without needing to rewrite them, thus promoting code reusability. For example, a procedure that calculates the area of a rectangle might take two parameters: length and width.

  • Return Values: Procedures can also return values. The return value is the output of the procedure after execution, and the type of return value can vary based on the programming language being used. For instance, a function in Python can return an integer, a string, or even a list.

  • Scope: Variables defined within a procedure generally have a scope limited to that procedure, meaning they can't be accessed outside it. This is crucial for avoiding variable conflicts and maintaining clean code.

  • Calling Procedures: To execute a procedure, it needs to be called or invoked within the program. This can be done by simply using the procedure's name followed by parentheses, including any required arguments.

  • Benefits: Procedures promote modular programming, allowing developers to break down complex problems into smaller, manageable tasks. This makes code easier to debug, test, and maintain over time.

Example of a Procedure:

Here’s a simple example of a procedure in Python:

def calculate_area(length, width):
    return length * width

In this code, calculate_area is a procedure that takes two parameters (length and width) and returns the product, representing the area of a rectangle.
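Calling the procedure looks like this (the argument values 5 and 3 are illustrative):

```python
def calculate_area(length, width):
    return length * width

# Call the procedure by name, passing arguments for each parameter;
# the return value can be stored in a variable.
area = calculate_area(5, 3)
print(area)  # 15
```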

By utilizing procedures effectively, programmers can enhance the structure and readability of their code, making programming more efficient overall.

3.13 Developing Procedures

Procedural Abstraction allows a procedure to be used by knowing only what it does, not how it works. The caller gets the result it needs without knowing how the procedure produces it, which lets the procedure fit into a larger program as a building block.

Modularity is the subdivision of a computer program into separate subprograms.

Procedural abstraction allows for generalized code and programming reuse.

Using procedural abstraction improves code readability, reduces duplicated code (which lowers storage use), and makes programs easier to maintain.

Using parameters helps with code generalization.
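A small hypothetical example of generalization through parameters: instead of writing one procedure per greeting, a single parameterized procedure handles any name.

```python
# With a parameter, one procedure works for any input value,
# so the same code is reused instead of rewritten.
def greet(name):
    return "Hello, " + name + "!"

print(greet("Ada"))    # Hello, Ada!
print(greet("Grace"))  # Hello, Grace!
```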

3.14 Libraries

Libraries are files that contain procedures that can be used in programming.

A software library contains procedures that may be used in creating new programs.

Existing code segments can come from internal or external sources such as libraries or previously written code.

The use of libraries simplifies the task of creating complex programs.

APIs, Application Program Interfaces, are specifications for how the procedures in a library behave and can be used.

Documentation for an API/Library is necessary for understanding the behaviors provided by the API/Library and how to use them.

Libraries can speed up development but also encourage a standardized approach to programming as many developers use the same libraries (shared knowledge).
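For example, Python's standard `math` library provides procedures we can call based only on their documented API, without knowing how they are implemented:

```python
import math  # a standard library of mathematical procedures

# sqrt and pi are described in the library's documentation;
# we use them without seeing their internal code.
print(math.sqrt(16))      # 4.0
print(round(math.pi, 2))  # 3.14
```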

3.15 Random Values

Random(a, b) returns a random integer from a to b, where a and b are inclusive (so Random(1, 3) can return 1, 2, or 3).

Using random number generation in a program means each execution may produce a different result.

We can use the results from randomization to program simulations.

You are creating a Magic Eight Ball program and you want “Yes” to show up 50% of the time. What possible code segment could you use?


a) Random(1,100) <= 50
b) Random(1,100) = 50
c) Random(1,10) <= 5
d) Random(1,10) >= 5
(a and c. Note that Random(1,10) >= 5 is true for 6 of the 10 values, or 60%, so d does not give 50%; Random(1,10) >= 6 would.)
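Option (c) can be sketched in Python with `random.randint`, which, like Random(a, b), is inclusive on both ends:

```python
import random

def magic_eight_ball():
    # randint(1, 10) returns an integer from 1 to 10 inclusive;
    # values 1-5 are half of the outcomes, so "Yes" appears 50% of the time.
    if random.randint(1, 10) <= 5:
        return "Yes"
    return "No"
```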

A spinner with 2 green, 2 orange, and 2 blue sections can be modeled with Random(1, 6): each color corresponds to 2 of the 6 equally likely values, giving each color a 1/3 chance.
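A minimal sketch of that spinner, mapping ranges of the random value to colors:

```python
import random

def spin():
    # 1-2 -> green, 3-4 -> orange, 5-6 -> blue (each color: 2/6 = 1/3)
    n = random.randint(1, 6)
    if n <= 2:
        return "green"
    elif n <= 4:
        return "orange"
    return "blue"
```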

3.16 Simulations

Simulations are abstractions of more complex objects or phenomena for a specific purpose.

A simulation representation uses varying sets of values to reflect the changing state of a phenomenon.

Simulations often mimic real-world events with the purpose of drawing inferences, allowing investigation of a phenomenon without the constraints of the real world.

The process of developing an abstract simulation involves removing specific details or simplifying functionality.

Simulations can contain bias derived from the choices of real-world elements that were included or excluded.

Simulations are most useful when real-world events are impractical for experiments (e.g., too big, too small, too fast, too slow, too expensive, or too dangerous).

Simulations facilitate the formulation and refinement of hypotheses related to the objects or phenomena under consideration.

Random number generators can be used to simulate the variability that exists in the real world.

Overall, simulations are safer and less expensive than real-world experiments; they can also expose safety issues that may not have been previously considered and provide insight into a range of possibilities.

3.17 Algorithmic Efficiency

A decision problem is a problem with a yes/no answer. An optimization problem is a problem with the goal of finding the best solution among many (e.g., what is the shortest path?).

Efficiency is an estimation of the amount of computational resources used by an algorithm. Efficiency is typically expressed as a function of the size of the input.

An algorithm’s efficiency is determined through formal or mathematical reasoning.

An algorithm’s efficiency can be informally measured by determining the number of times a statement or group of statements executes.

Different correct algorithms for the same problem can have different efficiencies.

Algorithms with polynomial efficiency or slower (constant, linear, square, cube) are said to run in a reasonable amount of time. Algorithms with exponential or factorial efficiencies are examples of algorithms that run in an unreasonable amount of time.
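The informal measurement described above, counting how many times a statement executes, can be sketched with two hypothetical procedures:

```python
def count_linear(n):
    # the counted statement executes n times -> linear efficiency
    count = 0
    for _ in range(n):
        count += 1
    return count

def count_square(n):
    # the inner statement executes n * n times -> square efficiency
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

print(count_linear(100))  # 100
print(count_square(100))  # 10000
```

Both counts grow as the input size grows, but the square version grows much faster, which is why its efficiency is worse.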

Heuristic: An approach or a solution to a problem that might not be optimal but gives a good enough or acceptable solution because the optimal solution might take an unreasonable amount of time.

Heuristic solutions are used in optimization problems.

3.18 Undecidable Programs

A decidable problem is a decision problem for which an algorithm can be written to produce a correct output for all inputs (e.g., is the number even?).

An undecidable problem is one for which no algorithm can be constructed that is always capable of providing a correct yes or no answer.

An undecidable problem may have some instances that have an algorithmic solution, but there is no algorithmic solution that could solve all instances of the problem.

The halting problem is defined as: Given an arbitrary computer program with given inputs, will the program stop or will it run forever?

Determining whether a program halts could take an unreasonable amount of time, or the program might never halt at all.
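The difficulty of predicting halting can be illustrated with the Collatz procedure: whether this loop terminates for every positive integer is a famous unsolved question (the Collatz conjecture), so we cannot, in general, decide halting just by inspecting the code.

```python
def collatz_steps(n):
    """Count steps until n reaches 1. No one has proved this loop
    halts for every positive starting integer."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))  # 8
```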

4.1 The Internet

A computing device is a physical artifact that can run a program.

A computer network is a group of interconnected computing devices capable of sending or receiving data.

A computer network is a type of computing system.

A path between two computing devices on a computer network (a sender and a receiver) is a sequence of directly connected computing devices that begins at the sender and ends at the receiver.

Routing is the process of finding a path from sender to receiver.

The bandwidth of a computer network is the maximum amount of data that can be sent in a fixed amount of time. Bandwidth is usually measured in bits per second.
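A quick worked example of the bandwidth definition, using illustrative values (a 1 MB file over a 1 Mbps connection):

```python
# Transfer time = data size / bandwidth (both measured in bits).
file_size_bits = 8_000_000  # 1 megabyte = 8,000,000 bits
bandwidth_bps = 1_000_000   # 1 megabit per second

transfer_time = file_size_bits / bandwidth_bps
print(transfer_time)  # 8.0 (seconds)
```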

ARPANET - Advanced Research Projects Agency Network.

A packet is a small amount of data sent over a network. Each packet also includes the source and the destination information. Packets can be sent in any order and will be reassembled at the receiver.
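A hypothetical sketch of packet behavior: split a message into numbered chunks, deliver them in a scrambled order, and reassemble them at the receiver using the sequence numbers.

```python
import random

message = "HELLO, WORLD"

# Split into packets of 4 characters, each tagged with a sequence number
# (real packets also carry source and destination information).
packets = [(i, message[i:i + 4]) for i in range(0, len(message), 4)]

random.shuffle(packets)  # packets may arrive in any order

# The receiver reassembles the message by sorting on sequence number.
received = "".join(chunk for _, chunk in sorted(packets))
print(received)  # HELLO, WORLD
```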

4.2 Fault Tolerance

Fault-tolerant: a system that can sustain failures and continue to function. Because failures happen often and can take down parts of the system, fault tolerance ensures users can still use the network.

Redundancy requires more resources but makes systems fault-tolerant: more redundancy means a more reliable network.

  1. Which of the following is NOT a benefit of a fault-tolerant network?

    A) Data has more than one path to travel from one device to another

    B) If a part of the network fails, the network can still function by using other paths

    C) Data will only take one route from one device to another, no matter the number of routes available

    D) More devices create more connections and make the network stronger.

    (C)

Fault Tolerance: Fault tolerance refers to the ability of a system to continue functioning correctly in the event of a failure of some of its components.

  • Importance: Fault tolerance is crucial in computer systems to ensure reliability and availability, especially in critical applications like banking, healthcare, and transportation.

  • Techniques:

    • Redundancy: Implementing backup components (hardware or software) that can take over if the primary one fails.

    • Error Detection and Correction: Using algorithms to identify and correct errors in data or processing operations.

    • Graceful Degradation: Designing systems to maintain limited functionality instead of failing completely when a part of the system fails.

  • Examples:

    • RAID (Redundant Array of Independent Disks): A storage technology that combines multiple drive components into a single unit for data redundancy.

    • Distributed Systems: Systems where multiple computers work together; if one fails, others can continue processing tasks.

  • Testing for Fault Tolerance: It's essential for developers to test the fault tolerance of systems through simulations, stress tests, and various failure scenarios.
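One of the techniques listed above, error detection, can be sketched with a simple even-parity bit (an illustrative toy, not a production scheme):

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """Return True if no single-bit error is detected."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(check_parity(word))  # True

word[2] ^= 1  # flip one bit to simulate a transmission error
print(check_parity(word))  # False
```

A single parity bit detects any one-bit error but cannot correct it; fuller schemes (like those used in RAID) add enough redundancy to reconstruct the damaged data.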

4.3 Parallel And Distributed Computing