AP Computer Science Principles Written Response Exhaustive Study Guide
Guide Overview and Author Details
Source Material: AP Computer Science Principles Written Response Scoring Guide.
Author: Tanner Crow, AP Computer Science Teacher. * Experience: 11+ years teaching AP CS. * Stats: 1,800+ tutoring hours, 5.0 rating (444 reviews). * Website: apcsexamprep.com.
Total Content: * 3 complete projects: Grade Calculator, Quiz App, and Budget Tracker. * Answer models for all 4 written response categories and all 6 scoring rows. * Side-by-side comparisons of "Weak" vs. "Strong" responses. * Annotated callouts for scoring decisions. * Blank practice templates.
Important Exam Note: Written responses are completed during the proctored AP exam. They are not submitted beforehand. The guide serves as a rehearsal for the actual 60-minute exam session.
The AP Exam Written Response Structure
Time Allocation: 60 minutes to answer 2 written response questions (comprising 4 total prompts).
Personalized Project Reference (PPR): Students have their PPR available during the exam. It includes: 1. A screenshot of the procedure definition (with its parameter). 2. A screenshot of the procedure call. 3. A screenshot of the list being used. * Restriction: No comments are allowed in the PPR screenshots.
The 4 Prompts and Scoring Allocation: * Prompt 1: Program Design, Function & Purpose (2 points). Tests ability to describe what the program does, why it exists, and the inputs/outputs. * Prompt 2(a): Algorithm Development (1 point). Tests the ability to trace sequencing, selection, and iteration. * Prompt 2(b): Errors & Testing (1 point). Tests identification of test cases with specific inputs/outputs and edge cases. * Prompt 2(c): Data & Procedural Abstraction (2 points). Tests the explanation of list usage for managing complexity and the procedure’s role.
Scoring Principles and Pattern Recognition
Row-Based Scoring: Scoring is holistic within a row. One weak sentence can result in earning zero points for an entire row.
PPR Dependency: Responses must only reference code that is visible within the PPR screenshots.
Logical Consistency: Expected test outputs must be logically computed from the provided code, not estimated or guessed.
Key Vocabulary: Correct responses must mirror the rubric using specific phrases: * "Manages complexity." * "Sequencing/selection/iteration." * "Parameter affects the behavior." * "Without the list, I would need…"
Project 1: Grade Calculator — Code and Prompt Analysis
Program Overview: User enters grades, program stores them in a list, calculates average, and assigns a letter grade.
PPR Screenshot 1 (Code Logic): * List initialization. * Input loop that builds the list of grades. * Procedure definition. * Logic inside the procedure: initialization, iteration over the list, sequencing (accumulation of the total), sequencing (calculation of the average), and selection (if/elif/else) to assign the letter grade.
Prompt 1 (Purpose vs. Functionality): * Weak Response Example: "My program calculates grades. The user inputs their grades and the program shows the average." * Strong Response Example: "The purpose of my program is to help students track their academic performance… accept numeric grade values… outputs calculated average as a decimal and corresponding letter grade… giving students a clear, immediate picture of their standing." * The Rule: Purpose is the WHY (the problem solved). Functionality is the WHAT (the steps taken). Input/Output must be specific (type and method of entry).
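Since the actual PPR screenshots are not reproduced in this guide, the code logic described above can be illustrated with a hedged Python sketch. Names such as `calculate_grade` and `grades`, and the specific grade cutoffs, are assumptions for illustration only, not the author's actual PPR code:

```python
def calculate_grade(grade_list):
    # Sequencing: initialize the accumulator
    total = 0
    # Iteration: visit every grade in the list
    for grade in grade_list:
        total += grade  # accumulation
    # Sequencing: compute the average from the total and the count
    average = total / len(grade_list)
    # Selection: map the average to a letter grade
    if average >= 90:
        letter = "A"
    elif average >= 80:
        letter = "B"
    elif average >= 70:
        letter = "C"
    else:
        letter = "F"
    return average, letter

grades = [95, 88, 72]            # the input loop would build a list like this
print(calculate_grade(grades))   # (85.0, 'B')
```

Note how one procedure body exhibits all three rubric concepts: sequencing (the ordered statements), iteration (the loop), and selection (the if/elif/else chain).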
Prompt 2(a) (Algorithm Details): * Strong Logic Trace: given a sample input list, show how the total accumulates on each pass of the loop, how the average is calculated from the total and the count of grades, and how selection assigns the matching letter grade. * Requirement: You must explain what iteration does (e.g., "accumulates the sum"), not just label it.
Prompt 2(b) (Errors & Testing): * Test Case 1 (Normal): a typical grade list, paired with its logically computed average and letter grade. * Test Case 2 (Boundary): a list whose average lands exactly on a grade cutoff. This tests the threshold between C and F.
Prompt 2(c) (Complexity and Abstraction): * List usage: Manages complexity because the program can handle any number of grades using one variable. Without the list, you would need a separate variable for every grade and 50 separate addition statements. * Parameter usage: the parameter allows the procedure to work on any dataset. Passing one list vs. another changes the behavior/output even though the code itself remains static.
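The parameter point above can be made concrete with a short sketch. The procedure name `average_of` is a hypothetical stand-in, not the author's PPR code; the point is that the same static code produces different outputs for different arguments:

```python
def average_of(values):
    # One parameter lets the same code serve any dataset
    total = 0
    for v in values:
        total += v      # iteration accumulates the sum
    return total / len(values)

# Same static code, different arguments, different behavior/output
print(average_of([90, 100]))  # 95.0
print(average_of([60, 70]))   # 65.0
```

This is exactly the "parameter affects the behavior" claim the rubric rewards: the code never changes, only the argument does.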
Project 2: Trivia Quiz App — Code and Prompt Analysis
Program Overview: Parallel lists store questions and answers. The procedure loops through, prompts the user, evaluates answers, and returns a score.
PPR Screenshot 1 (Code Logic): * Parallel question and answer lists. * A loop that prompts the user for each question. * Selection logic that compares the user's response to the stored answer.
Prompt 2(b) (Test Cases): * Test Case 1: All correct answers yields the maximum score. * Test Case 2 (Edge Case): Case sensitivity. Entering a correct answer in lowercase should still yield "Correct!" due to the .lower() conversion.
Prompt 2(c) (Abstraction): * Parallel Lists: Manage content in parallel structures. Without them, you would need a separate string variable for every question/answer pair and a separate selection block for each.
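A hedged Python sketch of the parallel-list pattern described above; all names (`run_quiz`, the sample questions, the `responses` parameter standing in for user input) are assumptions for illustration, not the author's actual PPR code:

```python
questions = ["Capital of France?", "2 + 2?"]
answers   = ["paris", "4"]

def run_quiz(questions, answers, responses):
    # responses stands in for user input so the sketch is self-contained
    score = 0
    for i in range(len(questions)):             # iteration over parallel lists
        if responses[i].lower() == answers[i]:  # selection + case-insensitive check
            score += 1                          # accumulate the score
    return score

# "PARIS" still counts because .lower() normalizes the response
print(run_quiz(questions, answers, ["PARIS", "4"]))  # 2
```

The shared index `i` is what keeps the two lists "parallel": `questions[i]` and `answers[i]` always refer to the same quiz item.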
Project 3: Budget Tracker — Code and Prompt Analysis
Program Overview: Stores expense amounts and categories in parallel lists. The procedure sums the expenses that match a specific category string.
PPR Screenshot 1 (Code Logic): * Parallel amount and category lists. * Procedure definition with a category parameter. * A loop with selection logic that accumulates the matching amounts.
Prompt 2(a) (Algorithm Trace): * Example for the "food" category: * Index 0: "food" matches, amount is added. * Index 1: "food" matches, amount is added. * Index 2: "rent" does not match. * Index 3: "food" matches, amount is added. * Index 4: "transport" does not match. * Final Total: the sum of the three matching amounts.
Prompt 2(b) (Testing Logic): * Case 1: A category with multiple entries (tests non-consecutive indices). * Case 2: A category with no entries (e.g., "entertainment"). The procedure should return the initialized total value.
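The trace above can be sketched in Python. The procedure name `category_total` and the sample amounts are assumptions chosen to match the index pattern in the trace (food at indices 0, 1, 3), not the author's actual PPR code:

```python
amounts    = [12.50, 8.00, 900.00, 20.00, 3.25]
categories = ["food", "food", "rent", "food", "transport"]

def category_total(amounts, categories, target):
    total = 0                            # initialized value, returned if nothing matches
    for i in range(len(categories)):     # iteration over every index
        if categories[i] == target:      # selection: compare the category string
            total += amounts[i]          # accumulate only matching amounts
    return total

print(category_total(amounts, categories, "food"))           # 40.5 (indices 0, 1, 3)
print(category_total(amounts, categories, "entertainment"))  # 0 — no matches
```

The second call demonstrates the no-entries edge case: the loop runs but selection never fires, so the initialized total comes back unchanged.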
Final Mastery Checklist for Students
Prompt 1: Use the template: "The purpose of my program is to help [user] [achieve goal / solve problem]."
Prompt 2(a): Explicitly mention (1) sequencing initialization, (2) loop iteration specifics, (3) selection branch outcomes, and (4) a step-by-step trace with math.
Prompt 2(b): Always include a boundary/edge test. Examples: empty list, exact threshold, missing category.
Prompt 2(c): On the list question, specifically describe what the code would look like without the list (naming the individual variables you would need). On the parameter question, illustrate how two different values (arguments) passed into the call create two different results (outputs).
College Board Stats: * 34.8% of students using these methods score 5s on the AP CSP exam. * The national average for 5s is 9.6%.