How to Get a 6 on the AP CSP Create Task

What You Need to Know

You’re trying to earn all 6 rubric points on the AP CSP Create Task by making sure your program, and the written responses you draft using your Personalized Project Reference (PPR), clearly show every required concept.

The “6 points” map (what graders are looking for)

The Create Task rubric is essentially checking that your program and explanations include:

  1. Program Purpose & Function: What your program does, for whom, and how it works (inputs → processing → outputs).
  2. Data Abstraction: You use a list (or other collection type) to store and use multiple related values.
  3. Managing Complexity: You explain why the list makes the program easier than using separate variables (or hardcoded values).
  4. Procedural Abstraction: You wrote a procedure (student-developed function) with at least one parameter, and it’s called.
  5. Algorithm Implementation: Inside that procedure is an algorithm that uses sequencing + selection + iteration.
  6. Testing: You describe two distinct calls to your procedure that cause different results (and explain why).

Critical reminder: You don’t get points for “having” code if you can’t clearly identify it and explain it in your responses. Build your program so it’s easy to explain.

What your project must include (minimum “full-score-friendly” features)
  • A real input (user interaction, event, text input, button click, etc.)
  • A real output (text shown, screen change, sound, drawing, etc.)
  • A list that is actually used to compute or decide something (not just stored)
  • One student-developed procedure with:
    • At least one parameter
    • An algorithm using sequence, selection, iteration
    • A meaningful return value or a meaningful visible effect (like updating the screen)
  • Your procedure must be called at least twice in ways that produce different outputs

Step-by-Step Breakdown

Use this workflow to “engineer” a 6-point Create Task.

1) Design your project around the rubric (before you code)

Pick a project idea that naturally uses:

  • A list of items (questions, scores, tasks, flashcards, products, moves, messages, etc.)
  • A procedure that processes that list (filter, search, count, compute average, pick next item, grade answers, etc.)

High-yield project patterns:

  • Quiz app: list of questions/answers; procedure checks answer + updates score
  • Flashcard/randomizer: list of terms; procedure selects next term based on rules
  • Recommendation tool: list of items with categories; procedure filters based on user choices
  • Score/stat tracker: list of attempts; procedure computes stats (min/max/avg)

2) Build the list so it clearly “earns” 2 points (Data Abstraction + Managing Complexity)

Do both:

  • Store multiple related items in a list
  • Use the list to produce output or make decisions

Example “list use” (what graders want):

  • Iterate through list to compute something
  • Use list indexing to select the next prompt/item
  • Filter list based on criteria
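Each of these patterns is short in most languages. A minimal Python sketch, with made-up score data (the names here are illustrative, not required by the rubric):

```python
# Hypothetical score data for illustration
scores = [82, 95, 71, 88, 64]

# Iterate through the list to compute something (an average)
average = sum(scores) / len(scores)

# Use list indexing to select the next item
current_index = 2
next_score = scores[current_index]

# Filter the list based on criteria (passing scores only)
passing = [s for s in scores if s >= 70]
```

Any one of these is enough to show the list doing real work; the filtering and averaging patterns are usually the easiest to explain in writing.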

Avoid “fake list” patterns:

  • A list that is created but barely used
  • A list used only once in a trivial way (hard to argue complexity)

3) Write one strong procedure that checks every box

Your one best procedure should include:

  • Parameter(s) that matter
  • Sequencing: multiple steps in order
  • Selection: an if/else that changes behavior
  • Iteration: a loop (for/while/repeat until) doing meaningful work

Make the procedure do something you can test two different ways.

Annotated target structure (language-agnostic pseudocode):

  1. Procedure header with parameter
  2. Initialize accumulator / result
  3. Loop through list (iteration)
  4. If condition met (selection), update result
  5. Return result or update display (sequencing across steps)
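The five steps above can be sketched in Python; `count_matches` and its parameters are hypothetical names chosen for illustration:

```python
def count_matches(items, target):
    # 1. Procedure header with a parameter (target)
    result = 0                  # 2. Initialize the accumulator
    for item in items:          # 3. Loop through the list (iteration)
        if item == target:      # 4. Selection: update only when the condition holds
            result += 1
    return result               # 5. Return the result (the steps run in sequence)
```

Every required element lives inside one procedure, so a single pasted code segment in your PPR can cover the procedural abstraction and algorithm points at once.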

4) Call your procedure twice for testing (with different results)

Set up two distinct test calls:

  • Different parameter values, or
  • Different list contents, or
  • Different user inputs

You must be able to say:

  • Call 1 → expected output/result
  • Call 2 → expected output/result
  • Why outputs differ (because parameter/list contents differ and selection logic changes)
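As a minimal sketch (with hypothetical names), two calls that differ only in a parameter value can force different branches:

```python
def grade_pass(score, cutoff):
    # Selection: which branch runs depends entirely on the parameters
    if score >= cutoff:
        return "pass"
    return "fail"

# Call 1: 85 >= 70, so the first branch runs
result_1 = grade_pass(85, 70)   # "pass"
# Call 2: 60 < 70, so the else path runs -- a different result
result_2 = grade_pass(60, 70)   # "fail"
```

In your written response, name both calls, state each expected result, and tie the difference back to the selection logic.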

5) Capture clean evidence for your PPR

Your PPR must include the exact code you’ll reference when answering prompts. Choose code segments that make scoring easy:

  • List segment: shows list being created/used
  • Procedure segment: shows procedure definition and the algorithm (sequence/selection/iteration)

If your chosen segments don’t clearly show the required elements, you can lose points even if the full program elsewhere contains them.

6) Practice writing the explanations using your own code

For full credit, your written responses need to be:

  • Specific to your program (use your variable/list/procedure names)
  • Cause-and-effect (input → processing → output)
  • Rubric language (list, parameter, iteration, selection, etc.)

A good rule: answer as if the reader cannot run your program—they must understand it from your explanation alone.

Key Formulas, Rules & Facts

(No math here—this is all rubric logic.)

The 6-point rubric checklist (fast table)
| Point | What you must show | What “full credit” usually requires | Common failure mode |
| --- | --- | --- | --- |
| 1. Purpose & Function | Program’s goal + how it works | Identify input, what happens, and output; describe at least one feature | Vague purpose (“it’s a game”) or missing I/O explanation |
| 2. Data Abstraction | Use of a list | List stores multiple related items and is used in the program | List exists but isn’t used meaningfully |
| 3. Managing Complexity | Why the list helps | Explain how the list avoids repetitive variables/code and makes updates easier | “It’s simpler” with no concrete comparison |
| 4. Procedural Abstraction | Student-developed procedure | Procedure has parameter(s) and is called | Procedure has no parameter or is just an event-handler wrapper |
| 5. Algorithm Implementation | Algorithm in the procedure | Sequencing + selection + iteration all in the same procedure’s algorithm | Loop is elsewhere; or missing if/else; or trivial loop |
| 6. Testing | Two calls + different results | Describe two different inputs to the procedure and the different outputs/behaviors | Only one test, or tests don’t produce different outcomes |

Non-negotiable “must haves” (high-yield rules)
  • Your procedure must have a parameter (not optional).
  • Your algorithm (sequence/selection/iteration) must be in the procedure you’re using for the algorithm point.
  • Your two tests must be two calls to the procedure that lead to different results.
  • Your list must be used to do work (compute, decide, select, filter), not just exist.

What counts (and what doesn’t)

| Requirement | Counts | Often doesn’t count (too weak / risky) |
| --- | --- | --- |
| Input | User text, button click, slider, sensor value, file data, etc. | Only hardcoded values; only timer events with no user interaction (harder to explain) |
| Output | Text shown, screen update, sound, animation state change, saved score | Printing debug text only (context-dependent), or output not tied to input |
| List use | Loop through list, index into list, filter, aggregate | List created but only its length checked once; list never affects output |
| Selection | Meaningful if/else that changes behavior | An if that never affects output or is always true |
| Iteration | Loop that repeats meaningful steps | Loop that runs once, or repeats but doesn’t change the result |

Examples & Applications

Use these as models for what “easy to score” looks like.

Example 1: Quiz scorer (classic full-credit structure)

List: questions, answers (or a combined list of pairs)

Procedure: gradeAnswer(userAnswer)

  • Parameter: userAnswer
  • Iteration: loop through possible answers or iterate through questions
  • Selection: if userAnswer == answers[currentIndex] then increment score else give feedback
  • Output: updates score display

Testing (two calls):

  • gradeAnswer("A") when correct → score increases
  • gradeAnswer("B") when incorrect → score unchanged + different feedback

Key insight: Different parameter values trigger different branches.
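A hedged Python sketch of this structure. The quiz data and the second `question_index` parameter are illustrative additions, not rubric requirements; looping over a list of accepted answers supplies the iteration:

```python
# Hypothetical quiz data for illustration
questions = ["2 + 2 = ?", "Capital of France?"]
answers = [["4", "four"], ["paris"]]   # accepted answers per question
score = 0

def grade_answer(user_answer, question_index):
    global score
    # Iteration: loop through every accepted answer for this question
    for accepted in answers[question_index]:
        # Selection: the branch taken depends on the parameter value
        if user_answer.lower() == accepted:
            score += 1                 # correct: update the score
            return "Correct!"
    return "Not quite -- try again."

# Two distinct calls with different results (the Testing point)
feedback_1 = grade_answer("Four", 0)    # matches "four" -> score increases
feedback_2 = grade_answer("London", 1)  # no match -> score unchanged
```

The same procedure produces different feedback and different score behavior depending only on its arguments, which is exactly what the two test descriptions need.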

Example 2: Recommendation filter (best for “managing complexity”)

List: movies (titles) and genres (or objects/parallel lists)

Procedure: getRecommendations(chosenGenre)

  • Parameter: chosenGenre
  • Iteration: loop over movies
  • Selection: if genres[i] == chosenGenre add to results
  • Output: show filtered list

Testing:

  • Call with "Comedy" → returns comedy titles
  • Call with "Sci-Fi" → returns different set

Key insight: This makes your managing-complexity explanation strong: without a list you’d need a separate variable for each movie plus repetitive if statements.
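A minimal Python sketch using parallel lists (the titles and genres are made up for illustration):

```python
# Hypothetical parallel lists: genres[i] describes movies[i]
movies = ["Airplane!", "Alien", "Clue", "Arrival"]
genres = ["Comedy", "Sci-Fi", "Comedy", "Sci-Fi"]

def get_recommendations(chosen_genre):
    results = []                       # accumulator for matching titles
    for i in range(len(movies)):       # iteration over the whole list
        if genres[i] == chosen_genre:  # selection driven by the parameter
            results.append(movies[i])  # sequencing: build the output step by step
    return results
```

Calling `get_recommendations("Comedy")` and `get_recommendations("Sci-Fi")` returns different filtered lists, which doubles as your two-call testing evidence.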

Example 3: Stats calculator (great for clear iteration)

List: attemptTimes

Procedure: countFastAttempts(threshold)

  • Parameter: threshold
  • Iteration: loop through all attempt times
  • Selection: if time < threshold then count++
  • Output: return count or display it

Testing:

  • countFastAttempts(10) → e.g., returns 5
  • countFastAttempts(5) → e.g., returns 2 (a different result)

Key insight: Parameter directly changes selection condition.
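A short Python sketch of this pattern, with made-up attempt times:

```python
# Hypothetical attempt times in seconds
attempt_times = [12.0, 4.5, 9.8, 3.2, 11.1]

def count_fast_attempts(threshold):
    count = 0                       # sequencing: initialize the accumulator
    for t in attempt_times:         # iteration: visit every attempt
        if t < threshold:           # selection: the condition uses the parameter
            count += 1
    return count

fast_10 = count_fast_attempts(10)   # 4.5, 9.8, 3.2 are under 10 -> 3
fast_5 = count_fast_attempts(5)     # 4.5, 3.2 are under 5 -> 2
```

Because the parameter sits directly in the selection condition, explaining why the two calls differ takes one sentence.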

Common Mistakes & Traps

  1. Vague purpose/function description

    • What goes wrong: You describe the theme (“a fun app”) but not the computational behavior.
    • Why it loses points: The rubric wants input → processing → output.
    • Fix: Explicitly name one input, what the program does with it, and the resulting output.
  2. “Decorative list” (list exists but doesn’t matter)

    • What goes wrong: You store items in a list but the program output doesn’t depend on it.
    • Why it loses points: Data abstraction credit requires the list to be part of the solution.
    • Fix: Make the list drive a loop, selection, filtering, or calculation.
  3. Weak “managing complexity” explanation

    • What goes wrong: You say “lists make it easier” without a concrete comparison.
    • Why it loses points: You must explain what would be harder without the list.
    • Fix: Say what you’d do instead (e.g., “10 separate variables + repeated code”), and why updates/searching/iterating would be harder.
  4. Procedure without a real parameter (or parameter not used)

    • What goes wrong: Procedure has no parameter, or the parameter is unused.
    • Why it loses points: Procedural abstraction point depends on parameterized, meaningful abstraction.
    • Fix: Use the parameter inside selection/loop logic so it changes behavior.
  5. Algorithm elements split across different places

    • What goes wrong: Loop is in one function, if/else is elsewhere, and you can’t point to one coherent algorithm.
    • Why it loses points: The algorithm point typically expects sequencing + selection + iteration in the algorithm you present.
    • Fix: Put all three inside your chosen procedure (the one you’ll use for the rubric).
  6. Iteration that’s trivial or non-functional

    • What goes wrong: A loop runs once, or repeats but doesn’t change/accumulate meaningful results.
    • Why it loses points: It may not count as a meaningful algorithmic iteration.
    • Fix: Use an accumulator, build a results list, search, count, compute min/max/avg, etc.
  7. Testing that isn’t actually two distinct calls with different outcomes

    • What goes wrong: You describe two tests, but they produce the same result, or you only test different parts of the program.
    • Why it loses points: The rubric wants two calls to the same procedure with different results.
    • Fix: Choose two different parameter values that force different branches or different computed totals.
  8. Choosing PPR segments that hide required evidence

    • What goes wrong: Your pasted code segment cuts off the if/else or loop, or doesn’t show list usage.
    • Why it loses points: You can’t effectively reference what isn’t visible.
    • Fix: Select segments that clearly include the list, the procedure header with parameter, and the full algorithm body.

Memory Aids & Quick Tricks

| Trick / Mnemonic | What it helps you remember | When to use it |
| --- | --- | --- |
| P-F-I-O (Purpose–Function–Input–Output) | What to say for the purpose/function point | When drafting your program description |
| List = Store + Use | A list must store multiple items and affect computation/output | When checking Data Abstraction |
| “No list = many vars + repeated code” | The clearest complexity argument | When writing Managing Complexity |
| P-S-I (Parameter–Selection–Iteration) | Your procedure should include a parameter and use it in if/else + loop | When building your main procedure |
| 2 Calls, 2 Outcomes | Testing must show different results | When choosing your two test cases |

Quick Review Checklist

  • [ ] You can state your program’s purpose in one sentence.
  • [ ] You can identify at least one input and one output and explain the processing between them.
  • [ ] Your program uses a list to store multiple related items.
  • [ ] The list is used in a meaningful way (loop/index/filter/calculate).
  • [ ] You can explain why the list manages complexity (what the no-list version would look like).
  • [ ] You have a student-developed procedure with at least one parameter.
  • [ ] Inside that procedure: sequencing + selection + iteration are clearly present.
  • [ ] You can point to two different calls to your procedure that produce different results, and explain why.
  • [ ] Your PPR code segments clearly show the list and the procedure + algorithm you plan to discuss.

You’re not aiming for a “cool app”—you’re aiming for code and explanations that are impossible for the rubric to say no to.