GCSE Computer Science
Learn how to write reliable, maintainable code.
Understand defensive design principles including input validation, authentication, and anticipating misuse. Learn about testing strategies: iterative testing, final testing, and identifying syntax and logic errors.
This unit covers maintainability through clear commenting, meaningful variable names, and proper code structure.
⚠️
AI-Generated Content - Not Yet Reviewed
This lesson content has been partially generated using AI and has not yet been reviewed by a human educator. It likely contains numerous issues, inaccuracies, and pedagogical problems. Do not use this content for actual lesson planning or teaching at this time.
A single unit-conversion mistake cost NASA $327 million and destroyed a spacecraft. How do we stop one small error from causing disaster?
Present the Mars Climate Orbiter disaster. NASA's spacecraft was destroyed because one team used metric units and another used imperial. Show the scale of consequences from simple errors. Ask: 'If professional engineers at NASA can make these mistakes, what hope do the rest of us have?' Let students discuss in pairs what went wrong and how it could have been prevented.
Resources:
Visual showing the spacecraft, the error, and the cost
Teacher Notes:
The goal is to create productive anxiety - students should feel 'this could happen to me' before we reveal that defensive design is how we prevent it.
Define defensive design as programming with the assumption that things will go wrong. Use analogy of defensive driving - you don't just follow the rules, you anticipate what others might do wrong. Introduce the core principle: 'Never trust the user.' Discuss two categories of misuse: accidental (typos, confusion, mistakes) and deliberate (hackers, trolls, malicious users). Students brainstorm examples of each for a social media app.
Resources:
Visual summary of key concepts
Teacher Notes:
The 'never trust the user' mantra is deliberately provocative. Clarify that we're not saying users are stupid or evil - we're saying we can't control what they do, so we must handle all possibilities.
Provide students with a simple calculator program (or age-checker/password-checker) that has NO defensive design. In pairs, students compete to find as many ways to break it as possible. Categories: inputs that cause errors, inputs that produce wrong results, inputs that could be security risks. Each pair lists their 'attacks' on a shared document. Class compiles a master list of vulnerabilities.
Resources:
Python/JavaScript code with no input validation or error handling
Template for documenting vulnerabilities found
Teacher Notes:
This is intentionally chaotic and fun. Students love breaking things. The mess they create becomes motivation for the next lessons on how to prevent these problems. Circulate and encourage creative attacks - what about really long numbers? Emojis? Nothing at all?
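A minimal "breakable" program for this activity might look like the sketch below (a hypothetical example; any small calculator or age checker without validation works equally well):

```python
# A deliberately fragile calculator: no validation, no error handling.
# Students can crash it with non-numeric input, division by zero,
# empty input, enormous numbers, and so on.

def fragile_divide(a_text, b_text):
    # Converts blindly and divides blindly - both steps can fail.
    a = float(a_text)
    b = float(b_text)
    return a / b

# Works for sensible input:
print(fragile_divide("10", "4"))   # 2.5

# Each of these raises an unhandled exception:
# fragile_divide("ten", "4")   -> ValueError
# fragile_divide("10", "0")    -> ZeroDivisionError
# fragile_divide("", "4")      -> ValueError
```

Wrapping this in a loop with `input()` gives students an interactive target to attack.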
Brief introduction to bug bounty programs. Show real examples of payouts (Google has paid over $50 million to security researchers). Discuss the ethics: why is it legal when companies invite you to hack them, but illegal otherwise? Mention UK careers in cybersecurity and penetration testing.
Teacher Notes:
Keep this aspirational. The message is: 'The attacking mindset you just used is a real career.' Some students will be very interested in pursuing this further.
Return to the vulnerability list from the activity. For each type of attack found, students propose a defence strategy (we'll learn the specifics next lesson). Exit ticket: 'Name one thing a user might do to break a login system, and suggest how a programmer might prevent it.'
Resources:
One-question response card
Teacher Notes:
The exit ticket previews input validation (next lesson) and authentication (lesson 3). Use responses to identify who grasped the 'think like an attacker' mindset.
Explore real cases where a lack of defensive design caused catastrophic failures: the Mars Climate Orbiter (unit conversion error), the Therac-25 radiation therapy machine (race conditions that killed patients), and the 2012 Knight Capital trading glitch ($440 million lost in 45 minutes).
Connection: These real disasters show why defensive design isn't just academic - it saves lives, money, and reputations. Students see that the principles they're learning prevent real-world catastrophes.
Further Reading:
Introduction to ethical hacking and bug bounty programs. Companies like Google and Microsoft, and even the NHS, invite people to find vulnerabilities in their software, and many pay for valid reports. Some teenagers earn thousands finding bugs.
Connection: Shows students that 'thinking like an attacker' is a legitimate, well-paid career path. The same mindset used to anticipate misuse is used by security professionals.
Further Reading:
Support:
Stretch:
HackerOne, Bugcrowd - show real examples of paid vulnerabilities
What happens if someone types '-50' as their age, or 'DROP TABLE users;' as their name? Let's find out...
Show the famous XKCD 'Bobby Tables' comic. Explain (simply) what happened: a parent named their child something that, when typed into a school database, deleted all records. Ask: 'How could the school's software have prevented this?' Reveal the answer: input validation - checking what users type before using it.
Resources:
XKCD #327 - printed or displayed
Teacher Notes:
You don't need to explain SQL in depth. The joke works even without technical understanding - someone put something unexpected in a name field and broke the system.
Define input validation as checking user input before processing it. Introduce the validation toolkit: presence check (is it empty?), type check (is it the right kind of data?), range check (is it within acceptable limits?), length check (is it the right size?), format check (does it match a pattern like email or postcode?). Use real examples: age must be 0-150, email must contain @, password must be 8+ characters.
Resources:
Summary of check types with examples
Teacher Notes:
Keep each check type concrete with relatable examples. Students often confuse type and range checks - type is WHAT kind of data, range is WHAT values are acceptable.
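The validation toolkit can be sketched as one small function per check (hypothetical helper names; the email pattern is deliberately simplified for teaching, not production use):

```python
import re

def presence_check(value):
    """Is it empty?"""
    return value.strip() != ""

def type_check_int(value):
    """Is it the right kind of data (a whole number)?"""
    try:
        int(value)
        return True
    except ValueError:
        return False

def range_check(value, low, high):
    """Is the number within acceptable limits?"""
    return low <= int(value) <= high

def length_check(value, min_len, max_len):
    """Is it the right size?"""
    return min_len <= len(value) <= max_len

def format_check_email(value):
    """Does it roughly match an email pattern? (deliberately simple)"""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None
```

Using the lesson's own examples: `range_check("25", 0, 150)` accepts a sensible age, `length_check("Secure123", 8, 20)` accepts a valid password, and `format_check_email("no-at-sign")` rejects a malformed address.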
Demonstrate (or have students follow along) building input validation for an age checker. Start with no validation (crashes on non-numbers), add type checking (try/except or try/catch), add range checking (0-150), add presence checking (not empty). Show the difference in user experience between a crash and a helpful error message.
Resources:
Incomplete code for students to develop
Teacher Notes:
Code live and make deliberate mistakes. Students learn from seeing the problem-solving process, not just the finished code.
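The finished age checker from the live-code might end up looking like this sketch (function name and messages are illustrative; the 0-150 range follows the lesson's example):

```python
def check_age(raw):
    """Validate an age string; return (ok, message)."""
    # Presence check: reject empty input instead of crashing later.
    if raw.strip() == "":
        return (False, "Please enter your age.")
    # Type check: try/except turns a crash into a helpful message.
    try:
        age = int(raw)
    except ValueError:
        return (False, "Age must be a whole number.")
    # Range check: 0-150, as in the lesson's example.
    if not 0 <= age <= 150:
        return (False, "Age must be between 0 and 150.")
    return (True, f"Age {age} accepted.")
```

Demonstrating `check_age("abc")` and `check_age("-50")` side by side with the unvalidated version makes the user-experience difference concrete: a clear message instead of a crash or silent nonsense.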
Groups receive different scenarios: online shopping (quantity, card number), social media signup (username, email, age), school system (student ID, grade). For each input field, groups must specify: what type of data is expected, what validation checks are needed, what error messages should be shown. Groups present their validation plans to the class.
Resources:
Structured worksheet for planning validation rules
Teacher Notes:
Encourage debate about edge cases. What if someone is 120 years old - is that valid? What about users from countries with different name formats? There's no single right answer.
Students implement validation for one of the scenarios from the previous activity. They choose their programming language. Challenge: handle at least 3 different types of invalid input with clear error messages.
Resources:
Starter code in Python, JavaScript, and C#
Teacher Notes:
This can extend into homework if needed. The key is applying the design thinking to actual code.
Class collaboratively creates a 'validation checklist' that could be used for any input field. Exit ticket: Given a postcode input field, list 3 validation checks needed and explain why each matters.
Teacher Notes:
The checklist becomes a reference for future programming tasks. The exit ticket tests application to a new scenario.
Brief introduction to SQL injection - how entering malicious text into a form can delete entire databases. The 'Bobby Tables' XKCD comic makes this memorable. This attack has been known since 1998 but still affects major companies today.
Connection: SQL injection is the extreme consequence of not validating input. While students don't need to understand SQL injection in depth, knowing it exists shows why validation matters for security, not just usability.
Further Reading:
Good validation helps users succeed rather than just blocking bad input. Examples of excellent versus terrible error messages, and Apple's and Google's design guidelines for form validation.
Connection: Extends validation beyond 'reject bad input' to 'guide users to success' - a more professional and user-centred perspective.
Further Reading:
Support:
Stretch:
Prerequisites: 1
The most common password in the world is '123456'. If you used it, over 23 million other people have the same password as you. How do we build systems that are actually secure?
Show the most common passwords list (123456, password, qwerty, etc.). Discuss: why do people choose these? What's wrong with them? Show how quickly these can be cracked (instantly). Ask: 'If you were building a login system, how would you stop people using these terrible passwords?'
Resources:
Annual list of most-used passwords
Teacher Notes:
This connects to students' lives - many will recognise passwords they've used. Keep it light but make the security risk real.
Define authentication as confirming identity - proving you are who you claim to be. Distinguish from authorisation (what you're allowed to do once identity is confirmed). Discuss the three factors: something you know (password), something you have (phone/key), something you are (fingerprint). Explain why username/password is 'something you know' authentication.
Resources:
Visual distinguishing the two concepts
Teacher Notes:
The three factors framework helps students understand why their bank might text them a code (second factor) even after they enter the password.
Live code a simple login system together. Start basic (hardcoded username/password), then improve: add input validation, limit login attempts, give appropriate error messages (not 'incorrect password' - why?). Discuss why we don't store passwords in plain text (preview, not detail).
Resources:
Step-by-step guide to building the system
Teacher Notes:
The 'incorrect password' point is crucial - saying 'incorrect password' confirms the username exists, helping attackers. Say 'incorrect username or password' instead.
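The finished login demo might be sketched as below. The credentials are hypothetical and hardcoded purely for the lesson; the notes on plain-text storage still apply (real systems hash passwords):

```python
# Hypothetical demo credentials - hardcoded ONLY for teaching.
# Real systems never store passwords in plain text.
VALID_USERNAME = "student"
VALID_PASSWORD = "CorrectHorse9"
MAX_ATTEMPTS = 3

def attempt_login(username, password, failed_attempts):
    """Return (success, message, failed_attempts)."""
    # Limit login attempts to slow down guessing attacks.
    if failed_attempts >= MAX_ATTEMPTS:
        return (False, "Account locked: too many failed attempts.", failed_attempts)
    if username == VALID_USERNAME and password == VALID_PASSWORD:
        return (True, "Login successful.", 0)
    # Deliberately vague message: never confirm that the username exists.
    return (False, "Incorrect username or password.", failed_attempts + 1)
```

The vague error message is the talking point: a specific "incorrect password" would tell an attacker the username is valid.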
Show two versions of the same working program - one with terrible style (no comments, single-letter variables, no indentation) and one well-written. Challenge: which would you rather have to fix or extend? Introduce the four pillars of maintainability: naming conventions, indentation, commenting, and sub programs.
Resources:
Two versions of the same program
Teacher Notes:
The visceral reaction to messy code is powerful. Students immediately see why maintainability matters when they imagine having to work with bad code.
Provide students with working but messy code. Their task: improve it WITHOUT changing functionality. Apply: meaningful variable names, proper indentation, helpful comments, break into sub programs where appropriate. Peer review: swap with a partner and see if they can understand your improved version.
Resources:
Working but poorly-written programs to improve
Teacher Notes:
Emphasise that the code must still work identically. This focuses purely on readability and maintainability. Include code in multiple languages.
Create a class 'code quality checklist' combining authentication best practices and maintainability standards. Exit ticket: 'Look at this code snippet. Identify two maintainability issues and explain how you would fix them.'
Resources:
Document for class collaboration
Teacher Notes:
This checklist becomes a marking guide for their own future code. Make it something they can actually use.
Brief exploration of multi-factor authentication, biometrics (fingerprints, Face ID), and passwordless authentication. Why the password may be dying - and what's replacing it.
Connection: While the spec focuses on username/password, students should know this is the baseline. Modern systems layer additional authentication factors for security.
Further Reading:
How professional developers review each other's code. Real examples of code review comments from open-source projects. Why 'code is read more often than it's written.'
Connection: Extends maintainability concepts to professional practice. Shows students that code quality isn't just about making teachers happy - it's how real teams work.
Further Reading:
Support:
Stretch:
https://haveibeenpwned.com/ - check if passwords have been leaked (educational demo)
Prerequisites: 2
The video game Cyberpunk 2077 launched with over 1,000 bugs, causing Sony to remove it from their store and offer refunds. How did a game that cost $316 million to make ship with so many problems?
Tell the Cyberpunk 2077 story: massive budget, years of development, unprecedented hype, disastrous launch. Show clips of bugs (cars flying, characters T-posing). Discuss: what went wrong? Lead to the idea that testing was rushed due to deadline pressure. Ask: 'How should testing work in software development?'
Resources:
Short video or images of launch bugs
Teacher Notes:
This story resonates because many students will know or have played the game. It makes abstract concepts real and shows consequences of poor testing.
Discuss the purpose of testing: finding bugs before users do, ensuring software works correctly, building confidence before release. Introduce the testing spectrum: from informal checking to formal verification. Key insight: testing can prove bugs exist but can never prove they don't.
Resources:
Visual of testing benefits
Teacher Notes:
The philosophical point about never proving absence of bugs is important - it helps students understand why testing is ongoing, not a one-time event.
Define iterative testing: testing during development, checking each module as you build it. Define final testing (sometimes called terminal testing): testing at the end, checking the complete program. Discuss the advantages of each. Use the Cyberpunk example: the team relied too heavily on final testing. Class discussion: why might teams skip iterative testing? (Time pressure, overconfidence, poor planning.)
Teacher Notes:
Link back to real software development practices. Iterative testing catches problems early when they're easier to fix.
Define syntax errors: breaking the grammar rules of the language (missing brackets, typos in keywords). These stop programs running at all. Define logic errors: program runs but produces wrong output. These are harder to find. Show examples of each. Key distinction: computers catch syntax errors; humans must catch logic errors.
Resources:
Code snippets showing each error type
Teacher Notes:
Use concrete examples: missing colon is syntax (Python won't run); using < instead of <= is logic (runs but wrong answer).
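The two error types can be shown in one short snippet (a hypothetical pass-mark example; the syntax error must stay commented out, because Python refuses to run a file containing it):

```python
# Syntax error (commented out - the program will not run AT ALL):
# if mark >= 50          <- missing colon: SyntaxError before anything executes

# Logic error: the program runs happily but gives the wrong answer.
def is_pass_buggy(mark):
    return mark > 50      # BUG: should be >=, so a mark of exactly 50 fails

def is_pass_fixed(mark):
    return mark >= 50     # the pass boundary itself now counts as a pass
```

The buggy version passes casual testing with marks like 30 and 70; only testing the boundary value 50 reveals it, which previews the next lesson on test data.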
Provide programs with deliberate bugs - some syntax, some logic. Students work in pairs to find, categorise, and fix as many as possible within the time limit. Competitive element: teams score points for each bug found and correctly categorised. Debrief: share strategies for finding different types of bugs.
Resources:
Programs with 5-10 deliberate errors each
Teacher Notes:
Include a mix of obvious and subtle bugs. Logic errors should require actually running the code and checking output to find.
Quick-fire round: show error messages and code snippets, students classify as syntax or logic. Discuss any disagreements. Exit ticket: 'Write one example of a syntax error and one example of a logic error, and explain how you would find each one.'
Teacher Notes:
The exit ticket checks both understanding of error types AND debugging strategies.
Introduction to Quality Assurance roles in the games and software industry. What testers actually do all day. Entry-level opportunities and career progression. Average UK salaries for QA roles.
Connection: Shows students that testing is a legitimate career path, not just a chore. Many successful developers started in QA.
Further Reading:
Brief introduction to the concept of writing tests BEFORE writing code. Why some professional teams work this way. The red-green-refactor cycle.
Connection: Extends the iterative testing concept to show how professionals integrate testing into every stage of development, not just as an afterthought.
Further Reading:
Support:
Stretch:
Articles about the launch problems and their causes
Reference guide for frequent mistakes
Prerequisites: 3
How do you test a password field? You could try 'password123' - but that's just one test. Professional testers might run hundreds of tests on a single input box. What are they all testing?
Display a simple password validation rule: 'Password must be 8-20 characters.' Challenge the class: how many different tests would you need to thoroughly test this one rule? Collect suggestions. Reveal: professional testers might use 15+ tests just for this. Introduce the four types of test data as the framework for systematic testing.
Teacher Notes:
Students usually suggest a few obvious tests. The revelation that there are many more creates curiosity about the framework.
Teach each type with the password example: Normal (valid passwords like 'Secure123'), Boundary (exactly 8 characters, exactly 20 characters - testing the edges), Invalid (7 characters, 21 characters - wrong length but right type), Erroneous (numbers only, empty input, special characters - wrong type entirely). Key insight: boundary testing finds the most bugs because programmers often make 'off by one' errors.
Resources:
Visual summary with examples
Teacher Notes:
The distinction between invalid and erroneous trips up students. Invalid is 'right type, wrong value'. Erroneous is 'wrong type entirely'.
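The four types for the password rule can be laid out as data (assumed examples matching the lesson; the function name is illustrative):

```python
def password_length_ok(password):
    """Rule under test: password must be 8-20 characters."""
    return 8 <= len(password) <= 20

# The four types of test data for this one rule:
normal    = ["Secure123"]           # typical valid values
boundary  = ["a" * 8, "a" * 20]     # exactly on the edges - where off-by-one bugs hide
invalid   = ["a" * 7, "a" * 21]     # right type, wrong value (just outside the range)
erroneous = [""]                    # wrong kind of input entirely (nothing at all)
```

Checking each list against the rule (`True` for normal and boundary, `False` for invalid and erroneous) gives students a concrete pattern they can reuse for the group scenarios.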
Groups receive different scenarios: age verification (13-120), exam mark input (0-100), temperature sensor (-50 to 60). For each, identify at least 2 examples of each test data type. Groups share and peer-assess: did they cover boundaries correctly? Any edge cases missed?
Resources:
Template for each scenario
Teacher Notes:
Circulate and check boundary values specifically. Common mistake: testing 0 and 100 but not -1 and 101 (just outside the boundaries).
Introduce the test plan format: Test ID, Description, Input Data, Expected Output, Actual Output, Pass/Fail. Model creating a test plan for the password validation. Discuss: why do we write down expected output BEFORE testing? (Prevents us fooling ourselves that wrong output is 'close enough'.)
Resources:
Blank test plan document
Teacher Notes:
The 'expected before actual' point is crucial for rigorous testing. Professional testers write expectations first to avoid bias.
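The test plan format can also be modelled in code, with the expected output recorded before anything runs (a hypothetical mark-range rule; names are illustrative):

```python
def mark_in_range(mark):
    """Rule under test: an exam mark must be 0-100."""
    return 0 <= mark <= 100

# Test plan as data: (Test ID, Description, Input Data, Expected Output).
# Expected outputs are written down BEFORE running - that is the whole point.
test_plan = [
    ("T1", "normal: typical mark",          67,  True),
    ("T2", "boundary: lowest valid mark",    0,  True),
    ("T3", "boundary: highest valid mark", 100,  True),
    ("T4", "invalid: just above the range", 101, False),
]

def run_tests(plan):
    """Fill in the Actual Output and Pass/Fail columns automatically."""
    results = []
    for test_id, description, input_data, expected in plan:
        actual = mark_in_range(input_data)
        results.append((test_id, "PASS" if actual == expected else "FAIL"))
    return results
```

Because the expectations are fixed in advance, a wrong answer cannot be quietly waved through as "close enough".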
Students create a complete test plan for a given program (e.g., a grade calculator that takes a mark 0-100 and outputs a grade A-F). Must include at least 10 tests covering all four data types with particular attention to boundaries. Then they run the tests on provided code and record actual results.
Resources:
Working program to test
Document for recording tests
Teacher Notes:
The provided code should have a subtle bug (perhaps an off-by-one error in grade boundaries) that good test data will reveal.
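A grade calculator with exactly that kind of planted bug might look like this (assumed grade boundaries for illustration: A from 80, B from 70, C from 60, D from 50, E from 40, else F):

```python
def grade(mark):
    """Grade calculator with a deliberate off-by-one bug for students to find."""
    if mark >= 80:
        return "A"
    elif mark >= 70:
        return "B"
    elif mark >= 60:
        return "C"
    elif mark >= 50:
        return "D"
    elif mark > 40:       # PLANTED BUG: should be >=, so a mark of exactly 40 gets F
        return "E"
    return "F"
```

Normal test data (e.g. 67 → "C") passes; only a boundary test on 40 exposes the bug, which rewards students who covered all four data types.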
Discuss: what do you do when a test fails? Introduce the debugging and refinement cycle: identify the bug, understand why it happens, fix it, re-run ALL tests (not just the failing one). Explain why: fixing one bug can accidentally create another. Show how the test plan becomes a safety net for changes.
Teacher Notes:
The 're-run all tests' point is professional practice that students often skip. Emphasise it.
Swap test plans with another student. Peer review: are all four data types represented? Are boundaries tested correctly? Would this test plan catch a bug? Exit ticket: 'A program accepts ages 18-65. Write four test cases: one normal, one boundary, one invalid, one erroneous, with expected outputs.'
Teacher Notes:
The exit ticket tests all four types in one scenario - good for quick assessment of understanding.
Famous edge case failures: the Year 2000 bug, the Boeing 787 integer overflow (plane reboots after 248 days), the Microsoft Excel date bug. Why edge cases are where most bugs hide.
Connection: Boundary testing directly addresses edge cases. These real-world examples show why boundary testing matters - it catches the bugs that seem unlikely but cause real disasters.
Further Reading:
How companies like Google and Netflix run millions of automated tests. The concept of continuous integration - every code change triggers thousands of tests automatically.
Connection: Shows where test plans evolve in professional practice. The manual test plans students learn become automated test suites in industry.
Further Reading:
Support:
Stretch:
BBC archive on Y2K
Professional examples from software industry
Prerequisites: 4