name: tdd
description: Test-driven development with red-green-refactor loop. Use when user wants to build features or fix bugs, mentions “red-green-refactor”, wants integration tests, or asks for test-first development.
metadata:
  author: [mattpocok, berkes]
  origin: https://github.com/mattpocock/skills

Test-Driven Development

  1. Create a plan for this feature; make sure it follows this skill's guide.
  2. Decide if an integration test is needed.
  3. If so, write a failing integration test.
  4. Otherwise, write a failing unit test.
  5. Determine what you expect to fail.
  6. Run the tests with just test.
  7. If nothing fails, something is wrong.
  8. If the actual failure does not match the expected failure, change the tests.
  9. Write the code to make the test pass.
  10. Run the tests with just test.
  11. Refactor the code.
  12. Repeat the loop (steps 2-11) until the feature is complete.

Strictly follow the other instructions in the project's AGENT.md

Test Levels

There are three levels of tests. Use the decision guide below to choose the right level.

Unit tests test a single module in isolation. All collaborators are replaced with test doubles (Stub, Spy, Mock, or Dummy). They are fast, numerous, and specific. See tests.md and mocking.md.
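
For illustration, a minimal unit-test sketch assuming Vitest; OrderService and PaymentGateway are hypothetical names, and the only collaborator is replaced with a stub:

```typescript
import { describe, it, expect, vi } from 'vitest';

// Hypothetical collaborator port.
interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean }>;
}

// Hypothetical module under test.
class OrderService {
  constructor(private readonly gateway: PaymentGateway) {}

  async placeOrder(amountCents: number): Promise<'placed' | 'rejected'> {
    const result = await this.gateway.charge(amountCents);
    return result.ok ? 'placed' : 'rejected';
  }
}

describe('OrderService (unit)', () => {
  it('rejects the order when the charge fails', async () => {
    // The collaborator is a stub; only OrderService itself is real.
    const gateway: PaymentGateway = {
      charge: vi.fn().mockResolvedValue({ ok: false }),
    };

    const service = new OrderService(gateway);

    expect(await service.placeOrder(1000)).toBe('rejected');
  });
});
```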

Integration tests test the interaction between modules. Real modules are wired together; only external infrastructure (database, network) may be replaced with adapters. Ports and Adapters make this natural: swap a production adapter for an in-memory one. See tests.md.
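
A sketch of that adapter swap, again assuming Vitest; UserRepository, InMemoryUserRepository, and registerUser are hypothetical names standing in for a real port, adapter, and use case:

```typescript
import { describe, it, expect } from 'vitest';
import { randomUUID } from 'node:crypto';

// Port: the only thing the use case knows about persistence.
interface UserRepository {
  save(user: { id: string; email: string }): Promise<void>;
  findByEmail(email: string): Promise<{ id: string; email: string } | undefined>;
}

// Test adapter: stands in for the production database adapter.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, { id: string; email: string }>();

  async save(user: { id: string; email: string }): Promise<void> {
    this.users.set(user.id, user);
  }

  async findByEmail(email: string): Promise<{ id: string; email: string } | undefined> {
    return [...this.users.values()].find((u) => u.email === email);
  }
}

// Use case wired to the port, not to any concrete database.
async function registerUser(repo: UserRepository, email: string): Promise<string> {
  if (await repo.findByEmail(email)) throw new Error('email already registered');
  const id = randomUUID();
  await repo.save({ id, email });
  return id;
}

describe('user registration (integration)', () => {
  it('stores a new user and rejects duplicate emails', async () => {
    const repo = new InMemoryUserRepository();

    await registerUser(repo, 'a@example.com');

    await expect(registerUser(repo, 'a@example.com')).rejects.toThrow();
  });
});
```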

End-to-end (e2e) tests test complete user-visible journeys against the running system. They are slow, few, and broad. Run with just test-e2e.
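
For example, a minimal journey sketch assuming Playwright with a configured baseURL; the routes, labels, and messages are made up:

```typescript
import { test, expect } from '@playwright/test';

// Exercises the running system through the UI, end to end.
test('user can check out with a valid cart', async ({ page }) => {
  await page.goto('/products/teapot');
  await page.getByRole('button', { name: 'Add to cart' }).click();

  await page.goto('/checkout');
  await page.getByRole('button', { name: 'Place order' }).click();

  await expect(page.getByText('Order confirmed')).toBeVisible();
});
```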

Testing pyramid

Few e2e tests → more integration tests → many unit tests.

Ports and Adapters architecture makes the pyramid work: business logic lives in domain modules that have no infrastructure dependencies, so it can be unit-tested without mocks of infrastructure.
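
A sketch of such a domain module, assuming Vitest; applyBulkDiscount is a hypothetical pure business rule:

```typescript
import { expect, test } from 'vitest';

// Domain module (normally its own file): no I/O, no framework imports,
// so it can be unit-tested without any infrastructure test doubles.
export function applyBulkDiscount(unitPriceCents: number, quantity: number): number {
  const total = unitPriceCents * quantity;
  return quantity >= 10 ? Math.round(total * 0.9) : total;
}

test('orders of ten or more units get a 10% discount', () => {
  expect(applyBulkDiscount(500, 12)).toBe(5400);
});

test('smaller orders pay full price', () => {
  expect(applyBulkDiscount(500, 9)).toBe(4500);
});
```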

When to write which test

  • New feature → add an e2e test (new file or extend existing file in tests/e2e/)
  • Change behavior of existing feature → update the existing e2e test, or add one if the change cannot be caught there
  • Change how modules interact → add or update an integration test
  • Change or add detail inside a single module → add or update a unit test
  • User-facing bug that is common/visible → add or update an e2e test
  • Bug in a detail or edge case → add or update a unit test
  • Business logic details → unit tests
  • Happy path, common use → e2e tests

Philosophy

Core principle: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn’t.

Good tests are integration-style: they exercise real code paths through public APIs. They describe what the system does, not how it does it. A good test reads like a specification - “user can checkout with valid cart” tells you exactly what capability exists. These tests survive refactors because they don’t care about internal structure.

Bad tests are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn’t changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
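
To make the contrast concrete, a sketch assuming Vitest and a hypothetical Cart module:

```typescript
import { describe, it, expect, vi } from 'vitest';

// Hypothetical module under test.
class Cart {
  private items: { sku: string; priceCents: number }[] = [];
  private total = 0;

  add(item: { sku: string; priceCents: number }): void {
    this.items.push(item);
    this.recalculate();
  }

  totalCents(): number {
    return this.total;
  }

  private recalculate(): void {
    this.total = this.items.reduce((sum, i) => sum + i.priceCents, 0);
  }
}

describe('Cart', () => {
  // Good: verifies behavior through the public interface and survives
  // any internal refactor that keeps the total correct.
  it('totals the prices of the items added to it', () => {
    const cart = new Cart();
    cart.add({ sku: 'teapot', priceCents: 1500 });
    cart.add({ sku: 'cup', priceCents: 500 });
    expect(cart.totalCents()).toBe(2000);
  });

  // Bad: coupled to the implementation; renaming or inlining the private
  // recalculate method breaks this test even though behavior is unchanged.
  it('calls recalculate once per added item', () => {
    const cart = new Cart();
    const spy = vi.spyOn(cart as any, 'recalculate');
    cart.add({ sku: 'teapot', priceCents: 1500 });
    expect(spy).toHaveBeenCalledTimes(1);
  });
});
```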

See tests.md for examples and mocking.md for mocking guidelines.

Anti-Pattern: Horizontal Slices

DO NOT write all tests first, then all implementation. This is “horizontal slicing” - treating RED as “write all tests” and GREEN as “write all code.”

This produces crap tests:

  • Tests written in bulk test imagined behavior, not actual behavior
  • You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior
  • Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
  • You outrun your headlights, committing to test structure before understanding the implementation

Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.

WRONG (horizontal):
  RED:   test1, test2, test3, test4, test5
  GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical):
  RED→GREEN: test1→impl1
  RED→GREEN: test2→impl2
  RED→GREEN: test3→impl3
  ...

Workflow

1. Planning

When exploring the codebase, use the project’s domain glossary so that test names and interface vocabulary match the project’s language, and respect ADRs in the area you’re touching.

Before writing any code:

  • Confirm with user what interface changes are needed
  • Confirm with user which behaviors to test (prioritize)
  • Identify opportunities for deep modules (small interface, deep implementation)
  • Design interfaces for testability
  • List the behaviors to test (not implementation steps)
  • Get user approval on the plan

Ask: “What should the public interface look like? Which behaviors are most important to test?”

You can’t test everything. Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.

2. Tracer Bullet

Write ONE test that confirms ONE thing about the system:

RED:   Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes

This is your tracer bullet: it proves the path works end-to-end.
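
One possible tracer-bullet cycle, assuming Vitest and a hypothetical slugify feature:

```typescript
import { expect, test } from 'vitest';
import { slugify } from './slugify';

// RED: this fails first because slugify does not exist yet.
test('lowercases and hyphenates a title', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// GREEN: the minimal slugify.ts that makes it pass, and nothing more:
//   export function slugify(title: string): string {
//     return title.toLowerCase().replace(/\s+/g, '-');
//   }
```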

3. Incremental Loop

For each remaining behavior:

RED:   Write next test → fails
GREEN: Minimal code to pass → passes

Rules:

  • One test at a time
  • Only enough code to pass current test
  • Don’t anticipate future tests
  • Keep tests focused on observable behavior
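
Continuing the hypothetical slugify example, a later cycle adds exactly one behavior learned from using the previous implementation:

```typescript
import { expect, test } from 'vitest';
import { slugify } from './slugify';

// RED: the next observable behavior, one test at a time.
test('strips punctuation before hyphenating', () => {
  expect(slugify('Hello, World!')).toBe('hello-world');
});

// GREEN: extend slugify only as far as this test requires, e.g.
//   title.toLowerCase().replace(/[^a-z0-9\s]/g, '').trim().replace(/\s+/g, '-')
// No accent handling, empty-string handling, etc. until a test demands it.
```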

4. Refactor

After all tests pass, look for refactor candidates:

  • Extract duplication
  • Deepen modules (move complexity behind simple interfaces)
  • Apply SOLID principles where natural
  • Consider what new code reveals about existing code
  • Run tests after each refactor step

Never refactor while RED. Get to GREEN first.

Checklist Per Cycle

[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added