This is a living document of various things I like to add to a document I call...
The Review Checklist
My implementations at work may differ, but these are the guidelines I start from.
- When this fails, how do we find out about it? (notification/alerting)
- When this fails, how do we learn how to fix it? (audit trails/reporting)
- When this fails, can we automate a fix? (nurse process)
- How do we know if users are using this?
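The "nurse process" item above can be sketched as a small watchdog loop: detect the failure, attempt an automated fix, and escalate only when the fix doesn't stick. Everything here is hypothetical scaffolding: `check_health`, `attempt_fix`, and `alert` stand in for whatever probe, remediation, and notification hooks your system actually has.

```python
def nurse(check_health, attempt_fix, alert, max_attempts=3):
    """Detect a failure, try an automated fix, escalate if it does not stick."""
    if check_health():
        return "healthy"
    for _ in range(max_attempts):
        attempt_fix()
        if check_health():
            return "auto-recovered"
    alert("automated fix did not work; a human is needed")
    return "escalated"

# Demo with stubs: the service starts down, and one fix brings it back.
state = {"up": False}
result = nurse(
    check_health=lambda: state["up"],
    attempt_fix=lambda: state.update(up=True),
    alert=print,
)
print(result)  # auto-recovered
```

The injected callables are what make the nurse itself testable: you can exercise the "healthy", "auto-recovered", and "escalated" paths without a real outage.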
- Are consecutive statements at similar levels of abstraction? (Clean Code - Uncle Bob)
- What something does is more important to readability than how.
- API should be at top, details at bottom.
- Things should not be named for their usage.
- Can lint rules be improved to capture readability concerns?
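On turning PR conversations into lint checks: recurring review comments are often mechanical enough to automate. As a sketch, here is a tiny `ast`-based check that flags bare `print()` calls, standing in for whatever comment your team keeps leaving by hand ("use the logger"):

```python
import ast

def find_print_calls(source: str) -> list:
    """Return line numbers of bare print() calls -- the kind of recurring
    PR comment that can become an automated check in the build step."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            hits.append(node.lineno)
    return hits

sample = "def save(x):\n    print('saving', x)\n    return x\n"
print(find_print_calls(sample))  # [2]
```

In practice you would register something like this as a plugin for your existing linter rather than run it standalone, but the principle is the same: once a comment is a rule, reviewers never have to type it again.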
- Is there any user-facing documentation?
- Internal Documentation?
- Does the documentation live near the thing it documents?
- How does the Changelog get updated?
- What do we do with the data when it is old?
- Does linting happen in the build step? (Are there any PR conversations that could be lint checks instead?)
- Do Unit Tests run before Integration Tests?
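Assuming a pytest suite where integration tests carry an `integration` marker, a `conftest.py` hook can enforce the unit-tests-first ordering. The `_Stub` class below is only there to demo the hook outside a real pytest run:

```python
def is_integration(item) -> bool:
    """True when a collected test carries the 'integration' marker."""
    return item.get_closest_marker("integration") is not None

def pytest_collection_modifyitems(config, items):
    # Stable sort: unmarked (unit) tests first, integration tests last.
    items.sort(key=is_integration)

class _Stub:
    """Minimal stand-in for a collected pytest item, for illustration only."""
    def __init__(self, marker):
        self._marker = marker
    def get_closest_marker(self, name):
        return self._marker if self._marker == name else None

unit, integ = _Stub(None), _Stub("integration")
items = [integ, unit]
pytest_collection_modifyitems(None, items)
print(items[0] is unit)  # True -> unit tests run first
```

Fast unit tests failing first gives the quickest possible feedback before the slower integration suite spends its time.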
A great guide to testing is Kent Beck's "Test Desiderata" (after all, Kent Beck is the father of TDD):
- Are tests coupled to the behavior?
- Are tests decoupled from structure?
- Are tests easier to write than production code?
- Do the tests run quickly?
- When tests pass, do they inspire confidence about the releasability of the code?
- Isolated: do tests return the same results regardless of the order in which they are run?
- When a test fails, is the reason obvious?
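The isolation point deserves an example. A test that owns its own sources of nondeterminism (random seeds, clocks) returns the same result no matter when or in what order it runs. `sample_discount` is a made-up function under test:

```python
import random

def sample_discount(rng):
    """Hypothetical code under test: picks a discount percentage."""
    return rng.choice([5, 10, 15])

def test_discount_is_deterministic():
    # Isolated: the test owns its RNG rather than touching global state,
    # so the result cannot depend on which tests ran before it.
    first = sample_discount(random.Random(42))
    second = sample_discount(random.Random(42))
    # Obvious failure reason: the message says exactly what went wrong.
    assert first == second, f"same seed gave {first} then {second}"

test_discount_is_deterministic()
print("ok")
```

The same pattern applies to time (inject a fake clock) and I/O (inject a fake client): anything the test doesn't control is a way for it to flake.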
- Can this test be run without human intervention? (Will CI pick it up?)

Additional
- Are we missing any tests?
- Is it possible that two callers are competing for the same resource? If so, how do we address and test it?
- Is the code compatible with upgrades/downgrades?
- Have APIs remained stable?
- Does anything in this code need to be versioned?
- Who needs to be notified of this change? (Can that notification be automated?)
- Was any work done outside of the code that could be checked in as a useful archive for other team members? (e.g., a one-off curl request)
- Should any of the manual testing setup be automated?
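For the competing-callers question above, one way to address and test it is a deliberate stress test around the shared resource. This is a sketch with an invented `Counter`; the point is that the lock makes the outcome deterministic, while removing it can make the count come up short and turns the test into a flake detector:

```python
import threading

class Counter:
    """A shared resource that two callers might compete for."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:  # remove the lock and this count can come up short
            current = self.value
            self.value = current + 1

def hammer(counter, n=10_000):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 20000 with the lock in place
```

Checking a stress test like this in also answers the last item: a manual "run it twice at once and see" session becomes a repeatable piece of the suite.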