The Reality Check
Where I Lean on AI
- Boilerplate and repetition
- Explaining and navigating code
- Drafting tests and docs
- Small refactors and renames
Where I Don’t Rely on AI
- Architecture and design
- Security- and data-sensitive logic
- Critical paths and business logic
How I Review Every Suggestion
- Read it. Not just the changed lines, but the surrounding context. Does it match the intent? Does it introduce a new dependency or side effect?
- Run the linter. Most “almost right” code fails lint (wrong types, unused vars, style). Fixing those often surfaces logic issues too.
- Run the tests. If there are tests for the area I changed, I run them. If not, I add a quick sanity check or run the app and click through the flow.
- Understand it. If I can’t explain the suggestion to someone else, I don’t merge it. If I’m in a hurry, I leave a TODO and come back.
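A "quick sanity check" doesn't need a test framework; it can be a few assertions run against the changed function before merging. A minimal sketch, where `parsePrice` is a hypothetical AI-suggested function under review:

```typescript
// Hypothetical function under review: an AI-suggested price parser.
function parsePrice(input: string): number | null {
  const match = input.match(/^\$?(\d+(?:\.\d{1,2})?)$/);
  return match ? Number(match[1]) : null;
}

// Quick sanity check: assert on the cases I actually care about.
console.assert(parsePrice("$19.99") === 19.99, "dollar sign accepted");
console.assert(parsePrice("19.99") === 19.99, "bare number accepted");
console.assert(parsePrice("abc") === null, "garbage rejected");
console.assert(parsePrice("") === null, "empty string rejected");
console.log("sanity checks passed");
```

If any assertion fails, that is a signal to go back to step one and re-read the suggestion, not to merge and hope.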
Prompts That Actually Help
- Bad: “Fix this function.”
- Better: “Refactor this function to use async/await and return a Result type; keep the same error messages.”
- Bad: “Write tests.”
- Better: “Add unit tests for this function; cover the success case and when the API returns 404 and 500.”
- Bad: “Optimize this.”
- Better: “This loop runs on a large array; suggest a more efficient approach and keep the same return shape.”
When the Output Is “Almost Right”
- Use the good parts. Keep the structure or the lines that are clearly correct.
- Fix the rest. Correct the logic, types, or edge cases instead of re-prompting forever.
- Note the failure mode. If the same kind of mistake repeats (e.g. wrong null checks), I adjust how I prompt or what I ask the tool to do next time.
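The "wrong null checks" failure mode usually shows up as a truthiness check that also rejects valid falsy values like `0` or `""`. A hypothetical before/after, with `formatCount` as an invented example:

```typescript
// Common AI suggestion: a truthiness check, which wrongly treats 0 as missing.
function formatCount(count: number | null): string {
  if (!count) return "unknown"; // bug: 0 falls into this branch too
  return `${count} items`;
}

// The fix: check for null/undefined explicitly so 0 still renders.
function formatCountFixed(count: number | null): string {
  if (count == null) return "unknown"; // only null and undefined match
  return `${count} items`;
}

console.assert(formatCount(0) === "unknown");       // the bug: 0 is lost
console.assert(formatCountFixed(0) === "0 items");  // the fix
console.assert(formatCountFixed(null) === "unknown");
```

Once I noticed this pattern repeating, I started saying "treat 0 and empty string as valid" in the prompt instead of fixing it by hand every time.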

