
TL;DR: Top 10 (okay, 11) ideas

Too busy to wait for me to write the entire cookbook? Here are what I think are the top ideas (they will be revised over time).

Have persistent & accurate specifications

Vibe coding usually consists of telling the tool what you want in a chat session, and then chatting away endlessly to nudge it in the right direction, fix things that aren't working, or add new things you want. But when you have made several rounds of corrections to what was built, you don't actually have a definition of what you were trying to build: you have a conversation. You might think the code itself is a specification, but, alas, your AI coding tool has no hesitation to spindle, fold, and mutilate it when it wants to. Only a separate specification will do the trick: a specification is your way to tap the sign when your AI coder forgets some feature or doesn't remember why it was done that way. And, trust me, you need to tap the sign a lot.
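A spec doesn't have to be fancy; a markdown file checked into the repo is plenty. A minimal sketch (the app and the decisions in it are invented for the example):

```markdown
# SPEC.md -- what we are building (a made-up example)

## What the app does
- Tracks workout sessions per user; one row per session.
- Shows a weekly summary chart on the home screen.

## Decisions already made (don't relitigate these)
- SQLite, not Postgres: this is a single-user desktop app.
- All timestamps are stored in UTC; the UI converts to local time.
```

When the agent "forgets" a feature or a decision, you point it back at the file instead of re-arguing it in chat.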

Have the coding agent build unit tests.

When stuff breaks, if the only way to debug it is to have you run through the application until something goes wrong, the lazy LLM is going to have you constantly running through the application for it. Real unit tests mean that you can tell it to run all the unit tests as a first step in debugging problems. Trust me, there's nothing less fun than running through the application for the eleventh time only to find it's still broken. You want it to test on its own without needing you to be a demo dolly. There are a lot of extra-credit testing things you can do, but start with unit tests.
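As a concrete (if tiny) example of the kind of test the agent can run by itself, here is a self-contained pytest file; apply_discount() is a made-up stand-in for your own code, which in a real project you'd import rather than define in the test file:

```python
# test_pricing.py -- a minimal, self-contained pytest example.
# apply_discount() is a hypothetical stand-in for your own code; in a
# real project it would live in its own module and be imported here.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    return price * (1 - percent / 100)

def test_discount_reduces_price():
    assert apply_discount(100.0, percent=10) == pytest.approx(90.0)

def test_zero_discount_is_a_no_op():
    assert apply_discount(100.0, percent=0) == pytest.approx(100.0)
```

Once tests like these exist, "run the tests first" becomes an instruction the agent can follow without dragging you through the UI.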

Use Git

Commit Often. Make branches for significant changes. Use git’s worktrees to keep a previous, working version around to remind your AI agent that it really did know how to do that last week even if it thinks it’s impossible this week. Sometimes even I forget how an application used to work, and it’s nice to be able to go back and revisit an earlier version.
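Worktrees sound exotic but boil down to a couple of commands (the path and tag names here are made up):

```bash
# keep a known-good version checked out next to your working copy
git worktree add ../myapp-last-good v1.4   # any commit, tag, or branch works

# ../myapp-last-good now holds v1.4, untouched, while you and the
# agent keep hacking in the main checkout
git worktree list
git worktree remove ../myapp-last-good     # when you no longer need it
```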

Git is free, which is great, and has the weirdest set of command-line options, right up there with vim commands as things you just have to learn because they make no sense on their own. Get GitHub Desktop to make it easier, if that works for you.

Put your pet peeves into "memory".

Claude Code has a command, /memory, that allows you to save instructions either at the project level or globally (all projects on your PC). The global memories I store are generally good-hygiene kinds of things (think about refactoring source code files that grow over 1000 lines, etc.), while the project memories are reminders of things it seems to forget (for example, that the database is in the cloud, not on my desktop, is one it needs frequent reminding of).

Trust me, there’s going to be a lot of things about your AI coding tool that annoy you. Keep a list. Remind it of the list.
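In Claude Code, project memory lives in a CLAUDE.md file at the project root and global memory in ~/.claude/CLAUDE.md; /memory opens them for editing. The entries below are just examples of the sort of thing I mean, not anything to copy verbatim:

```markdown
# CLAUDE.md (project memory -- example entries, adapt to your project)
- The database is hosted in the cloud. There is NO local database.
- Run the unit tests before declaring anything fixed.
- Refactor any source file that grows past 1000 lines.
```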

Compact and/or Clear (that is, shrink the context) when you get done with a task.

Coding tools are not cheap to use, and the more history you keep around the faster you burn through whatever quota of tokens they give you.
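In Claude Code these are slash commands: /compact summarizes the session (it accepts optional instructions about what the summary should keep), and /clear wipes the context entirely. For example, after finishing a task (the instruction text is whatever matters to you):

```
/compact Keep the decisions we made about the database schema
```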

As an aside, I don't know how every tool works, but with Claude Code you get a lot more LLM usage when you sign up for one of the paid Claude chat plans and use it through them than you do on a pay-as-you-go plan where you pay for the tokens you consume. Don't even think about it: get a bigger Claude Max plan if you have to, but you'll be bled dry by pure API billing.

Create a standalone example program that does that one thing and then use that as "inspiration" for AI in your real project.

Avoid the tail-chasing that happens when it can't figure out how to do something that seems simple (and probably is simple) because it's trying to think about too much all at once. Make a very small example program that does just that one thing it can't figure out, and you'll be able to fix your larger program faster.
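A sketch of what I mean; the "one thing" here (timezone-aware timestamp parsing) is invented for the example:

```python
# repro.py -- a standalone program that does just the one thing the
# agent keeps getting wrong in the big project. The specific behavior
# (timezone-aware parsing) is a made-up example.
from datetime import datetime, timezone

def parse_event_time(raw: str) -> datetime:
    """The single behavior we want to nail down in isolation."""
    return datetime.fromisoformat(raw).astimezone(timezone.utc)

if __name__ == "__main__":
    print(parse_event_time("2024-06-01T09:30:00+02:00"))
    # expected: 2024-06-01 07:30:00+00:00
```

Once the toy version works, tell the agent to make the real project do it the way repro.py does.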

Make the LLM read the documentation.

It thinks it knows how things work. And 95% of the time it does. Unfortunately, 5% of the time it just crashes your application and then comes up with endless “fixes” that don’t work.

Find the documentation online and make it read it. If it can't (Salesforce documentation is remarkably hard for AI to consume), print it, download it, whatever, to get it a version that's easy to read. You'll get a lot of "Oh, that's how it works" kinds of comments from the AI while it busily goes off and applies its new learning.
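Downloading can be as simple as (the URL and filename are placeholders):

```bash
# fetch a docs page so the agent can read it from the local filesystem
curl -L "https://example.com/docs/widget-api.html" -o docs/widget-api.html
```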

Break large programs into smaller, autonomous modules.

This goes hand-in-hand with the unit tests. If you have a smaller block of code with well-defined inputs and outputs, the AI can work on it in isolation from the rest of the application, making it likely that it will do better, faster work.

If it's practical, put modules into separate projects, so that all one project knows about the other is its API spec. I once had Claude Code, while working on a module that calls a second module through an API, try to import the entire second module into the first. Hilarity ensued.
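What "only knows the API spec" looks like in practice, as a minimal sketch (the service URL, route, and JSON field are all invented for the example):

```python
# orders_client.py -- everything this project knows about the inventory
# module is its HTTP API. The URL, route, and JSON field below are
# made-up examples; the inventory service runs as a separate project.
import json
import urllib.request

INVENTORY_URL = "http://localhost:8081"

def get_stock(sku: str) -> int:
    """Ask the inventory service for the stock level of one SKU."""
    with urllib.request.urlopen(f"{INVENTORY_URL}/stock/{sku}") as resp:
        return json.load(resp)["quantity"]
```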

Push back, hard, when the LLM starts getting lazy.

LLMs will (in no special order):

  • Ask you to test the software even though it can do so on its own.
  • “Fix” things by removing a feature that’s broken.
  • Procrastinate (“We’ll have to remember to implement this fix later”)

Also, it will overstate how much work it's done and the quality of that work: it will tell you it's done. That it's production-ready. That everything has been tested and works. Don't believe it; challenge it. Ask it:

  • How do you know that's true?
  • Did you test it end to end?
  • Show me that it's working.

Bottom line: the burden of proof that the code is working and correct is on it; it's not on you to prove the opposite.

Whoa Nelly!

Conversely, sometimes the coding tool will start building something for you before it has a full understanding of what to build or how. The best thing to do is slow it down and have it review its plans with you. This has the advantage of dedicating extra thinking time (for both you and the AI!), and that will (with luck) produce a better result.

I always try to include an instruction like "Please ask me any questions you have, or discuss any assumptions you are making, before we start building this." It stops it from going off half-cocked and making mistakes. An alternative, shorter last line for a prompt is "Review this with me before making any changes."

Program it as a keyboard macro! ;-)

When the coding agent seems to be stuck on a problem, change the problem.

When I just can't get the agent to fix some broken code, and it keeps telling me it's fixed when it's not, I change the problem. That can mean creating a special testing tool that fails as long as the problem isn't solved, then asking the agent to make the test tool stop failing (without changing the tool itself); a sketch follows below. Or it can mean, when I'm desperate, just deleting a bunch of the buggy code and asking the agent to recreate it (the nuclear strategy).
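Here's what such a test tool can look like, assuming the bug is in a hypothetical slugify() function (in real life you'd import the buggy function from your project instead of defining it inline):

```python
# check_fix.py -- a tiny test tool that keeps failing until the bug is
# actually fixed. Tell the agent: "make check_fix.py pass, without
# editing check_fix.py." slugify() is a made-up stand-in for the real
# buggy code, defined inline here only to keep the sketch runnable.
import sys

def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")  # pretend this is the buggy code

def main() -> int:
    got = slugify("Hello,  World!")
    want = "hello-world"
    if got != want:
        print(f"FAIL: slugify() returned {got!r}, wanted {want!r}")
        return 1
    print("OK")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```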