I recently, and unexpectedly, won a bid for a short-term software development gig. My task was to write a data entry system with some bells and whistles: accessibility, input validation, PDF generation, and so on. Very normal stuff, but the timeline was extremely aggressive. I used Claude to great effect getting it done, and the client liked the result.
Some people have trouble getting LLMs to write useful code for them. Here I explain what I think works and what does not. Let’s dive in.
What not to do
The first step in writing a program is identifying the problem you’re solving. You sort of know “where you are now” and you sort of know “where you want to end up” and maybe you know some stuff about constraints and requirements between those places.
Do not take the vague problem and throw it into a chatbot and expect useful results. You will have a bad time, like the bad time my example user is having below:
This conversation sucks! The user dumps multiple complex requirements without any specifics. The core problem is that the thing they are asking for is confused. Worse, they’re likely to try to “fix” it by getting into an argument with the chatbot. At which point they’ll be given a monkeypatch, which has a good chance of not working. And when they ask for a fix-for-the-fix, now they’re getting “a monkeypatch for the already incorrectly monkeypatched code”. The user is accumulating technical debt, in the chat session! Not fun, not where you want to be, likely to result in getting mad at computers.
What to do
Write better questions
I kid. Let’s try again; here is what to do:
Revise your earlier messages aggressively as you refine your understanding of what you want.
Use “problem decomposition” aggressively to break down the task into bite-sized pieces. Have the bot write code that finishes off those bite-sized pieces.
Those are the only rules! The rest of the post is about how to do that. Using LLMs for programming is, ideally, something like having a combination rubber duck and grunt-worker. Discuss the task until you understand the task. Walk the bot through the process of breaking the task down. Doing this is often easier than doing it yourself!
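To make “bite-sized pieces” concrete, here is a hypothetical sketch in Python. The function names and validation rules are invented for illustration (none of this is the actual project code); the point is that each piece is small enough for the bot to finish off in one response.

```python
# Hypothetical decomposition of a vague task ("build a data entry system")
# into bite-sized pieces. Names and rules are invented for illustration;
# each function is small enough for the bot to finish in one go.

def validate_record(raw: dict) -> dict:
    """Reject records that are missing required fields."""
    missing = [f for f in ("name", "email") if not raw.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return raw

def normalize_record(record: dict) -> dict:
    """Strip whitespace from string fields; lowercase the email."""
    out = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip()
            if key == "email":
                value = value.lower()
        out[key] = value
    return out
```

A nice side effect: each piece is independently testable, which makes checking the bot’s work much easier than auditing one giant blob.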
Figure out what shape your solution should have. Break the problem down some. Then do it more. Have the chatbot do the work of breaking it down! Just keep it on-task and prune conversational branches where it gets lost. Eventually you arrive at a bunch of what are basically “interfaces”: little snippets of pseudocode that LLMs are very good at filling in for you. Below is a very heavily redacted conversation that went especially well and demonstrates some things.
So now the context contains the input code and a bunch of nitpicking about the exact thing being done. This is good so far. We then gave it a bit more context about other people’s past attempts to solve the problem. This seems useful but isn’t really worth posting verbatim so I’ll skip ahead to the next notable thing:
We then hammer out the exact shape the data is expected to have at the beginning and end of each “module”. So far we’re just creating “documentation”.
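That “documentation” can literally be written as typed stubs. Here is a hedged sketch of the idea (the field names are invented; the real schema isn’t reproduced here): the data shape at each module boundary is pinned down before any implementation exists.

```python
# Hypothetical "documentation as code": pin down the exact shape the data
# has at the start and end of a module before asking for an implementation.
# Field names are invented for illustration.
from typing import TypedDict

class RawEntry(TypedDict):
    """What the module receives: untrusted form input."""
    name: str
    email: str
    notes: str

class CleanEntry(TypedDict):
    """What the module must emit: validated, normalized data."""
    name: str
    email: str  # guaranteed stripped and lowercased
    notes: str  # guaranteed present, possibly empty

def clean(entry: RawEntry) -> CleanEntry:
    """Boundary contract only; the chatbot gets to fill in the body."""
    ...
```

Handing the bot a stub like `clean` is exactly the “interface to fill in” move described above.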
At this point the bot returned a slick, helpful implementation of the first step. And with handholding similar to the above, it was able to do the same for the other five stages. The problem was broken down into tiny little pieces, and the conversation history was pruned so that it contained only the exact questions we wanted to be asking.
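Once every stage honors its documented input and output shape, chaining them is trivial. A minimal sketch, with toy stand-in stages (the real six stages aren’t shown here):

```python
# A sketch of composing independently written stages. Because each stage's
# input and output shapes were agreed on up front, the driver is trivial.
# The stages below are toy stand-ins, not the real bot-written ones.

def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

toy_stages = [
    lambda d: {**d, "validated": True},
    lambda d: {**d, "normalized": True},
    lambda d: {**d, "rendered": True},
]
```

If a stage misbehaves, the shape contracts tell you exactly which bite-sized question to re-ask the bot.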
If I get an unhelpful answer, it is very likely I have asked the wrong question. The most likely cause of asking the wrong question is not knowing what it is I actually want. Go back and revise until the conversation is its idealized version, the one where you always ask exactly the right questions. This avoids the endless cascade of “fixes”: you generate code that is already correct, or close enough to correct that you can trivially fix it on the spot.
Overall I came away from this task feeling like I understood it better than when I went in, and also like I had to do approximately no work to get that understanding, and to get the work product. I think this generalizes fairly well. As of 2025, chatbots are much better about not hallucinating stuff. So they are extremely good at writing boilerplate, and you can transform many tasks into boilerplate by breaking them down over and over and over until they assume the shape of “write 10 boilerplate things”, which you then have the chatbot write. Ez Ez GG!