Using Claude

I recently, unexpectedly, won a bid for a short-term software development gig. My task was to write a data entry system with some bells and whistles around accessibility, input validation, PDF generation, etc. Very normal stuff. But the timeline was extremely aggressive and I was not confident with some of the technology I would be working with. I used Claude to great effect to get it done. The client liked the result.

Some people have trouble getting LLMs to write useful code for them. Here I explain what I think works and what does not. Let’s dive in.

What not to do

The first step in writing a program is identifying the problem you’re solving. You sort of know “where you are now” and you sort of know “where you want to end up” and maybe you know some stuff about constraints and requirements between those places.

Do not try to go directly from “the state before you know what your problem even is” to “done”. You will have a bad time, like the bad time my example user is having below:

ImpishUser
hi can you build me a machine learning model that predicts stock prices and also works for crypto and forex and commodities and can you make it better than what hedge funds use and also add sentiment analysis from twitter and reddit and make sure it has really good accuracy like 99% or better and works in real time and maybe add some reinforcement learning and GANs and also make it easy to use thanks ps it needs to work perfectly with no bugs and handle any market conditions
Chatbot
Here’s a Python script that implements a basic LSTM model for price prediction:
WearyUser
this doesn’t work at all… it’s not even complete code and doesn’t do anything I asked for. can you make something that actually works?

This conversation sucks! The user dumps multiple complex requirements without any specifics. The core problem is that the thing they are asking for is confused. Worse, they’re likely to try to “fix” it by getting into an argument with the chatbot. At which point they’ll be given a monkeypatch, which has a good chance of not working. And when they ask for a fix-for-the-fix, now they’re getting “a monkeypatch for the already incorrectly monkeypatched code”. The user is accumulating technical debt, in the chat session! Not fun, not where you want to be, likely to result in getting mad at computers.

What to do

If you find yourself in an unproductive aggravating loop: revise your earlier messages aggressively as you refine your understanding of what you want.

Use “problem decomposition” aggressively to break down the task into bite-sized pieces. Have the bot write code that finishes off those bite-sized pieces.

Those are the only rules! The rest of the post is about how to do that. Using LLMs for programming is, ideally, something like having a combination rubber duck and grunt-worker. Discuss the task until you understand the task. Walk the bot through the process of breaking the task down. Doing this is often easier than doing it yourself!

Figure out what shape your solution should have. Break the problem down some. Then do it more. Have the chatbot do the work of breaking it down! Just keep it on-task and prune conversational branches where it gets lost. Eventually you arrive at a bunch of what are basically “interfaces”: little snippets of pseudocode that LLMs are very good at filling in for you.
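To make that concrete, here is a minimal sketch of what such “interfaces” might look like, using a made-up data-entry pipeline (every name here is hypothetical, invented for illustration — not from the actual project):

```python
from dataclasses import dataclass


@dataclass
class CleanedRecord:
    """Output shape of the cleaning stage (hypothetical example)."""
    name: str
    amount_cents: int


def load_records(path: str) -> list[dict]:
    """Read raw rows from the file at `path`. Returns one dict per row."""
    ...


def clean_records(raw: list[dict]) -> list[CleanedRecord]:
    """Validate and normalize raw rows; drop rows missing required fields."""
    ...


def render_report(records: list[CleanedRecord]) -> str:
    """Render cleaned records into a plain-text summary."""
    ...
```

Each stub pins down the names, the data shapes, and the contract; the chatbot’s job shrinks to filling in bodies whose inputs and outputs are already fixed.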

Below is a very heavily redacted conversation that went especially well and demonstrates some things.

CoolUser
I’m going to show you a script. explain what it does. Note any idiosyncratic language strengths being leveraged that might make it challenging to port to another language. write a documentation/readme document to help a user understand what this is doing and what assumptions it is making or what data shape it is assuming
Chatbot
I’ll create comprehensive documentation for this script that analyzes what appears to be [redacted].

So now the context contains the input code and a bunch of nitpicking about the exact thing being done. This is good so far. We then gave it a bit more context about other people’s past attempts to solve the problem. This seems useful but isn’t really worth posting verbatim so I’ll skip ahead to the next notable thing:

CoolUser
next up: would it be easier to comprehend this task if we split it into modules? does the script/library basically do a series of stages of separable/discrete tasks?
Chatbot
Yes, looking at the code and documentation, this pipeline can be clearly separated into distinct modules. Let me break down the major stages:

We then hammer out the exact shape the data is expected to have at the beginning and end of each “module”. So far we’re just creating “documentation”.
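This kind of shape documentation tends to end up living right on the stage itself. Here is a hypothetical example of what one stage’s contract might look like after that discussion (the field names and the drop-malformed-rows policy are invented for illustration):

```python
def preprocess(rows: list[dict]) -> list[dict]:
    """Preprocessing stage (hypothetical).

    In:  raw rows, each a dict with at least 'id' and a string 'value'.
    Out: rows with 'id' (str) and 'value' (float); rows that fail to
         parse are dropped here rather than passed downstream.
    """
    out = []
    for row in rows:
        try:
            out.append({"id": str(row["id"]), "value": float(row["value"])})
        except (KeyError, ValueError):
            continue  # drop malformed rows instead of letting them leak
    return out
```

Once every stage states its contract this explicitly, disagreements about “what comes in and what goes out” surface in the conversation instead of at runtime.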

CoolUser
alright. are we clear on the interface (that is: data shape) coming in and out of each phase? if so, let’s proceed to do the data preprocessing module
Chatbot
Let me first explicitly document the expected data shapes for each module, then we can proceed with the first module.
CoolUser
write documentation for this module. what assumptions is it making about what is coming in, what is it doing, and what goes out
Chatbot
The key points about this implementation are:
CoolUser
let us proceed, yes. module, interface, documentations

At this point the bot returned a slick, helpful implementation of the first step. With handholding similar to the above, it was able to do the same for the other five stages. The problem was broken down into tiny little pieces, and the questions in the conversation history were pruned so that only the exact questions we wanted to ask remained.

If I get an unhelpful answer then it is very likely I have asked the wrong question. The most likely cause of asking the wrong question is “not knowing what it is I actually want”. Go back and try again until the conversation is the idealized version of the conversation, where you always ask the exact right questions. This avoids the problem of having endless cascades of “fixes” – simply generate code that is already either correct or close enough to correct that you can trivially correct it on the spot.

Overall I came away from this task feeling like I understood it better than when I went in, and also like I had to do approximately no work to get that understanding, and to get the work product. I think this generalizes fairly well. As of 2025, chatbots are much better about not hallucinating stuff. So they are extremely good at writing boilerplate, and you can transform many tasks into boilerplate by breaking them down over and over and over until they assume the shape of “write 10 boilerplate things”, which you then have the chatbot write. Ez Ez GG!