Coding against the beat

Play your heart out.


When I was learning to play the drums (and here I should mention that I’m pretty bad at the drums), my instructor told me about how you could know you had really learned a song you were practicing. “Take a different song, something loud and punchy,” she said, “and put it in your headphones. If you can play your song while another beat is playing in your ears, you can really play it.”

I work very regularly with Cursor, which provides LLM-powered autocompletes. Very good ones!

I accept ~20% of the suggestions I get, which to me feels like quite a lot. It often writes big chunks, so I would say it’s responsible for a solid 30% of the total code I write—more, in certain domains like throwaway shell scripts.

These completions are a pretty big speed-up in my daily workflow, and I’d miss them dearly if one day they turned off forever. However, I frequently take “days off” (particularly at the start of a project) to code unassisted in order to fight the core problem with incorporating LLMs into your workflow: even (especially?) if you’re already good at the thing you ask them to do, the tendency is to turn off your critical thinking, accept whatever comes out, and cruise right along.

I think a lot of people assume, intuitively, that this is a bad thing. I think those people are correct—but it’s worth being specific about why! Personally, it’s bad for me to offload my thinking to an LLM because:

  1. I don’t build as sharp a mental model of my solution. This makes it hard, at a certain point, to improve or debug the solution.
  2. I don’t think as critically about error cases and sharp edges, because I don’t see them emerge from the structure of the code as I write it.
  3. I lose fluency with the language/framework/codebase that I’m working on—at the very least, I don’t get more fluent.

These are all, y’know, straightforwardly bad. But for one thing, they might be worth it anyway; for another, they might be easy to mitigate.

I’ve got two practices that have helped me stay sharp with LLM-powered programming workflows. The first, like I mentioned, is frequently working unassisted. I highly recommend binding an easy keyboard shortcut to whatever your editor’s “toggle LLM autocomplete” command is. It’s a really good habit to frequently ask yourself whether the code you’re about to write is something you should write alone.
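As a sketch of what that binding looks like in a VS Code–family editor (Cursor included): you add an entry to your `keybindings.json`. The command ID below is a placeholder, not a real command — look up your editor’s actual toggle command via the keyboard-shortcuts UI or the command palette.

```jsonc
// keybindings.json — open via "Preferences: Open Keyboard Shortcuts (JSON)".
// NOTE: "myEditor.toggleAiAutocomplete" is a hypothetical command ID;
// substitute the real toggle command your editor exposes.
[
  {
    "key": "ctrl+alt+a",
    "command": "myEditor.toggleAiAutocomplete"
  }
]
```

The point is just that the toggle should cost you one keystroke, not a trip through a settings menu — otherwise you’ll never actually flip it.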

The other practice is something I (admittedly cheesily) call “coding against the beat”: writing code with autocomplete enabled—ideally, a very fast and effective one—and refusing to accept any of the solutions, even if they’re exactly what I want to write.

Programming this way forces me to have a crystal-clear vision of my intended solution. If I’m being bombarded with ideas by an overeager intern who types at 150wpm, I have to constantly reaffirm to myself why what I’m writing will solve the problem better. It’s kind of a mental workout; it’s actively draining, but I feel significantly more in control of my solution after doing it.

Now, this isn’t a perfect mitigation. In particular, I’ve noticed that I get serious tunnel vision on whatever solution I’ve decided to implement. I’m so focused on writing a websocket event service that I forget to ask whether I even need real-time updates at all—which is likely a more important skill in the first place!

Despite the downsides, however, I’m confident that LLM-generated code will continue to have a place in my workflow—likely, a larger and larger one—and so I’m happy with anything that lets me mitigate the downsides so I can get back to adopting cool new tools.

If you have a practice you use to keep yourself sharp as you use AI tooling, I’d love to hear it! Please feel free to drop me a line at reedbarnes98@gmail.com.