Writing is only getting more important
The right words have never been more valuable than they are right now.
Writing and speaking, for me, are thinking.
I learned early in school that I had a talent for winging a presentation or bashing out a paper hours before the deadline. In hindsight, this had less to do with procrastination than it did with my own brain: I wasn’t skipping the work to make something up at the last minute; I was doing the work by making something up. I wish I’d known earlier that what I thought was a cheap hack was actually just how I processed information!
This distinction—between “making something up” and “thinking through an idea”—has gotten much more interesting to me over the last few years, for probably obvious reasons. The concept of a computer “making something up” was essentially science fiction half a decade ago; now we talk about LLM hallucinations as a matter of daily pragmatism.
It’s common knowledge, these days, that we’re all thinking less: LLMs are stealing our cognitive abilities, we’re just operators for our soon-to-be robot overlords, and so on.
Except I don’t actually find this to be true! Even as an enthusiastic early adopter of a ton of different AI tools, I’ve never had to think harder about the words I put into a computer.
Oddly, I’ve always found it easy to write code without thinking—to jump straight to the friendly shapes of loops and logs without actually working through the problem front to back.
It’s pretty funny to see this exact tendency manifest in coding agents. Writing more code is easy! It’s quite a bit harder to write less code, and harder still to know when you’re going the wrong way entirely.
Now, I spend ~20% of my day explaining things to LLMs! As forgiving as coding agents might seem at first, anybody who’s let Claude loose in a codebase knows it’s incredibly easy to blast out thousands of lines of code of which you have almost no understanding.
The wonderful side effect of this is that when I take prompting and explaining really seriously, it makes my work much better. Software engineers have long been familiar with the idea of rubber-duck programming. Just taking the time to explain your thoughts to someone, or something, can reveal things in minutes that you hadn’t realized in hours of your own attempts to understand. Now, that’s the start of nearly every software task I undertake.
While many AI tools make it absurdly easy to accomplish 80% of a task, I’ve yet to find any tool that can nail the final 20% that always seems to take 80% of the effort. In fact, I’m beginning to suspect that’s just the nature of building. The more quickly you can get work done, the faster you get to the edge cases and the iterative improvement—the genuinely valuable stuff.
The nascent profession of “prompt engineer” received a great deal of mockery in 2023. “Imagine,” we laughed, “a whole job title for somebody who tweaks the input to LLMs doing the actual work!”
Except I don’t know anyone today who denies the importance of putting the right information into an LLM. Prompt engineering isn’t an individual job; it’s a table-stakes skill for an increasingly large share of all knowledge work.