A Rubber Duck that Talks Back

In software development, there's a classic technique called "Rubber Ducking". The idea is that when you are stuck on a problem, you explain it to a rubber duck (or any inanimate object) in detail. The very act of explaining the problem often helps you see the solution.

So many times I've been stuck on a tricky bug or a complex piece of logic, and by the time I've finished laying out the problem to a colleague, I've figured it out myself. They don't even have to say anything!

I have been using ChatGPT, Claude Code, Gemini CLI and other AI tools for a while now, trying to work out how they fit into my workflow. One day, while futilely explaining to the LLM how a piece of code worked and why its suggestions were wrong, I figured out the issue myself and realised I had been rubber-ducking all along. Only this time, the duck was talking back.

Infinite Code Review

I don't let LLMs write code, because all of the ones I have tried have been spectacularly bad at it. But having a continuous code review on tap, especially when I prompt the model to ask me questions rather than suggest solutions, has been surprisingly good. Just thinking through the answers to the LLM's questions is usually enough to get me unstuck, and it doesn't care if I leave it hanging!

Another bonus: since it isn't generating code, I don't get riled up by the crap it produces. Negative emotions are draining, and I'm much more civil with the LLM as a result, so I may even avoid the first rounds of extermination in the inevitable AI uprising.

Copy Editing

Similarly, my recent blog posts have been written with the help of an LLM. I don't want it to write for me - I want it to help me become a better writer. So typically I write a first draft, then ask the LLM to critique it against criteria I give it.

I always hate my own writing, so having someone who is always willing to read my drivel is great. I don't always agree with the suggestions, but they often make me think about what I'm trying to say in a different way.

Conclusion

A common and valid concern with LLMs is that over-reliance on them can lead to the atrophy of our own skills. If we only use them to generate answers, we risk losing the ability to think critically.

However, using the model as a "talking rubber duck" does the opposite. It doesn't let me offload the work. Instead, by forcing me to articulate my logic and defend my choices, it makes me engage with the problem more deeply.

It's the Socratic method, brought bang up to date.