AI in the Daily Workflow of Software Development

In 2025, two statements about AI caught my attention.

The first came from Tobi Lütke, Shopify’s CEO. He was pretty direct: using AI at the company is no longer optional[1]. The second was from Sahil Lavingia, CEO of Gumroad. According to him, the company has basically stopped hiring junior and mid-level developers because AI already writes most of the code they need[2].

If you work with software today, this changes the context. Mastering these tools isn’t a competitive advantage anymore. It’s part of the job. Andy Hunt made a similar point in The Pragmatic Programmer: good professionals know their tools well.

With that in mind, I want to share how I’ve been using AI in my day-to-day work, specifically in my workflow. The goal isn’t to propose a definitive model, but to show what’s been working for me and raise some practical thoughts about using these technologies more consciously.

Tools

I tested Cursor extensively. The autocomplete is really good, clearly better than GitHub Copilot, which pioneered this space and which I used for quite a while. The main differences are speed, accuracy, and the ability to complete multiple lines at once and navigate between code sections with more context.

The problem is the Pro plan. You hit the limit fast, and “auto” mode doesn’t perform consistently: at times, code quality drops noticeably.

Because of that, I switched to Claude Code. So far, the experience has been positive. An important point is that it’s developed by Anthropic, which currently makes some of the best coding-focused models. In practice, this means access to Claude Sonnet 4.5 and Opus 4.5 at a lower cost, sometimes close to cost.

Spec-driven development

My current workflow is based on spec-driven development. Instead of jumping straight into code, I write the specifications manually. In some cases, I use audio transcription to speed up this initial step. From there, I enter an iterative process of refining the spec.

What made a real difference was using a prompt that forces the AI to question the specification. It raises points I didn’t make explicit, identifies implicit assumptions, and exposes ambiguities that would normally only show up later, during implementation.

The prompt I use today is this:

Read this and interview me in detail using the AskUserQuestionTool about literally anything: technical implementation, UI & UX, concerns, tradeoffs, etc. but make sure the questions are not obvious. Be very in-depth and continue interviewing me continually until it's complete, then write the spec to the file

I create a command in Cursor or Claude Code, run it, and keep answering questions until the spec is solid enough to turn into code.
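As an illustration, in Claude Code a custom slash command is just a Markdown file under `.claude/commands/` in the project; the sketch below saves the interview prompt as a command (the `spec-interview` name is my own choice, and Cursor has its own equivalent mechanism):

```shell
# Save the interview prompt as a reusable Claude Code slash command.
# Claude Code picks up Markdown files in .claude/commands/ and exposes
# each one as a command, so this becomes /spec-interview.
mkdir -p .claude/commands
cat > .claude/commands/spec-interview.md <<'EOF'
Read this and interview me in detail using the AskUserQuestionTool
about literally anything: technical implementation, UI & UX, concerns,
tradeoffs, etc. but make sure the questions are not obvious. Be very
in-depth and continue interviewing me continually until it's complete,
then write the spec to the file
EOF
```

From then on, typing `/spec-interview` in a session runs the prompt against whatever spec is in context.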

React-grab

React-grab is more niche: it’s aimed at developers working with React on the front end. You install it in the project as a dev dependency, and it lets you select UI elements and copy them directly to your clipboard.

The copied content includes the component path, plus its classes and children, both truncated. This significantly reduces the number of tool calls needed when I use AI to generate or adjust UI code, which also cuts token consumption. Day to day, that translates to more speed and less friction.

Repogrep

I started using Repogrep recently, and so far the experience has been good. In many cases, consulting a library’s source code is the clearest way to answer questions about its behavior, limitations, and implementation decisions.

Repogrep indexes public GitHub repositories and lets you ask questions about them through a chat interface. For those working with specific languages and ecosystems, this shortens the path between question and actual understanding of the code.

Footnotes

  1. Statement from Tobi Lütke on X (Twitter).

  2. Statement from Sahil Lavingia on X (Twitter).