Table Stakes for Pragmatic Development Using LLMs

I've been using Claude Code since early access, and while it's dramatically boosted my productivity and code quality, the learning curve was real. Here's what actually works after months of trial and error.

1. Master Your Workflow Strategy

Plan Mode vs Accept Edits

Plan mode is your friend for complex features. Let Claude think through the architecture before touching code. Have a dialogue here - encourage Claude to ask you questions about requirements, constraints, and trade-offs. The best solutions emerge from this back-and-forth, not from one-shot prompts.

Accept edits mode keeps momentum flowing during implementation. I toggle between the two constantly: plan mode to settle the approach, accept edits to execute it quickly.

Building Your PRD

Write specs like you're explaining to a smart junior developer. Include edge cases, error handling, and integration points. Vague requirements create vague code.

I always start by building out a proper PRD foundation. First comes CLAUDE.md - basic context about the project, my coding rules, preferred tech stacks, and style guidelines. Think of it as your project's constitution that any agent can reference.
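As a sketch, a minimal CLAUDE.md might look like this - the sections and project details below are illustrative, not a required schema:

```markdown
# CLAUDE.md

## Project
Invoicing SaaS; monorepo with a FastAPI backend and a React frontend.

## Coding rules
- Python 3.12, type hints everywhere, ruff for linting.
- Small, reviewable commits; one concern per branch.

## Style
- Prefer explicit names over abbreviations.
- No new dependencies without discussing trade-offs first.

## Where to look
- Business context and architecture live in notes/.
```

Short and opinionated beats long and exhaustive here - the file gets read at the start of every session, so every line should earn its place.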

Then I create supporting markdown files covering the business context, core idea, market positioning, and technical architecture. These aren't afterthoughts; they're the knowledge base that makes everything else possible.

I keep all this documentation in a notes/ directory. When spinning up new agents or switching between different parts of the project, these files become the fastest way to get an agent up to speed. No more repeating the same context over and over.
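For reference, my notes/ layout looks something like this (the file names are illustrative):

```
notes/
  business-context.md    # who the customers are, what problem we solve
  core-idea.md           # the one-paragraph pitch and the core loop
  positioning.md         # market positioning, competitors, differentiators
  architecture.md        # services, data flow, key technical decisions
```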

This approach is crucial when running multiple agents in parallel. Each one can quickly understand not just what to build, but why and how it fits into the bigger picture. It's the difference between having a coordinated team and a bunch of confused contractors.

Pick Your Tech Stack Carefully

LLMs work best with popular frameworks and well-documented libraries. Obscure packages or heavily customized setups will slow you down. Stick to mainstream choices unless you have a compelling reason.

Human in the Middle

Set up proper git workflows from day one. I create feature branches for each Claude session and review everything before merging. This isn't just good practice; it's how you catch the subtle bugs that slip through.
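A minimal sketch of that branch-per-session workflow - the branch and file names are illustrative, and it builds a throwaway repo so it runs anywhere:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main                      # -b main needs git >= 2.28
git config user.email "dev@example.com"  # local config so commits work anywhere
git config user.name  "Dev"
git commit -q --allow-empty -m "baseline"

# One branch per Claude session (name is illustrative)
git checkout -q -b claude/login-form
echo "TODO: login form" > login.md       # stand-in for Claude's edits
git add login.md
git commit -q -m "claude session: draft login form"

# Review before merging: see exactly what the session changed
git diff --stat main..claude/login-form

# Merge only after the diff looks right
git checkout -q main
git merge -q --no-ff claude/login-form -m "merge reviewed session"
git log --oneline
```

The `--no-ff` merge keeps each session visible as its own bubble in history, which makes it easy to revert a whole session if a subtle bug surfaces later.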

Ask Claude to build custom debugging tools when working on complex logic. Simple console outputs or test helpers save hours of back-and-forth.

Red Flags in Your Prompts

If Claude's taking forever or producing overcomplicated solutions, step back. Either your codebase is too messy or you're asking for too much at once. Break it down or clean up first.

[Image: time-lapse of a pragmatic LLM development workflow in action - development speed going from hours to minutes]

2. Leverage Terminal Support Tools

Terminal support tools are plentiful and often overlap in functionality, but here are a couple of my favorites:

Claude Squad helps coordinate multiple AI agents on larger projects. Think of it as project management for your AI team.

Claude Composer streamlines the prompt-to-code pipeline with better templating and session management.

[Image: YOLO mode in action - when you trust your workflow, suppress the confirmations]

The recipe for agents working in parallel on the same codebase seems to be sessions in something like tmux plus savvy use of git worktrees, which give each agent its own checkout. This prevents them from colliding with each other's work.
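A minimal sketch of that recipe, assuming git >= 2.28 - the directory, branch, and session names are illustrative, and the tmux lines are commented out so the sketch runs even without tmux installed:

```shell
set -e
work=$(mktemp -d)
git init -q -b main "$work/main"
cd "$work/main"
git config user.email "dev@example.com"
git config user.name  "Dev"
git commit -q --allow-empty -m "baseline"

# A separate checkout (worktree) per agent, each on its own branch
git worktree add "$work/agent-api" -b claude/api >/dev/null
git worktree add "$work/agent-ui"  -b claude/ui  >/dev/null

# One tmux session per worktree; each would run an agent in its own checkout:
#   tmux new-session -d -s agent-api -c "$work/agent-api" claude
#   tmux new-session -d -s agent-ui  -c "$work/agent-ui"  claude

git worktree list   # three checkouts, zero collisions
```

Because each worktree is a full checkout on its own branch, agents never race on the same files; you merge their branches back one at a time, with review, exactly as in the single-session workflow.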

The second piece of the recipe is what's commonly referred to as "YOLO mode" - it suppresses most confirmation prompts. If you're keeping the agents' work small and testable, this works well and keeps momentum high.

Check out the awesome-claude-code repository for the latest tools. This ecosystem is evolving fast.

[Image: the evolution of a developer using AI tools - from typing code to orchestrating intelligent agents]

3. Graduate from Prompts to Agents with Capabilities

Stop thinking in terms of "prompts" and start building actual agents with specific capabilities.

Learn the vernacular: agents have tools, capabilities, and workflows. They also have their own weaknesses. Adopt tooling that lets you test, monitor, and improve your LLM interactions.

LangGraph and the broader LangChain ecosystem provide the scaffolding for more sophisticated automation. You can build agents that handle entire development workflows, not just individual coding tasks.

[Image: AI agents coordinating complex development workflows autonomously]

4. Mind the Gaps

Entity Resolution is Hard

LLMs struggle with keeping track of entities across large codebases. Be explicit about what you're referencing. Use full paths, clear variable names, and consistent naming conventions.

Evals and Finding Your Sanity Before Shipping

Build automated testing into your workflow from the start. LLMs can write tests, but you need to verify they actually catch bugs. Don't ship AI-generated code without proper validation.

Start with error analysis - actually look at what's failing in your interactions with the LLM. Keep a running log of where things go wrong. Are you getting inconsistent formatting? Logic errors? Hallucinated dependencies? Pattern-match these failures and build specific checks for them.

Create simple pass/fail evaluators for the most common issues you identify. A binary judgment is more useful than a complex scoring system. Either the code compiles and tests pass, or it doesn't. Either the response follows your specified format, or it doesn't.

Don't over-engineer this. Start with code-based checks for objective failures (syntax errors, missing imports, wrong file paths) and only build more sophisticated evaluators for subjective quality issues if they're blocking you repeatedly.
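A minimal sketch of such a binary evaluator, assuming a Python project with python3 on the PATH - the generated file and the specific checks are stand-ins for whatever your error analysis surfaces:

```shell
work=$(mktemp -d)
# Stand-in for a file the LLM just generated
printf 'def add(a, b):\n    return a + b\n' > "$work/generated.py"

failed=0
check() {   # $1 is a label; the rest is the command - binary pass/fail, no scoring
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "FAIL: $label"; failed=1
  fi
}

check "compiles"  python3 -m py_compile "$work/generated.py"   # does it even parse?
check "unit test" python3 -c "import sys; sys.path.insert(0, '$work'); import generated; assert generated.add(2, 2) == 4"

[ "$failed" -eq 0 ] && echo "verdict: ship" || echo "verdict: fix first"
```

Each check answers yes or no; the overall verdict is just the conjunction. When a new failure pattern shows up in your log, it becomes one more `check` line, not a redesign.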

5. Prioritize Feedback Tools

Browser-Based Testing Always

If your project has a UI, test in the browser constantly. LLMs can't see what you see, so visual feedback loops are crucial.

Documentation as Collaboration Hub

Docusaurus works brilliantly for PRDs because both humans and AI can read and contribute to the same structured documents. It becomes your single source of truth.

MCP Servers for Tooling

MCP servers let your coding agents interact with external tools directly. Figma is a good example: agents can read design specs straight from the source file. Keep in mind you may need to purchase a license, and you may have to copy designs that customers share with you into your personal workspace so your MCP server can access them.


The key insight: pragmatic LLM development isn't about replacing human judgment; it's about amplifying it. You're still the architect, product manager, and final reviewer. Claude Code just happens to be an incredibly fast and capable contractor.