United States - Ekhbary News Agency
Creator of Claude Code Reveals His Workflow, Sending Developers into a Frenzy
When the creator of the world's most advanced coding agent speaks, Silicon Valley doesn't just listen — it takes notes. For the past week, the engineering community has been dissecting a thread on X (formerly Twitter) from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup.
"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment."
The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding — a shift from typing syntax to commanding autonomous units.
From Solitary Coder to Digital Fleet Commander
Cherny's approach radically redefines the developer's role. Instead of the traditional "inner loop" of coding, where a programmer writes, tests, and iterates on a single function, Cherny acts as a conductor orchestrating multiple AI agents. His revelation that he runs "5 Claudes in parallel in my terminal," numbering tabs 1-5 and using system notifications to manage input, has captivated the tech world.
Utilizing iTerm2 system notifications, Cherny effectively manages five simultaneous work streams. While one AI agent runs a test suite, another refactors a legacy module, and a third drafts documentation, Cherny oversees the entire process. He further extends this capability by running "5-10 Claudes on claude.ai" in his browser, employing a "teleport" command to seamlessly hand off sessions between the web interface and his local machine. This multi-agent, multi-platform strategy is key to his amplified productivity.
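Cherny's own setup uses iTerm2 tabs, but the fan-out pattern works with any terminal multiplexer. A minimal sketch, assuming tmux and using `claude` as an illustrative per-window command (it prints the commands instead of running them, so you can review before piping the output to `sh`):

```shell
#!/bin/sh
# Print the tmux commands that would open one numbered window per agent.
# Pipe the output to `sh` to actually create the session.
agent_windows() {
  n="$1"                      # number of parallel agents
  session="${2:-agents}"      # tmux session name
  cmd="${3:-claude}"          # command each window runs (illustrative)

  echo "tmux new-session -d -s $session -n 1 \"$cmd\""
  i=2
  while [ "$i" -le "$n" ]; do
    echo "tmux new-window -t $session -n $i \"$cmd\""
    i=$((i + 1))
  done
}

agent_windows 5   # five numbered windows, mirroring the 1-5 tab setup
```

The numbered window names stand in for Cherny's numbered tabs; notification hooks would be configured separately in the terminal itself.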
This operational model strongly validates the "do more with less" strategy articulated by Anthropic President Daniela Amodei. While competitors like OpenAI focus on massive infrastructure investments, Anthropic, through Cherny's workflow, demonstrates that superior orchestration and intelligent use of existing AI models can yield outsized productivity gains.
The Counterintuitive Choice: Opus 4.5, the Slowest, Smartest Model
In a move that defies the industry's obsession with speed, Cherny revealed his exclusive reliance on Anthropic's most powerful, albeit slowest, model: Opus 4.5. "I use Opus 4.5 with thinking for everything," Cherny explained. "It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end."
This insight is critical for enterprise technology leaders. The primary bottleneck in modern AI development is not token generation speed, but the human time spent correcting AI errors. Cherny's workflow suggests that paying the "compute tax" for a more capable model upfront sharply reduces the subsequent "correction tax" — a cost-benefit calculus that weighs cognitive capability against total human time rather than raw processing speed.
A Shared File for Collective AI Memory
Cherny also addressed the persistent challenge of "AI amnesia" – the inability of standard large language models to retain company-specific coding styles or architectural decisions across sessions. His team's solution is elegantly simple: a single file named CLAUDE.md within their Git repository.
"Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time," he wrote. This practice transforms the codebase into a self-improving entity. When a human developer reviews a pull request and identifies an error, they not only fix the code but also update the CLAUDE.md file, effectively retraining the AI. As product leader Aakash Gupta noted, "Every mistake becomes a rule." The longer the team collaborates, the more refined and intelligent the AI agent becomes.
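In practice, such a file is plain markdown checked into the repository root. A hypothetical excerpt, with rules invented purely for illustration (not Anthropic's actual file):

```markdown
# CLAUDE.md — project conventions for Claude

## Rules added after review mistakes
- Do not use `any` in TypeScript; prefer explicit interfaces.
- All new API handlers must include a rate-limit check.
- Never edit generated files under `src/gen/`; change the schema instead.
```

Because the file lives in Git, every rule is versioned, reviewed, and shared by the whole team, so each correction benefits every future session.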
Slash Commands and Sub-Agents: Automating Tedium
The efficiency of Cherny's "vanilla" workflow is powered by rigorous automation. He employs slash commands — custom shortcuts checked into the project's repository — to execute complex operations with a single command. One such command, `/commit-push-pr`, invoked dozens of times daily, autonomously handles the version-control bureaucracy: staging changes, writing the commit message, pushing, and opening the pull request.
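In Claude Code, custom slash commands are defined as markdown files in the repository (conventionally under `.claude/commands/`, where the filename becomes the command name). A hypothetical sketch of what a `commit-push-pr.md` might contain; the exact wording and frontmatter fields here are assumptions, not Cherny's actual file:

```markdown
---
description: Commit staged work, push, and open a pull request
---
Commit the current changes with a clear, conventional message,
push the branch to origin, and open a pull request using `gh pr create`
with a summary of the changes and a brief test plan.
```

Because the command is just a prompt file in the repo, teammates get the same shortcut the moment they pull.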
Furthermore, Cherny utilizes sub-agents, specialized AI personas, for distinct development phases. A "code-simplifier" refines architecture post-development, while a "verify-app" agent conducts end-to-end tests before deployment. This layered automation streamlines the entire software development lifecycle.
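Sub-agents in Claude Code are likewise defined as markdown files with YAML frontmatter (conventionally under `.claude/agents/`). A hypothetical sketch of the "code-simplifier" persona: the name matches Cherny's description, but the body is invented for illustration:

```markdown
---
name: code-simplifier
description: Simplify recently landed code without changing behavior
tools: Read, Edit, Bash
---
You review recently changed files and simplify them: remove dead code,
flatten needless abstraction, and tighten naming. Run the test suite
after every change and revert anything that breaks it.
```

The `description` field tells the main agent when to delegate to this persona, so each phase of the lifecycle gets a specialist.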
Verification Loops: The True Unlock for AI-Generated Code
The remarkable success of Claude Code, reportedly achieving $1 billion in annual recurring revenue, is largely attributed to its robust verification loops. The AI functions not merely as a code generator but as an integrated tester.
"Claude tests every single change I land to claude.ai/code using the Claude Chrome extension," Cherny stated. "It opens a browser, tests the UI, and iterates until the code works and the UX feels good." He posits that enabling AI to verify its own work—through browser automation, command execution, or test suites—enhances final output quality by "2-3x." The agent doesn't just write code; it rigorously validates it.
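Stripped of the agent itself, a verification loop is just "check, fix, re-check" with a retry budget. A generic shell sketch under that assumption: the check and fix steps are placeholders, where in Cherny's setup the fix step would be the agent iterating on the code:

```shell
#!/bin/sh
# Generic verify-then-fix loop: run a check command; on failure run a fix
# step and try again, up to a retry budget. In an agentic setup the "fix"
# step would be the model revising its own code (placeholder here).
verify_loop() {
  check="$1"    # command that exits 0 when the change is verified
  fix="$2"      # command run after each failed check
  max="${3:-3}" # retry budget

  i=1
  while [ "$i" -le "$max" ]; do
    if sh -c "$check"; then
      echo "verified after $i attempt(s)"
      return 0
    fi
    sh -c "$fix"
    i=$((i + 1))
  done
  echo "giving up after $max attempts" >&2
  return 1
}
```

The same shape applies whether the check is a unit-test run, a lint pass, or a browser-automation session driving the UI.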
A Paradigm Shift in Software Engineering
The industry's reaction to Cherny's workflow signifies a pivotal shift. "AI coding" has evolved from a simple editor autocomplete feature to a comprehensive operating system for labor. As Jeff Tang summarized on X, "Read this if you're already an engineer... and want more power." The tools to multiply human output significantly are readily available. The key lies in reframing AI not as an assistant, but as a capable workforce. Developers who embrace this mental leap will not only achieve greater productivity but will fundamentally redefine their role in the future of software engineering.