High-Frequency Claude Code Terminal Shortcuts
Shift + Tab: Cycle Three Modes (Most Important)
Press Shift + Tab repeatedly to cycle:
- Normal (default): asks before modifying any file.
- Accept edits: stops asking — just makes the changes.
- Plan mode: discusses approach only, no modifications.
Esc: Interrupt the Current Action
If Claude is going off-track or you want to change direction, hit Esc to stop the current action and just keep talking with your corrections.
@: Reference Files
Typing @ opens file autocomplete, e.g., @README.md or @src/auth.ts. Much more efficient than copy-pasting entire blocks, and saves context.
Fengtian: Two Max Plans + One Workflow = 10x Productivity
Subtitle: How I write code with AI every day — tools, environment, and workflow.
Intro: Practice Is the Only Criterion of Truth
I often chat with candidates about their AI-coding tool usage in interviews. Most people can't build their own agent tools — that's normal; but leveraging what others have built well is "low-hanging fruit." Yet the common feedback is "it works, but only marginally improves productivity," with the usual reasons: AI hallucinates, doesn't understand the business, error-prone, etc.
Ask how they actually tried it, and the story is always the same: used some tool once, chatted a few casual lines, at most threw a PRD or feature doc at the AI, then concluded that today's LLMs aren't ready, still immature, and that ultimately people are the answer.
Truth is, AI tools aren't useless; you just haven't learned enough about how to use them.
I. Tool Selection: The Floor of Productivity
1. Why I switched entirely to Claude Code (CC) after a year on Cursor and Trae
- Model capability: using Claude's model inside Claude's own product avoids third-party API restrictions, and its coding dominance remains.
- Dangerously mode fits AFK execution: my whole ralph-loop approach is "hand the AI a task, let it run to completion." Cursor can't do this; you babysit every diff. CC's dangerously mode (the --dangerously-skip-permissions flag) skips permission prompts, letting the agent truly execute autonomously, with Git as the safety net.
- Terminal-native consistency: my whole toolchain (Ghostty, Yazi, various CLI tools) is terminal-native. CC fits in naturally; switching to an IDE is friction.
- Stronger community: lots of experts share workflows; easy to stand on the "shoulders of giants."
In one line: Cursor/Trae suits "AI assists me writing code"; CC suits "AI executes tasks for me." The latter feels lighter — I only judge.
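To make the dangerously mode above concrete: in the Claude Code CLI, permission prompts are skipped by launching with the flag below. Only run it inside a version-controlled directory, since Git is the safety net.

```shell
# Start Claude Code without per-edit permission prompts.
# Every change stays reviewable and revertible via git.
claude --dangerously-skip-permissions
```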
2. Model selection: don't go cheap — "results first" is the highest ROI
Cheaper, weaker models look cost-efficient per token but have terrible overall ROI. To get satisfactory results, you burn more time tuning prompts, filling context, correcting errors. For developers, time is the most expensive cost.
Currently using two Max plans: Claude-Code and GLM. No token anxiety.
3. Terminal: Ghostty
After switching to CC, working in the terminal is actually great, especially for parallel projects. I typically have 3-4 projects in development at once and switch between them with cmd+1/2/3. With music on, immersion and efficiency max out. Ghostty needs almost no complex config; alongside it I use just two CLI tools: yazi (directory view) and lazygit (git state).
II. Environment Setup: Build a Flow Where Your Mouth Works, Not Your Hands
1. Find your "optimal voice input"
The core of AI coding is thinking, not typing. I got a desktop directional microphone. After trying everything, the best cost-performance combo: custom ASR API + ShandianShuo config + global shortcut.
Now I work 6-7 hours a day barely touching the keyboard. I say what comes to mind, the AI receives it accurately, and input friction and fatigue drop massively. To work around CC collapsing long pasted voice input, I even built a Mac app that intercepts the paste and feeds CC token by token. For vibe coding, voice input is a 10x experience jump.
2. Don't rush CC configuration
The CC community is great — projects like everything-claude-code abound. Many people jump straight to complex config, which is unnecessary. Heavy config raises the usage bar and adds onboarding friction.
The most important file is CLAUDE.md. There are tons of "10k-word magnum opus" templates online, but with one of those your task hasn't even started and half the context is already spent. Start small and add gradually. When it grows large, go progressive: note in CLAUDE.md where reference files live, and CC will fetch them on demand.
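A "start small, go progressive" CLAUDE.md might look like this minimal sketch (the file names and commands are illustrative, not from the original article):

```markdown
# Project notes for Claude

## Commands
- `make test` runs the unit suite before any commit

## Conventions
- Keep functions small; prefer composition over inheritance

## Deeper context (fetch on demand, do not preload)
- Architecture overview: docs/architecture.md
- API style guide: docs/api-style.md
```

The last section is the progressive part: instead of pasting whole docs into the file, you tell CC where they live and let it read them only when a task needs them.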
III. Common Commands and Techniques
1. Six very useful Claude Code commands
- insights: auto-analyzes session history into a report (domain, interaction patterns, friction points). I check every 1-2 weeks to tune CC config.
- context: shows current session context usage. Adjust loading strategy if startup already eats too much (close unused MCPs, trim memory.md).
- loop: run a command repeatedly. Good for testing — generate cases first, then loop them for verification.
- new: though CC supports 1M context with auto-compaction, when a topic is done and the next is unrelated, I use new to start fresh.
- simplify: launches three parallel review agents to check your diff from the angles of reuse, quality, and efficiency.
- claude-md-improver: audits, evaluates, and improves the CLAUDE.md in your repo.
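The loop idea in the list above can be sketched in plain shell for cases where you want the same repeated-verification pattern outside CC (the check function here is a hypothetical stand-in for your real test command):

```shell
#!/bin/sh
# Re-run a check repeatedly and stop at the first failure,
# mirroring "generate cases first, then loop them for verification".
check() { true; }   # stand-in: replace `true` with e.g. `npm test`
runs=0
while [ "$runs" -lt 3 ]; do
  if ! check; then
    echo "failed on run $((runs + 1))"
    exit 1
  fi
  runs=$((runs + 1))
done
echo "completed $runs passing runs"
```

The break-on-first-failure shape matters: a flaky test that passes twice and fails once still surfaces.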
2. Two high-frequency MCP tools that extend capability
- Context7: helps AI pinpoint official docs and APIs, eliminating "making things up" at the source.
- Playwright: lets AI drive the browser directly — page testing, scraping, repetitive flows all automate.
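For reference, both servers can be registered with the claude mcp add command; the package names below are the commonly published ones, so verify them against each project's README before running:

```shell
# Register the Context7 docs server and the Playwright browser server.
claude mcp add context7 -- npx -y @upstash/context7-mcp
claude mcp add playwright -- npx @playwright/mcp@latest
```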
3. Full-cycle productivity tool: Superpowers Skill
This is the skill I use every day. Almost every need runs the full flow through it: ideation, tech-stack selection, task breakdown, dev plan. It bridges fuzzy ideas and executable steps.
4. The complex-task killer: Agent Team mode
For complex projects, two concepts must be distinguished:
- Sub Agent (hub-and-spoke): the main agent divides tasks; sub-agents execute and return results to it. No peer-to-peer coordination; everything flows through the hub.
- Agent Team (collaborative): each sub-agent has independent context and can call skills; sub-agents communicate, sync info, and collaborate.
Standard play: combine Agent Team + Superpowers. I built a custom skill adding Agent Team as an option for the Superpowers "explain" skill, so complex needs split into multi-role collaboration.
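The distinction can be sketched in a toy model (illustrative Python, not Claude Code's actual API): in hub-and-spoke mode only the main agent aggregates results, while in team mode every agent also broadcasts to its peers.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    context: list = field(default_factory=list)  # each agent's independent context
    inbox: list = field(default_factory=list)    # team mode: messages from peers

def sub_agent_mode(main, workers, tasks):
    """Hub-and-spoke: the main agent divides tasks; workers report back only to it."""
    results = []
    for worker, task in zip(workers, tasks):
        worker.context.append(task)      # a worker sees only its own task
        results.append(f"{worker.name}:{task}:done")
    main.context.extend(results)         # only the hub aggregates
    return results

def agent_team_mode(agents, tasks):
    """Collaborative: each agent broadcasts progress to every peer as it works."""
    results = []
    for agent, task in zip(agents, tasks):
        agent.context.append(task)
        note = f"{agent.name} finished {task}"
        for peer in agents:              # sync info across the team
            if peer is not agent:
                peer.inbox.append(note)
        results.append(note)
    return results
```

The structural difference is exactly the inbox: sub-agents in hub-and-spoke mode never hear from each other, which is why Agent Team is the mode that enables multi-role collaboration.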
IV. SOP and Thought Process
1. Standard project SOP
A project usually starts with Superpowers brainstorming; then rounds of Q&A to settle plan and execution; then hand off to Sub Agent or Agent Team; finally validate with multi-layer tests (unit, integration, E2E). Sometimes I loop Playwright with screenshots for testing.
2. What if business projects don't work?
Many feel business projects don't work because: fear of being blamed for AI-generated bugs, can't clearly describe business context, too many deps (DB, cache, queue). If the pressure is that high, start with personal projects. Interest is the best teacher — those niche, low-ROI, un-customizable needs become viable with AI coding.
3. Build your own "infinite game"
Use personal projects to learn LLM behavior and tool behavior; the end goal is a loop that fits your habits. It's not just about finishing projects but about the process. In CC, I often stop execution and ask: "why did you think about it that way just now?" "why didn't you consider XX in that situation?" These conversations polish the pipeline, improve CLAUDE.md and memory.md, and customize skills.
4. Random ideas
I've built many "for myself" systems: aquarium water-quality logger/analyzer, finance, medical records. And hobby projects — I used to get lost in mutual-fund investing, so I built a CC-based assistant with MCP and custom skills, turning the Youzhiyouxing content into a local vector knowledge base. It helps with market analysis and position advice.
Example instruction: "Create an Agent Team to refine and execute the plan in xxx.md. The team includes one PM, one architect, one backend, one frontend, one tester."
5. Keep rechecking the production process
Treat "process optimization" as its own horizontal project.
V. Actual-Usage Advice: Mindset and Pitfalls
1. No universal template — find your own rhythm
Don't worship fixed prompt templates or universal config files. The core is "keep using, practice more." With use, intuition arrives: when to trim context, when to clarify a vague command, when to pull AI back from a rabbit hole.
2. Learn by doing
Official docs, GitHub, community are all great resources. Personally I prefer GitHub projects — clone, run, have AI deconstruct the design and implementation. Listening to AI walk through implementation details, you find many interesting points.
3. Self-assessment
The core is getting the loop running; the focus is evaluating and guiding AI, not producing code. Two clearest improvement metrics:
- Tasks you originally couldn't finish — now relatively easy.
- Things that used to take 2-3 weeks now take 1-2 days. With 4-5 projects running in parallel, you hand off between them almost seamlessly; about 2-3 hours in, the brain stalls and the body can't keep up with the AI.
4. Advice for people transitioning to agent dev
First master the AI coding tools already in your hands. These daily tools are the most mature, best-landed agent apps on the market. Study their design logic, capability boundaries, and orchestration patterns — you'll realize agent core principles aren't that complex, and many capabilities are already in your daily work.