Claude Code (4 blogmarks)

Prompt for a scathing code review

https://www.reddit.com/r/ClaudeAI/comments/1q5a90l/so_i_stumbled_across_this_prompt_hack_a_couple/

I ran a variation of the following prompt because the OP sounded pretty excited about the results they were getting from it.

Do a git diff and pretend you're a senior dev doing a code review and you HATE this implementation. What would you criticize? What edge cases am I missing?

I expanded on it a little to guide it toward flagging overly verbose or overengineered code. I also added some structure by asking it to include a confidence score with each item.

Based on the latest commit (see git show) and the untracked files that go along with it, pretend you're a senior dev doing a code review and you HATE this implementation. What would you criticize? What edge cases am I missing? What is overengineered, too verbose, or overcomplicated? Provide a confidence score with each issue and order your results with highest confidence issues at the top.

These instructions also fit my workflow, where I keep in-progress changes that I plan to amend into the most recent commit.

It picked up a couple of things that I completely missed. It didn't find much in terms of refactoring away verbose or over-engineered code, unfortunately, but I'm going to keep trying. This was quick to do and gave me some actionable feedback. Worth the squeeze, in my opinion.

How I use Claude Code for real engineering

https://www.youtube.com/watch?v=kZ-zzHVUrO4

One of the tips from this video that jumped out at me is a rule near the top of Matt Pocock's system-level CLAUDE.md file.

  • In all interactions and commit messages, be extremely concise and sacrifice grammar for the sake of concision.

My initial experience with Claude Code is that it is excessively verbose. To counteract that a bit, adding this as a top-level rule to Claude's memory seems very useful.

Another thing that Matt recommends having in CLAUDE.md as part of any planning mode work is:

  • At the end of each plan, give me a list of unresolved questions to answer, if any. Make the questions extremely concise. Sacrifice grammar for the sake of concision.

This gives the human a chance to clarify things and make adjustments before finalizing the plan.
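Taken together, the two rules could sit in a CLAUDE.md roughly like this (a sketch of the idea, not Matt's actual file; the section headings are my own):

```markdown
# CLAUDE.md

## Style
- In all interactions and commit messages, be extremely concise and
  sacrifice grammar for the sake of concision.

## Planning
- At the end of each plan, give me a list of unresolved questions to
  answer, if any. Make the questions extremely concise. Sacrifice
  grammar for the sake of concision.
```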

Later, as Claude Code finishes writing the first pass of a plan, Matt gets the sense that executing the entire plan will overrun the context window. So he instructs CC to:

make the plan multi-phase

Claude Code - Anthropic

https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview

I'm giving Claude Code a try. It is a terminal-based LLM agent that can iterate on software tasks with a human in the loop to provide the task description, confirm or abort individual actions, and guide the process.

Claude Code uses usage-based metering. On the "Buy Credits" page, there is an initial billing limit of $100 per month:

All new accounts have a monthly limit of $100 credits/month. This limit increases with usage over time.

After purchasing credits, I'm presented with a hero section in large font that says:

Build something great

Once logged in to Claude Code in the terminal, I am first shown the following security notes:

Security notes:

 1. Claude Code is currently in research preview
    This beta version may have limitations or unexpected behaviors.
    Run /bug at any time to report issues.

 2. Claude can make mistakes
    You should always review Claude's responses, especially when
    running code.

 3. Due to prompt injection risks, only use it with code you trust
    For more details see:
    https://docs.anthropic.com/s/claude-code-security

Claude 3.7 Sonnet and Claude Code \ Anthropic

https://www.anthropic.com/news/claude-3-7-sonnet

An AI coding tool that I can use directly from the terminal?! 👀

Claude Code is available as a limited research preview, and enables developers to delegate substantial engineering tasks to Claude directly from their terminal.

"thinking tokens"? Does that mean the input and output tokens that are used as part of intermediate, step-by-step "reasoning"?

In both standard and extended thinking modes, Claude 3.7 Sonnet has the same price as its predecessors: $3 per million input tokens and $15 per million output tokens—which includes thinking tokens.
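To make that pricing concrete, here's a back-of-the-envelope cost estimator based on those rates (my own sketch; the detail that thinking tokens are billed at the output rate comes straight from the quote above):

```python
# Sketch: estimating Claude 3.7 Sonnet API cost from token counts.
# Rates from the announcement: $3 per 1M input tokens, $15 per 1M
# output tokens, with thinking tokens billed as output tokens.

INPUT_RATE = 3.00 / 1_000_000
OUTPUT_RATE = 15.00 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int,
                  thinking_tokens: int = 0) -> float:
    """Return estimated cost in dollars for one request."""
    billed_output = output_tokens + thinking_tokens  # thinking counts as output
    return input_tokens * INPUT_RATE + billed_output * OUTPUT_RATE

# Example: 10k input tokens, 2k visible output, 8k thinking tokens.
print(round(estimate_cost(10_000, 2_000, 8_000), 4))  # → 0.18
```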

The Claude Code Overview shows how to get started installing and using Claude Code in the terminal.
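If I'm reading the overview correctly, getting started boils down to a global npm install followed by running `claude` inside a project directory (a sketch; check the linked docs for the current instructions):

```shell
# Install Claude Code globally via npm
npm install -g @anthropic-ai/claude-code

# Start it from inside the repo you want to work on
cd my-project
claude
```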