Anthropic (4 blogmarks)


How AI assistance impacts the formation of coding skills

https://www.anthropic.com/research/AI-assistance-coding-skills

Anthropic released the results of a recent study warning about the impact that LLM agent use for software development tasks can have on learning and skill development. This affects everyone, but it is especially significant for early-career developers who had less opportunity to build those skills before LLMs entered the picture.

None of this means we shouldn't be using LLMs for software development tasks, but rather that we have to be intentional about how we use them.

Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.

And here is another excerpt from the end of the article:

Our study can be viewed as a small piece of evidence toward the value of intentional skill development with AI tools. Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery. This is also a lesson that applies to how individuals choose to work with AI, and which tools they use.

Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months

https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3

Around March 14th, 2025:

"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.

Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.

Via this Armin Ronacher article, via this David Crespo Bluesky post.

Claude Code - Anthropic

https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview

I'm giving Claude Code a try. It is a terminal-based LLM agent that can iterate on software tasks with a human in the loop who provides the task description, confirms or aborts individual actions, and guides the process.

Claude Code uses usage-based metering. On the "Buy Credits" page I'm currently looking at, there is an initial billing limit of $100 per month:

All new accounts have a monthly limit of $100 credits/month. This limit increases with usage over time.

After purchasing credits, I'm presented with a hero section in large font that says:

Build something great

Once logged in to Claude Code in the terminal, I am first shown the following security notes:

Security notes:

 1. Claude Code is currently in research preview
    This beta version may have limitations or unexpected behaviors.
    Run /bug at any time to report issues.

 2. Claude can make mistakes
    You should always review Claude's responses, especially when
    running code.

 3. Due to prompt injection risks, only use it with code you trust
    For more details see:
    https://docs.anthropic.com/s/claude-code-security

Claude 3.7 Sonnet and Claude Code \ Anthropic

https://www.anthropic.com/news/claude-3-7-sonnet

An AI coding tool that I use directly from the terminal?! 👀

Claude Code is available as a limited research preview, and enables developers to delegate substantial engineering tasks to Claude directly from their terminal.

"thinking tokens"? Does that mean the input and output tokens that are used as part of intermediate, step-by-step "reasoning"?

In both standard and extended thinking modes, Claude 3.7 Sonnet has the same price as its predecessors: $3 per million input tokens and $15 per million output tokens—which includes thinking tokens.
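Those prices make per-request cost easy to estimate. A minimal sketch of the arithmetic, using made-up illustrative token counts (the key detail from the quote is that thinking tokens are billed as output tokens):

```python
# Claude 3.7 Sonnet pricing, per the announcement:
# $3 per million input tokens, $15 per million output tokens
# (thinking tokens count as output tokens).
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request in US dollars."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: 10k input tokens, 4k output tokens
# (the 4k includes any thinking tokens).
print(f"${cost_usd(10_000, 4_000):.2f}")  # → $0.09
```

At these rates, the $100 initial credit limit mentioned above goes a long way for small requests, though agentic loops that repeatedly re-send context can consume it much faster.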

The Claude Code Overview shows how to get started installing and using Claude Code in the terminal.