In this lesson, we’ll walk through creating and using agents in Claude Code. You’ll see both interactive CLI workflows (great for developer iteration) and programmatic SDK examples (ideal for CI/CD automation). The examples below show how to create, configure, and run agents for code review or incident diagnosis, and how to persist those agent definitions in a repository.
Slide: "Working with Claude Code Agents" (Demo)
Overview: two common approaches
  • CLI sub-agents: Create sub-agents you can call interactively from the CLI during development and reviews.
  • SDK-driven agents: Create and invoke agents programmatically (Python, JS/TS, Go, Ruby, etc.) to automate checks in CI/CD pipelines.
Table: When to use CLI vs SDK
Approach    | Best for              | Typical use case
CLI agents  | Interactive iteration | Local development reviews, ad-hoc audits
SDK agents  | Automation            | CI/CD checks, scheduled audits, pre-merge gating
CLI: listing agents and creating one
The CLI shows whether any project agents exist and offers a guided flow for creating new agents. Example CLI output when no agents are present:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> /agents

Agents
No agents found

 Create new agent

No agents found. Create specialized subagents that Claude can delegate to.
Each subagent has its own context window, custom system prompt, and specific tools.
Try creating: Code Reviewer, Code Simplifier, Security Reviewer, Tech Lead, or UX Reviewer.

Built-in (always available):
general-purpose · sonnet
statusline-setup · sonnet
output-style-setup · sonnet

Press ↑↓ to navigate · Enter to select · Esc to go back
Create an agent: generation vs manual configuration
You can either generate a draft agent from a description (recommended) or configure the agent manually. Generation uses Claude to produce a starting configuration that you can edit. Example selection UI showing the generation option:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> /agents

Create new agent
Creation method

> 1. Generate with Claude (recommended)
2. Manual configuration

Press ↑↓ to navigate · Enter to select · Esc to go back
When generating, provide a clear description of the agent’s responsibilities; the tool creates a draft configuration (metadata, system prompt, and rule set) that you can refine.
Tool selection and model selection
Agents can be granted fine-grained tool permissions (e.g., read-only, edit, execution), and you choose a model that balances reasoning capability against cost. Example tool-selection UI:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> /agents

Create new agent
Select tools

[ Continue ]
────────────────────────
All tools
Read-only tools
> Edit tools
Execution tools
MCP tools
────────────────────────
[ Show advanced options ]

All tools selected

Press Enter to toggle selection · ↑↓ to navigate · Esc to go back
Example model-selection UI:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> /agents

Create new agent
Select model
Model determines the agent's reasoning capabilities and speed.

  1. Sonnet           Balanced performance -- best for most agents
  2. Haiku            Fast and efficient for simple tasks
> 3. Inherit from parent  Use the same model as the main conversation

Press ↑↓ to navigate · Enter to select · Esc to go back
Confirm and save the agent
After generation and selection, the CLI shows a confirmation summary. Review, then save (or save and edit) the generated agent:
Create new agent
Confirm and save

Name: code-readability-reviewer
Location: .claude/agents/code-readability-reviewer.md
Tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
Model: Inherit from parent

Description (tells Claude when to use this agent):

    Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.

System prompt:

    You are a Senior Code Quality Specialist with expertise in software readability, maintainability, and naming conventions across multiple programming languages. Your primary focus is evaluating code for clarity, comprehensibility, and adherence to clean code principles.

Press s or Enter to save, e to save and edit
Agent created and listed
Once saved, the agent appears in the project agent list and can be invoked interactively:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> /agents

Agents
4 agents

Created agent: code-readability-reviewer

Create new agent

Project agents (/Users/jeremy/Repos/project/Express-login-demo/.claude/agents)
> code-readability-reviewer · inherit

Built-in agents (always available)
general-purpose · sonnet
statusline-setup · sonnet
output-style-setup · sonnet

Press ↑↓ to navigate · Enter to select · Esc to go back
Editing an agent’s configuration
Agent definitions are stored as files in your repo (YAML-style metadata plus the system prompt and rules). Open and edit them in your preferred editor. Example agent metadata:
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
color: green
System prompt and review rules (summary)
  • Role: Senior Code Quality Specialist focused on readability and maintainability.
  • Review checklist:
    • Assess variable and function naming clarity.
    • Flag vague or misleading names (e.g., data, temp, x).
    • Check naming consistency (camelCase vs snake_case).
    • Suggest descriptive alternatives and refactors.
    • Analyze function length, logic flow, and comment quality.
    • Identify magic numbers and hardcoded values.
    • Provide a readability score (1–10) with reasoning and prioritized fixes.
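Putting the pieces together: the saved agent file is a markdown document whose YAML frontmatter holds the metadata shown above, followed by the system prompt as the body (abridged here to the opening of the prompt from the confirmation screen):

```markdown
---
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
color: green
---

You are a Senior Code Quality Specialist with expertise in software readability,
maintainability, and naming conventions across multiple programming languages.
```

Because this is a plain file under `.claude/agents/`, it can be reviewed, versioned, and shared like any other source file.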
Using the agent via CLI
Invoke a saved agent from the CLI to run a review. Example interaction and a cleaned-up sample report:
cwd: /Users/jeremy/Repos/project/Express-login-demo

> use the code-readability-reviewer agent to inspect auth.js and give me a report.

 I'll use the code-readability-reviewer agent to analyze the auth.js file and provide you with a readability report.

• code-readability-reviewer(Review auth.js readability)
  Done (2 tool uses · 12.1k tokens · 29.9s)

• The code-readability-reviewer agent has analyzed your auth.js file and provided a comprehensive readability report. Here are the key findings:

Overall Score: 8/10 - The code is well-structured with good naming conventions and follows Express.js patterns effectively.

Main Strengths:
 - Clear variable names like userQuery, isPasswordValid
 - Consistent camelCase naming
 - Proper separation of validation and business logic
 - Good security practices (parameterized queries, password hashing)

Key Areas for Improvement:
 1. Extract database error handling – Error codes (e.g., '28P01', '3D000') should be defined as constants.
 2. Function length – The main handler is 78 lines and could be broken into smaller, testable functions.
 3. Magic strings – Move SQL queries and error code strings into constants.

Recommended Priority Fixes:
 - Define constants for database error codes
 - Extract the authentication logic into a separate function
 - Move hardcoded SQL query to a constant

The code follows security best practices but would benefit from better organization and consistent constant definitions for maintainability.
Example Express route (cleaned-up)
Below is a sample Express route that demonstrates the quality improvements an agent might recommend: validation, a shorter handler, and clear structure.
// routes/auth.js
const express = require('express');
const { body, validationResult } = require('express-validator');
const pool = require('../config/database'); // assume configured pg pool
const router = express.Router();

router.post(
  '/login',
  [
    body('email').isEmail().withMessage('Please provide a valid email address').normalizeEmail(),
    body('password').isLength({ min: 8 }).withMessage('Password must be at least 8 characters long')
  ],
  async (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ error: 'Validation failed', details: errors.array() });
    }

    try {
      const { email, password } = req.body;
      // Extracted authentication logic would go here (e.g., verify user and password)
      // Example: const user = await findUserByEmail(email);
      // if (!user || !verifyPassword(password, user.hash)) return res.status(401).send('Unauthorized');

      res.status(200).json({ message: 'Login successful' });
    } catch (err) {
      console.error('Auth error:', err);
      res.status(500).json({ error: 'Internal server error' });
    }
  }
);

module.exports = router;
Typical recommendations from the agent
  • Define constants for database error codes.
  • Extract authentication logic to helper functions for testability.
  • Move hardcoded SQL queries into named constants or query builders.
  • Add unit tests for extracted functions.
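Language aside, the first two recommendations reduce to the same pattern: name the magic values, then pull the logic into a small, testable helper. A minimal sketch in Python (the helper and message strings are hypothetical; the error codes are the ones flagged in the sample report):

```python
# Constants instead of magic strings for database error codes
PG_INVALID_PASSWORD = "28P01"
PG_DATABASE_MISSING = "3D000"

KNOWN_DB_ERRORS = {
    PG_INVALID_PASSWORD: "Database authentication failed",
    PG_DATABASE_MISSING: "Database does not exist",
}

def describe_db_error(code: str) -> str:
    """Extracted, testable helper: map a database error code to a message."""
    return KNOWN_DB_ERRORS.get(code, "Unexpected database error")
```

A helper like this can be unit-tested in isolation, which is the point of the fourth recommendation.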
Programmatic use: Python SDK for CI/CD integration
Create and run agents programmatically with the Claude Code SDK to integrate them into CI/CD pipelines and automate code-quality checks on pull requests. Example Python SDK usage (async):
import asyncio
from claude_code_sdk import ClaudeSDKClient, ClaudeCodeOptions

async def main():
    async with ClaudeSDKClient(
        options=ClaudeCodeOptions(
            system_prompt="You are an SRE agent. Diagnose incidents step-by-step and propose safe fixes.",
            allowed_tools=["Read", "Grep", "Bash"],  # fine-grained control
            max_turns=4
        )
    ) as client:
        await client.query("API error rates just spiked in payments — investigate and propose fixes.")
        async for msg in client.receive_response():
            if hasattr(msg, "content"):
                for block in msg.content:
                    if hasattr(block, "text"):
                        print(block.text, end="")

asyncio.run(main())
You can commit agent definitions (YAML) into your repository and call them from CI pipelines to enforce automated checks before merge. Example agent definition (commit to repo):
---
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
color: green
---
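One way to wire this into a pipeline is a workflow that runs Claude Code non-interactively on pull requests. A sketch under stated assumptions: the workflow file, job names, and install step are hypothetical, and `claude -p` is used here for headless (print-mode) invocation.

```yaml
# .github/workflows/readability.yml (hypothetical)
name: readability-check
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code
      - name: Run readability agent
        run: claude -p "use the code-readability-reviewer agent to review the changed files" > report.txt
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Because the agent definition is committed alongside the code, every pipeline run uses the same review rules as local development.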
Supported models and language SDKs
Claude Code supports multiple model families (examples):
Model family                     | Use case
Claude 4 (Opus, Sonnet variants) | High reasoning, complex tasks
Claude 3.7 (Sonnet variants)     | Balanced performance
Claude 3.5 (Haiku variants)      | Fast, cost-efficient for simpler tasks
Example listing (illustrative):
# Claude 4 Models
"claude-opus-4-1-20250505"
"claude-opus-4-1"
"claude-opus-4-20250514"
"claude-opus-4-0"
"claude-sonnet-4-20250514"
"claude-sonnet-4-0"

# Claude 3.7 Models
"claude-3-7-sonnet-20250219"
"claude-3-7-sonnet-latest"

# Claude 3.5 Models
"claude-3-5-haiku-20241022"
"claude-3-5-haiku-latest"
There are SDKs and example snippets available for many languages. Two short illustrative examples:
// Go (illustrative)
package main

import (
	"context"
	"fmt"

	"github.com/anthropics/anthropic-sdk-go"
	"github.com/anthropics/anthropic-sdk-go/option"
)

func main() {
	client := anthropic.NewClient(option.WithAPIKey("my-anthropic-api-key"))
	message, err := client.Messages.New(context.Background(), anthropic.MessageNewParams{
		Model:     anthropic.ModelClaudeSonnet4_0,
		MaxTokens: 1024,
		Messages: []anthropic.MessageParam{
			anthropic.NewUserMessage(anthropic.NewTextBlock("Hello, Claude")),
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(message.Content)
}
# Ruby (illustrative)
require "anthropic"

anthropic = Anthropic::Client.new(api_key: "my_api_key") # or ENV["ANTHROPIC_API_KEY"]

message = anthropic.messages.create(
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello, Claude" }],
  model: "claude-sonnet-4-20250514"
)

puts(message.content)
Automating agents in CI/CD is a common pattern: run code-readability and security agents on pull requests, prevent merges when critical issues are found, and surface recommended fixes for developers to review before deployment.
Summary and best practices
  • CLI: Use the interactive CLI to quickly generate, iterate, and run agents during development.
  • SDK: Use language SDKs (Python, JS/TS, Go, Ruby, etc.) to integrate agents into CI/CD and automate checks.
  • Agent files: Commit agent YAMLs to the repository so teams share the same audit and review rules.
  • Permissions: Grant agents only the specific tools they need (read-only vs edit vs exec) to reduce risk.
  • Model selection: Balance model capability vs. cost by selecting appropriate Sonnet/Haiku/Opus variants or inheriting the parent conversation model.