Demonstrates creating, configuring, and using Claude Code agents via CLI and SDK to automate code reviews, incident diagnosis, and CI/CD checks, including agent files and best practices.
In this lesson, we’ll walk through creating and using agents in Claude Code. You’ll see both interactive CLI workflows (great for developer iteration) and programmatic SDK examples (ideal for CI/CD automation). The examples below show how to create, configure, and run agents for code review or incident diagnosis, and how to persist those agent definitions in a repository.
Overview: two common approaches
CLI sub-agents: Create sub-agents you can call interactively from the CLI during development and reviews.
SDK-driven agents: Create and invoke agents programmatically (Python, JS/TS, Go, Ruby, etc.) to automate checks in CI/CD pipelines.
Table: When to use CLI vs SDK

| Approach | Best for | Typical use case |
| --- | --- | --- |
| CLI agents | Interactive iteration | Local development reviews, ad-hoc audits |
| SDK agents | Automation | CI/CD checks, scheduled audits, pre-merge gating |
CLI: listing agents and creating one
The CLI shows whether any project agents exist and offers a guided flow for creating new agents. Example CLI output when no agents are present:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> /agents

Agents
No agents found

› Create new agent

No agents found. Create specialized subagents that Claude can delegate to.
Each subagent has its own context window, custom system prompt, and specific tools.
Try creating: Code Reviewer, Code Simplifier, Security Reviewer, Tech Lead, or UX Reviewer.

Built-in (always available):
  general-purpose · sonnet
  statusline-setup · sonnet
  output-style-setup · sonnet

Press ↑↓ to navigate · Enter to select · Esc to go back
```
Create an agent: generation vs manual configuration
You can either generate a draft agent from a description (recommended) or build the agent manually. Generation uses Claude to produce a starting configuration you can edit. Example selection UI showing the generation option:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> /agents

Create new agent
Creation method
> 1. Generate with Claude (recommended)
  2. Manual configuration

Press ↑↓ to navigate · Enter to select · Esc to go back
```
When generating, provide a clear description of the agent’s responsibilities — the tool will create a draft configuration (metadata, system prompt, and rule set) that you can refine.

Tool selection and model selection
Agents can be granted fine-grained tool permissions (e.g., read-only, edit, execution), and you choose the model that balances reasoning capability and cost. Example tool-selection UI:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> /agents

Create new agent
Select tools

[ Continue ]
────────────────────────
  All tools
  Read-only tools
> Edit tools
  Execution tools
  MCP tools
────────────────────────
[ Show advanced options ]

All tools selected
Press Enter to toggle selection · ↑↓ to navigate · Esc to go back
```
Example model-selection UI:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> /agents

Create new agent
Select model

Model determines the agent's reasoning capabilities and speed.

  1. Sonnet               Balanced performance -- best for most agents
  2. Haiku                Fast and efficient for simple tasks
> 3. Inherit from parent  Use the same model as the main conversation

Press ↑↓ to navigate · Enter to select · Esc to go back
```
Confirm and save the agent
After generation and selection, the CLI shows a confirmation summary. Review and save (or save-and-edit) the generated agent:
```
Create new agent
Confirm and save

Name: code-readability-reviewer
Location: .claude/agents/code-readability-reviewer.md
Tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
Model: Inherit from parent

Description (tells Claude when to use this agent):
Use this agent when you need to evaluate code for readability, variable naming
conventions, and overall code clarity.

System prompt:
You are a Senior Code Quality Specialist with expertise in software readability,
maintainability, and naming conventions across multiple programming languages.
Your primary focus is evaluating code for clarity, comprehensibility, and
adherence to clean code principles.

Press s or Enter to save, e to save and edit
```
Agent created and listed
Once saved, the agent appears in the project agent list and can be invoked interactively:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> /agents

Agents
4 agents
Created agent: code-readability-reviewer

  Create new agent

Project agents (/Users/jeremy/Repos/project/Express-login-demo/.claude/agents)
> code-readability-reviewer · inherit

Built-in agents (always available)
  general-purpose · sonnet
  statusline-setup · sonnet
  output-style-setup · sonnet

Press ↑↓ to navigate · Enter to select · Esc to go back
```
Editing an agent’s configuration
Agent definitions are stored as files in your repo (YAML-style metadata plus the system prompt and rules). Open and edit them in your preferred editor. Example agent metadata:
```yaml
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
color: green
```
System prompt and review rules (summary)
Role: Senior Code Quality Specialist focused on readability and maintainability.
Review checklist:
Assess variable and function naming clarity.
Flag vague or misleading names (e.g., data, temp, x).
Check naming consistency (camelCase vs snake_case).
Suggest descriptive alternatives and refactors.
Analyze function length, logic flow, and comment quality.
Identify magic numbers and hardcoded values.
Provide a readability score (1–10) with reasoning and prioritized fixes.
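Putting the two pieces together: the metadata above forms the YAML frontmatter of the agent file, and the role plus checklist become its body. An illustrative, trimmed sketch of `.claude/agents/code-readability-reviewer.md` (the tool list is narrowed to read-only tools here purely for illustration):

```markdown
---
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Read, Grep, Glob, LS
model: inherit
---

You are a Senior Code Quality Specialist focused on readability and maintainability.

For each file you review:
- Assess variable and function naming clarity; flag vague names (data, temp, x).
- Check naming consistency (camelCase vs snake_case) and suggest descriptive alternatives.
- Analyze function length, logic flow, comment quality, magic numbers, and hardcoded values.
- End with a readability score (1-10), your reasoning, and prioritized fixes.
```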
Using the agent via CLI
Invoke a saved agent from the CLI to run a review. Example interaction and a cleaned-up sample report:
```
cwd: /Users/jeremy/Repos/project/Express-login-demo
> use the code-readability-reviewer agent to inspect auth.js and give me a report.

• I'll use the code-readability-reviewer agent to analyze the auth.js file and
  provide you with a readability report.

• code-readability-reviewer(Review auth.js readability)
  Done (2 tool uses · 12.1k tokens · 29.9s)

• The code-readability-reviewer agent has analyzed your auth.js file and provided
  a comprehensive readability report. Here are the key findings:

Overall Score: 8/10 - The code is well-structured with good naming conventions
and follows Express.js patterns effectively.

Main Strengths:
- Clear variable names like userQuery, isPasswordValid
- Consistent camelCase naming
- Proper separation of validation and business logic
- Good security practices (parameterized queries, password hashing)

Key Areas for Improvement:
1. Extract database error handling – Error codes (e.g., '28P01', '3D000') should
   be defined as constants.
2. Function length – The main handler is 78 lines and could be broken into
   smaller, testable functions.
3. Magic strings – Move SQL queries and error code strings into constants.

Recommended Priority Fixes:
- Define constants for database error codes
- Extract the authentication logic into a separate function
- Move hardcoded SQL query to a constant

The code follows security best practices but would benefit from better
organization and consistent constant definitions for maintainability.
```
Example Express route (cleaned-up)
Below is a sample Express route that demonstrates the quality improvements an agent might recommend: validation, shorter handler, and clear structure.
```javascript
// routes/auth.js
const express = require('express');
const { body, validationResult } = require('express-validator');
const pool = require('../config/database'); // assume configured pg pool

const router = express.Router();

router.post(
  '/login',
  [
    body('email').isEmail().withMessage('Please provide a valid email address').normalizeEmail(),
    body('password').isLength({ min: 8 }).withMessage('Password must be at least 8 characters long')
  ],
  async (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ error: 'Validation failed', details: errors.array() });
    }

    try {
      const { email, password } = req.body;
      // Extracted authentication logic would go here (e.g., verify user and password)
      // Example: const user = await findUserByEmail(email);
      // if (!user || !verifyPassword(password, user.hash)) return res.status(401).send('Unauthorized');
      res.status(200).json({ message: 'Login successful' });
    } catch (err) {
      console.error('Auth error:', err);
      res.status(500).json({ error: 'Internal server error' });
    }
  }
);

module.exports = router;
```
Typical recommendations from the agent
Define constants for database error codes.
Extract authentication logic to helper functions for testability.
Move hardcoded SQL queries into named constants or query builders.
Add unit tests for extracted functions.
Programmatic use: Python SDK for CI/CD integration
Create and run agents programmatically using the Claude Code SDK. This integrates agents into CI/CD pipelines and automates code-quality checks on pull requests. Example Python SDK usage (async):
```python
import asyncio
from claude_code_sdk import ClaudeSDKClient, ClaudeCodeOptions

async def main():
    async with ClaudeSDKClient(
        options=ClaudeCodeOptions(
            system_prompt="You are an SRE agent. Diagnose incidents step-by-step and propose safe fixes.",
            allowed_tools=["Read", "Grep", "Bash"],  # fine-grained control
            max_turns=4
        )
    ) as client:
        await client.query("API error rates just spiked in payments — investigate and propose fixes.")
        async for msg in client.receive_response():
            if hasattr(msg, "content"):
                for block in msg.content:
                    if hasattr(block, "text"):
                        print(block.text, end="")

asyncio.run(main())
```
You can commit agent definitions (YAML) into your repository and call them from CI pipelines to enforce automated checks before merge. Example agent definition (commit to repo):
```yaml
---
name: code-readability-reviewer
description: Use this agent when you need to evaluate code for readability, variable naming conventions, and overall code clarity.
tools: Bash, Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
color: green
---
```
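A CI job can then ask Claude Code to delegate to the committed agent. Below is a minimal sketch, reusing the `claude_code_sdk` client pattern shown earlier; `build_review_prompt` and `run_review` are illustrative helper names, not SDK APIs:

```python
# ci_review.py -- illustrative sketch of invoking a committed agent from CI.
import asyncio

def build_review_prompt(changed_files):
    """Ask Claude Code to delegate to the committed reviewer agent."""
    file_list = ", ".join(changed_files)
    return (
        "Use the code-readability-reviewer agent to review the following "
        f"changed files and report a readability score: {file_list}"
    )

async def run_review(changed_files):
    # Imported lazily so the prompt helper stays testable without the SDK.
    from claude_code_sdk import ClaudeSDKClient, ClaudeCodeOptions
    async with ClaudeSDKClient(
        options=ClaudeCodeOptions(allowed_tools=["Read", "Grep", "Glob"])
    ) as client:
        await client.query(build_review_prompt(changed_files))
        report = []
        async for msg in client.receive_response():
            for block in getattr(msg, "content", []):
                if hasattr(block, "text"):
                    report.append(block.text)
        return "".join(report)

if __name__ == "__main__":
    print(asyncio.run(run_review(["routes/auth.js"])))
```

Because the agent definition lives in `.claude/agents/`, every CI run and every developer uses the same review rules.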
Supported models and language SDKs
Claude Code supports multiple model families (examples):
| Model family | Use case |
| --- | --- |
| Claude 4 (Opus, Sonnet variants) | High reasoning, complex tasks |
| Claude 3.7 (Sonnet variants) | Balanced performance |
| Claude 3.5 (Haiku variants) | Fast, cost-efficient for simpler tasks |
Example listing (illustrative):
```python
# Claude 4 Models
"claude-opus-4-1-20250505"
"claude-opus-4-1"
"claude-opus-4-20250514"
"claude-opus-4-0"
"claude-sonnet-4-20250514"
"claude-sonnet-4-0"

# Claude 3.7 Models
"claude-3-7-sonnet-20250219"
"claude-3-7-sonnet-latest"

# Claude 3.5 Models
"claude-3-5-haiku-20241022"
"claude-3-5-haiku-latest"
```
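One way to apply the capability-vs-cost tradeoff in automation is a simple task-to-model mapping. The sketch below is purely illustrative; the task names and the chosen ids (taken from the listing above) are assumptions, not an official registry:

```python
# Illustrative model-selection helper: map task complexity to a model id.
MODEL_FOR_TASK = {
    "deep-reasoning": "claude-opus-4-1",          # complex, high-stakes reviews
    "general-review": "claude-3-7-sonnet-latest", # balanced default
    "quick-lint": "claude-3-5-haiku-latest",      # fast, cost-efficient checks
}

def pick_model(task, default="claude-3-7-sonnet-latest"):
    """Choose a model id for a task, falling back to a balanced default."""
    return MODEL_FOR_TASK.get(task, default)
```

The selected id could then be passed to the SDK options when constructing the client, or you can omit it entirely to inherit the parent conversation's model.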
SDKs and example snippets are available for many languages, including Python, JS/TS, Go, and Ruby.
Automating agents in CI/CD is a common pattern: run code-readability and security agents on pull requests, prevent merges when critical issues are found, and surface recommended fixes for developers to review before deployment.
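The merge-gating step can be a small script that inspects the agent's report. A minimal sketch of that logic, assuming the report contains a line like "Overall Score: 8/10" (as in the sample report above); the helper names and threshold are illustrative:

```python
# merge_gate.py -- illustrative pre-merge gating logic on the agent's report.
import re

# Matches score lines such as "Overall Score: 8/10" or "Readability score: 6/10".
SCORE_PATTERN = re.compile(
    r"(?:Overall|Readability)\s+Score:\s*(\d+)\s*/\s*10", re.IGNORECASE
)

def extract_score(report_text):
    """Return the 1-10 readability score from the agent's report, or None."""
    match = SCORE_PATTERN.search(report_text)
    return int(match.group(1)) if match else None

def should_block_merge(report_text, threshold=7):
    """Block the merge when no score is found or the score is below threshold."""
    score = extract_score(report_text)
    return score is None or score < threshold

if __name__ == "__main__":
    sample = "Overall Score: 8/10 - The code is well-structured"
    print("block merge:", should_block_merge(sample))
```

In a pipeline, the CI step would run the agent, feed its report through this check, and fail the job (blocking the merge) when the check returns true.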
Summary and best practices
CLI: Use the interactive CLI to quickly generate, iterate, and run agents during development.
SDK: Use language SDKs (Python, JS/TS, Go, Ruby, etc.) to integrate agents into CI/CD and automate checks.
Agent files: Commit agent YAMLs to the repository so teams share the same audit and review rules.
Permissions: Grant agents only the specific tools they need (read-only vs edit vs exec) to reduce risk.
Model selection: Balance model capability vs. cost by selecting appropriate Sonnet/Haiku/Opus variants or inheriting the parent conversation model.