In this lesson we cover Cline’s Plan and Act modes — a two-step, AI-guided workflow for structured development: think first, act second. Use Plan mode to analyze context and propose a sequence of steps without touching files. Switch to Act mode when you want Cline to perform the work (create/edit/delete files, run commands, run tests, or launch processes). Plan mode is ideal for design, architecture, and step-by-step strategy. Act mode is for execution and automation. Many developers stay in Plan mode to keep full visibility into every change; others prefer Act mode for speed. Choose the approach that fits your workflow.
Plan mode is for analysis, architecture, and step-by-step strategy (no file changes). Act mode performs the actual modifications and commands when you’re ready.
Plan vs Act — capabilities at a glance
Capability                                  | Plan mode | Act mode
Read repository files and project context   | Yes       | Yes
Produce architecture and step-by-step plans | Yes       | Yes
Modify code / create / delete files         | No        | Yes
Run shell commands, tests, or processes     | No        | Yes
Launch external tools (browsers, services)  | No        | Yes
Best for                                    | Reviewing and designing changes | Applying and testing changes automatically
[Screenshot: documentation page with a left navigation menu and an infographic titled "Plan vs Act Mode Capabilities" comparing Plan mode and Act mode, followed by a "Workflow Guide" section with numbered steps.]
Demo walkthrough — summary and cleaned-up code snippets

This walkthrough demonstrates using Plan mode to design a small project, then switching to Act mode to scaffold and implement it. The example builds a “casting number lookup” API:
  • Consumer sends an integer casting number; API returns metadata for that number.
  • Data persisted in SQLite.
  • Implementation in Python + FastAPI.
  • CSV import for bulk data.
  1. Start in Plan mode and provide a clear task prompt. Example goals:
  • Build a FastAPI service to expose casting lookup endpoints.
  • Use SQLAlchemy with SQLite for persistence.
  • Provide a CSV import utility to populate the DB.
  • Add Pydantic schemas, basic validation, and tests.
A typical synthesized plan includes:
  • Create a standard FastAPI project layout.
  • Add SQLite + SQLAlchemy database setup.
  • Build SQLAlchemy models matching the CSV schema.
  • Create import script to ingest CSV into DB.
  • Implement API endpoints for list and single lookup.
  • Add tests and a small runner to execute them.
  2. Review and approve the plan. In Act mode, Cline can create the skeleton and files. For example, the initial shell commands to create a typical FastAPI layout:
mkdir -p app/api/endpoints app/db app/models app/schemas app/utils tests examples
touch app/__init__.py app/api/__init__.py app/api/endpoints/__init__.py app/db/__init__.py app/models/__init__.py app/schemas/__init__.py app/utils/__init__.py examples/__init__.py
Below are representative, cleaned-up code snippets that match the plan. Adjust fields and types to match your actual CSV.

database.py — SQLite + SQLAlchemy setup
# app/db/database.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, declarative_base

# SQLite database URL (relative file)
SQLALCHEMY_DATABASE_URL = "sqlite:///./castings.db"

# Create SQLAlchemy engine (allow multiple threads for simple apps)
engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)

# Create SessionLocal class
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Base class for declarative models
Base = declarative_base()

# Dependency to get DB session
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
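The `get_db` dependency is a generator: FastAPI takes the yielded session for the duration of the request, then finalizes the generator afterwards, so the `finally` block always closes the session even if the handler raises. A minimal standalone sketch of that pattern, using a stand-in session class instead of a real SQLAlchemy session:

```python
# Standalone sketch of the generator-dependency pattern behind get_db.
# FakeSession stands in for a SQLAlchemy session; only close() matters here.

class FakeSession:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # runs when the generator is exhausted or closed

# Drive the generator the way FastAPI does: take the yielded value,
# then finalize the generator once the request is done.
gen = get_db()
session = next(gen)
assert session.closed is False
gen.close()  # triggers the finally block
assert session.closed is True
```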
models/casting.py — SQLAlchemy model (example fields; adapt to your CSV)
# app/models/casting.py
from sqlalchemy import Column, Integer, String, Float, Text
from app.db.database import Base

class Casting(Base):
    """SQLAlchemy model for casting data."""
    __tablename__ = "castings"

    id = Column(Integer, primary_key=True, index=True)
    casting_number = Column(Integer, index=True, nullable=False)
    name = Column(String, index=True)
    description = Column(Text, nullable=True)
    material = Column(String, nullable=True)
    weight = Column(Float, nullable=True)
    dimensions = Column(String, nullable=True)
    manufacturer = Column(String, nullable=True)
    year_introduced = Column(Integer, nullable=True)

    # Add additional fields to match the CSV structure as needed
schemas/casting.py — Pydantic models (request/response validation)
# app/schemas/casting.py
from pydantic import BaseModel
from typing import Optional

class CastingBase(BaseModel):
    casting_number: int
    name: Optional[str] = None
    description: Optional[str] = None
    material: Optional[str] = None
    weight: Optional[float] = None
    dimensions: Optional[str] = None
    manufacturer: Optional[str] = None
    year_introduced: Optional[int] = None

class CastingCreate(CastingBase):
    pass

class Casting(CastingBase):
    id: int

    # For Pydantic v2 compatibility, use model_config to allow reading from ORM objects
    model_config = {"from_attributes": True}
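The `from_attributes` setting is what lets the response model be built directly from ORM instances. A small standalone sketch of that behavior, using `SimpleNamespace` as a stand-in for an ORM row (the field values here are invented for illustration):

```python
# Sketch of how from_attributes lets a Pydantic v2 model read plain
# attribute-bearing objects (such as SQLAlchemy ORM instances).
from types import SimpleNamespace
from pydantic import BaseModel

class CastingOut(BaseModel):
    id: int
    casting_number: int
    model_config = {"from_attributes": True}

# SimpleNamespace stands in for an ORM row; extra attributes are ignored.
row = SimpleNamespace(id=1, casting_number=17648, extra="ignored")
schema = CastingOut.model_validate(row)
print(schema.model_dump())  # {'id': 1, 'casting_number': 17648}
```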
api/endpoints/casting.py — basic GET endpoints with optional pagination
# app/api/endpoints/casting.py
from typing import List
from fastapi import APIRouter, Depends, HTTPException, Query
from sqlalchemy.orm import Session

from app.db.database import get_db
from app.models.casting import Casting as CastingModel
from app.schemas.casting import Casting

router = APIRouter()

@router.get("/", response_model=List[Casting])
def get_castings(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    """
    Retrieve a list of castings with pagination.
    """
    castings = db.query(CastingModel).offset(skip).limit(limit).all()
    return castings

@router.get("/{casting_number}", response_model=Casting)
def get_casting_by_number(casting_number: int, db: Session = Depends(get_db)):
    casting = db.query(CastingModel).filter(CastingModel.casting_number == casting_number).first()
    if not casting:
        raise HTTPException(status_code=404, detail="Casting not found")
    return casting
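To see the lookup query in isolation, here is a self-contained sketch against an in-memory SQLite database. The model is trimmed to two fields, and the sample rows (names and casting numbers) are invented for illustration:

```python
# Self-contained sketch of the single-lookup query against in-memory SQLite.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Casting(Base):
    __tablename__ = "castings"
    id = Column(Integer, primary_key=True)
    casting_number = Column(Integer, index=True, nullable=False)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(bind=engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add_all([
        Casting(casting_number=17648, name="Gearbox housing"),
        Casting(casting_number=20991, name="Pump body"),
    ])
    session.commit()
    # Same filter the endpoint uses for a single lookup:
    hit = session.query(Casting).filter(Casting.casting_number == 17648).first()
    miss = session.query(Casting).filter(Casting.casting_number == 99999).first()
    print(hit.name)  # Gearbox housing
    print(miss)      # None
```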
main.py — application bootstrap
# app/main.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.api.endpoints import casting
from app.db.database import engine
from app.models import casting as casting_models

# Create database tables (ensure models import Base from same module)
casting_models.Base.metadata.create_all(bind=engine)

app = FastAPI(
    title="Casting Number Lookup API",
    description="API for looking up casting numbers and their associated data",
    version="1.0.0",
)

app.include_router(casting.router, prefix="/castings", tags=["castings"])

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
import_data.py — import CSV into SQLite (pandas example)
# app/utils/import_data.py
import pandas as pd
from sqlalchemy.orm import Session
from app.db.database import SessionLocal, engine
from app.models.casting import Casting as CastingModel

def import_csv_to_db(csv_path: str):
    df = pd.read_csv(csv_path)
    # Map DataFrame columns to model fields; adjust columns as needed.
    # Note: pandas reads empty cells as NaN; numeric fields are converted to
    # None below, and string columns may need the same treatment.
    session: Session = SessionLocal()
    try:
        for _, row in df.iterrows():
            casting = CastingModel(
                casting_number=int(row['casting_number']),
                name=row.get('name'),
                description=row.get('description'),
                material=row.get('material'),
                weight=row.get('weight') if not pd.isna(row.get('weight')) else None,
                dimensions=row.get('dimensions'),
                manufacturer=row.get('manufacturer'),
                year_introduced=int(row['year_introduced']) if not pd.isna(row.get('year_introduced')) else None,
            )
            session.add(casting)
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("Usage: python -m app.utils.import_data path/to/data.csv")
        sys.exit(1)
    import_csv_to_db(sys.argv[1])
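The NaN-to-None conversion above is the part that most often needs adapting to the real CSV. Here is a self-contained sketch of that handling against an in-memory CSV (column names and values are invented for illustration):

```python
# Sketch of the NaN-to-None handling used by the import script,
# against an in-memory CSV rather than a file on disk.
import io
import pandas as pd

csv_text = """casting_number,name,weight,year_introduced
17648,Gearbox housing,12.5,1998
20991,Pump body,,
"""
df = pd.read_csv(io.StringIO(csv_text))

rows = []
for _, row in df.iterrows():
    rows.append({
        "casting_number": int(row["casting_number"]),
        "name": row.get("name"),
        # Empty cells arrive as NaN; convert them to None before inserting.
        "weight": None if pd.isna(row.get("weight")) else float(row["weight"]),
        "year_introduced": None if pd.isna(row.get("year_introduced"))
                           else int(row["year_introduced"]),
    })

print(rows[1]["weight"])           # None
print(rows[0]["year_introduced"])  # 1998
```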
run_tests.py — consolidated test runner example
# run_tests.py
import unittest
import sys
import os

# Add the project root to the path for imports
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from tests.test_api import TestCastingAPI
from tests.test_database import TestDatabase
from tests.test_import_data import TestImportData
from tests.test_main import TestMain
from tests.test_models import TestModels
from tests.test_schemas import TestSchemas

if __name__ == "__main__":
    loader = unittest.TestLoader()
    test_suite = unittest.TestSuite()
    for case in (TestMain, TestDatabase, TestModels, TestSchemas,
                 TestImportData, TestCastingAPI):
        test_suite.addTest(loader.loadTestsFromTestCase(case))

    test_result = unittest.TextTestRunner(verbosity=2).run(test_suite)
    sys.exit(not test_result.wasSuccessful())
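The runner builds one `TestSuite` and exits non-zero on failure, which makes it CI-friendly. The suite-building pattern can be sketched standalone with a trivial test case:

```python
# Standalone sketch of building and running a TestSuite programmatically,
# the same pattern run_tests.py uses for the project's test cases.
import unittest

class TestExample(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

suite = unittest.TestSuite()
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestExample))
result = unittest.TextTestRunner(verbosity=0).run(suite)

# Map the result to a CI-friendly exit code, as run_tests.py does.
exit_code = 0 if result.wasSuccessful() else 1
print(exit_code)  # 0
```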
requirements.txt (example)
fastapi==0.104.1
uvicorn==0.23.2
sqlalchemy==2.0.23
pydantic==2.4.2
python-multipart==0.0.6
pandas==2.1.1
requests==2.31.0
pytest==7.4.3
httpx==0.25.1
Caution: runaway loops and cost

When using Act mode iteratively, be mindful of automation loops and token consumption:
  • The assistant may repeatedly create or modify the same files if prompts or tests keep triggering new edits.
  • Large numbers of edits increase context size and token usage, raising costs.
  • Applying changes without reviewing diffs may introduce regressions.
Warning: Act mode can loop (creating tests to test tests, repeatedly editing files). Monitor the sequence of edits, review diffs, and limit or stop the workflow if it becomes repetitive to avoid unnecessary costs and regressions.
Practical tips and workflow preferences
  • Prefer Plan mode when you want to review and understand every change; export the plan and implement it manually or selectively accept Act steps.
  • If using Act mode, inspect diffs, run tests frequently, and approve batches of related changes rather than single-file edits in isolation.
  • Disable intrusive inline suggestions in your editor; trigger the assistant intentionally when ready.
  • If the model repeatedly fails a step, step in manually and consult resources like Stack Overflow or official docs for the specific library.
Wrap-up

From a single planning prompt, Cline can scaffold:
  • A FastAPI application with routes and middleware.
  • Database layer using SQLAlchemy + SQLite.
  • Models and Pydantic schemas for validation.
  • A CSV import utility to populate the database.
  • Tests, runners, and CI-friendly scripts.
This lesson showed the distinction between producing a thoughtful design (Plan) and executing it (Act). Use Plan mode to shape architecture and Act mode to accelerate implementation — combining both yields the best balance of control and speed.

To continue: experiment with prompt engineering, refine the CSV-to-model mapping, and extend the casting lookup with search, filtering, and pagination.
