This article shows how we created a concise, implementable technical specification for an Image Optimizer application using BlackboxAI. It explains the prompt we used, reviews the draft the model produced, highlights the manual refinements we applied, and presents the final, simplified specification tailored to our intended scope. Keywords: Image Optimizer, technical specification, system architecture, API specification, data model, BlackboxAI. We began by giving BlackboxAI a focused prompt that prescribed the structure, required sections, and quality expectations. For reproducibility, here is the prompt we used:
Create a detailed and professional technical specification document for Image Optimizer. This document should clearly
outline all technical aspects, including system architecture, technologies used, design considerations, and
implementation details, adhering to industry best practices.

Ask me clarifying questions when needed.

Instructions for the Model:

Title Section:

Generate a title page that includes the project name, version, author(s), and date.
Add a brief abstract summarizing the document's purpose.
Table of Contents:

Include a dynamic table of contents that links to sections such as Overview, Requirements, System Design,
Technologies Used, and Testing.
Document Sections:

Overview:

Briefly describe the purpose and scope of the project/feature.
Mention the target audience and use case scenarios.
Requirements:

Clearly define functional and non-functional requirements.
Provide a prioritized list of user stories or use cases.
Include performance benchmarks or any compliance standards.
System Design:

Present the high-level architecture with clear textual descriptions of system flow, components, and deployment considerations.
Detail each system component with its role and interaction within the system.
Technologies Used:

List all programming languages, frameworks, libraries, and tools being used.
Provide reasons for their selection, including benefits and trade-offs.
API Specifications (if applicable):

Document RESTful/GraphQL APIs with examples, endpoints, request/response formats, and error codes.
Highlight security measures (e.g., authentication, encryption).
Data Models and Storage:
The model produced a full draft. Below are the key areas extracted from that draft, followed by our review and the targeted edits that bring the final spec in line with the simpler Image Optimizer we want to build.
Requirements and user stories
  • We kept the functional items that match our scope: single-file upload, compression/optimization, before-and-after size comparison, and download of the optimized file.
  • We removed features outside scope (e.g., batch processing, resizing, multi-format conversion) to avoid over-engineering.
  • Kept a concise set of user stories that reflect the single-file, stateless nature of the service.
Key functional and non-functional requirements (finalized)
  • Functional: single-file image upload; optimize the image (lossy or lossless per config); return the optimized image for download; show original vs. optimized size.
  • Non-functional: support common image formats (JPEG, PNG, WebP); target average optimization latency <= 1.5 s for images <= 2 MB; require HTTPS; minimal persistent storage (optional metadata only).
  • Excluded: no batch processing, no user accounts (stateless), no resizing or format conversion by default.
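Restricting uploads to JPEG, PNG, and WebP implies the backend should verify the actual file contents, not just the declared MIME type. A minimal sketch of content sniffing via magic bytes follows; the function name is illustrative, not part of the spec:

```python
from typing import Optional


def sniff_image_format(data: bytes) -> Optional[str]:
    """Return 'image/jpeg', 'image/png', or 'image/webp' based on the
    file's leading bytes, or None for any unsupported format."""
    if data[:3] == b"\xff\xd8\xff":            # JPEG SOI marker
        return "image/jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":       # PNG signature
        return "image/png"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":  # WebP RIFF container
        return "image/webp"
    return None
```

Sniffing the bytes rather than trusting the upload's Content-Type header closes a common gap between the "supported formats" requirement and real-world clients.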
User stories (prioritized)
  1. As a user, I can upload a single image and download an optimized version.
  2. As a user, I can see the original and optimized file sizes and compression ratio.
  3. As an administrator, I can configure the optimization profile (quality vs size trade-off).
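User story 2 amounts to a small piece of arithmetic the backend can return alongside the file. A sketch of the size-comparison payload (field names mirror the metadata schema later in this spec; the function itself is illustrative):

```python
def size_report(original_size: int, optimized_size: int) -> dict:
    """Build the before/after comparison shown to the user."""
    saved = original_size - optimized_size
    ratio = optimized_size / original_size if original_size else 0.0
    return {
        "originalSize": original_size,
        "optimizedSize": optimized_size,
        "savedBytes": saved,
        "compressionRatio": round(ratio, 3),  # e.g. 0.25 = 75% smaller
    }
```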
[Screenshot: the AI-generated "Requirements" page, listing functional and non-functional requirements, user stories, and compliance standards for an image optimization system.]
Notes on non-functional requirements
  • Consolidate performance targets to a single, realistic target: average optimization latency <= 1.5 seconds for images up to 2 MB on a single CPU core.
  • Remove unjustified concurrency targets (e.g., 1,000 concurrent users) unless capacity planning requires it.
  • Security: require HTTPS for transport; if authentication is added later, document optional JWT flows.
API specifications and authentication
  • Keep a minimal REST API for upload and download. If authentication is not in scope, present auth information as optional and provide examples for future extension.
  • Document error codes and example responses for clarity.
API endpoints (simplified)
  • POST /api/optimize: upload an image and receive the optimized file (stream or link). Request: multipart/form-data with a file field. Response: optimized image stream, or JSON with a download link and metadata.
  • GET /api/metadata/ (optional): return optimization metadata if persisted. Response: JSON.
Error responses (examples)
  • 400 Bad Request — invalid file or unsupported format
  • 415 Unsupported Media Type — format not supported
  • 500 Internal Server Error — processing failure
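To keep error responses consistent across endpoints, the validation step can map the 400/415 cases above to a single JSON envelope. A minimal sketch (the envelope shape and function names are our assumptions, not a fixed contract):

```python
SUPPORTED_FORMATS = {"image/jpeg", "image/png", "image/webp"}


def error_response(status: int, message: str) -> dict:
    """Uniform JSON error envelope for all documented error codes."""
    return {"error": {"code": status, "message": message}}


def validate_upload(data: bytes, mime_type: str):
    """Return an error dict for the 400/415 cases, or None if acceptable."""
    if not data:
        return error_response(400, "invalid file: empty upload")
    if mime_type not in SUPPORTED_FORMATS:
        return error_response(415, f"unsupported media type: {mime_type}")
    return None
```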
Authentication and security
  • If no auth: mark endpoints public and require HTTPS.
  • If adding auth later: recommend JWT-based bearer tokens and standard OAuth2 flows; include them as optional in the API docs so the structure is ready.
[Screenshot: the draft's "API Specifications" page with sections for Endpoints, Error Codes, and Security Measures, listing POST/GET image endpoints, common HTTP error codes, and security notes such as JWT and HTTPS.]
Data models and storage
  • For a stateless service, avoid user tables and full DB schemas.
  • Keep a minimal Image metadata schema only if you plan to persist results:
Image metadata (optional)
  • id (string, UUID): unique identifier for the optimization result
  • originalSize (integer): size in bytes
  • optimizedSize (integer): size in bytes
  • mimeType (string): image/jpeg, image/png, etc.
  • optimizationProfile (string): e.g., "default", "high-compression"
  • createdAt (timestamp): when the optimization was performed
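If the metadata is persisted, the schema above maps directly onto a small record type. A sketch as a Python dataclass (the class name is illustrative; field names follow the schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4


@dataclass
class ImageMetadata:
    """One optimization result, per the optional metadata schema."""
    originalSize: int
    optimizedSize: int
    mimeType: str
    optimizationProfile: str = "default"
    id: str = field(default_factory=lambda: str(uuid4()))
    createdAt: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```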
Storage options
  • Stateless, on-the-fly: no storage required; return optimized image directly in response.
  • Object storage: if persisting results, use cloud blob/object storage (e.g., Amazon S3, Google Cloud Storage, or Azure Blob Storage) to store optimized images and metadata in a lightweight database or a key-value store.
  • Local filesystem: acceptable for single-node or dev deployments.
Choose storage based on expected retention, traffic, and cost.
[Screenshot: the draft's "Data Models and Storage" page showing a database schema with User (id, username, passwordHash, createdAt) and Image (id, userId) tables, which we removed for the stateless design.]
System architecture and components
  • Keep the architecture minimal and clear. The simplified architecture should include:
    • Frontend: simple upload UI and download link, optional client-side validation.
    • Backend API: handles incoming upload, validation, and delegating to the image processing component.
    • Image processing: a stateless service or library performing optimization.
    • Optional object storage: persists optimized images and metadata if needed.
High-level flow (textual)
  1. User uploads image via frontend.
  2. Backend validates and forwards the image to the processing module.
  3. Processing module optimizes the image according to profile and returns an optimized image and metadata.
  4. Backend returns optimized image stream (or a download link if persisted) and metadata (originalSize, optimizedSize).
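Steps 2 through 4 of this flow can be sketched as a single backend handler. The optimizer below is a deliberate stand-in (zlib compression simulating a smaller payload); a real implementation would call an image codec such as Pillow:

```python
import zlib


def optimize_image(data: bytes, profile: str) -> bytes:
    """Placeholder processing module: simulates optimization by
    compressing the payload. Not a real image optimizer."""
    level = 9 if profile == "high-compression" else 6
    return zlib.compress(data, level)


def handle_optimize(data: bytes, profile: str = "default"):
    """Validate the upload, delegate to the processing module, and
    return the optimized bytes plus the size metadata (flow steps 2-4)."""
    if not data:
        raise ValueError("invalid file: empty upload")
    optimized = optimize_image(data, profile)  # stateless processing step
    metadata = {
        "originalSize": len(data),
        "optimizedSize": len(optimized),
    }
    return optimized, metadata
```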
Deployment considerations
  • Containerize backend and processing components for portability.
  • For scaling: keep processing stateless so you can scale horizontally (multiple workers).
  • For high-throughput needs: add a queue (e.g., RabbitMQ, SQS) and autoscaling workers; for our current scope, this is optional.
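Because the processing step holds no state, the queue-plus-workers pattern is straightforward. A local sketch using the stdlib's queue.Queue as a stand-in for RabbitMQ/SQS (worker logic is illustrative):

```python
import queue
import threading

def worker(jobs: "queue.Queue", results: list) -> None:
    """Stateless worker: pull a job, process it, record the result.
    A None job is the shutdown sentinel."""
    while True:
        data = jobs.get()
        if data is None:
            jobs.task_done()
            return
        results.append(len(data))  # stand-in for "optimize and store"
        jobs.task_done()

jobs: "queue.Queue" = queue.Queue()
results: list = []
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]  # scale horizontally by adding workers
for t in threads:
    t.start()
for payload in (b"a" * n for n in (10, 20, 30)):
    jobs.put(payload)
jobs.join()            # wait for all jobs to finish
for _ in threads:
    jobs.put(None)     # one sentinel per worker
for t in threads:
    t.join()
```

Swapping the in-process queue for a managed broker changes the transport, not the worker logic, which is why keeping the processing stateless matters.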
Title page and metadata
  • Replace placeholders with concrete metadata before publishing the spec.
Suggested title metadata
  • Project: Image Optimizer
  • Version: 1.0.0
  • Authors: Platform Team
  • Date: 2026-03-01
  • Abstract: A concise technical specification for a stateless Image Optimizer service focusing on single-file uploads, image compression, and metadata reporting. The spec covers architecture, API endpoints, storage options, and testing recommendations.
[Screenshot: the generated "Image Optimizer Technical Specification Document" title page, showing project metadata (name, version, authors, date) and the abstract.]
Practical guidance and refinement workflow
  • Use the AI-generated draft as scaffolding: keep structure, but validate every feature against intended scope.
  • Be explicit in prompts if you want features excluded (for example: “Do not include user authentication or batch processing; support single-file uploads only”).
  • Either regenerate specific sections or edit the draft manually. A hybrid approach works well: ask the model to regenerate only the sections you revised.
Checklist for finalizing the spec
  • Confirm functional requirements match product scope.
  • Set consistent non-functional targets (latency, throughput).
  • Decide on storage and persistence strategy.
  • Lock in API contract (endpoints, request/response formats).
  • Replace placeholders (author, date) and add diagrams if desired.
  • Share spec for team review and sign-off.
Final steps
  • Copy the finalized spec into the project repository (e.g., docs/technical-spec.md).
  • Replace metadata placeholders with the selected values above.
  • Add diagrams exported from your diagram tool (e.g., draw.io, Figma) and reference them in the spec.
  • Run a short review cycle with developers and product owners; update the spec as decisions are made.
Summary
  • BlackboxAI and other LLMs can accelerate the creation of a technical specification by generating a well-structured draft.
  • AI output is a starting point—human review is required to align scope, technology choices, and performance targets.
  • Use the draft to save time on structure and formatting, then prune or extend sections to reflect actual implementation decisions.
AI can quickly create a well-structured technical specification, but always review and refine the output to align it with your exact scope, technology choices, and performance targets.