The article discusses extending the functionality of an AWS application using CDKTF, including deploying new stacks and managing resources.
In this final section, we summarize the application created so far and discuss how to extend its functionality. The module has covered several key points, including the problem definition of Arthur’s name picker app, the manual deployment of resources to AWS, and the automation improvements using CDKTF. Below is a high-level overview of these concepts:
- The creation of a Lambda function construct for deployment using CDKTF.
- The packaging approach with CDKTF, highlighting the disadvantages of synthesizing with execSync compared to using a Terraform asset (see the sketch below).
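As a refresher, the Terraform asset approach looks roughly like the following minimal sketch; the function name, paths, and ids here are illustrative rather than the module's exact code:

```typescript
import * as path from 'path';
import { TerraformAsset, AssetType } from 'cdktf';
import { Construct } from 'constructs';

// Package the Lambda source as a zip archive. CDKTF stages and hashes the
// asset itself, avoiding a manual zip step via execSync at synth time.
export function createLambdaAsset(scope: Construct): TerraformAsset {
  return new TerraformAsset(scope, 'lambda-asset', {
    path: path.resolve(__dirname, 'lambda'), // illustrative source directory
    type: AssetType.ARCHIVE,
  });
}
```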
We illustrated these concepts with this recap diagram:
Next, we built a Lambda REST API construct that exposes the Lambda function via an API Gateway. We transitioned from local state management to a remote S3 backend to enable collaborative state management. Additionally, we demonstrated how to import Terraform modules into CDKTF for a scalable, automated infrastructure that Arthur can share with his friends. Another diagram recaps these additional topics:
The final application produces a name picker URL that can be accessed (for example, via curl) to retrieve a random family member’s name for washing up or other chores.
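For example (the URL is illustrative; use the one your own deployment produces):

```bash
# Replace <api-id> with the ID of your deployed API Gateway.
curl https://<api-id>.execute-api.us-east-1.amazonaws.com/dev
```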
To further extend the application, consider the main.ts configuration snippet shown later in this section; it can be adapted to add stacks for new business functionalities or to support multiple environments.
The diagram below provides an overview of the architecture when adding new stacks or managing different environments:
Imagine Arthur now wants to add a new “week planner” stack; the diagram below illustrates the creation process. To implement it, create a new stack file (for example, week-planner-stack.ts) in the stacks directory.
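A minimal sketch of what that file might contain, assuming the same provider setup as the name picker stack (the region and resource contents are placeholders):

```typescript
import { Construct } from 'constructs';
import { TerraformStack } from 'cdktf';
import { AwsProvider } from '@cdktf/provider-aws/lib/provider';

// Skeleton for the new stack; the actual week planner resources
// (Lambda function, API Gateway, etc.) would be added in the constructor.
export class WeekPlannerStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Assumed region; match whatever the name picker stack uses.
    new AwsProvider(this, 'aws', { region: 'us-east-1' });
  }
}
```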
When deploying multiple stacks, CDKTF prompts you to choose which stack to deploy. You can deploy a specific stack (e.g., yarn deploy cdktf-week-planner) or deploy every stack with a wildcard (e.g., yarn deploy "*"). For example, to verify that the stacks deploy correctly, you might use the following main.ts:
```typescript
import { App } from 'cdktf';
import { NamePickerStack } from './stacks/NamePickerStack';
import { WeekPlannerStack } from './stacks/WeekPlannerStack';

const app = new App();

// Development and production instances of the name picker stack
new NamePickerStack(app, 'cdktf-name-picker');
new NamePickerStack(app, 'cdktf-name-picker-prod');

// The new week planner stack
new WeekPlannerStack(app, 'cdktf-week-planner');

app.synth();
```
The production stack currently uses the /dev stage in the API URL. This is an issue scheduled for resolution in a later lab question.
Arthur can also split the infrastructure by environment. For instance, you can replicate the name picker stack for a production environment by copying the stack, assigning it a unique ID (such as appending a -prod suffix), and optionally adjusting resource names. A helper function can help ensure unique naming across stacks:
```typescript
import { TerraformStack } from 'cdktf';
import { Construct } from 'constructs';

// Prefixes a construct id with the id of its enclosing stack, so the same
// construct id can be reused safely across stacks.
export const getConstructName = (scope: Construct, id: string) =>
  `${TerraformStack.of(scope)}-${id}`;
```
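For example, the helper might be used inside a construct like this (an illustrative sketch; the module path and ids are assumptions):

```typescript
import { Construct } from 'constructs';
import { getConstructName } from '../get-construct-name'; // assumed module path

class ExampleConstruct extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Within the 'cdktf-name-picker-prod' stack this evaluates to
    // 'cdktf-name-picker-prod-lambda-function'.
    const functionName = getConstructName(this, 'lambda-function');
  }
}
```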
This results in two deployed stacks (one for development and one for production). Remember that, although the production stack still shows a /dev stage in the URL, this issue will be addressed later.
When Arthur is finished with the application, he can destroy the entire infrastructure to avoid unwanted AWS charges. To properly clean up, first destroy the main application stacks, and then the prerequisites:
```bash
yarn destroy "*"
yarn destroy:prereq
```
It is important to destroy the main application before the prerequisites. Destroying the prerequisites first (which include the S3 backend state) may cause the destruction of the main application to fail.
For example, the package.json might include scripts along the lines of the following sketch (the exact commands are assumptions; in particular, keeping the prerequisite stack in a separate CDKTF app is one possible layout):
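```json
{
  "scripts": {
    "deploy": "cdktf deploy",
    "destroy": "cdktf destroy",
    "deploy:prereq": "cdktf deploy --app 'npx ts-node prereq.ts'",
    "destroy:prereq": "cdktf destroy --app 'npx ts-node prereq.ts'"
  }
}
```

Running yarn destroy "*" then tears down the main stacks one by one: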
```
cdktf-name-picker-prod 🐹
aws_api_gateway_rest_api.lambda-rest-api_740DF6EC: Destroying... [id=qLnytck7qa]
aws_lambda_function.lambda-function_lambda-function_0ABACFAE: Destruction complete after 0s
aws_iam_role.lambda-function_lambda-execution-role_B8EC76BB: Destruction complete after 0s
aws_api_gateway_rest_api.lambda-rest-api_740DF6EC: Still destroying... [id=qLnytck7qa, 10s elapsed]
...
aws_api_gateway_rest_api.lambda-rest-api_740DF6EC: Destruction complete after 47s
Destroy complete! Resources: 11 destroyed.
```
In some cases, destruction may fail if, for example, an S3 bucket is not empty:
```
cdktf-name-picker-prereq  module.s3-dynamodb-remote-backend.aws_dynamodb_table.this: Destruction complete after 3s
cdktf-name-picker-prereq  Error: deleting S3 Bucket (cdktf-name-picker-prereq-891317002225): operation error S3: DeleteBucket, https://s3.us-east-1.amazonaws.com/cdktf-name-picker-prereq-891317002225:
An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
```
In such cases, empty the S3 bucket manually via the AWS console, or from the command line as sketched next. The diagram below illustrates what you might see in the console:
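A possible CLI approach for deleting all object versions (a sketch assuming AWS CLI v2 and the bucket name from the error above; delete markers, if present, need the same treatment):

```bash
# Delete every object version so that DeleteBucket can succeed.
aws s3api delete-objects \
  --bucket cdktf-name-picker-prereq-891317002225 \
  --delete "$(aws s3api list-object-versions \
    --bucket cdktf-name-picker-prereq-891317002225 \
    --output json \
    --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}')"
```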
After emptying the bucket, run the destroy command for the prerequisites again:
```bash
yarn destroy:prereq
```
Finally, verify that no Lambda functions remain:
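One quick check, assuming the AWS CLI is configured for the same account and region:

```bash
# Should print an empty list once all stacks are destroyed.
aws lambda list-functions --query 'Functions[].FunctionName'
```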
Also check S3 and DynamoDB to ensure the state has been completely destroyed. This concludes the final section of our article. In the next module, we will review everything learned in this course and discuss best practices for using CDKTF.