14 Quick Expert Tips for AWS CDK Engineers
Practical Tips and Code Snippets for AWS CDK Cloud Engineers
Want to take your AWS CDK skills to the next level? I've got something exciting to share with you - a collection of 14 practical tips that will make your cloud development journey smoother and more efficient. Whether you're wrestling with configuration management, trying to optimize your deployments, or looking to add some automation to your workflow, these tips have got you covered.
We'll explore everything from setting up auto-deletion for S3 buckets to implementing Lambda Powertools, and I'll walk you through each concept with real-world code examples. Think of this as your Swiss Army knife for AWS CDK development - each tip is a different tool that you can pull out exactly when you need it.
Some highlights include:
Making your stack configurations type-safe (because who doesn't love catching errors before they happen?)
Using HotSwap to deploy changes in record time
Keeping your backend-for-frontend architecture clean and maintainable
Setting up automatic documentation with TypeDoc
Implementing security best practices with CDK Nag
I've organized these tips to build on each other naturally, so you can follow along and gradually enhance your CDK projects. Ready to dive in and level up your cloud development game?
1. Convict for config!
One simple yet effective approach in AWS CDK apps is to use a lightweight package called 'convict' for managing configuration. Instead of scattering process.env.SOMETHING across your codebase (leading to untyped variables, potential typos, and no central configuration), it's much cleaner to use convict. It provides a structured way to define your app's configuration, with sensible defaults and validation.
Here’s an example of how to use convict to wrap your environment variables in a more readable, maintainable way:
import convict from 'convict';
export const config = convict({
stage: {
doc: 'The stage being deployed',
format: String,
default: '',
env: 'STAGE',
},
region: {
doc: 'The region being deployed to',
format: String,
default: 'eu-west-1',
env: 'REGION',
},
tableName: {
doc: 'The table name',
format: String,
default: '',
env: 'TABLE_NAME',
}
}).validate({ allowed: 'strict' });
As convict says, “it gives project collaborators more context on each setting and enables validation and early failures when configuration goes wrong.”
You can then access the configuration in your codebase easily with:
import { config } from '../config';
...
const tableName = config.get('tableName');
This approach makes your configuration much easier to manage and reduces the chances of errors due to misconfigured environment variables!
2. Use the deployment stage to change properties!
When building and deploying applications, different environments (such as staging, production, or ephemeral environments) often have specific requirements. For example:
In ephemeral environments, you may want to point to a single Amazon OpenSearch cluster, while creating a separate one for staging and production.
For ephemeral environments, you might need to clear all objects from an S3 bucket before deletion, but this should never happen in production!
These are just a couple of scenarios, but you’ll encounter many more specific environment-based requirements as you scale your application. To handle this, type your environments rather than relying on magic strings. This improves maintainability and reduces the chance of errors due to misused environment names.
Define your environment types using enums:
export const enum Region {
dublin = 'eu-west-1',
london = 'eu-west-2',
frankfurt = 'eu-central-1',
}
export const enum Stage {
featureDev = 'featureDev',
staging = 'staging',
prod = 'prod',
develop = 'develop',
}
Now, when implementing logic in your AWS CDK app, you can use these enums to control environment-specific behavior. For example, you can set different removal policies based on the environment stage:
this.table = new Table(this, 'Table', {
removalPolicy:
props.stageName === Stage.prod
? RemovalPolicy.RETAIN
: RemovalPolicy.DESTROY,
partitionKey: {
name: 'id',
type: dynamodb.AttributeType.STRING,
},
});
In the example above, if the stage is production, the DynamoDB table's removal policy is set to RETAIN. For any other stage, it's set to DESTROY. This pattern allows you to easily manage environment-specific configurations with clear, readable code.
3. S3 bucket auto-delete objects!
When working with AWS CDK apps, especially in ephemeral environments, you might run into issues where an S3 bucket prevents the teardown of your deployment because it still contains objects. Fortunately, you can easily manage this with the autoDeleteObjects property in the S3 bucket configuration.
Here’s how you can set up your S3 bucket to automatically delete objects when the stack is removed:
import { RemovalPolicy } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

new s3.Bucket(scope, 'Bucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
  enforceSSL: true,
  versioned: true,
  // autoDeleteObjects requires the removal policy to be DESTROY
  removalPolicy: RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});
As noted in the AWS CDK documentation, the autoDeleteObjects property ensures that all objects in the S3 bucket are deleted when the bucket is removed from the stack or when the stack itself is deleted.
However, be cautious when using this in production, as you don't want to unintentionally delete critical files. To avoid this, you can conditionally set the autoDeleteObjects property based on the environment stage:
autoDeleteObjects: props.stageName === Stage.prod ? false : true,
This ensures that objects are only automatically deleted in non-production environments, preventing accidental data loss in production environments.
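Putting tips 2 and 3 together, here's a minimal sketch (assuming the Stage enum and a stageName prop from tip 2) of a bucket that is only emptied and destroyed outside of production:

import { RemovalPolicy } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

const isProd = props.stageName === Stage.prod;

new s3.Bucket(this, 'AssetsBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
  enforceSSL: true,
  versioned: true,
  // autoDeleteObjects is only valid alongside a DESTROY removal policy
  removalPolicy: isProd ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
  autoDeleteObjects: !isProd,
});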
4. s3Deploy package to deploy local assets!
When building AWS CDK applications, there are many situations where you’ll need to push local files to an S3 bucket. Some common examples include:
Uploading a built static web app (e.g., React, Vue) to serve it as a web application.
Uploading environment configuration files.
Uploading files for services like Amazon Bedrock Knowledge Base.
The s3Deploy CDK package makes this process straightforward. To get started, import the necessary construct into your stack:
import * as path from 'path';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
Then, implement the deployment like this:
new s3deploy.BucketDeployment(this, 'ClientBucketDeployment', {
sources: [s3deploy.Source.asset(path.join(__dirname, '../../client/'))],
destinationBucket: this.bucket,
});
In this example, we’re deploying a locally built client app (whether it’s React, Vue, or another framework) to an S3 bucket whenever the CDK app is deployed. This approach is clean and simple, making it easy to manage the deployment of static assets or configuration files directly to your S3 bucket.
5. Typed stack config!
As you build and expand your applications, you'll quickly realize that static configuration values need to be passed into your constructs and application code. These values often differ between environments, making it essential to have a centralized, manageable solution.
Instead of hardcoding environment-specific values, you can create a single configuration file that contains all your typed values for each environment (including ephemeral environments). Here's how you can structure this:
import * as dotenv from 'dotenv';
import { Region, Stage } from '../../types';
import { EnvironmentConfig } from '../environment-config';
dotenv.config();
export const environments: Record<Stage, EnvironmentConfig> = {
[Stage.develop]: {
env: {
account: process.env.ACCOUNT || (process.env.CDK_DEFAULT_ACCOUNT as string),
region: process.env.REGION || (process.env.CDK_DEFAULT_REGION as string),
},
stateful: {
bucketName: `your-service-${process.env.PR_NUMBER}-bucket`.toLowerCase(),
},
stateless: {
lambdaMemorySize: parseInt(process.env.LAMBDA_MEM_SIZE || '128'),
},
stageName: process.env.PR_NUMBER || Stage.develop,
},
// [Stage.featureDev]: { ... } (an entry of the same shape is needed to satisfy Record<Stage, EnvironmentConfig>; omitted here for brevity)
[Stage.staging]: {
env: {
account: '123456789123',
region: Region.dublin,
},
stateful: {
bucketName: 'your-service-staging-bucket',
},
stateless: {
lambdaMemorySize: 1024,
},
stageName: Stage.staging,
},
[Stage.prod]: {
env: {
account: '123456789123',
region: Region.dublin,
},
stateful: {
bucketName: 'your-service-prod-bucket',
},
stateless: {
lambdaMemorySize: 1024,
},
stageName: Stage.prod,
},
};
With this setup, you can now pull in the correct configuration for each stage when defining your stacks:
import { environments } from '../environments';
new StateStack(this, `Develop-${environments.develop.stageName}`, {
...environments.develop,
});
const featureDevStack = new StatelessStack(this, 'FeatureDev', {
...environments.featureDev,
});
const stagingStack = new StatelessStack(this, 'Staging', {
...environments.staging,
});
const prodStack = new StatelessStack(this, 'Prod', {
...environments.prod,
});
This approach ensures that your environment-specific configurations are centralized, typed, and easily maintainable. Moreover, you can export and share the typed configuration interface to use as your stack properties, ensuring that the properties passed into each stack remain consistent and correctly typed.
constructor(scope: Construct, id: string, props: EnvironmentConfig) {
super(scope, id, props);
// Access environment-specific config from props
}
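For reference, the EnvironmentConfig interface itself isn't shown above. One possible shape (an assumption inferred from the fields used in the example, not the post's definitive version) extends StackProps so it can be passed straight to super():

import type { StackProps } from 'aws-cdk-lib';

export interface EnvironmentConfig extends StackProps {
  env: {
    account: string;
    region: string;
  };
  stateful: {
    bucketName: string;
  };
  stateless: {
    lambdaMemorySize: number;
  };
  stageName: string;
}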
Tip for Ephemeral Environments: In ephemeral environments, the configuration values can be set through environment variables, either in the pipeline or locally on the engineer’s machine, making it easy to manage dynamic configurations across different stages.
6. Auto-populate a DynamoDB table!
When working with ephemeral environments, it's essential to have test data pre-seeded to simulate real-world scenarios. For example, you may want a feature branch environment to contain orders, products, and statuses, populated automatically when the stack is deployed. This can be achieved by importing JSON data into a DynamoDB table directly from an S3 bucket during deployment.
Here's how you can easily seed DynamoDB tables with data from an S3 bucket using the importSource property:
Step-by-Step:
1. Create the DynamoDB Table
First, create your DynamoDB table in the CDK stack, just as you would normally do.
2. Prepare the Data in S3
Store your pre-populated JSON data in an S3 bucket. The structure of the JSON might look like this:
{"Item":{"PK":{"S":"COMPANY#1"},"SK":{"S":"EMPLOYEE#1#PAYSLIP#1"},"Date":{"S":"2024-05-01"},"Amount":{"N":"1000"},"Type":{"S":"Payslip"}}}
{"Item":{"PK":{"S":"COMPANY#1"},"SK":{"S":"EMPLOYEE#1#PAYSLIP#2"},"Date":{"S":"2024-06-01"},"Amount":{"N":"1100"},"Type":{"S":"Payslip"}}}
{"Item":{"PK":{"S":"COMPANY#2"},"SK":{"S":"EMPLOYEE#2#PAYSLIP#1"},"Date":{"S":"2024-05-01"},"Amount":{"N":"1200"},"Type":{"S":"Payslip"}}}
3. Use the importSource Property
The importSource property of the DynamoDB table construct allows you to specify the location of your data (the S3 bucket) and import it automatically when the stack is deployed. Here's an example:
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { RemovalPolicy } from 'aws-cdk-lib';

const s3Bucket = new Bucket(this, 'DataBucket');

const dynamoTable = new dynamodb.Table(this, 'MyDynamoDBTable', {
  partitionKey: { name: 'PK', type: dynamodb.AttributeType.STRING },
  sortKey: { name: 'SK', type: dynamodb.AttributeType.STRING },
  removalPolicy: RemovalPolicy.DESTROY, // set according to environment needs
  importSource: {
    bucket: s3Bucket, // the S3 bucket holding the seed data
    inputFormat: dynamodb.InputFormat.dynamoDBJson(), // matches the JSON structure shown above
    compressionType: dynamodb.InputCompressionType.NONE,
  },
});
In this setup:
DynamoDB Table: The table is set up with a partition key (PK) and sort key (SK) based on the JSON structure.
S3 Data Source: The S3 bucket contains your JSON data and is linked to the importSource property.
Data Population: During stack deployment, DynamoDB automatically imports the JSON data from the S3 bucket into the table.
This approach is great for ephemeral environments where you want to test with varying data sets. You can pre-seed orders, user records, or any other relevant data before deployment, ensuring your feature branch environments are always populated with relevant and up-to-date data.
Tip: Ensure the importSource configuration is only used in environments like development or feature branches where data population is safe, and avoid it in production unless required. Note that the import only runs when the table is first created; it won't re-seed an existing table.
7. TypeDoc for autogenerating documentation!
As we build out our AWS CDK applications, there is almost certainly a need to autogenerate useful developer documentation, and we want it to stay up to date by being regenerated automatically, for example on pre-commit hooks.
To do this we can use the TypeDoc package which allows us to meet the requirements above. To set this up we simply install the package:
npm install --save-dev typedoc
We then create a typedoc.json config file:
{
"entryPoints": [
"./stateless/src/use-cases/create-customer-account/create-customer-account.ts",
"./stateless/src/use-cases/retrieve-customer-account/retrieve-customer-account.ts",
"./stateless/src/use-cases/upgrade-customer-account/upgrade-customer-account.ts",
"./stateless/src/use-cases/create-customer-playlist/create-customer-playlist.ts"
],
"out": "../docs/documentation",
"theme": "default",
"name": "Your Service",
"includeVersion": true,
"lightHighlightTheme": "light-plus",
"hideGenerator": true,
"exclude": ["**/*+(index|.test|.spec|.e2e).ts"],
"readme": "../docs/DOCS.md",
"disableSources": true,
"excludePrivate": true,
"excludeProtected": true
}
And we add an NPM script to our package.json, something like this:
"docs": "npx typedoc",
8. HotSwap to quickly deploy changes!
During development, particularly in ephemeral environments, you may need to quickly iterate on changes to resources like Lambda functions or Step Functions. A full CloudFormation deployment can be slow, taking minutes instead of seconds, especially when only a small change is made.
In such cases, you can leverage AWS CDK's Hotswap feature for faster deployments.
How It Works:
cdk deploy --hotswap: This command instructs the AWS CDK CLI to attempt a "hotswap" deployment, which is much faster than the standard CloudFormation deployment. It works by updating only the changed resources directly, such as code changes to a Lambda function or modifications to a Step Function, without triggering a full stack update.
This avoids waiting for CloudFormation to rebuild and redeploy the entire stack, which can save significant time during development.
Key Points:
Code-Only Changes: The tool can detect if only the function code (for example, Lambda) has been changed and will apply those changes directly, skipping the CloudFormation steps.
Automatic Fallback: If a change cannot be hotswapped (such as changes to other resource types or nested stacks), cdk deploy --hotswap-fallback will fall back to a full CloudFormation deployment, whereas plain --hotswap simply ignores non-hotswappable changes.
Faster Iterations: This is particularly useful when making small, frequent changes during feature development or testing.
Example Command:
cdk deploy --hotswap
When to Use:
When you're just updating a Lambda function's code or minor changes that don't require a full stack update.
When you want to save time and avoid waiting for CloudFormation deployments that take several minutes.
Tip: This feature is most beneficial in ephemeral environments, where frequent changes are needed and rapid testing is important. For production or critical environments, always ensure full CloudFormation deployments for stability.
9. Lambda Powertools and Middy!
When building serverless applications, observability—logging, tracing, and metrics—is essential for understanding performance and diagnosing issues. A combination of Middy and Lambda Powertools (a toolkit from AWS) can significantly improve the monitoring and logging of your Lambda functions, helping you adhere to best practices and increase developer velocity.
Key Features:
Logging: Lambda Powertools provides an enhanced logger with structured logging and context information (such as request IDs).
Tracing: Automatically integrates with AWS X-Ray for detailed tracing of your Lambda functions' execution.
Metrics: Easily add custom CloudWatch metrics that can trigger alarms or be used for reporting purposes.
Middy Middleware: Simplifies adding these observability tools as middleware to your Lambda functions, abstracting away much of the complexity.
Example Code Using Middy and Lambda Powertools:
import { MetricUnit, Metrics } from '@aws-lambda-powertools/metrics';
import { errorHandler } from '@shared';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { Logger } from '@aws-lambda-powertools/logger';
import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import { logMetrics } from '@aws-lambda-powertools/metrics/middleware';
import { Tracer } from '@aws-lambda-powertools/tracer';
import { captureLambdaHandler } from '@aws-lambda-powertools/tracer/middleware';
import { ValidationError } from '@errors/validation-error';
import middy from '@middy/core';
const tracer = new Tracer();
const metrics = new Metrics();
const logger = new Logger();
export const getOrder = async ({
pathParameters,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
try {
if (!pathParameters?.id) {
throw new ValidationError('id is required');
}
const { id } = pathParameters;
// business logic here
// Adding a custom metric for successful order retrieval
metrics.addMetric('SuccessfulGetOrder', MetricUnit.Count, 1);
return {
statusCode: 200,
body: JSON.stringify({}), // the response
headers: {}
};
} catch (error) {
let errorMessage = 'Unknown error';
if (error instanceof Error) errorMessage = error.message;
logger.error(errorMessage);
// Adding a custom metric for errors
metrics.addMetric('GetOrderError', MetricUnit.Count, 1);
return errorHandler(error);
}
};
// Wrap the handler with Middy to add Powertools middleware
export const handler = middy(getOrder)
.use(injectLambdaContext(logger)) // Injects Lambda context (e.g., request ID, function name)
.use(captureLambdaHandler(tracer)) // Automatically traces Lambda function executions
.use(logMetrics(metrics)); // Log custom CloudWatch metrics
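On the CDK side, the Powertools Logger, Tracer, and Metrics pick up their service name and metrics namespace from standard environment variables, so it's worth setting these on the function definition. A sketch (the entry path and names here are illustrative assumptions, not from the original example):

import * as path from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';

const getOrderLambda = new nodeLambda.NodejsFunction(this, 'GetOrderLambda', {
  entry: path.join(__dirname, 'src/adapters/primary/get-order/get-order.adapter.ts'), // illustrative path
  handler: 'handler',
  runtime: lambda.Runtime.NODEJS_20_X,
  tracing: lambda.Tracing.ACTIVE, // required for the Tracer / X-Ray integration
  environment: {
    POWERTOOLS_SERVICE_NAME: 'orders-service',
    POWERTOOLS_METRICS_NAMESPACE: 'OrdersService',
  },
});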
Benefits of Using Middy and Lambda Powertools:
Automatic Tracing: With captureLambdaHandler, you get detailed tracing of Lambda executions using AWS X-Ray. This is invaluable for performance monitoring and debugging.
Enhanced Logging: The injectLambdaContext middleware automatically includes useful context in your logs, such as request IDs and function names, making logs much easier to correlate.
Custom Metrics: The Metrics class allows you to easily define and log custom metrics. For instance, in the above example, we log successful and error metrics for the getOrder function, which can later be used for setting up CloudWatch alarms or reporting.
Error Handling: The errorHandler helps centralize error handling and ensures consistent responses.
Middy Middleware: Middy abstracts away the complexity of setting up these observability tools, so you can focus more on business logic.
When to Use:
Production-grade Lambda functions: If you're building serverless applications where observability is critical, these tools help you stay on top of issues and ensure you can quickly diagnose problems.
Frequent updates or changes: When working in ephemeral environments or during rapid development cycles, the added observability makes it easier to troubleshoot and iterate on Lambda functions quickly.
By integrating Middy and Lambda Powertools, you’ll increase the reliability, observability, and scalability of your serverless applications with minimal configuration effort.
10. Passing stack dependencies!
When working with the AWS Cloud Development Kit (CDK), one of the powerful features is the ability to pass resources or values between different stacks by simply passing them through the constructors in your main app. This allows you to structure your infrastructure code into multiple logical stacks while still ensuring that necessary dependencies (like a DynamoDB table) are shared between them.
Here’s a step-by-step breakdown using a simple example where a stateful stack creates a DynamoDB table and a stateless stack consumes this table.
1. Passing Values Between Stacks
The key to this feature lies in passing resources (like DynamoDB tables, Lambda functions, etc.) from one stack to another through the stack constructor. Here's an example of how to achieve this.
Main App (App Entry Point):
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { ApproachOneStatefulStack } from '../stateful/stateful';
import { ApproachOneStatelessStack } from '../stateless/stateless';
const app = new cdk.App();
// Creating the stateful stack which includes a DynamoDB table
const statefulStack = new ApproachOneStatefulStack(app, 'ApproachOneStatefulStack', {});
// Pass the DynamoDB table from the stateful stack to the stateless stack
new ApproachOneStatelessStack(app, 'ApproachOneStatelessStack', {
table: statefulStack.table, // <-- Passing the reference here
});
In this example, the ApproachOneStatefulStack creates a DynamoDB table, and we pass this table reference to the ApproachOneStatelessStack constructor. This allows us to use the same DynamoDB table in both stacks without having to duplicate the resource creation.
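For reference, a minimal sketch of what the stateful stack might look like (an assumed shape; the key detail is exposing the table as a public readonly property so other stacks can receive it):

import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';

export class ApproachOneStatefulStack extends cdk.Stack {
  public readonly table: dynamodb.Table;

  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Long-lived stateful resource, shared with the stateless stack via the public property
    this.table = new dynamodb.Table(this, 'Table', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
  }
}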
2. Using the Passed Values in the Stateless Stack
In the stateless stack, we define an interface ApproachOneStatelessStackProps to accept the DynamoDB table as a property. We can then use it inside the stack to grant permissions or set it as an environment variable for a Lambda function.
Stateless Stack (Where DynamoDB Table is Used):
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';
import * as path from 'path';
import { Construct } from 'constructs';
// Defining the props interface to accept a DynamoDB table
export interface ApproachOneStatelessStackProps extends cdk.StackProps {
table: dynamodb.Table;
}
export class ApproachOneStatelessStack extends cdk.Stack {
private table: dynamodb.Table;
constructor(scope: Construct, id: string, props: ApproachOneStatelessStackProps) {
super(scope, id, props);
// Accessing the passed table reference
const { table } = props;
this.table = table;
// Create Lambda function
const createCustomerLambda = new nodeLambda.NodejsFunction(this, 'CreateCustomerLambda', {
functionName: 'create-customer-lambda',
runtime: lambda.Runtime.NODEJS_20_X,
entry: path.join(__dirname, 'src/adapters/primary/create-customer/create-customer.adapter.ts'),
memorySize: 1024,
handler: 'handler',
tracing: lambda.Tracing.ACTIVE,
bundling: {
minify: true,
},
environment: {
TABLE_NAME: this.table.tableName, // Using the table reference in the environment variable
},
});
// Grant write permission to the Lambda function for the DynamoDB table
this.table.grantWriteData(createCustomerLambda);
}
}
Key Takeaways:
Stateful Stack: Contains resources that are typically long-lived, such as DynamoDB tables, S3 buckets, etc. These resources are created in the stack and passed to other stacks as needed.
Stateless Stack: Focuses on the resources that consume or interact with the stateful resources (e.g., Lambda functions using the DynamoDB table). By passing the necessary resources via the constructor, you can reuse existing resources without creating new ones.
CDK Construct Properties: The use of ApproachOneStatelessStackProps allows you to define the type of resources that the stateless stack will consume, making the code modular and scalable.
Why This is Important:
Modularity: By splitting the application logic into different stacks, you maintain clear separation of concerns (e.g., infrastructure vs. business logic).
Reusability: You can easily reuse resources across multiple stacks without redundant definitions, reducing the complexity and cost of your infrastructure.
Maintainability: As your infrastructure grows, this pattern allows for cleaner code that’s easier to maintain and update.
This approach gives you the flexibility to manage the complexity of larger AWS CDK applications and ensures that you can pass critical values between stacks efficiently.
11. Jest configs per test type!
When testing AWS CDK applications, it’s essential to employ different test strategies based on the nature of the code and the resources involved. By using Jest, which comes pre-configured with the AWS CDK, you can effectively write and organize various types of tests such as unit tests, integration tests, and end-to-end (e2e) tests.
To ensure maintainability and a clear distinction between different types of tests, it’s useful to create separate Jest configurations for each test type and adopt a consistent naming convention. This helps you manage tests more effectively and makes it easier to run specific types of tests independently.
Types of Tests You Can Perform:
Unit Tests: These test the business logic or individual CDK components. For instance, validating the logic inside your Lambda functions or other infrastructure.
End-to-End (e2e) Tests: These tests cover the entire workflow of your application, interacting with the deployed infrastructure (such as an API Gateway interacting with a Lambda function).
Integration Tests: These tests focus on testing specific parts of the infrastructure, like checking if the Lambda correctly interacts with DynamoDB or other services.
Module Integration Tests: These tests focus on integrating and testing specific AWS services or modules, such as DynamoDB alone or a Lambda with DynamoDB.
Setting Up Jest Configurations for Different Test Types
The most effective approach is to set up separate Jest configurations for each test type, and here’s how you can do it.
Example of Jest Config for End-to-End (e2e) Tests:
// jest.config.e2e.js
module.exports = {
testEnvironment: 'node',
roots: ['<rootDir>'],
testMatch: ['**/*.e2e.ts'],
transform: {
'^.+\\.tsx?$': 'ts-jest', // Use ts-jest for TypeScript files
},
setupFiles: ['dotenv/config'], // Load environment variables from .env file
};
In this example, we've configured Jest to only run the tests that match the pattern **/*.e2e.ts. This configuration is useful for e2e tests, where we validate the full application flow.
npm Scripts to Run e2e Tests:
You can create npm scripts that make it easier to run specific tests by pointing to the right Jest configuration.
{
"scripts": {
"test:e2e": "jest --config ./jest.config.e2e.js --detectOpenHandles --runInBand",
"test:e2e:watch": "jest --config ./jest.config.e2e.js --watch --detectOpenHandles --runInBand"
}
}
test:e2e: This command will run the e2e tests, ensuring the configuration and any open handles are properly managed.
test:e2e:watch: This command runs the tests in watch mode, ideal for active development, so Jest re-runs tests automatically as you modify them.
Organizing Your Test Files
Now, to organize your tests, you can place them in the tests/e2e folder and name them with the .e2e.ts suffix. For example:
tests/
├── e2e/
│ ├── create-order-api-to-dynamodb.e2e.ts
│ └── user-flow-test.e2e.ts
For integration tests, you could name them like create-order-api-to-dynamodb.integration.ts, and for unit tests, you might use the *.test.ts suffix, like customer-use-case.test.ts.
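For completeness, the unit and integration configs can simply mirror the e2e one with a different testMatch pattern. Here's a sketch of what jest.config.unit.js might look like, following the same conventions:

// jest.config.unit.js
module.exports = {
  testEnvironment: 'node',
  roots: ['<rootDir>'],
  testMatch: ['**/*.test.ts'],
  transform: {
    '^.+\\.tsx?$': 'ts-jest', // Use ts-jest for TypeScript files
  },
};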
Benefits of This Approach:
Separation of Concerns: By defining separate Jest configurations, you keep different types of tests logically isolated from one another. This ensures that tests don't interfere with each other.
Scalability: As your project grows, adding more tests becomes easier because you can simply add another config file and update the appropriate npm script.
Clear Naming Convention: The naming conventions (*.test.ts, *.integration.ts, *.e2e.ts) help clearly identify the type of tests and their scope. It becomes immediately obvious whether a test is focused on business logic (unit), infrastructure interactions (integration), or full workflows (e2e).
Efficiency in Running Tests: With specific npm scripts for each type of test, you can run only the tests you need. This improves the efficiency of the testing process.
Example File Structure:
project/
├── tests/
│ ├── e2e/
│ │ ├── create-order-api-to-dynamodb.e2e.ts
│ │ └── user-flow-test.e2e.ts
│ ├── integration/
│ │ ├── create-order-api-to-dynamodb.integration.ts
│ │ └── customer-lambda-integration.ts
│ └── unit/
│ ├── customer-use-case.test.ts
│ └── order-service.test.ts
├── jest.config.e2e.js
├── jest.config.integration.js
└── jest.config.unit.js
This setup ensures that your AWS CDK applications are well-tested, modular, and easy to maintain. By separating tests into categories (unit, integration, e2e) and using corresponding Jest configurations, you can improve both the efficiency and the organization of your testing suite. It also allows you to scale your testing infrastructure as your project evolves.
12. Path Aliases for Imports!
As your application grows, your folder structure can become increasingly complex, leading to long and hard-to-manage import paths. For example, instead of having to deal with something like:
import { something } from '../../../../../src/use-cases/something/something';
You can simplify the import paths by using barrel files and TypeScript path aliases. This approach improves the readability and maintainability of your code.
Step 1: Update tsconfig.json for Path Aliases
To use path aliases in TypeScript, you can modify the tsconfig.json file to include a baseUrl and paths configuration. This allows you to rewrite complex relative import paths as simple, readable aliases.
Example tsconfig.json Update:
{
"compilerOptions": {
"baseUrl": ".", // Base directory for resolving non-relative modules
"paths": {
"@adapters/*": ["./stateless/src/adapters/*"],
"@config/*": ["./stateless/src/config/*"],
"@domain/*": ["./stateless/src/domain/*"],
"@entity/*": ["./stateless/src/entity/*"],
"@repositories/*": ["./stateless/src/repositories/*"],
"@errors/*": ["./stateless/src/errors/*"],
"@schemas/*": ["./stateless/src/schemas/*"],
"@shared/*": ["./stateless/src/shared/*"],
"@models/*": ["stateless/src/models/*"],
"@dto/*": ["stateless/src/dto/*"],
"@use-cases/*": ["./stateless/src/use-cases/*"],
"@packages/*": ["./packages/*"],
"@events/*": ["stateless/src/events/*"]
}
}
}
With this configuration in place, instead of writing a long import like this:
import { something } from '../../../../../src/use-cases/something/something';
You can now import it like this:
import { something } from '@use-cases/something';
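Note that importing from '@use-cases/something' (rather than '@use-cases/something/something') relies on a barrel file in that folder re-exporting the module. A minimal sketch:

// stateless/src/use-cases/something/index.ts
export * from './something';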
Step 2: Update Jest Configuration for Path Aliases
If you are using Jest for testing, you'll also need to configure Jest to understand the path aliases you've defined in tsconfig.json. This ensures that your tests continue to work properly.
In your Jest configuration file (usually jest.config.js or jest.config.ts), you can map the path aliases using the moduleNameMapper option. Here's how you can do it:
Example Jest Config Update:
module.exports = {
// Other Jest config options...
moduleNameMapper: {
'^@adapters/(.*)': '<rootDir>/stateless/src/adapters/$1',
'^@config/(.*)': '<rootDir>/stateless/src/config/$1',
'^@domain/(.*)': '<rootDir>/stateless/src/domain/$1',
'^@entity/(.*)': '<rootDir>/stateless/src/entity/$1',
'^@schemas/(.*)': '<rootDir>/stateless/src/schemas/$1',
'^@shared/(.*)': '<rootDir>/stateless/src/shared/$1',
'^@errors/(.*)': '<rootDir>/stateless/src/errors/$1',
'^@repositories/(.*)': '<rootDir>/stateless/src/repositories/$1',
'^@events/(.*)': '<rootDir>/stateless/src/events/$1',
'^@models/(.*)': '<rootDir>/stateless/src/models/$1',
'^@dto/(.*)': '<rootDir>/stateless/src/dto/$1',
'^@use-cases/(.*)': '<rootDir>/stateless/src/use-cases/$1',
'^@packages/(.*)': '<rootDir>/packages/$1',
}
};
This configuration will allow Jest to resolve the same import aliases that you've set up in TypeScript.
Benefits of This Approach:
Cleaner Import Paths: Your import paths become more readable and easier to manage, especially as your application grows. You avoid deeply nested relative imports that are hard to follow.
Improved Maintainability: With clear and consistent import paths, it's easier to refactor your code or move files around without worrying about updating long relative paths everywhere.
Consistency: By defining the path aliases once in your tsconfig.json, you ensure that both your application code and tests follow the same import structure.
Easier Collaboration: Developers working on the project can quickly understand the import structure, improving collaboration and reducing errors.
Example Directory Structure:
With path aliases in place, your project structure might look like this:
project/
├── stateless/
│ ├── src/
│ │ ├── use-cases/
│ │ │ ├── something/
│ │ │ │ └── something.ts
│ │ ├── config/
│ │ ├── domain/
│ │ ├── models/
│ │ └── ...
├── packages/
├── tests/
│ ├── unit/
│ ├── integration/
│ └── e2e/
├── tsconfig.json
├── jest.config.js
By using barrel files and TypeScript path aliases, you can significantly improve the readability and organization of your imports. This makes working on large applications more manageable and reduces the complexity of navigating through deeply nested directories. Additionally, updating Jest to support path aliases ensures that your tests remain consistent with the application code, providing a smooth development experience.
13. Keep your client close to the BFF!
One common mistake engineering teams make is having the client application in a separate repository from the Backend for Frontend (BFF) that supports it. This often leads to synchronization issues between the two, such as mismatches in API contracts, types, and configurations, even though both pipelines might be green and functioning independently.
The Solution: Co-locate the Client and BFF in the Same Repository
A more efficient approach is to have both the client application and BFF in the same repository, deploying them together through a single AWS CDK application. This way, you can ensure that the client and BFF stay in sync, reducing the likelihood of API or configuration mismatches.
Suggested AWS CDK Stack Structure:
You can use three stacks in your AWS CDK app to organize your resources and deploy both the client and BFF seamlessly:
Stateful Stack: This stack contains resources that manage state, such as databases and DynamoDB tables.
Stateless Stack: This stack includes resources like API Gateway, Lambda functions, and other stateless services.
Client Stack: The client stack handles the S3 bucket and CloudFront distribution for serving your static client application, along with the configuration needed to host and connect the client.
Step 1: Deploy the Client and BFF Together
By placing both the client app and BFF in the same repo, you can deploy them together automatically. For the client side, you can use the AWS CDK s3deploy (BucketDeployment) construct to deploy the client app directly from the same repository.
Example Code to Deploy the Client to S3:
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import * as path from 'path';
import { CloudFrontWebDistribution } from 'aws-cdk-lib/aws-cloudfront';
const s3Deploy = new s3deploy.BucketDeployment(this, 'ClientBucketDeployment', {
sources: [
s3deploy.Source.asset(path.join(__dirname, '../../web/build')), // Path to your client build folder
s3deploy.Source.jsonData('config.json', {
stage, // e.g., 'dev', 'prod'
domainName, // e.g., 'example.com'
subDomain, // e.g., 'app.example.com'
api: apiUrl, // Dynamically generate API URL or pass a value
}), // Runtime config for client
],
destinationBucket: this.bucket, // S3 bucket to store the client app
distribution: cloudFrontDistribution, // CloudFront distribution for caching
distributionPaths: ['/*'], // Invalidate all paths for the new deployment
});
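The snippet above references this.bucket and cloudFrontDistribution, which would be created elsewhere in the client stack. A rough sketch of those pieces (assumptions added to make the example hang together, matching the CloudFrontWebDistribution import used above):

import * as s3 from 'aws-cdk-lib/aws-s3';

// The bucket that hosts the built client app
this.bucket = new s3.Bucket(this, 'ClientBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
});

// The CloudFront distribution that serves the bucket contents
const cloudFrontDistribution = new CloudFrontWebDistribution(this, 'ClientDistribution', {
  originConfigs: [
    {
      s3OriginSource: { s3BucketSource: this.bucket },
      behaviors: [{ isDefaultBehavior: true }],
    },
  ],
});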
Step 2: Generate Dynamic Client Configuration
To make sure the client and BFF are always in sync, you can generate a runtime configuration JSON file (like config.json) that the client app can use. This file could contain information such as:
API URL
Environment Stage (e.g., dev, prod)
Domain or Subdomain for the client
In this case, the config.json file is uploaded to the S3 bucket alongside the client app build files.
Example config.json Contents:
{
"stage": "dev",
"domainName": "example.com",
"subDomain": "app.example.com",
"api": "https://api.example.com"
}
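On the client side, the app can then fetch this file at startup instead of baking values in at build time. A small sketch (the interface and path here are assumptions):

// web/src/config.ts
export interface RuntimeConfig {
  stage: string;
  domainName: string;
  subDomain: string;
  api: string;
}

export async function loadConfig(): Promise<RuntimeConfig> {
  // config.json is deployed next to the static assets by the BucketDeployment above
  const response = await fetch('/config.json');
  return (await response.json()) as RuntimeConfig;
}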
Step 3: Increase Confidence in Pipeline Tests
By deploying the client and BFF together in the same pipeline, you ensure that they are never out of sync. This integration also helps with confidence in tests because:
Both components are deployed at the same time, ensuring they are compatible.
Any change in the API or backend logic that impacts the frontend will be caught in the same pipeline, preventing broken API contracts or misconfigurations.
Benefits of This Approach:
Reduced Risk of Synchronization Issues: By deploying both the client and BFF in the same pipeline, you ensure they stay in sync with each other at all times.
Easier Debugging and Troubleshooting: If there are issues in the client application, you can be confident that the backend is also up-to-date and doesn't have any breaking changes. This simplifies troubleshooting.
Increased Developer Confidence: When developers can deploy both the client and BFF simultaneously, they are more confident in the changes they are making. There’s no risk of pushing a frontend update without the corresponding API update.
Simplified Deployment Process: Managing deployments for both the client and BFF in one place (i.e., one repo, one pipeline) is far easier than managing two separate deployments.
By organizing your AWS CDK app with three stacks—Stateful, Stateless, and Client—you can ensure that your client application and backend for frontend (BFF) stay in sync, reducing the risk of API contract mismatches and improving the overall deployment and testing process. This method streamlines the deployment of both components, enhancing developer confidence and providing a smoother CI/CD pipeline.
14. CDK Nag & Custom Aspects!
As your AWS CDK application grows, it's essential to ensure that your constructs and resources comply with industry best practices, standards, and security frameworks. A powerful way to achieve this is by integrating CDK Nag and AWS CDK Aspects into your CDK application. These tools will help you enforce rules and maintain compliance throughout your application lifecycle.
1. Using CDK Nag for Compliance Checks
CDK Nag is a tool that helps validate your AWS CDK application against various rule packs, ensuring that your infrastructure follows established best practices and security guidelines. Some of the rule packs that CDK Nag checks against include:
AWS Solutions: This set of rules ensures that your AWS CDK application follows AWS's best practices for deploying resources.
HIPAA Security: Ensures your infrastructure is compliant with the Health Insurance Portability and Accountability Act (HIPAA) requirements for healthcare data protection.
NIST 800-53 Rev 4 & 5: These are guidelines for information security, focusing on controls for federal information systems and organizations.
PCI DSS 3.2.1: Ensures compliance with the Payment Card Industry Data Security Standard for securing payment card information.
By integrating CDK Nag, you can automatically validate your CDK application against these standards, ensuring that your infrastructure is secure and compliant.
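Wiring CDK Nag in is typically a one-liner at the app level. Here's a minimal sketch, assuming the cdk-nag package is installed, using its AwsSolutions rule pack:

import * as cdk from 'aws-cdk-lib';
import { Aspects } from 'aws-cdk-lib';
import { AwsSolutionsChecks } from 'cdk-nag';

const app = new cdk.App();

// Apply the AwsSolutions rule pack to every construct in the app;
// findings surface as warnings and errors during `cdk synth`
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true }));

Findings that you have reviewed and accepted can then be suppressed with NagSuppressions, keeping the report focused on genuine issues.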
By integrating CDK Nag and AWS CDK Aspects, you can ensure your AWS CDK applications follow security best practices, compliance standards, and organizational policies. CDK Nag helps you validate against well-known frameworks like HIPAA, PCI DSS, and NIST, while CDK Aspects allows you to define and enforce custom rules specific to your application's needs.
This proactive approach to infrastructure security and compliance ensures that your AWS resources are well-managed, secure, and compliant with the necessary standards.
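For rules specific to your own organization, a custom aspect is just a class with a visit method that you register on the app in the same way. A sketch (the versioning rule below is purely illustrative, not from the original post):

import { Annotations, Aspects, IAspect } from 'aws-cdk-lib';
import { CfnBucket } from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// A custom aspect that flags any S3 bucket without versioning enabled
class BucketVersioningChecker implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof CfnBucket && !node.versioningConfiguration) {
      Annotations.of(node).addError('S3 buckets must have versioning enabled');
    }
  }
}

Aspects.of(app).add(new BucketVersioningChecker());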
Conclusion
In modern software development, particularly when working with AWS CDK, maintaining clean, manageable, and secure infrastructure is crucial. This article has outlined various best practices and strategies for optimizing your AWS CDK applications as they grow in complexity.
Key takeaways include:
Modularization: By structuring your application in a modular way and using different Jest configurations for unit, integration, and end-to-end (e2e) tests, you can streamline testing processes and keep your codebase maintainable.
Simplifying Import Paths: By leveraging barrel files and TypeScript path aliases, we can avoid complex and cumbersome import paths, making the code easier to read and manage.
Deploying Client and Backend Together: Combining the client application with its backend (BFF) in the same repository ensures that both stay tightly coupled, reducing the risk of them getting out of sync. Using the AWS CDK s3deploy (BucketDeployment) construct, you can deploy both components together seamlessly.
Ensuring Compliance and Best Practices: CDK Nag and AWS CDK Aspects help ensure your infrastructure is not only functional but also compliant with industry standards like HIPAA, NIST, and PCI DSS. Custom aspects allow you to enforce your organization's specific rules, further enhancing security and maintainability.
By adopting these strategies, you can enhance the scalability, security, and maintainability of your AWS CDK projects, resulting in more robust applications that are easier to manage, deploy, and comply with security and regulatory standards.