· Joseph · DevOps  · 7 min read

AWS Cloud Development Kit (CDK) project structure

In a previous blog post I used Node.js/TypeScript as a backend and deployed it with the AWS Cloud Development Kit (AWS CDK). The same framework, though more complex than that sample, powers our Firstage. In this post I will show how we structure our AWS CDK project codebase.

Project Structure
project structure


As shown above, the project contains several folders:

  1. bin: Contains a single file that tells cdk deploy which environment to deploy,
  2. events: Stores the JSON event objects that AWS SAM uses for local testing,
  3. lib: Configures each AWS service we use, such as Route 53, DynamoDB, Lambda, API Gateway, and Secrets Manager,
  4. src: Holds all application code, such as controllers, models, and helpers,
  5. test: Holds the test code,
  6. types: Defines the TypeScript types,
  7. layer: Installs the npm packages the Lambdas depend on.

We keep layer outside of src because we sometimes need to debug the code in the AWS Lambda console editor. Now let me walk through the important files and code.

bin/api.ts

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from '@aws-cdk/core';
import { ApiStack } from '../lib/api-stack';

const app = new cdk.App();

// SYNTAX:
// new ApiStack(app, 'THIS_NAME_FOR_CDK_DEPLOY', { stackName: 'THIS_NAME_FOR_AWS_CloudFormation_StackName' });
new ApiStack(app, 'ApiStaging', { stackName: 'ApiStaging' });
new ApiStack(app, 'ApiProduction', { stackName: 'ApiProduction' });

This is the entry point. It simply instantiates the stack from lib/api-stack, so we can run cdk deploy ApiStaging or cdk deploy ApiProduction to deploy each environment.

lib/api-stack.ts

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as elasticsearch from '@aws-cdk/aws-elasticsearch';
import { DynamoEventSource } from '@aws-cdk/aws-lambda-event-sources';
import * as iam from '@aws-cdk/aws-iam';
import events = require('@aws-cdk/aws-events');
import targets = require('@aws-cdk/aws-events-targets');
import type { EnvType } from '../types/api-stack';
import SecretManager from './secret-manager';
import Lambda from './lambda';
import Route53 from './route53';
import HttpApi from './http-api';
import DbConfig from './dynamodb';

const params: EnvType = {
  ApiStaging: {
    ...
  },
  ApiProduction: {
    ...
  }
};

export class ApiStack extends cdk.Stack {
  constructor(scope: cdk.App, id: "ApiStaging" | "ApiProduction", props?: cdk.StackProps) {
    super(scope, id, props);

    const envParams = params[id];
    const { ENV, route53Params, httpApiParams, dynamoDBTablePrefix } = envParams;

    const secretParams = SecretManager(this, envParams.secretManager.arn, envParams.secretManager.keys);

    const layerCommon: lambda.ILayerVersion = new lambda.LayerVersion(this, "LayerCommon", {
      compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
      code: lambda.Code.fromAsset('layer/common'),
    });
    
    const {
      testEvent, test2Event
    } = Lambda(this, { ENV });

    const TestEventLambda = testEvent([layerCommon]);
    const Test2EventLambda = test2Event([layerCommon]);

    Test2EventLambda.addEnvironment('testLambdaArn', TestEventLambda.functionArn);

    [ TestEventLambda, Test2EventLambda ].forEach((lambdaFunc) => {
      const envs: {[key: string]: string} = { secretToken: secretParams.secretToken };
      Object.keys(envs).forEach((key) => {
        lambdaFunc.addEnvironment(key, envs[key]);
      });
    });

    [ TestEventLambda, Test2EventLambda ].forEach((lambdaFunc) => {
      lambdaFunc.addToRolePolicy(new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['dynamodb:*'],
        resources: ['*']
      }));
    });

    const { domainName } = Route53(this, route53Params);
    
    const eventLambdaMapping = {
      testEvent: TestEventLambda,
      test2Event: Test2EventLambda,
    }
    HttpApi(this, domainName, eventLambdaMapping, httpApiParams);

    const DynamoDB = DbConfig(this, { dynamoDBTablePrefix, ENV });
  }
}

This skeleton is our ApiStack. We define each AWS service in a separate file and import them into ApiStack. Let's walk through it from top to bottom. First, the EnvType import and the params constant in lib/api-stack.ts define the type for our envParams.
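Stripped of CDK specifics, the per-environment lookup in the constructor boils down to the following sketch (the ENV values here are illustrative, not the real params):

```typescript
// Minimal sketch: the stack id passed to the constructor selects the
// per-environment parameter set; values are illustrative only.
type Params = { ENV: string };

const params: { [id in 'ApiStaging' | 'ApiProduction']: Params } = {
  ApiStaging: { ENV: 'staging' },
  ApiProduction: { ENV: 'production' },
};

const id: 'ApiStaging' | 'ApiProduction' = 'ApiStaging';
const envParams = params[id];
// envParams now holds the staging parameter set
```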

types/api-stack.ts

export type ParamsType = {
  ENV: string,
  userPool: string,
  domainEndpoint: string,
  dynamoDBTablePrefix: string,
  route53Params: Route53EnvParam,
  httpApiParams: HttpApiEnvParam,
  secretManager: {
    arn: string,
    keys: string[]
  },
  cognitoParams: {
    clientId: string
  },
  fbPixelID: string,
}

export type EnvType = {
  ApiStaging: ParamsType,
  ApiProduction: ParamsType
}

Route53EnvParam and HttpApiEnvParam are just constant strings related to Route 53 and API Gateway. Note that in secretManager we hard-code the keys used in AWS Secrets Manager, and lib/secret-manager.ts consumes them through the SecretManager(...) call in lib/api-stack.ts.
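The post does not show those two types, but reconstructing them from how lib/route53.ts and lib/http-api.ts destructure their parameters, they might look roughly like this (a sketch, not the actual file; field shapes are inferred):

```typescript
// Sketch of the two referenced types, reconstructed from the
// destructuring in lib/route53.ts and lib/http-api.ts.
export type Route53EnvParam = {
  zoneName: string,
  certArn: string,
  domain: {
    id: string,
    domainName: string,
  },
  aRecord: {
    alias: {
      id: string,
      recordName: string,
    },
  },
};

export type HttpApiEnvParam = {
  id: string,
  jwtAudience: string[], // HttpJwtAuthorizer expects a list of audiences
  jwtIssuer: string,
  stageId: string,
  stageName?: string,    // lib/http-api.ts falls back to "dev"
};
```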

lib/secret-manager.ts

// import * as ssm from '@aws-cdk/aws-ssm';
import * as secretsmanager from '@aws-cdk/aws-secretsmanager';
import * as cdk from '@aws-cdk/core';

const secretManager = (scope: cdk.Construct, arn: string, keys: string[]): { [key: string]: string } => {
  const secret = secretsmanager.Secret.fromSecretCompleteArn(scope, 'ImportedSecret', arn);
  // secretValueFromJson returns a SecretValue; toString() yields the string token
  return keys.reduce((params, key) => ({ ...params, [key]: secret.secretValueFromJson(key).toString() }), {});
  /*
  const ssmParams: { [key: string]: string } = {};
  Object.keys(params).forEach(name => {
    const param = ssm.StringParameter.fromStringParameterAttributes(scope, name, {
      parameterName: params[name],
      // 'version' can be specified but is optional.
    });
    ssmParams[name] = param.stringValue;
  });
  return ssmParams
  */
};

export default secretManager;

The commented-out code used AWS Systems Manager (SSM) Parameter Store, but we have since switched to AWS Secrets Manager. As the code above shows, we simply reduce the keys array into a key-value object.
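Framework aside, that reduce is just the following pattern (mockSecret stands in for the Secrets Manager lookup and is illustrative only):

```typescript
// The same keys-to-object reduce, with a mock lookup in place of
// secret.secretValueFromJson(); mockSecret is illustrative only.
const mockSecret: { [key: string]: string } = {
  secretToken: 'tok-123',
  apiKey: 'key-456',
};

const keys = ['secretToken', 'apiKey'];

const secretParams = keys.reduce<{ [key: string]: string }>(
  (acc, key) => ({ ...acc, [key]: mockSecret[key] }),
  {}
);
// secretParams is now a plain key-value object built from the keys array
```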

One caveat: when you change a value in AWS Secrets Manager, you have to remove the corresponding environment variable from the Lambda and re-add it for the new value to take effect.

The LayerVersion block in lib/api-stack.ts defines a Lambda layer. The following image shows the layer folder. layer

Then we get the Lambda definitions from lib/lambda.ts and attach environment variables and IAM policies to them in the two forEach loops of lib/api-stack.ts.

lib/lambda.ts

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

const lambdas = (scope: cdk.Construct, envParams?: { [key: string]: string }) => {
  const funcDefaultProps = {
    runtime: lambda.Runtime.NODEJS_14_X,
    code: new lambda.AssetCode('src'),
    memorySize: 1536,
    environment: envParams
  };
  return {
    testEvent: (layers: lambda.ILayerVersion[]) => new lambda.Function(scope, 'testEvent', {
      ...funcDefaultProps,
      handler: 'controllers/test/index.handler',
      layers,
      tracing: lambda.Tracing.ACTIVE,
      timeout: cdk.Duration.seconds(10),
    }),
    test2Event: (layers: lambda.ILayerVersion[]) => new lambda.Function(scope, 'test2Event', {
      ...funcDefaultProps,
      handler: 'controllers/test2/index.handler',
      layers,
      tracing: lambda.Tracing.ACTIVE,
      timeout: cdk.Duration.seconds(10),
    }),
  }
};

export default lambdas;
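The idea in lib/lambda.ts is simply shared defaults spread into each function's props; it works independently of CDK. Here is a framework-free sketch (FunctionProps and makeFunctions are illustrative names, not part of the project):

```typescript
// Framework-free sketch of the defaults-plus-overrides pattern used
// in lib/lambda.ts; FunctionProps and makeFunctions are illustrative.
type FunctionProps = {
  runtime: string,
  memorySize: number,
  handler?: string,
  timeout?: number,
};

const makeFunctions = () => {
  const funcDefaultProps = { runtime: 'nodejs14.x', memorySize: 1536 };
  return {
    testEvent: (): FunctionProps => ({
      ...funcDefaultProps,
      handler: 'controllers/test/index.handler',
      timeout: 10,
    }),
    test2Event: (): FunctionProps => ({
      ...funcDefaultProps,
      handler: 'controllers/test2/index.handler',
      timeout: 10,
    }),
  };
};

const { testEvent } = makeFunctions();
const props = testEvent();
// props combines the shared defaults with the per-function overrides
```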

Lastly, Route53, HttpApi, and DbConfig follow the same structure as lib/lambda.ts. We set up DNS, the domain name, and the alias record in lib/route53.ts, the authorizer and routes in lib/http-api.ts, and the tables and global secondary indexes in lib/dynamodb.ts.

lib/route53.ts

import * as cdk from '@aws-cdk/core';
import * as apigw from '@aws-cdk/aws-apigatewayv2';
import * as route53 from '@aws-cdk/aws-route53';
import * as acm from '@aws-cdk/aws-certificatemanager';
import * as targets from '@aws-cdk/aws-route53-targets';
import type { Route53EnvParam } from '../types/api-stack'

const Route53 = (scope: cdk.Construct, envParams: Route53EnvParam) => {
  const { zoneName, certArn, domain, aRecord } = envParams;
  const zone = route53.HostedZone.fromHostedZoneAttributes(scope, 'HostedZone', {
    zoneName,
    hostedZoneId: 'YOUR_ID'
  });
  const domainName = new apigw.DomainName(scope, domain.id, {
    domainName: domain.domainName,
    certificate: acm.Certificate.fromCertificateArn(scope, 'cert', certArn),
  })
  new route53.ARecord(scope, aRecord.alias.id, {
    zone,
    recordName: aRecord.alias.recordName,
    target: route53.RecordTarget.fromAlias(new targets.ApiGatewayv2Domain(domainName)),
  });

  return { domainName };
};

export default Route53

lib/http-api.ts

import * as cdk from '@aws-cdk/core';
import * as apigw from '@aws-cdk/aws-apigatewayv2';
import { CorsHttpMethod } from '@aws-cdk/aws-apigatewayv2/lib/http/api';
import * as apigwIntergration from '@aws-cdk/aws-apigatewayv2-integrations';
import * as authorizers from '@aws-cdk/aws-apigatewayv2-authorizers';
import * as lambda from '@aws-cdk/aws-lambda';
// import * as cognito from '@aws-cdk/aws-cognito';
import type { HttpApiEnvParam } from '../types/api-stack'

const HttpApi = (
  scope: cdk.Construct,
  domainName: apigw.DomainName,
  handlers: { [key: string]: lambda.Function },
  envParams: HttpApiEnvParam
) => {
  const { id, jwtAudience, jwtIssuer, stageId, stageName } = envParams;
  const authorizer = new authorizers.HttpJwtAuthorizer({
    jwtAudience,
    jwtIssuer
  });
  const httpApi = new apigw.HttpApi(scope, id, {
    createDefaultStage: false,
    corsPreflight: {
      allowHeaders: ['Authorization', 'Content-Type'],
      allowMethods: [CorsHttpMethod.GET, CorsHttpMethod.HEAD, CorsHttpMethod.OPTIONS, CorsHttpMethod.POST, CorsHttpMethod.PUT, CorsHttpMethod.DELETE],
      allowOrigins: ['*'],
    },
  })

  httpApi.addStage(stageId, {
    autoDeploy: true,
    domainMapping: {
      domainName,
    },
    stageName: stageName || "dev"
  });
  httpApi.addRoutes({
    path: '/test',
    methods: [apigw.HttpMethod.GET, apigw.HttpMethod.PUT],
    integration: new apigwIntergration.LambdaProxyIntegration({
      handler: handlers["testEvent"],
    }),
  });
  httpApi.addRoutes({
    path: '/test2',
    methods: [apigw.HttpMethod.GET, apigw.HttpMethod.PUT],
    integration: new apigwIntergration.LambdaProxyIntegration({
      handler: handlers["test2Event"],
    }),
  });
};
export default HttpApi;
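The handlers argument ties each route to a Lambda by key. Stripped of the API Gateway calls, the wiring is just a lookup (the handler names below are illustrative):

```typescript
// Sketch: each route resolves its handler from the mapping by key,
// mirroring handlers["testEvent"] in lib/http-api.ts; names are illustrative.
const handlers: { [key: string]: string } = {
  testEvent: 'TestEventLambda',
  test2Event: 'Test2EventLambda',
};

const routes = [
  { path: '/test', handlerKey: 'testEvent' },
  { path: '/test2', handlerKey: 'test2Event' },
];

const wired = routes.map((r) => ({ path: r.path, handler: handlers[r.handlerKey] }));
// each route is now paired with the Lambda it integrates with
```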

lib/dynamodb.ts

import * as cdk from '@aws-cdk/core';
import { StreamViewType, Table, AttributeType as DbAttributeType, BillingMode } from '@aws-cdk/aws-dynamodb';

// Remember to return each Table you create so other constructs can reference it
const DbConfig = (scope: cdk.Construct, { dynamoDBTablePrefix, ENV }: { dynamoDBTablePrefix: string, ENV: string }) => {
  const TestTable = new Table(scope, 'TestTable', {
    partitionKey: { name: 'target', type: DbAttributeType.STRING },
    sortKey: { name: 'id', type: DbAttributeType.STRING },
    // stream: StreamViewType.NEW_IMAGE,
    tableName: `${dynamoDBTablePrefix}_Tests`,
    billingMode: BillingMode.PAY_PER_REQUEST,
    pointInTimeRecovery: ENV === 'production',
  });
  TestTable.addGlobalSecondaryIndex({
    indexName: "TestGSI_target_created_at",
    partitionKey: { name: 'target', type: DbAttributeType.STRING },
    sortKey: {
      name: "created_at",
      type: DbAttributeType.NUMBER
    }
  });
  
  // Get EventSource from table and stream it to lambda
  return { TestTable }
};

export default DbConfig;
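Because the table name is built from dynamoDBTablePrefix, each environment gets its own physical tables. A minimal sketch of that naming (prefix values are illustrative):

```typescript
// Sketch: the per-environment prefix yields distinct table names,
// matching `${dynamoDBTablePrefix}_Tests` in lib/dynamodb.ts.
const tableNameFor = (dynamoDBTablePrefix: string): string =>
  `${dynamoDBTablePrefix}_Tests`;

const stagingTable = tableNameFor('Staging');       // e.g. "Staging_Tests"
const productionTable = tableNameFor('Production'); // e.g. "Production_Tests"
```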

That’s our CDK project structure. If your ApiStack file has grown too long, you may want to try this setup.
