@agmcleod
Created March 10, 2022 16:58
NestJS notes

NestJS

For getting started with NestJS I highly recommend looking over the official documentation. The Overview and Fundamentals sections cover a lot of great information about how one builds a backend application with NestJS: https://docs.nestjs.com/first-steps.

It will also cover adding dependencies for various features like configuration, TypeORM, GraphQL via Apollo, etc.

When setting up the project via nest new <project-name>, it will create a starter project along with a git repository. If your git is configured with a default branch of master, be sure to change the branch of the nest project to something more inclusive like main or dev. Also be sure to update your git configuration to use this by default:

git config --global init.defaultBranch main

Linting

By default nest will generate a project including prettier & eslint configs. If you wish to turn off required semicolons, update .prettierrc:

{
  "semi": false, // new line
  // etc
}

And add it as an eslint rule to the .eslintrc.js

{
  rules: {
    // ...
    semi: 'off',
  },
  // ...
}

Docker

Create a few files that we can populate for different environments:

docker-compose.yml
docker-compose.test.yml
Dockerfile
Dockerfile.dev
Dockerfile.test
.env

Set up the docker-compose.yml file for development:

version: '3.7'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    restart: on-failure
    depends_on:
      - db
    environment:
      - TYPEORM_DATABASE
      - TYPEORM_HOST
      - TYPEORM_PORT
      - TYPEORM_ENTITIES
      - TYPEORM_USERNAME
    ports:
      - 8080:8080
    volumes:
      - .:/api
      - /api/dist
      - /api/node_modules

  db:
    # Uses default port 5432
    image: postgres:12-alpine
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
      POSTGRES_DB: api_dev
    ports:
      - 5432:5432
    restart: always

We set up the api & db containers, using port 8080 for the API, and defining the variables needed for a database connection. The ones listed are specific to TypeORM, but you can adjust them for another database library, or use a connection string. The dist & node_modules folders are set up as local-only volumes, so build data & node_modules won't be shared with the host system. The TypeORM section will show populating the .env file with the values for the docker-compose.yml file.

Since we've set the port to 8080, be sure to update the port specified in src/main.ts.
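The generated bootstrap file only needs its listen call changed; a minimal sketch of src/main.ts with the port swapped to 8080 (based on the default template):

```typescript
// src/main.ts
import { NestFactory } from '@nestjs/core'

import { AppModule } from './app.module'

async function bootstrap() {
  const app = await NestFactory.create(AppModule)
  // match the port exposed in docker-compose.yml and Dockerfile.dev
  await app.listen(8080)
}
bootstrap()
```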

To ensure our dist & node_modules folders from the host machine don't copy into the container, create a .dockerignore file and add those.

dist/*
node_modules/

Populate the Dockerfile.dev

FROM node:16
WORKDIR /api
COPY . /api
EXPOSE 8080
CMD yarn && yarn start:dev

The test files are pretty similar to the dev ones, but do not expose the database or API to the host machine. Set up docker-compose.test.yml:

version: '3.7'
services:
  test_api:
    build:
      context: .
      dockerfile: Dockerfile.test
    depends_on:
      - test_db
    stdin_open: true
    environment:
      - TYPEORM_ENTITIES=src/**/*.entity{.ts,.js}
      - TYPEORM_DATABASE=app_api_test
      - TYPEORM_HOST=test_db
      - TYPEORM_PORT=5432
      - TYPEORM_USERNAME=postgres
      - TYPEORM_PASSWORD=password
      - JWT_SIGN_KEY=8b37634be1b439466ad28cd4078a402f6f7a39ae127f0c98ab8f50498435ad75
      - TYPEORM_LOGGING=false
    volumes:
      - .:/api
      - /api/dist
      - /api/node_modules
    networks:
      - test_app_net

  test_db:
    # Uses default port 5432
    image: postgres:12-alpine
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: app_api_test
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      - test_app_net

networks:
  test_app_net:
    name: 'test_app_net'

The test_api container is set up to stay running (stdin_open: true keeps it alive), so you can run tests as you go.

Then populate Dockerfile.test

FROM node:16
WORKDIR /api
COPY . /api

To run the tests, dependencies will need to be installed. The recommended option is to use a Makefile, giving you a short command to run:

SHELL := /bin/bash

test:
	docker compose -f docker-compose.test.yml exec test_api bash -c "yarn && yarn test:e2e"

.PHONY: test

So once you start the test container, you can then run make test whenever you make code changes.

Finally, the main Dockerfile:

FROM node:16

WORKDIR /api

COPY . .

RUN yarn
ENV PATH /api/node_modules/.bin:$PATH
RUN yarn build

EXPOSE 8080

CMD [ "sh", "-c", "node ./env.js && yarn start:prod" ]

This is for running in a production environment. The env.js file pulls the environment variables from the cloud into a .env file that the application can read to inject into the environment.

Config

Have a look at the details page on the official documentation: https://docs.nestjs.com/techniques/configuration. The default module ConfigModule.forRoot() will load values injected into the .env file, as well as process.env, and make the values available via the ConfigService. The TypeORM section below goes into some additional configuration to work with TypeORM.

TypeORM

To work with the database, NestJS provides a package for TypeORM. You could also use any other database library like knex or sequelize if preferred. This section will cover setting up TypeORM with NestJS.

The default options from NestJS's documentation at the time of writing are:

{
  "type": "mysql",
  "host": "localhost",
  "port": 3306,
  "username": "root",
  "password": "root",
  "database": "test",
  "entities": ["dist/**/*.entity{.ts,.js}"],
  "synchronize": true
}

They show an example of setting up the configuration by passing it to the typeorm module in the typescript file. Because we want to make use of the typeorm-cli, we need to use a JS file for the config instead.

Note that the above config defaults to synchronize: true. This is a useful feature in local development, but not so much for deploying. It reads the entity files and determines how the schema should look, so it will update/add/delete columns as you make changes to the entities. This means if synchronize: true were set in our hosted environments, we could lose data. Since we deploy to our staging environments on pull request merge, using migrations is the better approach. So let's set up TypeORM to use migrations instead.

Create a file ormconfig.js in the root of the project (same folder as package.json), and set it up as follows (add options to this as per the needs of your project):

module.exports = {
  type: 'postgres',
  host: 'localhost',
  port: 5432,
  database: process.env.TYPEORM_DATABASE,
  entities: [process.env.TYPEORM_ENTITIES],
  migrations: ['dist/migration/*.js'],
  cli: {
    migrationsDir: 'migration',
  },
  synchronize: process.env.NODE_ENV === 'test',
}

Because it is a JS file, we can use process.env to include environment variables. The typeorm-cli will automatically parse the environment variables it looks for and add them to process.env. See https://typeorm.io/#/using-ormconfig/using-environment-variables for a full list. Note that for the synchronize option, we specify true for the test environment, so you don't need to worry about migrations when it comes to running the tests.

To setup the env variables for the above config, create a .env file and a .env.test file.

# .env
TYPEORM_DATABASE=todonest
TYPEORM_ENTITIES=dist/**/*.entity{.ts,.js}
# .env.test
TYPEORM_DATABASE=todonest-test
TYPEORM_ENTITIES=src/**/*.entity{.ts,.js}

Specify separate database names so the test database's synchronize operations won't touch your local development data. The entities path is also different for each environment, as the CLI needs to use the compiled files, whereas the test runtime with synchronize uses the TypeScript entities. Note that any TYPEORM values you define in .env will also be read in the test environment, so if you need something in .env overridden for tests, define the override in .env.test. The above example just specifies the database name, and relies on defaults to connect.

Next open up the app.module.ts file, and setup the modules for typeorm as well as nestjs config (be sure to add the @nestjs/config dependency to your project):

import { TypeOrmModule } from '@nestjs/typeorm'
import { ConfigModule } from '@nestjs/config'

@Module({
  imports: [
    TypeOrmModule.forRoot(),
    ConfigModule.forRoot({
      envFilePath: [`.env.${process.env.NODE_ENV}`, '.env'],
    }),
    // etc
  ],
  // etc
})
export class AppModule {}

The NODE_ENV entry needs to be first, so its values take precedence over the .env defaults. Note that this helps to ensure the NestJS application loads either the local or the test db. The typeorm-cli in this setup will always parse the .env file and then use the ormconfig.js, meaning the typeorm-cli will not touch the test database.
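The precedence rule can be illustrated with a small sketch (not NestJS code, just the merge behaviour): for each variable, the first file in envFilePath that defines it wins, and anything it doesn't define falls through to later files.

```typescript
// Sketch of @nestjs/config's envFilePath precedence: earlier files win.
type EnvVars = Record<string, string>

function mergeEnvFiles(files: EnvVars[]): EnvVars {
  const merged: EnvVars = {}
  for (const file of files) {
    for (const [key, value] of Object.entries(file)) {
      // only take the value if an earlier file hasn't already defined it
      if (!(key in merged)) {
        merged[key] = value
      }
    }
  }
  return merged
}

// .env.test first, then .env: the test db name wins, other defaults fall through
const envTest = { TYPEORM_DATABASE: 'todonest-test' }
const env = {
  TYPEORM_DATABASE: 'todonest',
  TYPEORM_ENTITIES: 'dist/**/*.entity{.ts,.js}',
}
const result = mergeEnvFiles([envTest, env])
```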

Provided you have databases created, and you have created at least one entity file, you should be good to start. Create a build of the server via yarn build, and once it's done run yarn typeorm migration:generate -n InitialMigration. This will create a new migration to create the table(s) for your entities. Then run yarn typeorm migration:run to have it run the migration files against your development database.

For running tests that use your db, you don't need to run any additional commands; the synchronize option will do the work for you.

As you make changes to the entity files or create new ones, you'll need to use the generate command again to create the migration files.

Model relationships

The docs have a good example on setting this up: https://typeorm.io/#/many-to-one-one-to-many-relations. However, if you want to access the foreign key column in your TypeScript code, you need to define that as well. By default it will expect the column to be <model>Id for the given relationship.

@ManyToOne(() => User, user => user.photos)

When using the typeorm cli to generate a migration, it will create a migration file with the field userId as the foreign key. If you wish to use it in the TypeScript code, add it as a field to the entity:

@Column('int')
userId: number

@ManyToOne(() => User, (user) => user.questions)
@Field(() => User) // @Field is only needed if you expose the relation via GraphQL
user: User

KnexJS

If you prefer to use Knex over TypeORM, it's pretty straightforward to set up, though NestJS doesn't have any specific packages for it.

Add the knex dependency:

yarn add knex

Create a file knexfile.js in the root of the repository, so the knex command line can access the db.

module.exports = {
  client: 'pg',
  connection: {
    host: process.env.DATABASE_HOST,
    user: process.env.DATABASE_USERNAME,
    database: process.env.DATABASE_NAME,
  },
}

Update the docker-compose.yml and docker-compose.test.yml files to use the new environment variables. Be sure to delete any leftover TYPEORM ones, as they are no longer needed.
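For example, the environment section of the api service would list the new names from the knexfile (add a DATABASE_PASSWORD entry if your local setup requires one):

```yaml
  api:
    environment:
      - DATABASE_HOST
      - DATABASE_USERNAME
      - DATABASE_NAME
```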

Provided you set up the ConfigModule as per the config section of this document, knex will have access to the environment variables in the application code. Create a knex module & service:

nest g module knex
nest g service knex

This will create three files: src/knex/knex.module.ts, src/knex/knex.service.ts, and the test file src/knex/knex.service.spec.ts for the service. You can delete the test file.

Populate the service with the following:

import { Injectable } from '@nestjs/common'
import { ConfigService } from '@nestjs/config'
import knex, { Knex } from 'knex'

@Injectable()
export class KnexService {
  knex: Knex

  constructor(private config: ConfigService) {
    this.knex = knex({
      client: 'pg',
      connection: {
        host: config.get('DATABASE_HOST'),
        user: config.get('DATABASE_USERNAME'),
        database: config.get('DATABASE_NAME'),
      },
    })
  }

  async beforeApplicationShutdown() {
    await this.knex.destroy()
  }
}

The module file can define the service and import the config module

import { Module, Global } from '@nestjs/common'
import { ConfigModule } from '@nestjs/config'

import { KnexService } from './knex.service'

@Global()
@Module({
  imports: [ConfigModule],
  providers: [KnexService],
  exports: [KnexService],
})
export class KnexModule {}

The generator command will add the KnexModule to AppModule for you.

In your application code, you can then query the database as you need by using KnexService. For example if your application has a table called users, create a UsersService that returns the results or creates records.

import { Injectable } from '@nestjs/common'

import { KnexService } from '../knex.service'
import { User } from './dtos/user.dto'

@Injectable()
export class UsersService {
  constructor(
    private knexService: KnexService,
  ) {}

  async getUsers(): Promise<User[]> {
    return this.knexService.knex('users')
  }

  async registerUser(username: string, password: string): Promise<number[]> {
    return this.knexService.knex('users').insert({
      username,
      // not secure in this case, normally would use bcrypt here, just an example
      password,
    })
  }
}

The ./dtos/user.dto file exports a class with some properties to use for type definitions:

export class User {
  id: number
  username: string
  password: string
}

Production Build

Update the start:prod script in package.json to reference the folder path:

"start:prod": "node dist/src/main",

That way the build & start:prod commands in the Dockerfile will work correctly.

Project structure

If you've done Angular development, how one organizes code will feel familiar. Under the src folder, you can create folders for different domains of the application. The sample directory in the NestJS repo shows a number of examples of this.

The easiest way to create files in a structure that matches NestJS's recommendations is to use the nest generate command (nest g for short). Here's how files will get laid out. A folder called users for example could contain:

  • Entity
    • Where the typeorm entity is defined
  • DTOs
    • Definitions of types used in request parameters, responses
    • Can create a sub folder to contain DTOs
  • Service
    • Where queries using the entity are defined.
    • Can also define 3rd party API calls, or calls to other services
  • Controller
    • The methods in here are decorated with the appropriate http verb, and then call the service to retrieve/send data.
  • Module, where all of the above get imported into, and the app.module then includes the user module.

Example: https://github.com/nestjs/nest/tree/master/sample/05-sql-typeorm/src/users
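Following that layout, a hypothetical users folder might look like (file names here are illustrative):

```
src/users/
├── dtos/
│   ├── create-user.dto.ts
│   └── user.dto.ts
├── user.entity.ts
├── users.controller.ts
├── users.module.ts
└── users.service.ts
```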

Issue with not seeing changes

If you have a controller such as:

import { Controller, Get } from '@nestjs/common'
import { AppService } from './app.service'

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    console.log('test') // new line
    return this.appService.getHello()
  }
}

Where you just added the console.log(), but you're not seeing it: double-check the output code in the dist folder. If the change is not in the compiled JS file, try cleaning the dist folder and restarting your dev server with yarn prebuild && yarn start:dev.

E2E tests

For testing backend APIs, you can add your controller & service tests to get into the nitty gritty, but integration tests can be nice to ensure your data layer and everything in between is working.

Files for testing end to end go under the test directory. NestJS will generate an app.e2e-spec.ts file you can use as a reference. The idea is to start up the app server and then query the endpoints via supertest. With some additional work, you can also have it work with the database. Create a new file in the test folder called helpers.ts.

// test/helpers.ts
import { INestApplication } from '@nestjs/common'
import { Test, TestingModule } from '@nestjs/testing'
import { Connection } from 'typeorm'

import { AppModule } from '../src/app.module'
import { User } from '../src/users/user.entity'
import { TodoItem } from '../src/todo-items/todo-item.entity'

export async function cleanDb(connection: Connection) {
  // clear each table of records
  await connection.createQueryBuilder().delete().from(TodoItem).execute()
  await connection.createQueryBuilder().delete().from(User).execute()
}

export async function createTestingModule(): Promise<{
  app: INestApplication
  connection: Connection
}> {
  const moduleFixture: TestingModule = await Test.createTestingModule({
    imports: [AppModule],
  }).compile()

  const app = moduleFixture.createNestApplication()
  await app.init()

  const connection = app.get(Connection)

  return {
    app,
    connection,
  }
}

This defines a couple of helper functions: one for emptying the database, and another for creating the test server. You can use these in a test suite like so:

// test/app.e2e-spec.ts

import { INestApplication } from '@nestjs/common'
import * as request from 'supertest'
import { Connection } from 'typeorm'

import { createTestingModule, cleanDb } from './helpers'

describe('AppController (e2e)', () => {
  let app: INestApplication
  let connection: Connection

  beforeAll(async () => {
    const ref = await createTestingModule()
    app = ref.app
    connection = ref.connection
  })

  beforeEach(async () => {
    await cleanDb(connection)
  })

  afterAll(async () => {
    await app.close()
  })
  
  it('test the thing', async () => {
  })
})

The idea is that for the test suite we start up the server once, and close it when the suite completes. Before each test case the db is cleared, ensuring data from other tests in the db doesn't pollute the running test case.

In the test itself, you can then use the connection object to create records.
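For example, a hypothetical test case (assuming the User entity from the helpers file is also imported into the spec, and that the app exposes a GET /users endpoint; this is a sketch, not tied to a specific project):

```typescript
it('lists users', async () => {
  // seed a record directly through the typeorm connection
  await connection
    .createQueryBuilder()
    .insert()
    .into(User)
    .values({ username: 'alice', password: 'not-a-real-hash' })
    .execute()

  const response = await request(app.getHttpServer()).get('/users')

  expect(response.status).toBe(200)
  expect(response.body).toHaveLength(1)
  expect(response.body[0].username).toBe('alice')
})
```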

Running the tests

If you're using the database, having the tests run in parallel will be an issue. Add the flag --runInBand to the script in package.json, so the tests run sequentially.

"test:e2e": "jest --config ./test/jest-e2e.json --runInBand",

JS file warnings

When upgrading jest from 27.2 to 27.3, warnings would output for js files in the dist folder. We can tell the Jest TypeScript transform to ignore these. Update the jest section in the package.json file, and add the same to the test/jest-e2e.json file:

"transformIgnorePatterns": ["/node_modules", "\\.pnp\\.[^\\/]+$", "/dist"]

The value there uses items from the default option specified in the docs, and adds /dist.

It may also warn about any js files you have in your code base. You can either update the transform option in the two jest configs to exclude js files:

"transform": {
  "^.+\\.ts$": "ts-jest"
},

Or you can update tsconfig.json, and add an option to the compilerOptions:

"allowJs": true

Validating Inputs

When you define a DTO, NestJS has good examples on how to set up validations: https://docs.nestjs.com/techniques/validation, as well as the details of the class-validator package to know what options are available: https://www.npmjs.com/package/class-validator. This is great for simple sync validations, but gets a little more complicated for async/custom validations.

If you need to do async validations against the database, or custom ones that class-validator doesn't provide, you'll need to create a custom validator class & set it up for dependency injection.

In main.ts add:

import { useContainer } from 'class-validator' // <-- new import

async function bootstrap() {
  const app = await NestFactory.create(AppModule)
  app.useGlobalPipes(new ValidationPipe())
  useContainer(app.select(AppModule), { fallbackOnErrors: true }) // <-- new line
  await app.listen(3000)
}

Create a new file to set up the validator:

// src/<module-name>/validators/validate-unique-username.ts
import { Injectable } from '@nestjs/common'
import {
  registerDecorator,
  ValidationOptions,
  ValidatorConstraint,
  ValidatorConstraintInterface,
} from 'class-validator'

import { UsersService } from '../users.service'

@Injectable()
@ValidatorConstraint({ async: true })
export class ValidateUniqueUsernameConstraint
  implements ValidatorConstraintInterface
{
  constructor(private usersService: UsersService) {}

  async validate(userName: string) {
    const user = await this.usersService.findByUsername(userName)

    return !user
  }
}

export function ValidateUniqueUsername(validationOptions?: ValidationOptions) {
  return function (object: any, propertyName: string) {
    registerDecorator({
      target: object.constructor,
      propertyName: propertyName,
      options: validationOptions,
      constraints: [],
      validator: ValidateUniqueUsernameConstraint,
    })
  }
}

Then add it as a decorator to the property in the DTO:

import { IsNotEmpty } from 'class-validator'

import { ValidateUniqueUsername } from '../validators/validate-unique-username'

export class NewUser {
  @IsNotEmpty()
  @ValidateUniqueUsername({
    message: 'User $value already exists. Choose another name.',
  })
  username: string
  @IsNotEmpty()
  password: string
}

And add the constraint class to the module that has a DTO using the validation:

import { ValidateUniqueUsernameConstraint } from './validators/validate-unique-username'

@Module({
  providers: [
    ValidateUniqueUsernameConstraint,
    // ... other providers
  ],
  controllers: [UsersController],
  exports: [UsersService],
})
export class UsersModule {}

This is a fair bit of code for doing a unique name check. You could instead add a simple function to the controller and call it from the endpoint's method. However, this approach will allow you to keep error messages consistent if you leverage class-validator for synchronous validations.

GraphQL

You can use decorators on your data models/typeorm entities to define fields. Create resolvers, separate from controllers, that use the NestJS GraphQL decorators; these create the queries for the GraphQL service. https://docs.nestjs.com/graphql/resolvers

For authentication you can continue to use the guards detailed in NestJS's documentation, which has some extra details on how to inject the current user into the request details for a GraphQL query. https://docs.nestjs.com/security/authentication#graphql

Be sure to look at using https://github.com/graphql/dataloader to optimize fetching data by individual IDs. Resolving child records in GraphQL can lead to a lot of N+1 queries when you're fetching full lists of data.
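The core trick behind dataloader can be sketched in a few lines (this is the batching idea only, not the real package's API): load() calls made in the same tick are collected, and one batch function call resolves them all.

```typescript
// Minimal sketch of dataloader-style batching. Keys requested in the same
// tick are queued, then resolved together by a single batch function call.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (value: V) => void }[] = []
  private scheduled = false

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise<V>((resolve) => {
      this.queue.push({ key, resolve })
      if (!this.scheduled) {
        this.scheduled = true
        // flush after every load() in the current tick has queued its key
        process.nextTick(() => this.flush())
      }
    })
  }

  private async flush() {
    const batch = this.queue
    this.queue = []
    this.scheduled = false
    const values = await this.batchFn(batch.map((item) => item.key))
    batch.forEach((item, index) => item.resolve(values[index]))
  }
}

// Resolving many child users now issues a single query instead of one each:
const userLoader = new TinyLoader<number, { id: number }>(async (ids) => {
  // a single `WHERE id IN (...)` query would go here; stubbed for the sketch
  return ids.map((id) => ({ id }))
})
```

The real dataloader package adds per-key caching and error handling on top of this, so prefer it over hand-rolling.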
