Angular Architecture... so far.

App Engine Architecture (so far...)

I'm going to try to explain how I approach a new Angular Project

Clean Repo

The first step is to create a repository where our code will live (GitHub, GitLab, wherever) and clone it while it's still empty.

Docker 🐳

Then we must have the project within a Docker container. How do we do that?

VSCode has the extension ms-vscode-remote.remote-containers which allows us to work seamlessly with code inside a Docker container. Once we install the extension we'll have the option to "Add development container configuration files..."

Then we have to choose the container configuration definition, which is the base image for our container. We can choose to have only the OS, or an image with some tools already installed and prepared by Microsoft; in our case we are going to use "Node.js & TypeScript". After that, it will ask us for the Node version, which depends on whether we are going to work with Firebase Functions or not (check the docs). Then we can add extra features such as the GitHub CLI, the Azure CLI, or even a desktop environment with Fluxbox. We are going to skip this step.

Now we are going to have two new files: devcontainer.json and Dockerfile.

DevContainer

devcontainer.json is the file that VS Code uses to store the container's extra configuration on top of Docker's.

We are going to change the "name" to our project's name and then add the extensions we'll use in the project to the "extensions" section. I usually use:

  • dbaeumer.vscode-eslint
  • eamodio.gitlens
  • mhutchie.git-graph
  • oouo-diogo-perdigao.docthis
  • vivaxy.vscode-conventional-commits
  • Git User Profiles Config
  • MS-vsliveshare.vsliveshare-pack (Can be recommended inside .vscode/extensions.json [recommendations])
  • obenjiro.arrr (optional)
  • mutantdino.resourcemonitor (optional)
"customizations": {
  "vscode": {
    "extensions": [
      "dbaeumer.vscode-eslint",
      "mhutchie.git-graph",
      "oouo-diogo-perdigao.docthis",
      "vivaxy.vscode-conventional-commits",
      "obenjiro.arrr",
      "usernamehw.errorlens",
      "mutantdino.resourcemonitor",
      "GitHub.copilot",
      "anaer.git-user-profiles-config"
    ]
  }
},

Also, it's very important to add "shutdownAction": "stopContainer" to this file so it shuts down the container once the project (window) is closed.
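
For reference, a trimmed devcontainer.json combining the pieces so far might look roughly like this (the project name and Dockerfile reference are placeholders; devcontainer.json accepts // comments):

{
  // Project name shown by VS Code
  "name": "my-project",
  // Build the container from the Dockerfile next to this file
  "build": { "dockerfile": "Dockerfile" },
  // Stop the container when the VS Code window is closed
  "shutdownAction": "stopContainer",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint", "mhutchie.git-graph"]
    }
  }
}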

To force Yarn to install packages every time the container is built, we must add this to our devcontainer.json:

"postCreateCommand": "yarn --cwd ${containerWorkspaceFolder}/frontend install --non-interactive --silent && yarn --cwd ${containerWorkspaceFolder}/functions install --non-interactive --silent",

SSH

To bring the SSH keys from WSL into the container, just add this mounts property:

"mounts": [
  "source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/node/.ssh,type=bind,consistency=cached"
],

Dockerfile

Dockerfile is the base configuration for the container.

These two lines install the Google Cloud CLI.


RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends apt-transport-https ca-certificates gnupg 

RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | tee /usr/share/keyrings/cloud.google.gpg \
    && apt-get update -y \
    && apt-get install google-cloud-sdk -y

Now we are ready for the first build. It will take a while, but later builds will be faster because each step of the build process is cached.

Here we only set the packages we want installed globally inside the container, such as the Angular CLI, Firebase Tools, dotenv and cross-var.

I suggest pinning the Angular CLI version to avoid having a newer CLI compile code created with an older version.

RUN su node -c "npm install --location=global yarn"
RUN su node -l -c "yarn global add @angular/cli firebase-tools cross-var dotenv-cli localtunnel"

Environments

Usually we work in at least two environments, and we want our application to work in any of them.

So we create a folder called "environments" in the project's root and, inside of it, create a JSON file for each environment we have. Then, in the package.json file, we add one of these scripts for each environment, replacing {{projectId}} with the actual Google Cloud project id.

"deploy:dev": "gcloud config set project {{projectId}} && cp ./environments/dev.json env.json && gcloud app deploy",

"deploy:uat": "gcloud config set project {{projectId}} && cp ./environments/uat.json env.json && gcloud app deploy",

"deploy:prod": "gcloud config set project {{projectId}} && cp ./environments/prod.json env.json && gcloud app deploy",

So when we run npm run deploy:dev, it sets the gcloud project to the development environment, copies the environment variables we need and then deploys to App Engine.

It's very important to decide which env files to check into version control.
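
For illustration, a hypothetical environments/dev.json could look something like this (the keys are placeholders; use whatever your app actually reads from env.json):

{
  "projectId": "my-project-dev",
  "apiBaseUrl": "https://my-project-dev.appspot.com",
  "logLevel": "debug"
}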

Alternative (better, but still to be tested) approach.

Use Secret Manager

  1. Create a Secret: firebase functions:secrets:set SECRET_NAME

  2. Enter a value for SECRET_NAME. The CLI echoes a success message and warns that you must deploy functions for the change to take effect.

  3. Before deploying, make sure your functions code allows the function to access the secret using the runWith parameter:

exports.processPayment = functions
  // Make the secret available to this function
  .runWith({ secrets: ["SECRET_NAME"] })
  .onCall((data, context) => {
    const myBillingService = initializeBillingService(
      // reference the secret value
      process.env.SECRET_NAME
    );
    // Process the payment
  });
  4. Deploy Cloud Functions: firebase deploy --only functions

Managing secrets

Use the Firebase CLI to manage your secrets. While managing secrets this way, keep in mind that some CLI changes require you to modify and/or redeploy associated functions. Specifically:

  • Whenever you set a new value for a secret, you must redeploy all functions that reference that secret for them to pick up the latest value.
  • If you delete a secret, make sure that none of your deployed functions references that secret. Functions that use a secret value that has been deleted will fail silently.

Here's a summary of the Firebase CLI commands for secret management:

# Change the value of an existing secret
firebase functions:secrets:set SECRET_NAME

# View the value of a secret
firebase functions:secrets:access SECRET_NAME

# Destroy a secret
firebase functions:secrets:destroy SECRET_NAME

# View all secret versions and their state
firebase functions:secrets:get SECRET_NAME

# Automatically clean up all secrets that aren't referenced by any of your functions
firebase functions:secrets:prune

How secrets are billed

Secret Manager allows 6 active secret versions at no cost. This means that you can have 6 secrets per month in a Firebase project at no cost.

By default, the Firebase CLI attempts to automatically destroy unused secret versions where appropriate, such as when you deploy functions with a new version of the secret. Also, you can actively clean up unused secrets using functions:secrets:destroy and functions:secrets:prune.

Secret Manager allows 10,000 unbilled monthly access operations on a secret. Function instances read only the secrets specified in their runWith parameter every time they cold start. If you have a lot of function instances reading a lot of secrets, your project may exceed this allowance, at which point you'll be charged $0.03 per 10,000 access operations.

For more information, see Secret Manager Pricing.


Generate keys

When developing locally, you will need your service account's keys so the app can work on your behalf, so you can add the following script to package.json to generate them every time you need them.

We must replace {{projectId}} with the actual Google Cloud project id.

"generateKeys": "gcloud iam service-accounts keys create GOOGLE_CREDENTIALS.json --iam-account={{projectId}}@appspot.gserviceaccount.com && export GOOGLE_APPLICATION_CREDENTIALS=\"GOOGLE_CREDENTIALS.json\""

It's very important to add GOOGLE_CREDENTIALS.json to your .gitignore file.
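
As a sketch of how the generated key might be consumed locally, assuming firebase-admin is a project dependency and GOOGLE_APPLICATION_CREDENTIALS points at GOOGLE_CREDENTIALS.json (keep in mind the export in the npm script only lasts for that script's shell, so you may need to set the variable in your own shell or .env):

import * as admin from 'firebase-admin';

// With GOOGLE_APPLICATION_CREDENTIALS set, applicationDefault() resolves
// to the key file generated by the "generateKeys" script.
admin.initializeApp({
  credential: admin.credential.applicationDefault(),
});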

It's recommended to use DotEnv Vault for handling env variables.

New project

When starting a project we have to create a new project in dotenv-vault by running npx dotenv-vault new {{project name}}; it may ask us to log in first with npx dotenv-vault login.

Pulling and Updating env files

To get the latest version of the .env file, the command is npx dotenv-vault pull [ENVIRONMENT=development]. To push the latest version of the .env file, the command is npx dotenv-vault push [ENVIRONMENT=development].

Automatic ChangeLog

It would be awesome if we could bump versions and have the changelog generated in one command, right?! This is what we will achieve here thanks to Standard Version.

We just have to install it as a dev dependency (yarn add --dev standard-version) and then create an npm script with the command "release": "standard-version && git push origin main --follow-tags".

Detailed steps

  1. Install standard-version and commitlint yarn add --dev standard-version @commitlint/{cli,config-conventional}
  2. (optional) if you don't have husky installed, yet (check Git Hooks and Husky), run: npx husky-init && npm install && npx husky add .husky/commit-msg 'npx commitlint --edit "$1"'
  3. Create a .commitlintrc.json file with this content (to follow the rules from config-conventional)
{
  "extends": ["@commitlint/config-conventional"]
}
  4. Make a git commit with an invalid format git commit -m "invalid message" to make sure it displays an error.
  5. Correct the message and try again git commit -m "feat: initial feature commit"
  6. Inside of our scripts section in package.json we should add:
 "scripts": {
    ...
    "release": "standard-version",
    "release:minor": "standard-version --release-as minor",
    "release:patch": "standard-version --release-as patch",
    "release:major": "standard-version --release-as major"
    ...
  },
  7. To configure what goes into the changelog, we create a .versionrc.json file (or a standard-version section in package.json) with this content. Be sure to replace the example repository URLs with your own; {{hash}}, {{previousTag}} and {{currentTag}} are placeholders that standard-version fills in.
{
  "types": [
    {"type": "feat", "section": "Features"},
    {"type": "fix", "section": "Bug Fixes"},
    {"type": "chore", "hidden": true},
    {"type": "docs", "hidden": true},
    {"type": "style", "hidden": true},
    {"type": "refactor", "hidden": true},
    {"type": "perf", "hidden": true},
    {"type": "test", "hidden": true}
  ],
  "commitUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/commits/{{hash}}",
  "compareUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/compare/{{previousTag}}...{{currentTag}}"
}
  8. In order to create a first release, we now run: yarn release -- --first-release

Git Hooks and Husky

In order to keep good code quality, one of the things we can do is use a linter. In Angular we use ESLint (after v12 we can use ng add @angular-eslint/schematics) - GitHub Repo - and we usually run it with ng lint or npm run lint. The thing is, it would be great if the linter ran automatically before we push code to the repository. That's where Git Hooks and Husky come in.

Git Hooks (Docs)

We can run scripts at certain steps of our development cycle: before a commit, before a push, or when a pull request is created, for example. This is what we want if we'd like to lint the code before each push.

Husky (Repo)

Husky is a package that helps us manage Git hooks. Before Husky, we had to know where to create the scripts, but now everything is handled by Husky.

I recommend checking Husky's Usage section in case it changes, but at the time of writing these are the steps to create and configure a pre-push hook that lints our code before each push.

  1. We set the "prepare" script in package.json to "husky install" and run it with npm run prepare
  2. We add the hook with the command npx husky add .husky/pre-push "npm run lint"
  3. Commit and push.

Now, every time we push code, the linter will run first. It's very important to have the linter well configured.
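
For reference, after step 2 the generated .husky/pre-push file should look roughly like this (husky v8 layout; older versions differ slightly):

#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm run lint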

you-dont-need

There is a repo that helps us stick with native solutions. It has an ESLint plugin that we can configure to avoid falling into temptation.

https://github.com/you-dont-need/You-Dont-Need-Lodash-Underscore#eslint-plugin
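
A minimal sketch of the ESLint side, assuming you install the plugin (yarn add --dev eslint-plugin-you-dont-need-lodash-underscore) and extend its "compatible" preset in .eslintrc.json as described in that repo's README:

{
  "plugins": ["you-dont-need-lodash-underscore"],
  "extends": ["plugin:you-dont-need-lodash-underscore/compatible"]
}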

Just use Yarn

In order to force using Yarn, we must first create a .npmrc file with this content

//.npmrc file

engine-strict = true

Then in the package.json file make this change:

//package.json
{ 
  ...
  "engines": {
    "npm": "please-use-yarn",
    "yarn": ">= 1.19.1"
  },
  ...
}

Angular and Firebase? Architecture (so far...)

I'm going to try to explain how I approach a new Angular Project

Clean Repo

The first step is to create a repository where our code will live (GitHub, GitLab, wherever) and clone it while it's still empty.

Docker 🐳

Then we must have the project within a Docker container. How do we do that?

VSCode has the extension ms-vscode-remote.remote-containers which allows us to work seamlessly with code inside a Docker container. Once we install the extension we'll have the option to "Add development container configuration files..."

Then we have to choose the container configuration definition, which is the base image for our container. We can choose to have only the OS, or an image with some tools already installed and prepared by Microsoft; in our case we are going to use "Node.js & TypeScript". After that, it will ask us for the Node version, which depends on whether we are going to work with Firebase Functions or not (check the docs). Then we can add extra features such as the GitHub CLI, the Azure CLI, or even a desktop environment with Fluxbox. We are going to skip this step.

Now we are going to have two new files: devcontainer.json and Dockerfile.

DevContainer

devcontainer.json is the file that VS Code uses to store the container's extra configuration on top of Docker's.

We are going to change the "name" to our project's name and then add the extensions we'll use in the project to the "extensions" section. I usually use:

  • dbaeumer.vscode-eslint
  • eamodio.gitlens
  • mhutchie.git-graph
  • syler.sass-indented
  • oouo-diogo-perdigao.docthis
  • vivaxy.vscode-conventional-commits
  • johnpapa.angular-essentials
  • cyrilletuzi.angular-schematics
  • MS-vsliveshare.vsliveshare-pack (Can be recommended inside .vscode/extensions.json [recommendations])
  • obenjiro.arrr (optional)
  • mutantdino.resourcemonitor (optional)
  • GitHub.copilot
  • GitHub.copilot-chat
  • Git User Profiles Config
"customizations": {
		"vscode": {
			"extensions": [
				"dbaeumer.vscode-eslint",
				"eamodio.gitlens",
				"mhutchie.git-graph",
				"syler.sass-indented",
				"oouo-diogo-perdigao.docthis",
				"vivaxy.vscode-conventional-commits",
				"johnpapa.angular-essentials",
				"cyrilletuzi.angular-schematics",
				"MS-vsliveshare.vsliveshare-pack",
				"obenjiro.arrr",
				"usernamehw.errorlens",
				"mutantdino.resourcemonitor",
        "GitHub.copilot",
				"GitHub.copilot-chat",
        "anaer.git-user-profiles-config"
			  ]
		}
	},

Also, it's very important to add "shutdownAction": "stopContainer" to this file so it shuts down the container once the project (window) is closed.

To force Yarn to install packages every time the container is built, we must add this to our devcontainer.json:

"postCreateCommand": "yarn --cwd ${containerWorkspaceFolder}/frontend install --non-interactive --silent && yarn --cwd ${containerWorkspaceFolder}/functions install --non-interactive --silent",

SSH

To bring the SSH keys from WSL into the container, just add this mounts property:

"mounts": [
  "source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/node/.ssh,type=bind,consistency=cached"
],

Dockerfile

Dockerfile is the base configuration for the container.

Here we only set the packages we want installed globally inside the container, such as the Angular CLI, Firebase Tools, dotenv and cross-var.

I suggest pinning the Angular CLI version to avoid having a newer CLI compile code created with an older version.

RUN su node -c "npm install --location=global @angular/cli@13.2.3 firebase-tools cross-var dotenv-cli"

Now we are ready for the first build. It will take a while, but later builds will be faster because each step of the build process is cached.

Since we want to use Yarn as our package manager, we also set it in the Dockerfile: RUN su node -c "ng config --global cli.packageManager yarn"

Angular

Most of this part is based on Tomas Trajan's post; I suggest following him.

Let's initialize our project with ng new... and set up Core, Shared and Feature modules.

Core Module

This will host all the services that run as singletons in the application, such as auth services.

What I liked from Tomas's post is what he implemented inside the module's class:

export class CoreModule {
  /* make sure CoreModule is imported only by one NgModule the AppModule */
  constructor (
    @Optional() @SkipSelf() parentModule: CoreModule
  ) {
    if (parentModule) {
      throw new Error('CoreModule is already loaded. Import only in AppModule');
    }
  }
}

Shared Module

Here we'll have all the "dumb" components and pipes. These components don't import/inject services from Core or other features in their constructors; they only receive and send data through @Input()s/@Output()s. Also, we can import and export modules from third-party libraries such as Angular Material.
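
As an illustration, a "dumb" component living in the Shared Module might look like this minimal sketch (the component and its selector are hypothetical):

import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'app-user-card',
  template: `<div class="user-card" (click)="selected.emit(name)">{{ name }}</div>`,
})
export class UserCardComponent {
  // Data comes in through @Input()...
  @Input() name = '';
  // ...and events go out through @Output(); no services are injected.
  @Output() selected = new EventEmitter<string>();
}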

To simplify importing services or other things from the Core or Shared modules, we are going to create an index.ts file whose content looks like this:

export * from './core.module';
export * from './auth/auth.service';
export * from './user/user.service';
export * from './some-singleton-service/some-singleton.service';

So then we can import from '@app/core' instead of @app/core/some-package/some-singleton.service

If there is a use case where you need to share things between feature modules but not across the whole app, it's OK to have a shared feature module. With time and experience you will know when to implement it.

TS Aliases

Speaking of imports, it's a good time to start implementing some TS aliases to avoid long paths when importing something. So we can use import { SomeService } from "@app/core" instead of import { SomeService } from '../../../core/subpackage1/subpackage2/some.service'.

To achieve this, in the tsconfig.json file we add the baseUrl and paths properties under compilerOptions:

{
  "compilerOptions": {
    "...": "reduced for brevity",
    
    "baseUrl": "src",
    "paths": {
      "@app/*": ["app/*"],
      "@env/*": ["environments/*"]
    }
  }
}

Sass

Tomas suggests adding stylePreprocessorOptions with includePaths and an array of paths as its value. This allows the editor to find imported symbols and helps us with code completion.

{
  "apps": [
    {
      "...": "reduced for brevity",
      
      "stylePreprocessorOptions": {
        "includePaths": ["./", "./themes"]
      }
    }
  ]
}

In this case, a "themes" folder is added because it seems to be a good practice to have a folder for each Angular Material theme.
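
With those includePaths in place, a component stylesheet can then import theme files without long relative paths, for example (assuming a hypothetical themes/_variables.scss defining $primary-color):

// Resolved through stylePreprocessorOptions.includePaths ("./themes"),
// so no ../../../ relative path is needed.
@import 'variables';

.header {
  color: $primary-color;
}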

It's recommended to use DotEnv Vault for handling env variables.

New project

When starting a project we have to create a new project in dotenv-vault by running npx dotenv-vault new {{project name}}; it may ask us to log in first with npx dotenv-vault login.

Pulling and Updating env files

To get the latest version of the .env file, the command is npx dotenv-vault pull [ENVIRONMENT=development]. To push the latest version of the .env file, the command is npx dotenv-vault push [ENVIRONMENT=development].

Automatic ChangeLog

It would be awesome if we could bump versions and have the changelog generated in one command, right?! This is what we will achieve here thanks to Standard Version.

We just have to install it as a dev dependency (yarn add --dev standard-version) and then create an npm script with the command "release": "standard-version && git push origin main --follow-tags".

Detailed steps

  1. Install standard-version and commitlint yarn add --dev standard-version @commitlint/{cli,config-conventional}
  2. (optional) if you don't have husky installed, yet (check Git Hooks and Husky), run: npx husky-init && npm install && npx husky add .husky/commit-msg 'npx --no commitlint --edit "$1"'
  3. Create a .commitlintrc.json file with this content (to follow the rules from config-conventional)
{
  "extends": ["@commitlint/config-conventional"]
}
  4. Make a git commit with an invalid format git commit -m "invalid message" to make sure it displays an error.
  5. Correct the message and try again git commit -m "feat: initial feature commit"
  6. Inside of our scripts section in package.json we should add:
 "scripts": {
    ...
    "release": "standard-version",
    "release:minor": "standard-version --release-as minor",
    "release:patch": "standard-version --release-as patch",
    "release:major": "standard-version --release-as major"
    ...
  },
  7. To configure what goes into the changelog, we create a .versionrc.json file (or a standard-version section in package.json) with this content. Be sure to replace the example repository URLs with your own; {{hash}}, {{previousTag}} and {{currentTag}} are placeholders that standard-version fills in.
{
    "types": [
      {"type": "feat", "section": "Features"},
      {"type": "fix", "section": "Bug Fixes"},
      {"type": "chore", "hidden": true},
      {"type": "docs", "hidden": true},
      {"type": "style", "hidden": true},
      {"type": "refactor", "hidden": true},
      {"type": "perf", "hidden": true},
      {"type": "test", "hidden": true}
    ],
    "commitUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/commits/{{hash}}",
    "compareUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/compare/{{previousTag}}...{{currentTag}}"
  }
  8. In order to create a first release, we now run: yarn release -- --first-release

Git Hooks and Husky

In order to keep good code quality, one of the things we can do is use a linter. In Angular we use ESLint (after v12 we can use ng add @angular-eslint/schematics) - GitHub Repo - and we usually run it with ng lint or npm run lint. The thing is, it would be great if the linter ran automatically before we push code to the repository. That's where Git Hooks and Husky come in.

Git Hooks (Docs)

We can run scripts at certain steps of our development cycle: before a commit, before a push, or when a pull request is created, for example. This is what we want if we'd like to lint the code before each push.

Husky (Repo)

Husky is a package that helps us manage Git hooks. Before Husky, we had to know where to create the scripts, but now everything is handled by Husky.

I recommend checking Husky's Usage section in case it changes, but at the time of writing these are the steps to create and configure a pre-push hook that lints our code before each push.

  1. We set the "prepare" script in package.json to "husky install" and run it with npm run prepare
  2. We add the hook with the command npx husky add .husky/pre-push "npm run lint"
  3. Commit and push.

Now, every time we push code, the linter will run first. It's very important to have the linter well configured.

Just use Yarn

In order to force using Yarn, we must first create a .npmrc file with this content

//.npmrc file

engine-strict = true

Then in the package.json file make this change:

//package.json
{ 
  ...
  "engines": {
    "npm": "please-use-yarn",
    "yarn": ">= 1.19.1"
  },
  ...
}