I'm going to explain how I approach a new Angular project.
The first step is to create a repository where our code will live (GitHub, GitLab, wherever) and clone it while it's still empty.
Then we want the project to live inside a Docker container. How do we do that?
VSCode has the ms-vscode-remote.remote-containers extension, which allows us to work seamlessly with code inside a Docker container. Once we install it, we'll have the option to "Add development container configuration files...".
Then we have to choose the container configuration definition, which is the base image for our container. We can pick a bare OS, or an image with tools already installed and prepared by Microsoft. In our case we are going to use "Node.js & TypeScript". After that, it will ask for the Node version, which depends on whether we are going to work with Firebase Functions or not (check the docs). Then we can add extra features such as the GitHub CLI, the Azure CLI, or even a desktop environment with Fluxbox. We are going to skip this step.
Now we have two new files: devcontainer.json and Dockerfile.
devcontainer.json is the file VSCode uses to store the container's extra configuration on top of Docker's.
We change the "name" to our project's name. Then we add the extensions we are going to use in the project to the "extensions" section. I usually use:
- dbaeumer.vscode-eslint
- eamodio.gitlens
- mhutchie.git-graph
- oouo-diogo-perdigao.docthis
- vivaxy.vscode-conventional-commits
- anaer.git-user-profiles-config (Git User Profiles Config)
- MS-vsliveshare.vsliveshare-pack (Can be recommended inside .vscode/extensions.json [recommendations])
- obenjiro.arrr (optional)
- mutantdino.resourcemonitor (optional)
"customizations": {
"vscode": {
"extensions": [
"dbaeumer.vscode-eslint",
"mhutchie.git-graph",
"oouo-diogo-perdigao.docthis",
"vivaxy.vscode-conventional-commits",
"obenjiro.arrr",
"usernamehw.errorlens",
"mutantdino.resourcemonitor",
"GitHub.copilot",
"anaer.git-user-profiles-config"
]
}
},
Also, it's very important to add "shutdownAction": "stopContainer" to this file so the container shuts down once the project (window) is closed.
To force Yarn to install packages every time the container is built, we add a "postCreateCommand" to devcontainer.json:
"postCreateCommand": "yarn --cwd ${containerWorkspaceFolder}/frontend install --non-interactive --silent && yarn --cwd ${containerWorkspaceFolder}/functions install --non-interactive --silent",
To bring the SSH keys from WSL into the container, we add the "mounts" property:
"mounts": [
"source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/node/.ssh,type=bind,consistency=cached"
],
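Putting the pieces above together, a minimal devcontainer.json might look like this (the "name", extension list, and folder paths are just examples; adapt them to your project):

```json
{
  "name": "my-angular-project",
  "build": { "dockerfile": "Dockerfile" },
  "shutdownAction": "stopContainer",
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "vivaxy.vscode-conventional-commits"
      ]
    }
  },
  "postCreateCommand": "yarn --cwd ${containerWorkspaceFolder}/frontend install --non-interactive --silent",
  "mounts": [
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.ssh,target=/home/node/.ssh,type=bind,consistency=cached"
  ]
}
```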
Dockerfile is the base configuration for the container.
These commands install Google Cloud's CLI:
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends apt-transport-https ca-certificates gnupg
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" \
    | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | tee /usr/share/keyrings/cloud.google.gpg \
    && apt-get update -y && apt-get install google-cloud-sdk -y
Now we are ready for the first build. It will take a while, but then it will be faster because each step on the building process is cached.
Here we only set the packages we want installed globally inside the container, such as the Angular CLI, Firebase Tools, dotenv-cli and cross-var.
I suggest pinning the Angular CLI's version to avoid having a newer CLI compile code created with an older version.
RUN su node -c "npm install --location=global yarn"
RUN su node -l -c "yarn global add @angular/cli firebase-tools cross-var dotenv-cli localtunnel"
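For example, to pin the Angular CLI version as suggested above (the exact version number here is only an illustration; use whatever version your project was created with):

```dockerfile
# Pin the Angular CLI so the container always matches the project's version
RUN su node -l -c "yarn global add @angular/cli@16.2.0 firebase-tools cross-var dotenv-cli localtunnel"
```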
Usually we work with at least two environments, and we want our application to work on any of them.
So we create a folder called "environments" in the project's root and, inside it, a JSON file for each environment we have. Then we add a script like this to package.json for each environment, replacing {{projectId}} with the actual Google Cloud project id:
"deploy:dev": "gcloud config set project {{projectId}} && cp ./environments/dev.json env.json && gcloud app deploy",
"deploy:uat": "gcloud config set project {{projectId}} && cp ./environments/uat.json env.json && gcloud app deploy",
"deploy:prod": "gcloud config set project {{projectId}} && cp ./environments/prod.json env.json && gcloud app deploy",
So when we run npm run deploy:dev, it sets the active project to the development environment, copies the environment variables we need, and then deploys to App Engine.
It's very important to know which env files to check in to version control.
Use Secret Manager
- Create a secret:
firebase functions:secrets:set SECRET_NAME
- Enter a value for SECRET_NAME. The CLI echoes a success message and warns that you must deploy functions for the change to take effect.
- Before deploying, make sure your functions code allows the function to access the secret using the runWith parameter:
const functions = require("firebase-functions");

exports.processPayment = functions
  // Make the secret available to this function
  .runWith({ secrets: ["SECRET_NAME"] })
  .https.onCall((data, context) => {
    const myBillingService = initializeBillingService(
      // Reference the secret's value
      process.env.SECRET_NAME
    );
    // Process the payment
  });
- Deploy Cloud Functions:
firebase deploy --only functions
Use the Firebase CLI to manage your secrets. While managing secrets this way, keep in mind that some CLI changes require you to modify and/or redeploy associated functions. Specifically:
Whenever you set a new value for a secret, you must redeploy all functions that reference that secret for them to pick up the latest value.
If you delete a secret, make sure that none of your deployed functions references that secret. Functions that use a secret value that has been deleted will fail silently.
Here's a summary of the Firebase CLI commands for secret management:
# Change the value of an existing secret
firebase functions:secrets:set SECRET_NAME
# View the value of a secret
firebase functions:secrets:access SECRET_NAME
# Destroy a secret
firebase functions:secrets:destroy SECRET_NAME
# View all secret versions and their state
firebase functions:secrets:get SECRET_NAME
# Automatically clean up all secrets that aren't referenced by any of your functions
firebase functions:secrets:prune
Secret Manager allows 6 active secret versions at no cost. This means that you can have 6 secrets per month in a Firebase project at no cost.
By default, the Firebase CLI attempts to automatically destroy unused secret versions where appropriate, such as when you deploy functions with a new version of the secret. Also, you can actively clean up unused secrets using functions:secrets:destroy and functions:secrets:prune.
Secret Manager allows 10,000 unbilled monthly access operations on a secret. Function instances read only the secrets specified in their runWith parameter every time they cold start. If you have a lot of function instances reading a lot of secrets, your project may exceed this allowance, at which point you'll be charged $0.03 per 10,000 access operations.
For more information, see Secret Manager Pricing.
When developing locally, you will need your service account's keys so the app can act on your behalf. You can add the following script to package.json to generate them whenever you need to.
We must replace {{projectId}} with the actual Google Cloud project id.
"generateKeys": "gcloud iam service-accounts keys create GOOGLE_CREDENTIALS.json --iam-account={{projectId}}@appspot.gserviceaccount.com && export GOOGLE_APPLICATION_CREDENTIALS=\"GOOGLE_CREDENTIALS.json\""
It's very important to add GOOGLE_CREDENTIALS.json to your .gitignore file.
DotEnv Vault - https://www.dotenv.org/
It's recommended to use DotEnv Vault to handle env variables.
When starting a project we have to create a new project in dotenv-vault by running npx dotenv-vault new {{project name}}
It may ask us to log in: npx dotenv-vault login
To pull the latest version of the .env file: npx dotenv-vault pull [ENVIRONMENT=development]
To push the latest version of the .env file: npx dotenv-vault push [ENVIRONMENT=development]
It would be awesome if we could bump versions and have the changelog generated in one command, right?! This is what we will achieve here thanks to Standard Version.
We just have to install it as a dev dependency (yarn add --dev standard-version) and then create an npm script: "release": "standard-version && git push origin main --follow-tags".
- Install standard-version and commitlint
yarn add --dev standard-version @commitlint/{cli,config-conventional}
- (optional) If you don't have husky installed yet (check Git Hooks and Husky), run:
npx husky-init && npm install && npx husky add .husky/commit-msg 'npx commitlint --edit "$1"'
- Create a .commitlintrc.json file with this content (to follow the rules from config-conventional):
{
"extends": ["@commitlint/config-conventional"]
}
- Make a git commit with an invalid format, git commit -m "invalid message", to make sure it displays an error.
- Correct the message and try again:
git commit -m "feat: initial feature commit"
- Inside the "scripts" section in package.json we should add:
"scripts": {
  ...
  "release": "standard-version",
  "release:minor": "standard-version --release-as minor",
  "release:patch": "standard-version --release-as patch",
  "release:major": "standard-version --release-as major"
  ...
},
- To configure what goes inside the changelog, we create a .versionrc.json file (or a standard-version section in package.json) with this content. Be sure to replace the repository URLs with your own; {{hash}}, {{previousTag}} and {{currentTag}} are placeholders that standard-version fills in:
{
  "types": [
    { "type": "feat", "section": "Features" },
    { "type": "fix", "section": "Bug Fixes" },
    { "type": "chore", "hidden": true },
    { "type": "docs", "hidden": true },
    { "type": "style", "hidden": true },
    { "type": "refactor", "hidden": true },
    { "type": "perf", "hidden": true },
    { "type": "test", "hidden": true }
  ],
  "commitUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/commits/{{hash}}",
  "compareUrlFormat": "https://github.com/mokkapps/changelog-generator-demo/compare/{{previousTag}}...{{currentTag}}"
}
- In order to create a first release, we now run:
yarn release --first-release
In order to keep good code quality, one of the things we can do is use a linter. In Angular, we use ESLint (from v12 on we can run ng add @angular-eslint/schematics) - GitHub Repo - and we usually run it with ng lint or npm run lint.
The thing is that it would be great if the linter could run automatically before we push code to the repository. That's where Git Hooks and Husky enter.
Git Hooks (Docs)
We can run scripts at certain steps of our development cycle: before a commit or before a push, for example. This is what we want if we'd like to lint the code before each push.
Husky (Repo)
Husky is a package that helps us manage git hooks. Before Husky, we had to know where to create the scripts, but now everything is handled by Husky.
I recommend checking Husky's Usage section in case it changes but at the time of writing this, these are the steps to create and configure a pre-hook that lints our code before each push.
- We set the "prepare" script to install husky and run it with
npm run prepare
- We add the hook with the command
npx husky add .husky/pre-push "npm run lint"
- Commit and push.
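For reference, the "prepare" script that husky-init adds to package.json looks like this (husky v7/v8 style; check Husky's docs for your version):

```json
{
  "scripts": {
    "prepare": "husky install"
  }
}
```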
Now, every time we want to push code, it will run the linter. It's very important to have the linter well configured.
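After running the husky add command above, the generated .husky/pre-push file should look roughly like this (husky v8-style; worth verifying against your installed version):

```shell
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm run lint
```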
There is a repo that helps us stick with native solutions instead of Lodash/Underscore. It has an ESLint plugin that we can configure to avoid falling into temptation.
https://github.com/you-dont-need/You-Dont-Need-Lodash-Underscore#eslint-plugin
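Per that repo's README, enabling the plugin in .eslintrc looks roughly like this (check the README for the currently recommended configuration):

```json
{
  "plugins": ["you-dont-need-lodash-underscore"],
  "extends": ["plugin:you-dont-need-lodash-underscore/compatible"]
}
```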
In order to force using Yarn, we must first create a .npmrc file with this content:
//.npmrc file
engine-strict = true
Then in the package.json file make this change:
//package.json
{
  ...
  "engines": {
    "npm": "please-use-yarn",
    "yarn": ">= 1.19.1"
  },
  ...
}
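As an alternative (or a complement) to the engines check, the only-allow package can enforce Yarn through a preinstall script; this is a common pattern, but double-check the package before relying on it:

```json
{
  "scripts": {
    "preinstall": "npx only-allow yarn"
  }
}
```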