how to do things
# Find out what resource owns an IP address
aws ec2 describe-network-interfaces \
  --filters Name=addresses.private-ip-address,Values=10.80.37.127
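# Same lookup, but show only the owning resource's interface ID and description
aws ec2 describe-network-interfaces \
  --filters Name=addresses.private-ip-address,Values=10.80.37.127 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Description:Description}' \
  --output table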
# Docker login to AWS ECR
aws ecr get-login-password \
  | docker login \
  -u AWS --password-stdin \
  "https://$(aws sts get-caller-identity --query 'Account' --output text).dkr.ecr.us-east-1.amazonaws.com"
# Adjust ECS autoscaling
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/patient-api-prod-service-cluster/patient-api-prod-service-ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 1 \
  --max-capacity 4  # or whatever maximum you want
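
# Verify the current scaling settings
aws application-autoscaling describe-scalable-targets \
  --service-namespace ecs \
  --resource-ids service/patient-api-prod-service-cluster/patient-api-prod-service-ecs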

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:564068623223:loadbalancer/app/patient-api-prod-service-lb/eccd32c598f97435 \
  --attributes Key=idle_timeout.timeout_seconds,Value=240

aws ecs update-service \
    --cluster patient-api-prod-service-cluster \
    --service patient-api-prod-service-ecs \
    --desired-count 1
    
# Immediately delete a secret (no recovery window)
aws secretsmanager delete-secret \
    --secret-id SOMECOOLSECRET-sxKAh7tcTyxy \
    --force-delete-without-recovery --region us-east-1
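# A secret deleted with the default recovery window (i.e. without
# --force-delete-without-recovery) can be restored before the window elapses
aws secretsmanager restore-secret \
    --secret-id SOMECOOLSECRET-sxKAh7tcTyxy --region us-east-1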

Regenerate a TypeScript API client from a Swagger spec

#!/usr/bin/env bash
# ANSI color codes (the originals were stripped in transit; these are the standard ones)
GRN=$'\033[0;32m'
RED=$'\033[0;31m'
XCOLOR=$'\033[0m'

echo "${GRN}Downloading new ELH Swagger specification${XCOLOR}"
curl https://10.1.100.68/swagger/v1/swagger.json -k -o ./scripts/swagger.json &>/dev/null
if [ ! -s ./scripts/swagger.json ]; then
  echo "${RED}Swagger file could not be downloaded successfully${XCOLOR}"
  exit 1
fi

echo "${GRN}✓ Swagger spec downloaded successfully, generating new ELH API client${XCOLOR}"
docker run --rm -v "${PWD}:/oapigen" openapitools/openapi-generator-cli generate \
  -i "/oapigen/scripts/swagger.json" \
  -g "typescript-axios" \
  -o "/oapigen/scripts/elh-api-client" \
  --skip-validate-spec &>/dev/null
if [ ! -d ./scripts/elh-api-client ]; then
  echo "${RED}API client could not be generated successfully${XCOLOR}"
  exit 1
fi

echo "${GRN}✓ ELH API client generated successfully, replacing existing code${XCOLOR}"
rm -rf ./src/elh-api-client
mv ./scripts/elh-api-client ./src/elh-api-client
rm ./scripts/swagger.json
if [ ! -d ./src/elh-api-client ]; then
  echo "${RED}API client could not be moved to the './src' directory successfully${XCOLOR}"
  exit 1
fi

echo "${GRN}✓ ELH API client updated successfully. Commit the new changes and you should be good to go!${XCOLOR}"
aws cognito-idp admin-create-user \
    --user-pool-id <userpool_id> \
    --username <email>

aws cognito-idp admin-set-user-password \
  --user-pool-id '<userpool_id>' \
  --username <email> \
  --password '<password>' \
  --permanent

aws cognito-idp initiate-auth \
    --auth-flow USER_PASSWORD_AUTH \
    --client-id <client_id> \
    --auth-parameters USERNAME=<email>,PASSWORD='<password>'
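
# Same call, printing only the ID token (handy for an Authorization header)
aws cognito-idp initiate-auth \
    --auth-flow USER_PASSWORD_AUTH \
    --client-id <client_id> \
    --auth-parameters USERNAME=<email>,PASSWORD='<password>' \
    --query 'AuthenticationResult.IdToken' \
    --output text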

Docker cleanup

# $(docker container ls -a -q) expands to the IDs of all containers;
# the commands below use it via command substitution

# Stop all containers
docker container stop $(docker container ls -a -q)

# Remove all containers
docker container rm $(docker container ls -a -q)

# Remove all images 
docker image rm $(docker image ls -a -q)

# Same for volumes and networks
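docker volume rm $(docker volume ls -q)
docker network rm $(docker network ls -q)  # built-in bridge/host/none will refuse; ignore those errors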

# Remove everything (everything)
docker system prune -af --volumes


Separate/detach from terminal

  • sudo apt-get install screen
  • screen - initiate a screen session. Looks like a regular terminal prompt
  • Press CTRL + A then ? to see all commands and params
  • To use:
    • screen
    • ./somecommand.sh - The long-running command you want to detach
    • Press CTRL + A then d to detach
      • This will provide the screen session ID, which you will need if you have more than one session
    • screen -r to reattach
    • screen -ls - If you have more than one session
      • screen -r $SESSION_ID to reattach to the correct session
    • screen -X -S $SESSION_ID quit - To kill a single session
    • CTRL + A then K to kill the screen

Run command repeatedly

watch yourcommand.sh

  • -b - Beep if non-zero exit
  • -c - Interpret ANSI color sequences
  • -d - Highlight changes in command output
  • -n - Interval in seconds
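
For example, re-run docker ps every 2 seconds, highlighting changes:

watch -n 2 -d "docker ps"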

Cron

  • Edit Crontab: crontab -e
  • 6 fields separated by whitespace
  • minute hour DoM Month (Jan=1) DoW command
  • Cron Diagram
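
Example entry (every 15 minutes during the 2 AM hour, Mon-Fri; the script path is a placeholder):

# min  hour  DoM  Month  DoW  command
*/15   2     *    *      1-5  /path/to/script.sh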

See environment vars inside a built docker image

docker inspect -f "{{ .Config.Env }}" imagename

Regex guid

export const GUID_REGEX = /^[a-f0-9]{8}(?:-[a-f0-9]{4}){3}-[a-f0-9]{12}$/i;
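
Usage (the GUID below is a made-up example):

GUID_REGEX.test('3f2504e0-4f89-41d3-9a0c-0305e82c3301'); // true
GUID_REGEX.test('not-a-guid'); // false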

Regex match string only if not preceded by another string

\w*(?<!dontwant)dowant

Merge arrays

var merged = [].concat.apply([], arrays);
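
Modern equivalents (ES2015 spread / ES2019 Array.prototype.flat):

var merged = [].concat(...arrays); // spread
var merged = arrays.flat();        // or flat()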

Capitalize words

const capitalize = str => str.replace(/(^|\s|-)\S/g, l => l.toUpperCase())

Search for string in files recursively

grep -rnw './path/to/search' -e 'string to find'

find ./path -maxdepth 2 -type f -exec grep -l 'string to find' {} \;

JS Get deeply nested property:

var level3 = (((test || {}).level1 || {}).level2 || {}).level3;
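
With optional chaining (ES2020) the same lookup is:

var level3 = test?.level1?.level2?.level3;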

Random number between 2 numbers

(min,max) => Math.floor(Math.random()*(max-min+1)+min);

Set up ssh config

# Example setup
Host myserver
  #User djheru
  HostName example.com
  Port 4567
  
Host host2
  #User djheru
  HostName example2.com
  #Port 2222
  
Host host3
  HostName example3.com
  User pdamra
  #Port 2222
  
Host host*
  Port 2222
  
Host *
  User djheru
  LogLevel INFO
  VisualHostKey yes
  ForwardAgent yes
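
ssh uses the first value it finds for each option, so specific Host blocks should come before the wildcard blocks. With the config above:

ssh myserver  # connects to example.com:4567 as djheru (User comes from the catch-all block)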
  

scp things

download a file from remote server

scp your_username@remotehost.com:foobar.txt /some/local/directory

Copy the file "foobar.txt" from the local host to a remote host

scp foobar.txt your_username@remotehost.edu:/some/remote/directory
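
Copy a directory recursively

scp -r your_username@remotehost.edu:/some/remote/directory /some/local/directory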

Git things

Better log format

git log --pretty=format:"%h%x09%an%x09%ad%x09%s"
# format something like
45160be Philip Damra    Tue Jul 9 12:19:48 2019 -0400   Merge branch 'hotfix/2.1.4'
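
# Save it as a git alias
git config --global alias.lg 'log --pretty=format:"%h%x09%an%x09%ad%x09%s"'
git lg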

Log diff between two branches

git log master..release/1.2.1

delete remote branch

git push origin --delete <branch>

rebase with branch, taking their changes in case of conflict (good for dist conflicts)

git rebase -s recursive -X theirs ${branch}

Undo the last commit, returning changed files to "unstaged"

git reset HEAD~

Revert merge commit

git revert <sha> -m 1  # -m 1 keeps the first (mainline) parent

Mount an ISO

sudo su
mkdir -p /mnt/disk
mount -o loop diskisofile.iso /mnt/disk
cd /mnt/disk
ls -laht

Unmount the ISO

First, list the mounts

df -H

then unmount (add -f to force if necessary)

umount /mnt/disk

remove spaces from dir and filenames

find -name "* *" -type d | rename 's/ /_/g'
find -name "* *" -type f | rename 's/ /_/g'

Switch between Intel and NVIDIA GPU

sudo prime-select intel
sudo prime-select nvidia

Disable USB autosuspend

for i in /sys/bus/usb/devices/*/power/autosuspend;
do echo 2 > $i;
done

for foo in /sys/bus/usb/devices/*/power/level;
do echo on > $foo;
done

Recursively run command in subdirectories

# '{}' is the name of the subdirectory
find . -type d -exec touch ./'{}'/README.md \;

Report Lambda memory utilization via CloudWatch Logs Insights (Node.js)

const {
  CloudWatchLogsClient,
  DescribeLogGroupsCommand,
  StartQueryCommand,
  GetQueryResultsCommand,
} = require("@aws-sdk/client-cloudwatch-logs");
const fs = require("fs");
const { promisify } = require("util");

const sleep = promisify(setTimeout);

const env = "prod";
const days = 14;
const region = "us-east-1"; // Specify your AWS region
const logsClient = new CloudWatchLogsClient({ region });

async function runQuery(logGroupName) {
  // Query window: `days` days ago until now (Logs Insights expects epoch seconds)
  const oneDayAgo = Date.now() - 24 * 60 * 60 * 1000 * days;
  const params = {
    logGroupName,
    startTime: Math.floor(oneDayAgo / 1000),
    endTime: Math.floor(Date.now() / 1000),
    queryString:
      'filter @type = "REPORT" | stats max(@memorySize / 1000 / 1000) as provisionedMemoryMB, min(@maxMemoryUsed / 1000 / 1000) as smallestMemoryRequestMB, avg(@maxMemoryUsed / 1000 / 1000) as avgMemoryUsedMB, max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB, provisionedMemoryMB - maxMemoryUsedMB as overProvisionedMB',
  };
  const command = new StartQueryCommand(params);
  const { queryId } = await logsClient.send(command);
  return queryId;
}

async function getQueryResults(queryId) {
  while (true) {
    const results = await logsClient.send(
      new GetQueryResultsCommand({ queryId })
    );
    if (results.status === "Complete") {
      return results.results;
    }
    await sleep(5000); // Wait for 5 seconds before checking again
  }
}

async function processLogGroup(logGroupName) {
  try {
    await new Promise((r) => setTimeout(r, 1000));
    const queryId = await runQuery(logGroupName);
    const results = await getQueryResults(queryId);
    if (results.length === 0) {
      return false;
    }
    const data = results[0].map((field) => field.value);
    return [logGroupName.replace("/aws/lambda/", ""), ...data];
  } catch (error) {
    console.error(`Error processing log group ${logGroupName}:`, error);
    return [logGroupName, "Error", "Error", "Error", "Error", "Error"];
  }
}

async function getLogGroupNames() {
  let nextToken;
  let logGroupNames = [];
  do {
    const response = await logsClient.send(
      new DescribeLogGroupsCommand({
        logGroupNamePattern: "/aws/lambda/",
        nextToken,
      })
    );
    response.logGroups
      .filter(({ logGroupName }) =>
        logGroupName.match(new RegExp(`^(?=.*-${env}-)[a-z0-9-\/]*$`))
      )
      .forEach((group) => {
        console.log(group.logGroupName);
        logGroupNames.push(group.logGroupName);
      });
    nextToken = response.nextToken;
    await new Promise((r) => setTimeout(r, 1000));
  } while (nextToken);
  return logGroupNames;
}

function groupArray(data, groupSize = 30) {
  const result = [];
  for (let i = 0; i < data.length; i += groupSize) {
    result.push(data.slice(i, i + groupSize));
  }
  return result;
}

async function main() {
  const logGroupNames = await getLogGroupNames();
  console.log(`Number of log groups: ${logGroupNames.length}`);
  // Have to send query in batches of no more than 30
  const groupedLogGroupNames = groupArray(logGroupNames, 30);
  console.log("Number of batches: %s", groupedLogGroupNames.length);
  const resultArray = [];
  for (let i = 0; i < groupedLogGroupNames.length; i++) {
    const logGroupBatchNames = groupedLogGroupNames[i];
    console.log("Processing batch %s", i + 1);
    const promises = logGroupBatchNames.map((logGroupName) =>
      processLogGroup(logGroupName)
    );
    const results = (await Promise.all(promises)).filter(Boolean);
    resultArray.push(...results);
  }
  const headers = [
    "Log Group",
    "Provisioned Memory (MB)",
    "Smallest Memory Request (MB)",
    "Average Memory Used (MB)",
    "Maximum Memory Used (MB)",
    "Over Provisioned (MB)",
  ];
  const csvContent = [headers, ...resultArray]
    .map((r) => r.join(","))
    .join("\n");
  fs.writeFileSync("lambda_memory_report.csv", csvContent);
  console.log("Report generated: lambda_memory_report.csv");
}

main().catch(console.error);

VS Code launch.json for attaching the debugger to a containerized Node app

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Debug: core-admin",
      "remoteRoot": "/usr/src/app",
      "localRoot": "${workspaceFolder}",
      "protocol": "inspector",
      "port": 9229,
      "restart": true,
      "address": "0.0.0.0",
      "outFiles": ["${workspaceFolder}/dist/**/*.js"],
      "skipFiles": ["<node_internals>/**"],
      "sourceMaps": true
    }
  ]
}

List S3 "folders" under a prefix (Node.js, AWS SDK v2)

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01', region: 'us-east-1' });
const fs = require('fs');

// If you have s3 objects like `dir/subdir/user123/things/stuff.json` and you want to list each user
const params = {
  Bucket: 'mybucket',
  Prefix: 'dir/subdir/',
  Delimiter: '/',
};

async function listAllFolders(bucket, prefix) {
  let isTruncated = true;
  let marker;
  let folders = [];
  while (isTruncated) {
    let params = {
      Bucket: bucket,
      Prefix: prefix,
      Delimiter: '/',
      ContinuationToken: marker,
    };
    try {
      const response = await s3.listObjectsV2(params).promise();
      response.CommonPrefixes.forEach(({ Prefix }) => {
        // Remove the prefix and trailing slash
        const folder = Prefix.replace(prefix, '').slice(0, -1);
        console.log(folder);
        folders.push(folder);
      });
      marker = response.NextContinuationToken;
      isTruncated = response.IsTruncated;
    } catch (error) {
      console.log('Error with S3 listObjectsV2:', error);
      isTruncated = false;
    }
  }
  // Remove duplicates
  folders = [...new Set(folders)];
  return folders;
}

function saveToCSV(folders, filename) {
  const csv = folders.join('\n');
  fs.writeFile(filename, csv, (err) => {
    if (err) {
      console.log('Error writing to csv file', err);
    } else {
      console.log(`saved as ${filename}`);
    }
  });
}

listAllFolders(params.Bucket, params.Prefix).then((folders) => {
  console.log(folders); // Should log ['user123', 'user234', 'user345', ...]
  saveToCSV(folders, 'patient-ids.PROD.txt');
});

MySQL things

Installation

Docker

  • Create a volume to hold the data
docker create \
    -v /var/lib/mysql \
    --name mysql-data \
    busybox
  • Run the container, using the volume, restarting the container on reboots
docker run \
    --name mysql \
    --restart unless-stopped \
    --volumes-from mysql-data -d \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=passwd \
    mysql:5

Linux

apt-get update
apt-get install mysql-server
# Set password for root user (or not)
service mysql status
service mysql start
# etc

macOS

Connecting

mysql CLI tool from linked Docker container

docker exec -it mysql bash
> mysql --user=root -p
# or
docker run -it \
    --link my-local-mysql:mysql \
    --rm mysql sh \
    -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'

Remote/Client Connection

Backup/Restore

From docker

# Copies the backup to the host (use -i, not -it: a TTY would corrupt the piped output)
docker exec -i \
    mysql /usr/bin/mysqldump \
    -u root --password \
    some_application | grep -v "Using a password" > backup.sql

To Docker

# Copies file from host to docker
cat backup.sql | docker exec -i mysql /usr/bin/mysql -u root --password=passwd some_application

export a mysql database, zipping it

mysqldump \
-h somehost \
-u someuser \
-B somedb \
--single-transaction \
--order-by-primary \
-p | gzip > somefilename.sql.gz #or | mysql -u someuser -p -D otherdb

import a mysql db dump

mysql -u root -p
-- Set network buffer length to a large byte number
set global net_buffer_length=1000000;
-- Set maximum allowed packet size to a large byte number
set global max_allowed_packet=1000000000;
-- Disable foreign key checking to avoid delays, errors and unwanted behaviour
SET foreign_key_checks = 0;
-- Import your sql dump file (no trailing comment here: source takes the rest of the line as the filename)
source somefilename.sql
-- Remember to re-enable foreign key checks when the import is complete!
SET foreign_key_checks = 1;

Install mysql unattended

echo "mysql-server-5.7 mysql-server/root_password password root" | debconf-set-selections && \
  echo "mysql-server-5.7 mysql-server/root_password_again password root" | debconf-set-selections && \
  apt-get -y install mysql-server-5.7

update mysql password

mysqladmin -u root -p password thenewpasswordorblank

Check limits for a user for things like open files

su - someuser -c 'ulimit -aHS' -s '/bin/bash'

Remove all node modules recursively

find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +

How To Postgres

Installation

# Create a named volume container for the data 
docker create \
    -v /var/lib/postgresql/data \
    --name postgres-data \
    busybox
    
# The postgres container using the volume, exposing the port to the host
docker run \
    --name postgres \
    --restart unless-stopped \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=passwd \
    -d --volumes-from postgres-data \
    postgres:9.6
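
# Sanity check: open a psql shell inside the container
docker exec -it postgres psql -U postgres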

Backup/Restore

# Backup 
docker exec postgres /bin/bash \
  -c "export PGPASSWORD=passwd \
      && /usr/bin/pg_dump -U postgres northwind" \
  | gzip -9 > northwind.backup.sql.gz
  
# Restore
gunzip < northwind.backup.sql.gz | \
    docker exec -i postgres /bin/bash \
    -c "export PGPASSWORD=passwd && \
    /usr/bin/psql -U postgres northwind2"
    
# pgdump
export ENV=dev && \
pg_dump \
  -h ims-api-$ENV.db.dealerinspire.com \
  -U ims_api_$ENV \
  --clean \
  --quote-all-identifiers \
  --schema=public \
  --inserts \
  --no-privileges \
  --no-acl \
  --file=ims_api_$ENV.sql \
  ims_api_$ENV
  
# Restore
export ENV=dev && \
psql \
  -h ims-api-node-$ENV.db.di-metal.net \
  -U ims_api_$ENV < ims_api_$ENV.sql

Simple SELECT Statements

  • Press F5 to execute queries in the query tool
-- hello queries
SELECT * FROM customers;

-- select columns to display
SELECT 
	categoryname, description
FROM categories;

-- Distinct values
SELECT DISTINCT region FROM suppliers;

-- count
SELECT COUNT(*) from orders;
SELECT COUNT(DISTINCT productid) from order_details;

-- Combining fields in SELECT
SELECT customerid, shippeddate - orderdate as shiptime from orders;
SELECT unitprice * quantity as total, orderid from order_details;

SELECT with WHERE

-- WHERE with text fields
SELECT * FROM suppliers WHERE city='Berlin';

-- WHERE with numeric fields
SELECT COUNT(*) FROM orders WHERE employeeid = 3;
SELECT COUNT(*) FROM order_details WHERE freight > 250;

-- WHERE with DATE fields
SELECT COUNT(*) FROM orders WHERE shippeddate < '1997-06-05';

-- WHERE with AND 
SELECT COUNT(*) FROM orders WHERE shipcountry='Germany' AND freight > 100;
SELECT DISTINCT customerid FROM orders WHERE shipvia = 2 AND shipcountry = 'Brazil'; 

-- WHERE with OR
SELECT COUNT(*) FROM customers WHERE country = 'USA' OR country='Canada';
SELECT COUNT(*) FROM suppliers WHERE country='Germany' OR country='Spain';
SELECT COUNT(*) FROM orders WHERE shipcountry='USA' OR shipcountry='Brazil' OR shipcountry='Argentina';

-- WHERE with NOT
SELECT COUNT(*) FROM suppliers WHERE NOT country = 'USA';

-- COMBINE WHERE AND OR NOT
SELECT COUNT(*) 
FROM orders
WHERE shipcountry = 'Germany' AND (freight < 50 OR freight > 175);

SELECT COUNT(*) FROM orders WHERE
(shipcountry = 'Canada' OR shipcountry='Spain') AND shippeddate > '1997-05-01';

-- WHERE with BETWEEN
SELECT COUNT(*) from order_details WHERE unitprice BETWEEN 10 AND 20;
SELECT COUNT(*) FROM orders WHERE shippeddate BETWEEN '1996-06-01' AND '1996-09-30';

-- WHERE IN
SELECT COUNT(*) FROM suppliers WHERE country IN ('Germany', 'France', 'Spain', 'Italy');
SELECT COUNT(*) FROM products WHERE categoryid IN (1, 4, 6, 7);

SELECT with ORDER BY

SELECT DISTINCT companyname from shippers ORDER BY companyname ASC;
SELECT DISTINCT country, city FROM suppliers ORDER BY country ASC, city ASC;
SELECT productname, unitprice FROM products ORDER BY unitprice DESC, productname ASC;

SELECT with MIN and MAX

SELECT MIN(orderdate) from orders WHERE shipcountry = 'Italy';
SELECT MAX(shippeddate) from orders WHERE shipcountry = 'France';
SELECT MAX(shippeddate - orderdate) FROM orders WHERE shipcountry = 'France';

SELECT with AVG and SUM

SELECT AVG(freight) FROM orders WHERE shipcountry = 'Brazil';
SELECT SUM(quantity) from order_details WHERE productid = 14;
SELECT AVG(quantity) FROM order_details WHERE productid = 35;

SELECT with LIKE

-- % is wildcard for zero or more of any characters
SELECT companyname, contactname from customers WHERE contactname LIKE 'D%';

-- _ is wildcard for one character
SELECT companyname from suppliers WHERE companyname LIKE '_or%';

SELECT companyname from customers where companyname LIKE '%er';

SELECT with AS-aliased column names

SELECT unitprice * quantity AS TotalSpent FROM order_details ORDER BY TotalSpent DESC;
SELECT unitprice * unitsinstock AS InventoryOnHand, productname FROM products ORDER BY InventoryOnHand DESC;

SELECT with LIMIT

SELECT orderid, unitprice * quantity AS Total FROM order_details ORDER BY Total DESC LIMIT 3;
SELECT productname, unitprice * unitsinstock AS Total FROM products ORDER BY Total ASC LIMIT 2;

SELECT with NULL values

SELECT COUNT(*) FROM customers where region is NULL;
SELECT COUNT(*) FROM suppliers where region is NOT NULL;
SELECT COUNT(*) FROM orders WHERE shipregion IS NULL;

INNER JOIN, a.k.a. JOIN

SELECT companyname, orderdate, shipcountry
FROM customers
JOIN orders ON orders.customerid = customers.customerid;

SELECT CONCAT(firstname, ' ', lastname) as name, orderdate
FROM orders
JOIN employees ON orders.employeeid = employees.employeeid;

SELECT productid, companyname, unitprice, unitsinstock
FROM products
JOIN suppliers ON products.supplierid = suppliers.supplierid;

INNER JOIN for more than 2 tables

SELECT companyname, orderdate, productname, order_details.unitprice, quantity, categoryname
FROM customers
INNER JOIN orders ON orders.customerid = customers.customerid
INNER JOIN order_details ON orders.orderid = order_details.orderid
INNER JOIN products ON products.productid = order_details.productid
INNER JOIN categories ON categories.categoryid = products.categoryid
WHERE categoryname = 'Seafood' AND (order_details.unitprice * quantity) >= 500;

LEFT JOIN

-- All records in the first (left) table and any matching records in the second (right) table
SELECT companyname, orderid
FROM customers
LEFT JOIN orders ON orders.customerid = customers.customerid
WHERE orderid IS null;

SELECT productname, orderid
FROM products
LEFT JOIN order_details ON order_details.productid = products.productid;

RIGHT JOIN

-- All records in the second (right) table and any matching records in the first (left) table
SELECT companyname, orderid
FROM orders
RIGHT JOIN customers ON orders.customerid = customers.customerid
WHERE orderid IS null;

SELECT customercustomerdemo.customerid, customertypeid, companyname
FROM customercustomerdemo 
RIGHT JOIN customers ON customers.customerid = customercustomerdemo.customerid;

FULL JOIN

-- All records from both tables
SELECT companyname, orderid
FROM orders 
FULL JOIN customers ON customers.customerid = orders.customerid
WHERE companyname IS NULL or orderID is null;

SELECT productname, categoryname
FROM products
FULL JOIN categories ON categories.categoryid = products.categoryid;

SELF JOIN

-- connect a table back to itself
SELECT C1.companyname, C2.companyname, C1.city
FROM customers C1, customers C2
WHERE C1.city=C2.city AND C1.customerid > C2.customerid;

SELECT s1.companyname, s2.companyname, s1.country
FROM suppliers s1, suppliers s2
WHERE s1.country = s2.country AND s1.supplierid > s2.supplierid;

GROUP BY

-- customers in a country
SELECT country, COUNT(*) as customercount
FROM customers
GROUP BY country
ORDER BY customercount DESC;

-- with JOIN, products in a category
SELECT categoryname, COUNT(*) as categorycount
FROM categories
JOIN products ON products.categoryid=categories.categoryid
GROUP BY categoryname
ORDER BY categorycount DESC;

-- with other aggregate functions
-- Average quantity of a product ordered in order details
SELECT productname, ROUND(AVG(quantity)) as productquantity
FROM products
JOIN order_details ON order_details.productid = products.productid
GROUP BY productname
ORDER BY productquantity DESC;

-- suppliers in each country
SELECT country, COUNT(*) as suppliercount
FROM suppliers
GROUP BY country
ORDER BY suppliercount DESC;

-- total value of each product sold in 1997
SELECT productname, ROUND(SUM(quantity * order_details.unitprice)) as orderdeettotal
FROM order_details
JOIN products ON products.productid = order_details.productid
JOIN orders ON orders.orderid = order_details.orderid
WHERE orderdate BETWEEN '1997-01-01' AND '1997-12-31'
GROUP BY productname
ORDER BY orderdeettotal DESC;

GROUP BY with HAVING

  • WHERE filters records before grouping
  • HAVING filters records after grouping
-- products selling less than 2000
SELECT  productname, ROUND(SUM(quantity * order_details.unitprice)) as orderdeettotal
FROM order_details
JOIN products ON products.productid = order_details.productid
JOIN orders ON orders.orderid = order_details.orderid
GROUP BY productname
HAVING ROUND(SUM(quantity * order_details.unitprice)) > 2000
ORDER BY orderdeettotal DESC;

-- customers bought more than 5000
SELECT companyname, ROUND(SUM(quantity * order_details.unitprice)) as total
FROM customers
JOIN orders ON orders.customerid = customers.customerid
JOIN order_details ON order_details.orderid = orders.orderid
GROUP BY companyname
HAVING ROUND(SUM(quantity * order_details.unitprice)) > 5000
ORDER BY total DESC;

-- customers bought more than 5000 in the first 6 months
SELECT companyname, ROUND(SUM(quantity * order_details.unitprice)) as total
FROM customers
JOIN orders ON orders.customerid = customers.customerid
JOIN order_details ON order_details.orderid = orders.orderid
WHERE orders.orderdate BETWEEN '1997-01-01' AND '1997-06-30'
GROUP BY companyname
HAVING ROUND(SUM(quantity * order_details.unitprice)) > 5000
ORDER BY total DESC;

UNION

  • Combines the results of two or more queries
  • Must have the same number of columns
  • Types of columns must match
-- List of all supplier and company names
SELECT companyname 
FROM customers 
UNION
SELECT companyname
FROM suppliers;

SELECT country 
FROM customers
UNION 
SELECT country 
FROM suppliers
ORDER BY country ASC;

-- cities of customers and suppliers
SELECT city 
FROM customers 
UNION
SELECT city
FROM suppliers
ORDER BY city ASC;
-- vs. with UNION ALL (keeps duplicates)
SELECT city 
FROM customers 
UNION ALL 
SELECT city
FROM suppliers
ORDER BY city ASC;

Subquery with EXISTS

  • SELECT records from a table WHERE conditions based on records in a different table
-- get customers with orders placed within a certain date
SELECT companyname 
FROM customers
WHERE EXISTS (
  SELECT customerid 
  FROM orders 
  WHERE orders.customerid=customers.customerid 
    AND orderdate BETWEEN '1997-04-01' AND '1997-04-30'
);

-- get customers that DID NOT have an order in a certain date
SELECT companyname 
FROM customers
WHERE NOT EXISTS (
  SELECT customerid 
  FROM orders 
  WHERE orders.customerid=customers.customerid 
    AND orderdate BETWEEN '1997-04-01' AND '1997-04-30'
);

-- What products did not have an order in April 1997
SELECT productname
FROM products
WHERE NOT EXISTS (
  SELECT productid
  FROM order_details
  JOIN orders ON orders.orderid = order_details.orderid
  WHERE order_details.productid = products.productid
    AND orderdate BETWEEN '1997-04-01' AND '1997-04-30' 
);

-- All suppliers with a product that costs more than $200

SELECT companyname
FROM suppliers
WHERE EXISTS (
  SELECT supplierid
  FROM products 
  WHERE products.supplierid = suppliers.supplierid AND unitprice > 200
);

Subquery with ANY

-- Find customers with order detail with more than 50 items of a single product
SELECT companyname 
FROM customers
WHERE customerid = ANY (
  SELECT customerid
  FROM orders 
  JOIN order_details ON order_details.orderid = orders.orderid 
  WHERE quantity > 50
);

-- Find suppliers that had an order with 1 item
SELECT companyname
FROM suppliers 
WHERE supplierid = ANY (
  SELECT supplierid 
  FROM products
  JOIN order_details ON order_details.productid = products.productid
  WHERE quantity = 1
);

Subquery with ALL

-- products that had order amounts that were higher than the average of all products
SELECT DISTINCT productname
FROM products 
JOIN order_details ON order_details.productid = products.productid
WHERE (order_details.unitprice * order_details.quantity) > ALL (
  SELECT AVG(order_details.unitprice * order_details.quantity) FROM order_details GROUP BY productid
);

-- DISTINCT customers that ordered more of one item than the average order amount per item of all customers
SELECT DISTINCT companyname
FROM customers
JOIN orders ON orders.customerid = customers.customerid
JOIN order_details ON order_details.orderid = orders.orderid
WHERE (order_details.unitprice * order_details.quantity) > ALL (
  SELECT AVG(order_details.unitprice * quantity)
  FROM order_details
  JOIN orders on orders.orderid = order_details.orderid
  GROUP BY customerid  
);

Subquery with IN

-- customers in the same countries as suppliers
SELECT companyname 
FROM customers
WHERE country IN (
  SELECT country from suppliers
);

-- suppliers in the same city as a customer
SELECT companyname 
FROM suppliers
WHERE city IN (
  SELECT city from customers
);

INSERT

INSERT INTO orders
(customerid, employeeid, orderdate, requireddate,  shipvia, freight, 
  shipname, shipaddress, shipcity, shippostalcode, shipcountry)
VALUES ('VINET', 4, '2017-09-05', '2017-09-25', 3, 43.4, 
  'Vins et alcools Chevalier', '59 rue de l''Abbaye', 'Reims', '51100', 'France');
  
INSERT INTO order_details
(orderid, productid, unitprice, quantity, discount)
VALUES (
  11078, 11, 14, 20, 0
);

UPDATE

UPDATE orders 
SET requireddate = '2017-09-20', freight=50
WHERE orderid = 11078;

UPDATE order_details
SET quantity=40, discount=.05
WHERE orderid=11078 AND productid=11;

DELETE

DELETE FROM order_details
WHERE orderid=11078 AND productid=11;

DELETE FROM orders WHERE orderid=11078;

SELECT INTO

SELECT * INTO suppliers_northamerica
FROM suppliers 
WHERE country IN ('USA', 'Canada');

SELECT * INTO orders_1997
FROM orders
WHERE orderdate BETWEEN '1997-01-01' AND '1997-12-31';

INSERT INTO SELECT

INSERT INTO suppliers_northamerica
SELECT * FROM suppliers
WHERE country IN ('Argentina', 'Brazil');

INSERT INTO orders_1997
SELECT * FROM orders
WHERE orderdate BETWEEN '1996-12-01' AND '1996-12-31';

CREATE DATABASE

CREATE DATABASE northwind2;

DROP DATABASE

DROP DATABASE northwind2;

CREATE TABLE

  • Data Types: https://www.postgresql.org/docs/9.6/static/datatype.html
  • Integers
    • smallint
      • 2 bytes
      • -32768 to 32767
    • integer
      • 4 bytes
      • -2147483648 to 2147483647
    • bigint
      • 8 bytes
      • -9223372036854775808 to 9223372036854775807
    • smallserial - auto-incrementing smallint
    • serial - auto-incrementing integer
    • bigserial - auto-incrementing bigint
  • Arbitrary precision
    • Up to 131072 digits before the decimal, 16383 after the decimal
    • decimal
    • numeric
    • NUMERIC(total_digits, digits_in_fraction)
  • Floating point
    • real
      • 4 bytes
      • variable precision
      • 6 decimal precision
    • double precision
      • 8 bytes
      • 15 decimal precision
  • Character
    • character varying(n), varchar(n)
      • varying length, with limit
    • character(n) char(n)
      • fixed length
    • text
      • variable unlimited length
  • Date/Time
    • timestamp
      • 8 bytes
      • date and time
      • 4713BC to 294276AD
    • date
      • 4 bytes
      • no time
      • 4713BC to 5874897AD
    • time
      • 8 bytes
      • no date
      • 00:00:00 to 24:00:00
    • interval
      • 16 bytes
      • Time interval
      • -178,000,000 years to 178,000,000 years
  • Boolean
    • boolean
      • 1 byte
CREATE TABLE subscribers (
  firstname varchar(200),
  lastname varchar(200),
  email varchar(200),
  signupdate timestamp,
  frequency integer,
  iscustomer boolean
);

CREATE TABLE returns (
  recordid serial,
  customerid char(5),
  datereturned TIMESTAMP,
  productid integer,
  quantity smallint,
  orderid integer
);

ALTER TABLE

ALTER TABLE, Rename field

ALTER TABLE subscribers 
RENAME firstname TO first_name;

ALTER TABLE returns
RENAME datereturned to date_returned;

ALTER TABLE, rename table

ALTER TABLE subscribers
RENAME TO email_subscribers;

ALTER TABLE, change field type

ALTER TABLE email_subscribers
ALTER COLUMN email SET DATA TYPE varchar(255);

ALTER TABLE returns
ALTER COLUMN quantity SET DATA TYPE int;

CREATE INDEX

CREATE INDEX email_email_subscribers
ON email_subscribers (email);

CREATE INDEX customerid_orderid_returns
ON returns(customerid, orderid);

DROP INDEX

DROP INDEX email_email_subscribers;

DROP INDEX customerid_orderid_returns;

DROP TABLE

DROP TABLE returns;

Constraints

  • NOT NULL - Field must have a value
  • UNIQUE - Field value must not be the same in a different record
  • PRIMARY KEY - Must have value and be unique, identifier for the record
  • FOREIGN KEY - All values must exist in another table
  • CHECK - Checks that all values meet condition
  • DEFAULT - If no value is provided, value is set to the default

NOT NULL

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
  practiceid integer NOT NULL,
  practice_field VARCHAR(50) NOT NULL
);

-- ALTER TABLE syntax
ALTER TABLE products
ALTER COLUMN unitprice SET NOT NULL;

UNIQUE

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
  practiceid integer UNIQUE,
  practice_field VARCHAR(50) NOT NULL 
);

CREATE TABLE pets (
    petid integer UNIQUE,
    name VARCHAR(25) NOT NULL
);

ALTER TABLE region
ADD CONSTRAINT regiondescription_region UNIQUE (regiondescription);

ALTER TABLE shippers
ADD CONSTRAINT companyname_shippers UNIQUE (companyname);

PRIMARY KEY

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
  practiceid integer PRIMARY KEY,
  name varchar(25) NOT NULL
);

ALTER TABLE practices
DROP CONSTRAINT practices_pkey; -- called [table name]_pkey by default

ALTER TABLE practices
ADD PRIMARY KEY (practiceid);

FOREIGN KEY

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
    practiceid integer PRIMARY KEY,
    practicefield VARCHAR(48) NOT NULL,
    employeeid integer NOT NULL,
    FOREIGN KEY (employeeid) REFERENCES employees (employeeid)
);

ALTER TABLE practices
DROP CONSTRAINT practices_employeeid_fkey;

ALTER TABLE practices
ADD CONSTRAINT practices_employeeid_fkey
FOREIGN KEY (employeeid) REFERENCES employees (employeeid);

CHECK

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
    practiceid integer PRIMARY KEY,
    practicefield VARCHAR(48) NOT NULL,
    employeeid integer NOT NULL,
    cost INTEGER CONSTRAINT practices_cost CHECK (cost >= 0 AND cost <= 1000),
    FOREIGN KEY (employeeid) REFERENCES employees (employeeid)
);

ALTER TABLE orders 
ADD CONSTRAINT orders_freight CHECK (freight > 0);

DEFAULT

DROP TABLE IF EXISTS practices;
CREATE TABLE practices (
    practiceid integer PRIMARY KEY,
    practicefield VARCHAR(48) NOT NULL,
    employeeid integer NOT NULL,
    cost INTEGER DEFAULT 50 CONSTRAINT practices_cost CHECK (cost >= 0 AND cost <= 1000),
    FOREIGN KEY (employeeid) REFERENCES employees (employeeid)
);

ALTER TABLE products 
ALTER COLUMN reorderlevel
SET DEFAULT 5;


Postgres users and roles

-- list users
SELECT u.usename AS "User Name" FROM pg_catalog.pg_user u;

-- READONLY
CREATE ROLE readonly;
GRANT CONNECT ON DATABASE ims_api_prod TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;

-- READ/WRITE
CREATE ROLE readwrite;
GRANT CONNECT ON DATABASE ims_api_prod TO readwrite;
GRANT USAGE ON SCHEMA public TO readwrite;
GRANT USAGE, CREATE ON SCHEMA public TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO readwrite;

-- READ/WRITE USER
CREATE USER arumery WITH PASSWORD 'somepass';
GRANT readwrite TO arumery;

Output table info


SELECT
    c.relname,
    a.attname AS column_name,
    pg_catalog.format_type(a.atttypid, a.atttypmod) AS type,
    CASE
      WHEN a.attnotnull THEN 'NOT NULL'
      ELSE 'NULL'
    END AS not_null
FROM pg_class c,
     pg_attribute a,
     pg_type t
WHERE c.relname = 'prototype_vps_sync_archive'
  AND a.attnum > 0
  AND a.attrelid = c.oid
  AND a.atttypid = t.oid
ORDER BY a.attnum;

Table Sizes

SELECT nspname || '.' || relname AS "relation",
   pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
 FROM pg_class C
 LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
 WHERE nspname NOT IN ('pg_catalog', 'information_schema')
   AND C.relkind <> 'i'
   AND nspname !~ '^pg_toast'
 ORDER BY pg_total_relation_size(C.oid) DESC
 LIMIT 5;

Count rows per distinct value (GROUP BY with COUNT)

SELECT
  name,
  COUNT(name) AS name_count
FROM
  public.dealer_option
GROUP BY
  name;

Delegate a subdomain hosted zone to another AWS account (Route53)

Steps:

  1. (IN MAIN ACCOUNT) Route53 HZ created automatically on purchase of domain name example.com
  2. (IN MAIN ACCOUNT) Create A record for example.com pointing somewhere (maybe 127.0.0.1)
  3. (IN DEV ACCOUNT) Create HZ for dev.example.com
  4. (IN DEV ACCOUNT) Create A record for dev.example.com pointing somewhere (maybe 127.0.0.1)
  5. (IN MAIN ACCOUNT) Create NS record for dev.example.com
  • Copy the NS records from the HZ in the DEV ACCOUNT (see the CLI sketch below)
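
A minimal CLI sketch of step 5 (run in the MAIN account; the hosted zone ID and ns-* values are placeholders for the ones copied from the DEV account HZ):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLEMAINZONE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "dev.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-111.awsdns-11.org"},
          {"Value": "ns-222.awsdns-22.com"}
        ]
      }
    }]
  }'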

GitHub Actions workflow with a manual (workflow_dispatch) trigger

name: Manual trigger for Terraform deployment

on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'warning'
      tags:
        description: 'Test scenario tags'

jobs:
  terraform:
    name: 'Terraform deployment'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-duration-seconds: 1200
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 1.0.5

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan -var="log_level=${{ github.event.inputs.logLevel }}"

      - name: Terraform Apply
        run: terraform apply -auto-approve -var="log_level=${{ github.event.inputs.logLevel }}"

      - name: Terraform output
        run: terraform output

NestJS service to trigger the workflow via the GitHub API

import { Injectable } from '@nestjs/common';
import axios from 'axios';

@Injectable()
export class GithubActionsService {
  private readonly owner = 'github_owner'; // replace with your GitHub username or organization
  private readonly repo = 'github_repo'; // replace with your GitHub repository
  private readonly workflow_id = 'workflow_id.yml'; // replace with your GitHub workflow file name
  private readonly token = 'your_github_token'; // replace with your GitHub token

  public async triggerWorkflow(inputs: { [key: string]: string }): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/actions/workflows/${this.workflow_id}/dispatches`;

    const headers = {
      Authorization: `token ${this.token}`,
      Accept: 'application/vnd.github.v3+json',
    };

    const data = {
      ref: 'master', // branch on which workflow needs to be run
      inputs, // inputs for the workflow
    };

    try {
      const response = await axios.post(url, data, { headers });
      console.log('Workflow dispatch response:', response.status, response.statusText);
    } catch (error) {
      console.error('Failed to trigger workflow:', error.response?.data || error.message);
    }
  }
}
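
Hypothetical usage from another provider with GithubActionsService injected:

await this.githubActionsService.triggerWorkflow({ logLevel: 'info' });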

ESLint and Prettier

npm i --save-dev \
  eslint \
  @typescript-eslint/parser \
  @typescript-eslint/eslint-plugin \
  prettier \
  eslint-config-prettier \
  eslint-plugin-prettier

.eslintrc

{
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 2020,
    "sourceType": "module"
  },
  "extends": [
    "plugin:@typescript-eslint/recommended",
    "prettier/@typescript-eslint",
    "plugin:prettier/recommended"
  ],
  "rules": {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
  }
}

.prettierrc

{
  "semi": true,
  "trailingComma": "es5",
  "singleQuote": true,
  "printWidth": 90,
  "tabWidth": 2
}

Filter Promise.allSettled results with type guards

const isRejected = (
  input: PromiseSettledResult<unknown>
): input is PromiseRejectedResult => input.status === 'rejected';

const isFulfilled = <T>(
  input: PromiseSettledResult<T>
): input is PromiseFulfilledResult<T> => input.status === 'fulfilled';

const arrayOfPromises = [1, 2, 3].map((num) => iReturnAPromise(num));
const arrayOfResults = await Promise.allSettled(arrayOfPromises);
const successful = arrayOfResults.filter(isFulfilled);
const failed = arrayOfResults.filter(isRejected);