@JaosnHsieh
Last active December 8, 2022 10:19
2022 Some notes, code snippets, configurations, and findings in open source code.

about-yarn-upgrade.md

Recently I manually deleted the yarn.lock file in the monorepo, and there were tons of dependency compatibility issues.

I found that yarn upgrade and yarn outdated are useful commands.

It turns out yarn install ( or just yarn ) only updates yarn.lock when dependencies in package.json change. It won't update all packages to the latest stable version ( for a range like ^a.b.c, a stays fixed while b and c can move to the latest ).

We can and should upgrade manually with yarn upgrade or npm update.

This command updates dependencies to their latest version based on the version range specified in package.json. The yarn.lock file is recreated as well.
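As a rough sketch ( not npm's or yarn's real resolver, and ignoring the special caret rules for 0.x versions ), a caret range like ^a.b.c accepts the same major version with an equal-or-newer minor.patch:

```javascript
// Sketch of caret-range matching: same major version, minor.patch >= b.c.
// This is illustrative only, not the real semver implementation.
function caretSatisfies(version, range) {
  const [maj, min, pat] = version.split('.').map(Number);
  const [rMaj, rMin, rPat] = range.replace(/^\^/, '').split('.').map(Number);
  if (maj !== rMaj) return false; // upgrades stay within the major version
  if (min !== rMin) return min > rMin;
  return pat >= rPat;
}

console.log(caretSatisfies('1.4.0', '^1.2.3')); // true  -> eligible for upgrade
console.log(caretSatisfies('2.0.0', '^1.2.3')); // false -> needs a manual major bump
```

This is why yarn upgrade can move b and c to the latest but never crosses a major version on its own.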

When I use these yarn commands

yarn install or yarn -> on local dev

yarn install --frozen-lockfile --prefer-offline -> on ci

yarn outdated or yarn upgrade -> when we have time and want to improve code quality by updating dependencies to the latest stable version

references

https://classic.yarnpkg.com/lang/en/docs/cli/upgrade/

https://docs.npmjs.com/updating-packages-downloaded-from-the-registry

Auto HTTPS

Sometimes I need a quick https setup for a development environment.

Caddy used to be my go-to, but I experienced an ssl3 error on v2.x a few times and found no way to solve it.

I just tried docker-nginx-auto-ssl in a docker-compose.yml and it worked within 1 minute.

docker-compose.yml

# docker-compose.yml
version: '2'
services:
  nginx:
    image: valian/docker-nginx-auto-ssl
    restart: on-failure
    volumes:
      - ssl_data:/etc/resty-auto-ssl
    environment:
      ALLOWED_DOMAINS: 'example.com'
      SITES: 'example.com=localhost:3001'
    network_mode: 'host'

volumes:
  ssl_data:

Build pthom/hello_imgui.git

requirements

Docker version 20.10.7, build f0df350

Steps

$ docker run --rm -it --user 0 -p 5901:5901 -p 6901:6901 consol/debian-xfce-vnc

open browser and navigate to http://localhost:6901/?password=vncpassword

open Terminal app and install tools

apt update && apt install -y git build-essential cmake
apt install libglfw3-dev libxinerama-dev libxcursor-dev libxi-dev libx11-dev

Download pthom/hello_imgui.git

cd ~
git clone https://github.com/pthom/hello_imgui.git

Build linux desktop app and run

cd ~/hello_imgui/
cd _example_integration
mkdir build && cd build
cmake .. && make hello_world && ./hello_world

Yayyy~~ I can see the hello world application running in Linux now

Build web app

Build by following this project's README

pthom/imgui_manual

Install emscripten

~/hello_imgui/tools/emscripten/install_emscripten.sh
source ~/emsdk/emsdk_env.sh

Build web app(emscripten)

cd ~/hello_imgui/src/hello_imgui_demos/_example_integration

mkdir build_emscripten && cd build_emscripten

emcmake cmake .. -DHELLOIMGUI_USE_SDL_OPENGL3=ON

./_deps/hello_imgui-src/tools/sdl_download.sh

emcmake cmake .. -DHELLOIMGUI_USE_SDL_OPENGL3=ON # again

After running emcmake cmake .. -DHELLOIMGUI_USE_SDL_OPENGL3=ON again,

the error shows Could NOT find X11, as below

( screenshot from 2022-12-08: CMake output with the "Could NOT find X11" error )

Client-side AWS S3 upload via the node.js SDK

/**
 * 2022.08.03 example to upload files from the client side to s3 ( backblaze b2 ) via a pre-signed url
 * "aws-sdk": "^2.1187.0",
 * "axios": "^0.27.2"
 * requirements:
 * 1. create the bucket first and set it to public ( set up backblaze b2 via https://gist.github.com/JaosnHsieh/a3dd22506729e2f140a6981099118662#file-upload-to-backblaze-b2-aws-sdk-s3-md )
 * 2. prepare a png file '1659498406019.png' for upload
 * 3. nodejs v16.14.0
 */

const AWS = require('aws-sdk');
const axios = require('axios');
const fs = require('fs');

(async function main() {
  var credentials = new AWS.SharedIniFileCredentials({ profile: 'b2' });
  AWS.config.credentials = credentials;
  var ep = new AWS.Endpoint('s3.us-west-002.backblazeb2.com');
  var s3 = new AWS.S3({ endpoint: ep, signatureVersion: 'v4' });

  const { Buckets } = await s3.listBuckets().promise();
  const myImagesBucket = Buckets[1];
  const myFileName = '1659498406019.png';

  const url = s3.getSignedUrl('putObject', {
    Bucket: myImagesBucket.Name,
    Key: myFileName,
    ContentType: 'application/octet-stream',
    //set to public,  https://secure.backblaze.com/b2_buckets.htm -> Bucket Settings -> Files in Bucket are: "Public"
    ACL: 'public-read',
    Expires: 600, // 10 minutes
  });

  axios({
    method: 'put',
    url,
    data: fs.readFileSync(myFileName),
    headers: {
      'Content-Type': 'application/octet-stream',
      //set to public,  https://secure.backblaze.com/b2_buckets.htm -> Bucket Settings -> Files in Bucket are: "Public"
      'x-amz-acl': 'public-read',
    },
  })
    .then((result) => {
      console.log('result', result.data);
    })
    .catch((err) => {
      console.log('err', err);
    });
  console.log(`url`, url);
})();

/**
 * references:
 * https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
 * https://stackoverflow.com/a/66469650/6414615
 * https://github.com/odysseyscience/react-s3-uploader/issues/106#issue-201680825
 * https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html
 */

Typing a class's constructor method in TypeScript is hard.

Below is the simplest method I've found so far, from an outdated page of TypeScript's documentation.

interface ClockConstructor {
  new (hour: number, minute: number): ClockInterface;
}

interface ClockInterface {
  tick(): void;
}

const Clock: ClockConstructor = class Clock implements ClockInterface {
  constructor(h: number, m: number) {}
  tick() {
    console.log("beep beep");
  }
};

let clock = new Clock(12, 17);
clock.tick();

2022.11.18 macos c++ gcc, debugging to find which line caused the "segmentation fault"

gist: https://gist.github.com/JaosnHsieh/69ae210db4f43f133a08724882d6adc9

main.cpp

#include <iostream>
#include <vector>
using namespace std;

int main(){
  vector<vector<int>> v;
  v[2][2];
  return 0;
}

Steps

1. gcc -lstdc++ -std="c++11" ./main.cpp -o main -g
## compile with -g for debugging

2. lldb --file ./main
## use lldb instead of gdb ( tried gdb and found no way to get it right )

3. (in lldb repl)
  1. r
  2. bt

After "bt", we can see which line caused the error.

Messages from lldb; we can see main.cpp:7:3 listed:

lldb --file ./main
(lldb) target create "./main"
Current executable set to '/Users/jh/git/cpp-test/main' (x86_64).
(lldb) 
Current executable set to '/Users/jh/git/cpp-test/main' (x86_64).
(lldb) r
Process 67186 launched: '/Users/jh/git/cpp-test/main' (x86_64)
Process 67186 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x30)
    frame #0: 0x0000000100003430 main`std::__1::vector<int, std::__1::allocator<int> >::operator[](this=0x0000000000000030 size=0, __n=2) at vector:1572:18
   1569 vector<_Tp, _Allocator>::operator[](size_type __n) _NOEXCEPT
   1570 {
   1571     _LIBCPP_ASSERT(__n < size(), "vector[] index out of bounds");
-> 1572     return this->__begin_[__n];
   1573 }
   1574
   1575 template <class _Tp, class _Allocator>
Target 1: (main) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x30)
  * frame #0: 0x0000000100003430 main`std::__1::vector<int, std::__1::allocator<int> >::operator[](this=0x0000000000000030 size=0, __n=2) at vector:1572:18
    frame #1: 0x00000001000033c3 main`main at main.cpp:7:3
    frame #2: 0x00007ff812929310 dyld`start + 2432

from:

https://stackoverflow.com/a/63943980/6414615

https://stackoverflow.com/a/2876374/6414615

tools version

gcc -v
Apple clang version 14.0.0 (clang-1400.0.29.102)
Target: x86_64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
 lldb --version
lldb-1400.0.30.3
Apple Swift version 5.7 (swiftlang-5.7.0.127.4 clang-1400.0.29.50)
macos 13.0.1


flutent-json-schema-pattern-properties.md

Pattern properties in JSON Schema are useful for validating an arbitrary number of object keys.

For example, a users object contains 0 to n users' data, keyed by user id.

data

{
   "userid1":{...},
   "userid100":{...}
}

Runnable example

test.js

const fjs = require('fluent-json-schema');
const Ajv = require('ajv');

const schema = fjs
  .object()
  .additionalProperties(false)
  //   .prop('prop')
  .patternProperties({ '^fo.*$': fjs.string() });

console.log('schema.valueOf()', schema.valueOf());
const ajv = new Ajv({ allErrors: true });
const validate = ajv.compile(schema.valueOf());

const data = {
  foo: 'pass',
};

let valid = validate(data);
console.log({ valid });
console.log('validate.errors', validate.errors);

const data2 = {
  test: 'test',
};

let valid2 = validate(data2);
console.log({ valid2 });
console.log('validate.errors', validate.errors);

versions

node v16.14.0

"ajv": "^8.11.0"

"fluent-json-schema": "^3.1.0"

references:

https://stackoverflow.com/a/30809082/6414615

https://github.com/fastify/fluent-json-schema/blob/master/src/ObjectSchema.test.js#L421-L435

HTML print exact A4 page

real testing

Requirements: print 1 page only, and each half of the page should have the same content.

The jsfiddle revision that fits my needs is /40 below, adding 8mm to box2 manually.

HTML

<div class="box1">
  box1
</div>

<div class="box2">
  box2
</div>

css

  @page {
    margin: 0;
    size: A4;
  }

  body {
    -webkit-print-color-adjust: exact;
  }
  

  html,
  body {
    margin: 0;
    height: 297mm;
    width: 210mm;
    position: relative;
  }


  .box1 {
    margin-top:10mm;
    box-sizing: border-box;
    margin-left: 5mm;
    border-bottom: 1px solid red;
    position: absolute;
    top: 0;
    /* background-color: grey; */
    width: 200mm;
  }

  .box2 {
    margin-top:18mm;
    box-sizing: border-box;
    margin-left: 5mm;
    border-bottom: 1px solid blue;
    position: absolute;
    top: 50%;
    /* background-color: grey; */
    width: 200mm;
  }
2022.06.20

print test 4

https://jsfiddle.net/jasonHsieh/tsnv714e/40/ -> works ! manually add 8 mm margin top on box 2


print test 3

https://jsfiddle.net/jasonHsieh/tsnv714e/32/ -> works and defaults to 1 page, but box2's margin top ended up 6mm short after printing

print test 2

https://jsfiddle.net/jasonHsieh/tsnv714e/20/ -> works but default 2 pages



https://jsfiddle.net/jasonHsieh/tsnv714e/25/ -> wrong middle height size


based on:

jsfiddle http://jsfiddle.net/mturjak/2wk6Q/1949/

backup jsfiddle http://jsfiddle.net/jasonHsieh/op6uwscg/

from:

https://stackoverflow.com/a/16650459/6414615

How to create an empty webview app in iOS with the Swift language for testing

XCode v13.4

Click menu File -> New -> Project -> App -> Language "Swift", Interface "Storyboard" -> Create

Edit ViewController.swift file to below

////
////  ViewController.swift
////  test-app-3
////
////  Created by jh on 7/9/2022.
////
//
//import UIKit
//
//class ViewController: UIViewController {
//
//    override func viewDidLoad() {
//        super.viewDidLoad()
//        // Do any additional setup after loading the view.
//    }
//
//
//}
//

import UIKit
import WebKit
class ViewController: UIViewController, WKUIDelegate {
    
    var webView: WKWebView!
    
    
    override func loadView() {
        let webConfiguration = WKWebViewConfiguration()
        webConfiguration.allowsInlineMediaPlayback = true // allow inline <video> playback ( otherwise the video renders black )
        webView = WKWebView(frame: .zero, configuration: webConfiguration)
        webView.uiDelegate = self
        view = webView
    }
    override func viewDidLoad() {
        super.viewDidLoad()
        
        let myURL = URL(string:"https://apple.com")
        let myRequest = URLRequest(url: myURL!)
        webView.load(myRequest)
    }

}

Permission

Append below to info.plist

	<key>NSMicrophoneUsageDescription</key>
	<string>Flutter requires access to microphone.</string>
	<key>NSCameraUsageDescription</key>
	<string>Flutter requires access to camera.</string>

or

In XCode -> Project -> Info -> Custom iOS Target Properties

add

  1. Privacy - Camera Usage Description

  2. Privacy - Microphone Usage Description

Run pm2 on macOS Monterey ( 12.3 (21E230) ) as an alternative to crontab

macOS crontab -e doesn't work out of the box; it encounters a permission issue like this.

Instead, using node.js pm2 is easier.

pm2 start handle-cron.js --no-autorestart --instances 1 --cron "0 * * * *"

or

pm2 start "python3 your-script.py" --no-autorestart --instances 1 --cron "0 * * * *"

Then pm2 save.

And run pm2 startup so pm2 starts again after a reboot.

from:

https://stackoverflow.com/a/69848825/6414615

Markdown as static html doc

I like this solution because it doesn't require installing any tools other than a browser.

Workflow

  1. Write and edit markdown in github / gitlab issues
  2. Copy the raw markdown text and save it to xxx.md
  3. Create index.html and copy the code below
  4. You might have to fix image assets from github / gitlab manually
  5. Deploy to the static server with rsync -avz ./ xx@xxx:/any/path/ --delete

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <script
      src="https://cdnjs.cloudflare.com/ajax/libs/showdown/2.1.0/showdown.min.js"
      integrity="sha512-LhccdVNGe2QMEfI3x4DVV3ckMRe36TfydKss6mJpdHjNFiV07dFpS2xzeZedptKZrwxfICJpez09iNioiSZ3hA=="
      crossorigin="anonymous"
      referrerpolicy="no-referrer"
    ></script>
    <title>
      My title
    </title>
    <style type="text/css">
      #markdown-contents {
        width: 100%;
        padding: 0 10px;
      }
    </style>
  </head>
  <body>
    <div id="markdown-contents"></div>
    <script type="text/javascript">
      fetch('./xxx.md').then(async res => {
        const text = await res.text();
        const converter = new showdown.Converter();
        const html = converter.makeHtml(text);
        document.getElementById('markdown-contents').innerHTML = html;
      });
    </script>
  </body>
</html>

references: https://gist.github.com/mde/967610

Inspect memory usage on node.js process

There are 2 methods I found on the internet to inspect the memory usage of a node.js process.

  1. Maximum resident set size (kbytes) from /usr/bin/time while executing the nodejs script ( works on ubuntu 18 )
  2. process.memoryUsage() in the node.js API

Commands to try

  1. /usr/bin/time -pv node --max-old-space-size=200 ./test.js
  2. /usr/bin/time -pv node --max-old-space-size=4096 ./test.js

test.js

const memoryLeakAllocations = [];

const field = 'heapUsed';
const allocationStep = 10000 * 1024; // 10MB

const TIME_INTERVAL_IN_MSEC = 40;

setInterval(() => {
  const allocation = allocateMemory(allocationStep);

  memoryLeakAllocations.push(allocation);

  const mu = process.memoryUsage();
  // # bytes / KB / MB / GB
  const gbNow = mu[field] / 1024 / 1024 / 1024;
  const gbRounded = Math.round(gbNow * 100) / 100;

  console.log(`Heap allocated ${gbRounded} GB`);
}, TIME_INTERVAL_IN_MSEC);

function allocateMemory(size) {
  // Simulate allocation of bytes
  const numbers = size / 8;
  const arr = [];
  arr.length = numbers;
  for (let i = 0; i < numbers; i++) {
    arr[i] = i;
  }
  return arr;
}

references:

https://nodejs.org/en/docs/guides/backpressuring-in-streams/

https://blog.appsignal.com/2021/12/08/nodejs-memory-limits-what-you-should-know.html

nginx-serving-static-files-directory

/usr/local/etc/nginx.conf or /etc/nginx/nginx.conf

The config below serves the files in the /somewhere directory

.....
location / {
            autoindex on;
            alias /somewhere;
            # root /somewhere;
             
}
.....

add basic auth

generate basic auth file

htpasswd -bdc basic-auth-file admin 12345

cat basic-auth-file

nginx config

location / {
            auth_basic            "Restricted Area for Private Use Only";
            auth_basic_user_file  /basic-auth-file;
            autoindex on;
            root /somewhere;
        }

daemon off for debugging

add the lines below to the top of nginx.conf

daemon off;
error_log /dev/stdout info;

Now nginx will run in the foreground without daemonizing

from: https://stackoverflow.com/a/23328458/6414615 https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/ https://stackoverflow.com/questions/9454150/nginx-password-protected-dir-with-autoindex-feature https://www.jianshu.com/p/1c0691c9ad3c

QGroundControl OSS has very clear code for getting the tile count and tile urls

Useful references for getting map tile image urls:

lon2tileX and lat2tileY https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#ECMAScript_(JavaScript/ActionScript,_etc.)

Map Tile Count implementation https://github.com/mavlink/qgroundcontrol/blob/master/src/QtLocationPlugin/MapProvider.cpp

Another way to get tile urls by using leaflet utilites https://github.com/allartk/leaflet.offline/blob/46e4a55805f7379d55fd6f45b0b768c3ee8d6729/src/TileLayerOffline.ts#L66-L96

Using leaflet.offline with mapbox might produce wrong tile urls; to fix it, we need to manually add the zoom level offset back in the map.project method ( leaflet.offline issue #213, "map tile urls are wrong when using mapbox provider" )
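The slippy-map formulas from the OSM wiki linked above can be sketched in JavaScript ( the same math as the C++ long2tileX / lat2tileY below ):

```javascript
// Convert lon/lat ( degrees ) to slippy-map tile X/Y at a given zoom level.
function long2tileX(lon, zoom) {
  return Math.floor(((lon + 180) / 360) * Math.pow(2, zoom));
}

function lat2tileY(lat, zoom) {
  const latRad = (lat * Math.PI) / 180;
  // Web Mercator projection of latitude, scaled to 2^zoom tiles
  return Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) *
      Math.pow(2, zoom)
  );
}

// zoom 0 has a single tile (0, 0) covering the whole world
console.log(long2tileX(0, 0), lat2tileY(0, 0)); // 0 0
```

A tile URL is then typically built as something like {z}/{x}/{y}.png on the provider's base URL.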

/****************************************************************************
 *
 * (c) 2009-2020 QGROUNDCONTROL PROJECT <http://www.qgroundcontrol.org>
 *
 * QGroundControl is licensed according to the terms in the file
 * COPYING.md in the root of the source code directory.
 *
 ****************************************************************************/

#include <QNetworkAccessManager>
#include <QNetworkRequest>

#include "MapProvider.h"

MapProvider::MapProvider(const QString &referrer, const QString &imageFormat,
                         const quint32 averageSize, const QGeoMapType::MapStyle mapType, QObject* parent)
    : QObject(parent)
    , _referrer(referrer)
    , _imageFormat(imageFormat)
    , _averageSize(averageSize)
    , _mapType(mapType)
{
    const QStringList langs = QLocale::system().uiLanguages();
    if (langs.length() > 0) {
        _language = langs[0];
    }
}

QNetworkRequest MapProvider::getTileURL(const int x, const int y, const int zoom, QNetworkAccessManager* networkManager) {
    //-- Build URL
    QNetworkRequest request;
    const QString url = _getURL(x, y, zoom, networkManager);
    if (url.isEmpty()) {
        return request;
    }
    request.setUrl(QUrl(url));
    request.setRawHeader(QByteArrayLiteral("Accept"), QByteArrayLiteral("*/*"));
    request.setRawHeader(QByteArrayLiteral("Referrer"), _referrer.toUtf8());
    request.setRawHeader(QByteArrayLiteral("User-Agent"), _userAgent);
    return request;
}

QString MapProvider::getImageFormat(const QByteArray& image) const {
    QString format;
    if (image.size() > 2) {
        if (image.startsWith(reinterpret_cast<const char*>(pngSignature)))
            format = QStringLiteral("png");
        else if (image.startsWith(reinterpret_cast<const char*>(jpegSignature)))
            format = QStringLiteral("jpg");
        else if (image.startsWith(reinterpret_cast<const char*>(gifSignature)))
            format = QStringLiteral("gif");
        else {
            return _imageFormat;
        }
    }
    return format;
}

QString MapProvider::_tileXYToQuadKey(const int tileX, const int tileY, const int levelOfDetail) const {
    QString quadKey;
    for (int i = levelOfDetail; i > 0; i--) {
        char digit = '0';
        const int  mask  = 1 << (i - 1);
        if ((tileX & mask) != 0) {
            digit++;
        }
        if ((tileY & mask) != 0) {
            digit++;
            digit++;
        }
        quadKey.append(digit);
    }
    return quadKey;
}

int MapProvider::_getServerNum(const int x, const int y, const int max) const {
    return (x + 2 * y) % max;
}

int MapProvider::long2tileX(const double lon, const int z) const {
    return static_cast<int>(floor((lon + 180.0) / 360.0 * pow(2.0, z)));
}

//-----------------------------------------------------------------------------
int MapProvider::lat2tileY(const double lat, const int z) const {
    return static_cast<int>(floor(
        (1.0 -
         log(tan(lat * M_PI / 180.0) + 1.0 / cos(lat * M_PI / 180.0)) / M_PI) /
        2.0 * pow(2.0, z)));
}

QGCTileSet MapProvider::getTileCount(const int zoom, const double topleftLon,
                                     const double topleftLat, const double bottomRightLon,
                                     const double bottomRightLat) const {
    QGCTileSet set;
    set.tileX0 = long2tileX(topleftLon, zoom);
    set.tileY0 = lat2tileY(topleftLat, zoom);
    set.tileX1 = long2tileX(bottomRightLon, zoom);
    set.tileY1 = lat2tileY(bottomRightLat, zoom);

    set.tileCount = (static_cast<quint64>(set.tileX1) -
                     static_cast<quint64>(set.tileX0) + 1) *
                    (static_cast<quint64>(set.tileY1) -
                     static_cast<quint64>(set.tileY0) + 1);

    set.tileSize = getAverageSize() * set.tileCount;
    return set;
}

reference:

https://github.com/mavlink/qgroundcontrol/blob/master/src/QtLocationPlugin/MapProvider.cpp

pm2-json-patch-via-websocket.md

I've just found that the pm2 v5 changelog has an interesting claim: bandwidth reduced by 80%, thanks to a json patch differential system.

After checking, they use a websocket to apply json object changes with fast-json-patch; it's a simple and effective change.

const jsonPatch = require('fast-json-patch');

const current = { a: 1, b: 2, c: 3 };
const next = { a: 1, b: 2, c: 4 };

console.log(`$ current`, current);
console.log(`$ next`, next);

const patch = jsonPatch.compare(current, next);

console.log(`$ patch`, patch);

//
const updated = jsonPatch.applyPatch(current, patch).newDocument;

console.log(`$ updated`, updated);

references:

https://github.com/keymetrics/pm2-io-agent/blob/b4f28ce72e1c6497e1d952e5108d4b61c69cface/src/transporters/WebsocketTransport.js#L139

https://github.com/keymetrics/pm2-io-agent/blob/b4f28ce72e1c6497e1d952e5108d4b61c69cface/test/integrations/websocket.json-patch.mocha.js#L112

rsync nested node_modules folders

Useful in a javascript/typescript monorepo to copy previously installed node_modules folders.

My use case is on a gitlab-runner shell executor, to speed up yarn install without messing with the problematic gitlab-runner cache setting.

rsync -a -m --include="*/" --include="**/node_modules/**" --exclude="*" ./ ~/git/test/rsync-test

Simple logging Api in nginx

An HTTP POST API /log for hobby apps.

  1. log rotation handled by /etc/logrotate.d/nginx without extra configuration
  2. rate limiting by limit_req_zone
  3. the app client might add a sample rate to prevent abusive request volume, like the sentry javascript sdk's sample rate of 0 to 1
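The client-side sample rate from point 3 can be sketched like this ( the URL is the placeholder from the curl test below; the injectable `random` parameter is only there to make the sampling decision testable ):

```javascript
// A minimal client for the /log endpoint with a Sentry-style sample rate,
// so only a fraction of events ever hits the server.
const SAMPLE_RATE = 0.1; // send ~10% of events

function logEvent(payload, { sampleRate = SAMPLE_RATE, random = Math.random } = {}) {
  if (random() >= sampleRate) return Promise.resolve(null); // dropped by sampling
  return fetch('https://yourserver.com/log', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}

// dropped events resolve to null without touching the network
logEvent({ key1: 'value1' }, { random: () => 0.99 }).then((r) => console.log(r)); // null
```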

/etc/nginx/sites-enable/default.conf


.....

limit_req_zone $binary_remote_addr zone=mytracking:10m rate=10r/s;

log_format my-tracking-format escape=json '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent '
                       '"$http_referer" "$http_user_agent" "$request_body"';

server {
  listen 80 default_server;
  
.....

location = /log {
  limit_req zone=mytracking burst=20 nodelay;
  access_log /mnt/logs/nginx/my-tracking.access.log my-tracking-format;
  proxy_pass http://localhost:80/empty;
}

location = /empty {
  #add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
  #add_header 'Access-Control-Allow-Origin' '*';
  #add_header 'Access-Control-Allow-Headers' 'content-type';
  # optional CORS for browser client, refers to https://serverfault.com/a/716283/446305
  access_log off;
  return 200 'logged';
}

.....
}

server {
  listen 443 ssl;
  
  .....
  
  location = /log {
    limit_req zone=mytracking burst=20 nodelay;
    access_log /mnt/logs/nginx/my-tracking.access.log my-tracking-format;
    proxy_pass http://localhost:80/empty;
  }
  .....
}

version: nginx/1.14.0 (Ubuntu)

Test

request

curl -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" -X POST https://yourserver.com/log

checking log

tail -f /mnt/logs/nginx/my-tracking.access.log

references:

https://stackoverflow.com/questions/17609472/really-logging-the-post-request-body-instead-of-with-nginx

https://stackoverflow.com/a/14034744/6414615

typescript noUncheckedIndexedAccess config

// noUncheckedIndexedAccess is on
const arr = [1];

// b is number | undefined
const b = arr[1];


// if noUncheckedIndexedAccess is off
const arr = [1];

// b is number
const b = arr[1];

from: Illegal array indices in http://web.mit.edu/6.031/www/sp22/classes/06-specifications/

Upload to the AWS S3 alternative backblaze b2 storage (cheaper)

Use '@aws-sdk/client-s3' to prevent the next.js build error: heap out of memory

aws-sdk is 76M whereas @aws-sdk/client-s3 is only 4.6M

"aws-sdk": "^2.1204.0", "@aws-sdk/client-s3": "^3.159.0",

AWS_ACCESS_KEY_ID=your_key_id AWS_SECRET_ACCESS_KEY=your_access_key npx ts-node ./run.ts

run.ts

import { S3 } from '@aws-sdk/client-s3';
import fs from 'node:fs';

(async () => {
  const s3 = new S3({
    region: 'us-east-1',
    endpoint: 'https://s3.us-west-002.xxxxxxxxxxx.com',
  });
  const bucketName = 'my-bucket-name';
  const file = await fs.promises.readFile('file/path');
  await s3.putObject({
    Bucket: bucketName,
    Key: 'file-key',
    Body: file,
  });
})();

reference:

aws/aws-sdk-js#1877 (comment)

https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-environment.html

Use aws-sdk ( v2 )

  1. Go to b2-cloud-storage, manually create a bucket and create an app key for it
  2. Save the key id and access key to ~/.aws/credentials ( or ~/.aws/config )

[b2]
aws_access_key_id = xxxxxxxxxx
aws_secret_access_key = xxxxxxxxxx

  3. Replace the bucket name and run the code below ( $ yarn add aws-sdk )
const AWS = require('aws-sdk');

(async function main() {
  var credentials = new AWS.SharedIniFileCredentials({ profile: 'b2' });
  AWS.config.credentials = credentials;
  var ep = new AWS.Endpoint('s3.us-west-002.backblazeb2.com');
  var s3 = new AWS.S3({ endpoint: ep });

  const { Buckets } = await s3.listBuckets().promise();

  console.log('Buckets', Buckets);

  const bucket = Buckets.find(b => b.Name === 'xxx-my-bucket');

  console.log('bucket', bucket);

  const upload = await s3
    .upload({
      Bucket: bucket.Name,
      Key: 'test-file',
      Body: require('fs').readFileSync('/Users/jh/git/tbs-all-in-one/s3-test/index.js'),
    })
    .promise();

  console.log(`$ upload`, upload);
})();

references:

https://help.backblaze.com/hc/en-us/articles/360046980734-How-to-use-the-AWS-SDK-for-JavaScript-with-B2

UTF8-compatible atob btoa

The built-in atob and btoa functions in the browser and node.js v16 are not utf8 compatible.

And the btoa npm module does not throw an error the way the browser and node.js do.

reproduce.js

const utf8Str = `style=“color:white”`;
const encodedStr = btoa(utf8Str);
const decodedStr = atob(encodedStr);
console.log(decodedStr === utf8Str);
Running it in the browser throws:

VM111:2 Uncaught DOMException: Failed to execute 'btoa' on 'Window': The string to be encoded contains characters outside of the Latin1 range.
    at <anonymous>:2:20

Solution:

https://stackoverflow.com/a/30106551/6414615
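In node.js, one UTF-8 safe approach is to go through Buffer instead of atob/btoa ( a sketch of the Buffer-based idea; the linked Stack Overflow answer covers browser variants too ):

```javascript
// UTF-8 safe base64 round trip using Buffer ( node.js only ).
const b64Encode = (str) => Buffer.from(str, 'utf8').toString('base64');
const b64Decode = (b64) => Buffer.from(b64, 'base64').toString('utf8');

const utf8Str = `style=“color:white”`; // contains non-Latin1 curly quotes
const encoded = b64Encode(utf8Str);
console.log(b64Decode(encoded) === utf8Str); // true
```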
