@syusui-s
Last active December 4, 2024 12:46
node_modules/
package-lock.json

Bloom count

Overview

Bloom count is a probabilistic algorithm for counting the number of elements in a set, inspired by Linear Counting and the Bloom filter. It was devised to aggregate event counts on Nostr in a robust way.

To obtain the number of events matching a filter on Nostr, the following must be taken into account:

  • You cannot always write to every relay.
  • A relay that was used in the past may no longer be in use.

Because of this, the NIP-45 COUNT result can differ from relay to relay.

A technique that accurately counts the events matching a filter would let clients display post counts more precisely.

Bloom count was devised to overcome these issues.

Like a Bloom filter, Bloom count records the presence of an element by setting bits to 1 in a bit-flag array. Because bit arrays can be combined with a bitwise OR, results obtained from different relays can be merged.
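A minimal sketch of this idea in TypeScript (illustrative only; the 1024-bit length is arbitrary, and the tiny FNV-1a hash below merely stands in for the XXHash32 used in the sample code further down):

// Record the presence of an element by setting one bit, and merge bitsets
// from different relays with a bitwise OR.
const BITS = 1024;                                  // bitset length (m)
const newBitset = () => new Uint8Array(BITS / 8);

// Any reasonably uniform hash works; FNV-1a (32-bit) is used here only
// to keep the sketch dependency-free.
const hash32 = (s: string): number => {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
};

const add = (bitset: Uint8Array, id: string) => {
  const pos = hash32(id) % BITS;                    // bit position of the element
  bitset[pos >> 3] |= 1 << (pos & 7);               // set that bit to 1
};

// Results obtained from different relays can be combined with OR.
const mergeBitsets = (a: Uint8Array, b: Uint8Array): Uint8Array =>
  a.map((byte, i) => byte | b[i]);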

Hash algorithm

The sample code uses XXHash32, but using the Nostr event ID (a 256-bit SHA-256 hash) directly should also be fine.

This does not conflict with the Proof of Work specification (NIP-13).

Accuracy

Parameters

The parameters are a seed value and a bit length.

The following parameters have turned out not to be very useful:

  • Using many different seeds
    • Some seeds may happen to give more accurate results, but the computation has to be repeated many times.
  • Making the bit length of the bit array a prime number
    • A prime length alone does not yield higher accuracy.

Issues and mitigations

  • The larger the count, the longer the required bit length
    • Once a certain proportion of bits are set, the server needs twice the original memory.
  • Denial-of-service (DoS) attacks
    • Memory allocation
      • The relay has to allocate a few kilobytes for every request it receives.
      • This is not a large amount, so it should rarely be a problem.
      • A flood of requests could exhaust memory and take the relay down.
      • Limiting the number of COUNT queries a relay executes at a time would mitigate this.
    • Computation cost
      • Using the event ID directly avoids computing a hash each time.
    • Specific requests
  • Increased client traffic
    • When bandwidth is small or the user is billed by data volume, the larger response size is a burden.
  • Faking the count
    • Forgery a user can perform
      • Note that reactions and reposts can be fabricated in any quantity simply by creating new npub/nsec pairs.
      • By crafting posts so that their hash values collide, the count can be made to look smaller.
        • There is no benefit to be gained from such forgery.
    • Verification against relays
      • A malicious relay might forge its responses.
        • Based on its own agenda, it might return responses with many bits set for particular posts.
        • A large gap between the COUNT result and the number of events actually returned by REQ would look suspicious, but making that comparison requires performing the REQ.
        • The user should stop using such a relay, but the client cannot tell on its own.
  • Performance

Improvement of NIP-45 Event Counts by Linear Counting

Abstract

NIP-45 Event Count [1] doesn't provide the correct number of events if different relays hold different sets of events.

Introduction

NIP-45 Event Count enables clients to obtain the number of events matching given filters. This can be used for counting followers, reposts, and reactions. However, NIP-45 has a problem: results may vary by relay.

For example, if relay X has events A and B, relay Y has events B and C, and we send a COUNT query with filters that match those events to relays X and Y, both COUNT results would be 2, even though the true number of distinct events is 3, and the client has no way to combine the two results. This can happen in several situations: the user successfully publishes an event to some relays but fails for others; the user publishes to a relay set (kind:30002) rather than the usual relay list (kind:10002); or the user starts or stops using some relays.

One solution is just to include event ids in the COUNT response [2]. However, if many events match, the list of ids becomes large. If there are 1000 events and each id is 64 hex characters, the list is about 64 KB; receiving it from 5 relays transfers about 320 KB in total. Using only the first 8 characters of each id, like the previous tag in NIP-29 [3], would reduce the size to one-eighth, but it is still big.

Another solution is Data Vending Machines (DVM, NIP-90) [4]. A DVM allows the user to ask a provider to process a request, and a DVM job type for counting events is already defined [5]. This would be one of the most efficient ways to count events correctly, because the result contains only a number without extra data. A provider can collect data from relays around the world frequently and count it in advance of the request. However, a provider may instead collect the data only when the request is published; then it may take some time to get the result, which hurts the user experience. The user also needs to decide which provider to trust before using it.

Currently, we need to use REQ to count events if we want an almost exact number. This means the client receives a lot of events from relays. If each event is 350 bytes, there are 1000 matching events, and the client sends the request to 5 relays, the client would receive about 1.75 MB of data.

In this paper, we propose a method to solve this problem using the linear counting algorithm. Linear counting results from multiple relays can be merged, and linear counting provides good count estimates from a small amount of data.

Linear Counting

Linear counting [6] is a probabilistic algorithm for counting unique items. The core idea is to use a bitmap (bitset) and a hash function.

The following describes how to add an item:

  1. Calculate the hash value of the item with the hash function.
  2. Take the remainder of the hash value divided by the bitmap length.
  3. Set the bit at that position to 1.

To estimate the number of items, use the following formula [6]:

- m * ln(1 - n/m)

where m is the size of the bitset, n is the number of bits set to 1, and ln is the natural logarithm.
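As a small worked example (the function below mirrors countEstimated in the sample code at the end; the numbers are illustrative):

// n-hat = -m * ln(1 - setBits / m)
const estimate = (m: number, setBits: number): number =>
  -m * Math.log(1 - setBits / m);

// A 1024-bit bitset with 180 bits set estimates roughly 198 distinct items,
// slightly more than 180 because some items collided on the same bit.
console.log(Math.round(estimate(1024, 180))); // 198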

The remainder values can be the same for different data, namely hash collisions; the estimation formula above compensates for them statistically.

The algorithm has a useful property:

  • merge-able
    • If we want to combine two results, we just need to calculate the bitwise OR of the two bitsets.

Use linear counting in COUNT

To apply linear counting to COUNT, each event needs a hash value.

An event id is a SHA-256 hash of the serialized event and can be considered unique, so it can be used directly as the hash value for linear counting. Note that this does not conflict with Proof of Work (NIP-13): PoW constrains the leading bits of the id, while linear counting uses the trailing bits.
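For example, the bit position can be derived by taking the last bits of the hex-encoded id, since the id bytes are uniformly distributed (a sketch; the 12-bit case corresponds to size 2 in the table below):

// Map a 64-character hex event id to a bit position using its last `bits` bits.
const bitPosition = (idHex: string, bits: number): number => {
  const hexChars = Math.ceil(bits / 4);             // 4 bits per hex character
  const tail = idHex.slice(-hexChars);              // trailing characters of the id
  return parseInt(tail, 16) & ((1 << bits) - 1);    // mask down to `bits` bits
};

// e.g. with a 4096-bit bitset (size 2), an id ending in "...c4e" maps to bit 0xc4e.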

Proposal

Request

This proposal introduces these special prefixes:

size | prefix | bitset size             | id bits
-----|--------|-------------------------|--------
0    | lc0'   | 128 bytes (1024 bits)   | 10
1    | lc1'   | 256 bytes (2048 bits)   | 11
2    | lc2'   | 512 bytes (4096 bits)   | 12
3    | lc3'   | 1024 bytes (8192 bits)  | 13
4    | lc4'   | 2048 bytes (16384 bits) | 14
5    | lc5'   | 4096 bytes (32768 bits) | 15
6    | lc6'   | 8192 bytes (65536 bits) | 16

  • "size": values for request format B.
  • "prefix": values for request format A.
  • "bitset size": size of the bitset.
  • "id bits": the last n bits of the id are used as the bit position.

If the result bitset is almost full and the error is therefore considered large, or the client wants a more accurate number, the client CAN retry with a larger size (refer to the "Accuracy" section for more detail).

For simplicity, either of the following request formats can be used.

Request Format A:

["COUNT", "<id-with-prefix>", { "kinds": 7, "#e": ["<id>"] }]

Example:

["COUNT", "lc1'c4290f4b219c95f1bb010", { "kinds": 7, "#e": ["<id>"] }]

Request format B:

["COUNTX", "<id>", { "alg": "lc", "size": 1 }, { "kinds": 7, "#e": ["<id>"] }]

Response format

For request format A:

[
  "COUNT",
  "<id-with-prefix>",
  { "count": 500, "linear_counting": "<base64-encoded bitset>" }
]

For Request format B:

["COUNTX", "<id-with-prefix>", { "count": 500, "linear_counting": "<base64-encoded bitset>" }]

If a server doesn't support linear counting, it is expected to ignore the linear counting parameter and respond with the original NIP-45 response.

If there are no matching events, the server SHOULD omit the "linear_counting" field from the response and set "count" to 0. The client can interpret this either as no data or as a bitset with all bits zero (unset).

Consideration

Bitset size

The client can choose the bitset size according to the expected number of events.

For example, if the client expects fewer than 100 events, it can choose size=0. This could be suitable for counting reposts and reactions.

If the client expects fewer than 2000 events, it can choose size=1. This can be used for counting followers.

Of course, the client can choose a larger size if the client wants more accurate results.

Transferred data size

Base64-encoded data is about 33% larger than the original.

Original size            | Base64 size (without padding)
-------------------------|------------------------------
128 bytes (1024 bits)    | 171 chars
256 bytes (2048 bits)    | 342 chars
512 bytes (4096 bits)    | 683 chars
1024 bytes (8192 bits)   | 1366 chars

The larger the bitset size, the more accurate the result will be. However, the larger the bitset size, the more data will be transferred.

Database load

The database needs to return ids rather than just COUNT(id). The traffic between the relay and the database will increase, and the database load will also increase a bit.

To reduce the amount of traffic, the query can take only the last few bytes of each id instead of the entire id. For example, if a client requests COUNT with size 2, only the last 12 bits of each id are needed.
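For instance, if the database returns only the last few hex characters of each matching id, the relay can build the bitset like this (a sketch; the actual query depends on the relay's schema, e.g. something like `SELECT substr(id, -3) FROM events WHERE ...` on SQLite):

// size 2 => 4096-bit bitset => only the last 12 bits (3 hex characters)
// of each id are needed.
const fillBitset = (idTails: string[], bits = 12): Uint8Array => {
  const bitset = new Uint8Array((1 << bits) / 8);
  for (const tail of idTails) {
    const pos = parseInt(tail, 16) & ((1 << bits) - 1);
    bitset[pos >> 3] |= 1 << (pos & 7);
  }
  return bitset;
};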

Accuracy (error and bitset size)

If the error is considered to be large, the client can request again with a larger size.

According to the linear counting paper, the expected error is determined by the load factor, i.e. the ratio of the number of distinct items to the bitmap size, so keeping the bitmap from filling up keeps the error small.

Unsuitable use cases

NIP-25 [7] allows users to react with arbitrary emojis. It is hard to count each emoji with this mechanism: if we simply created one bitset per emoji, the response would become large, which is not desirable.

{ "+": "<base64-encoded bitset>", "😃": "<base64-encoded bitset>", ... }

Attack vectors: untrue result

The relay might lie about the count by setting or un-setting some bits in the bitmap. The client can verify the result by comparing the COUNT result with the number of events returned by REQ. It is also possible to compare a linear counting bitmap generated by the client from the REQ results with the bitmap returned by the relay. If the bits in the relay's bitmap differ greatly from the client's, the relay is probably lying. In that case, the client can ignore the relay's result for this use case and inform the user. Such a lie is verifiable, but verification requires transferring the event data anyway. Note that there is a much simpler way to lie: just create such events with random keys.
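One way to quantify "much different" is to count the bits that the relay set but that are absent from the bitmap the client built from its own REQ results (a heuristic sketch; the acceptable threshold is left to the client):

// Bits set by the relay but not reproducible from the client's REQ results
// are suspicious; apart from hash collisions, this should be close to zero
// if the client saw all matching events.
const suspiciousBits = (relayBitset: Uint8Array, clientBitset: Uint8Array): number => {
  let n = 0;
  for (let i = 0; i < relayBitset.length; i++) {
    let b = relayBitset[i] & ~clientBitset[i];      // set by relay, unset by client
    while (b) { n += b & 1; b >>= 1; }
  }
  return n;
};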

Attack vectors: collision

Ids can be deliberately made to collide, as in mining. However, such an effort is considered to bring no benefit to the attacker.

This kind of attack could be prevented by having the client supply a seed value with the request. The id is fed to a keyed hash function such as HMAC-SHA-256, with the client-supplied seed as the key, and the output is used to set the bit in the bitset.

The seed must be unpredictable. Rather than using a fixed seed, the client should generate a new seed for each request, or a seed that is valid only for the duration of the session.
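A sketch of the seeded hashing step using the Web Crypto API (HMAC-SHA-256 keyed with a per-request client seed; the helper name and parameters are illustrative):

// Hash the event id with a client-supplied random seed so that an attacker
// cannot precompute ids that collide in the bitset.
const seededBitPosition = async (
  idBytes: Uint8Array,   // 32-byte event id
  seed: Uint8Array,      // unpredictable per-request seed from the client
  bits: number,          // number of trailing MAC bits to use (<= 16 here)
): Promise<number> => {
  const key = await crypto.subtle.importKey(
    "raw", seed, { name: "HMAC", hash: "SHA-256" }, false, ["sign"],
  );
  const mac = new Uint8Array(await crypto.subtle.sign("HMAC", key, idBytes));
  // Take the last `bits` bits of the MAC as the bit position.
  let pos = 0;
  for (let i = mac.length - Math.ceil(bits / 8); i < mac.length; i += 1) {
    pos = (pos << 8) | mac[i];
  }
  return pos & ((1 << bits) - 1);
};

// const seed = crypto.getRandomValues(new Uint8Array(16));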

Attack vectors: DoS

An attacker is unlikely to use this feature for a DoS attack because the memory consumption is small.

// ---- bloomcount.ts ----
import XXH from 'xxhashjs';
import stat from './stat.ts';
import { count, genRandomStrings, merge, split } from './common.ts';

const seeds = new Array(1).fill(0).map(() => Math.floor(Math.random() * 1.0e8));

// Bloom count: set one bit per item; if the bit is taken, probe the next
// free bit (open addressing), so the number of set bits equals the number
// of items as long as the bitset is not full.
const bloomCount = (n: number, data: string[]): bigint[] => {
  if (n < data.length) throw new Error('bitset length n must be at least data.length');
  const bitsets = new Array(seeds.length).fill(0n);
  for (let bitsetIndex = 0; bitsetIndex < bitsets.length; bitsetIndex++) {
    for (let dataIndex = 0; dataIndex < data.length; dataIndex++) {
      const bitset = bitsets[bitsetIndex];
      const xxhash = XXH.h32(data[dataIndex], seeds[bitsetIndex]); // one seed per bitset
      const rem = BigInt(xxhash.toNumber() % n);
      let j = rem;
      // If the bit is already set, look for the next free bit.
      while ((bitset & (1n << j)) !== 0n) {
        j = (j + 1n) % BigInt(n);
        // Stop once we have wrapped all the way around.
        if (j === rem) break;
      }
      bitsets[bitsetIndex] = bitset | (1n << j);
    }
  }
  return bitsets;
};

const n = 50000;
const len = 10000;
const counts = [];
for (let i = 0; i < 10; i++) {
  const data = genRandomStrings(len);

  // single result
  console.time('bloomCount');
  const result = bloomCount(n, data);
  console.timeEnd('bloomCount');

  // merging results
  const splitted = split(10, data);
  const results = splitted.map((d) => {
    return bloomCount(n, d);
  });
  const merged = merge(results);
  console.time('count');
  const c = count(merged);
  console.timeEnd('count');
  counts.push(c);
}
console.log(`len = ${len}, n = ${n}`);
console.log(stat(counts));
// ---- common.ts ----

// Popcount lookup table: number of set bits for every byte value 0..255.
const table: number[] = [];
for (let i = 0; i <= 0xff; i += 1) {
  let count = 0;
  for (let j = i; j > 0; j = j >> 1) {
    if ((j & 1) == 1) count += 1;
  }
  table[i] = count;
}

export const genRandomStrings = (len: number) =>
  new Array(len)
    .fill(null)
    .map(() => Math.floor(Math.random() * 1.0e16).toString());

// Count set bits per bitset (byte by byte via the lookup table) and
// return the largest count.
export const count = (bitsets: bigint[]) => {
  const counts = bitsets.map((bitset) => {
    let count = 0;
    for (let i = bitset; i > 0n; i = i >> 8n) {
      count += table[Number(i & 0xffn)];
    }
    return count;
  });
  return Math.max(...counts);
};

// Combine results from different sources with bitwise OR.
export const merge = (bitsetsArray: bigint[][]) => {
  if (!bitsetsArray.every((bitsets) => bitsetsArray[0].length === bitsets.length))
    throw new Error('all bitset arrays must have the same length');
  const result = new Array(bitsetsArray[0].length).fill(0n);
  for (let i = 0; i < bitsetsArray.length; i++) {
    for (let j = 0; j < result.length; j++) {
      result[j] |= bitsetsArray[i][j];
    }
  }
  return result;
};

// Split data into chunks of `unit` items.
export const split = <T>(unit: number, data: T[]) => {
  const result = [];
  for (let i = 0; i < data.length; i += unit) {
    result.push(data.slice(i, i + unit));
  }
  return result;
};
{
  "version": "3",
  "packages": {
    "specifiers": {
      "npm:@types/xxhashjs@^0.2.2": "npm:@types/xxhashjs@0.2.2",
      "npm:typescript@^5.0.4": "npm:typescript@5.5.3",
      "npm:xxhashjs@^0.2.2": "npm:xxhashjs@0.2.2"
    },
    "npm": {
      "@types/node@18.16.19": {
        "integrity": "sha512-IXl7o+R9iti9eBW4Wg2hx1xQDig183jj7YLn8F7udNceyfkbn1ZxmzZXuak20gR40D7pIkIY1kYGx5VIGbaHKA==",
        "dependencies": {}
      },
      "@types/xxhashjs@0.2.2": {
        "integrity": "sha512-+hlk/W1kgnZn0vR22XNhxHk/qIRQYF54i0UTF2MwBAPd0e7xSy+jKOJwSwTdRQrNnOMRVv+vsh8ITV0uyhp2yg==",
        "dependencies": {
          "@types/node": "@types/node@18.16.19"
        }
      },
      "cuint@0.2.2": {
        "integrity": "sha512-d4ZVpCW31eWwCMe1YT3ur7mUDnTXbgwyzaL320DrcRT45rfjYxkt5QWLrmOJ+/UEAI2+fQgKe/fCjR8l4TpRgw==",
        "dependencies": {}
      },
      "typescript@5.5.3": {
        "integrity": "sha512-/hreyEujaB0w76zKo6717l3L0o/qEUtRgdvUBvlkhoWeOVMjMuHNHk0BRBzikzuGDqNmPQbg5ifMEqsHLiIUcQ==",
        "dependencies": {}
      },
      "xxhashjs@0.2.2": {
        "integrity": "sha512-AkTuIuVTET12tpsVIQo+ZU6f/qDmKuRUcjaqR+OIvm+aCBsZ95i7UVY5WJ9TMsSaZ0DA2WxoZ4acu0sPH+OKAw==",
        "dependencies": {
          "cuint": "cuint@0.2.2"
        }
      }
    }
  },
  "remote": {},
  "workspace": {
    "packageJson": {
      "dependencies": [
        "npm:@types/xxhashjs@^0.2.2",
        "npm:typescript@^5.0.4",
        "npm:xxhashjs@^0.2.2"
      ]
    }
  }
}
// ---- linearcount.ts ----
import stat from "./stat.ts";
import { count, genRandomStrings, merge, split } from "./common.ts";

const tests = [];

// Generate 256 random bytes as a stand-in for an event id
// (real ids are 32 bytes; only the trailing bits are used).
export const genRandomId = (): Uint8Array =>
  globalThis.crypto.getRandomValues(new Uint8Array(256));

export const genRandomIds = (len: number): Uint8Array[] => {
  const result = new Array(len);
  for (let i = 0; i < len; i += 1) {
    result[i] = genRandomId();
  }
  return result;
};

// Take the last `bits` bits of the byte array as a number.
const takeBits = (data: Uint8Array, bits: number) => {
  if (bits > 53) throw new Error("max bits is 53");
  const bytes = bits >> 3; // divide by 8
  const restBits = bits - (bytes << 3);
  const readBytes = bytes + (restBits > 0 ? 1 : 0);
  const firstByteIndex = data.byteLength - readBytes;
  const firstByte = data[firstByteIndex];
  const andBits = 0xff >> (restBits === 0 ? 0 : 8 - restBits);
  let result = firstByte & andBits;
  for (let i = firstByteIndex + 1; i < data.byteLength; i += 1) {
    const currentByte = data[i];
    result = result * 256 + currentByte;
  }
  return result;
};

tests.push(() => {
  console.log("takeBits");
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 0) == 0x0);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 1) == 0x1);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 4) == 0x0f);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 5) == 0x1f);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 6) == 0x3f);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 7) == 0x7f);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 8) == 0xff);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 9) == 0x1ff);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 10) == 0x3ff);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 15) == 0x7fff);
  console.log(takeBits(new Uint8Array([0xff, 0xff]), 16) == 0xffff);
  console.log(
    takeBits(new Uint8Array([0xff, 0xff, 0xff, 0xff]), 32) == 0xffffffff,
  );
  console.log(
    takeBits(new Uint8Array([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]), 53) ==
      0x1fffffffffffff,
  );
});

// Count set bits per byte with a SWAR (divide-and-conquer) popcount.
const populationCount = (data: Uint8Array): number => {
  const b1 = 0x55,
    b2 = 0x33,
    b3 = 0x0f;
  let result = 0;
  for (let i = 0; i < data.byteLength; i += 1) {
    let count = data[i];
    count = ((count >> 1) & b1) + (count & b1);
    count = ((count >> 2) & b2) + (count & b2);
    count = ((count >> 4) & b3) + (count & b3);
    result += count;
  }
  return result;
};

tests.push(() => {
  console.log("populationCount");
  console.log(populationCount(new Uint8Array([0x0])) == 0);
  console.log(populationCount(new Uint8Array([0xff])) == 8);
  console.log(populationCount(new Uint8Array([0xff, 0xff])) == 16);
  console.log(populationCount(new Uint8Array([0x33, 0x33])) == 8);
  console.log(populationCount(new Uint8Array([0x55, 0x55])) == 8);
});

// If the hash value were 1 byte (8 bits = 0..255),
// a 32-byte (256-bit) bitset would be needed.
//
// 128 bytes (1024 bits)  => 10 bits of the id
// 256 bytes (2048 bits)  => 11 bits
// 512 bytes (4096 bits)  => 12 bits
// 1024 bytes (8192 bits) => 13 bits
class LinearCounter {
  byteLength: number;
  #bitset: Uint8Array;
  constructor(byteLength: number) {
    if (!Number.isInteger(byteLength)) {
      throw new Error("byteLength should be integer");
    }
    this.byteLength = byteLength;
    this.#bitset = new Uint8Array(this.byteLength);
  }
  bitLength(): number {
    return this.byteLength * 8;
  }
  // bitLength() is assumed to be a power of two so that Math.log2 is exact.
  add(hashValue: Uint8Array) {
    const bits = Math.log2(this.bitLength());
    const num = takeBits(hashValue, bits);
    const bytes = num >> 3; // = num / 8
    const bitIndex = num - (bytes << 3);
    const byteIndex = this.byteLength - bytes - 1;
    this.#bitset[byteIndex] |= 1 << bitIndex;
  }
  merge(other: LinearCounter) {
    if (other.byteLength !== this.byteLength)
      throw new Error("length is different");
    const otherBitset = other.bitset();
    for (let i = 0; i < this.byteLength; i += 1) {
      this.#bitset[i] |= otherBitset[i];
    }
  }
  // Linear counting estimator: -m * ln(1 - n/m).
  countEstimated(): number {
    const bitLength = this.bitLength();
    const bitCount = this.countBits();
    return -1 * bitLength * Math.log(1 - bitCount / bitLength);
  }
  countBits(): number {
    return populationCount(this.#bitset);
  }
  encodeAsBase64() {
    return globalThis.btoa(String.fromCharCode(...this.#bitset));
  }
  bitset() {
    /* copy bitset */
    return new Uint8Array(this.#bitset);
  }
}

tests.forEach((f) => f());

const n = 200;
const bytes = 4096 / 8; // 512 bytes (4096 bits)
const counts = [];
for (let i = 0; i < 10000; i++) {
  const ids = genRandomIds(n);
  // single result
  // const counter = new LinearCounter(bytes);
  // ids.forEach((id) => counter.add(id));
  const result = split(6, ids)
    .map((splitIds) => {
      const counter = new LinearCounter(bytes); // 4096-bit counter
      splitIds.forEach((id) => {
        counter.add(id);
      });
      return counter;
    })
    .reduce((prev, current) => {
      prev.merge(current);
      return prev;
    }, new LinearCounter(bytes));
  const c = Math.round(result.countEstimated());
  counts.push(c);
}
console.log(`n = ${n}, byteLength = ${bytes}, bitLength = ${bytes * 8}`);
console.log(`Estimated value:`);
console.log(stat(counts));
{
  "name": "bloomcount",
  "version": "1.0.0",
  "description": "",
  "main": "bloomcount.js",
  "scripts": {
    "start-bloomcount": "deno run bloomcount.ts",
    "start-cuckoocount": "deno run cuckoocount.ts",
    "start-linearcount": "deno run linearcount.ts"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/xxhashjs": "^0.2.2",
    "typescript": "^5.0.4"
  },
  "dependencies": {
    "xxhashjs": "^0.2.2"
  }
}
// ---- stat.ts ----

// Collect min/max/sum/average and a frequency table of the counts.
const stat = (counts: number[]) => {
  let min = Infinity;
  let max = 0;
  let sum = 0;
  const freq: Record<number, number> = {};
  counts.forEach((e) => {
    min = Math.min(min, e);
    max = Math.max(max, e);
    sum += e;
    freq[e] = (freq[e] ?? 0) + 1;
  });
  const avg = sum / counts.length;
  return { min, max, sum, avg, freq };
};
export default stat;
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "esnext", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "commonjs", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
// "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
// "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
// "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
// "resolveJsonModule": true, /* Enable importing .json files. */
// "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
// "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
// "sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
// "outDir": "./", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
// "preserveValueImports": true, /* Preserve unused imported values in the JavaScript output that would otherwise be removed. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true, /* Skip type checking all .d.ts files. */
"allowImportingTsExtensions": true
}
}