
Sandeep Patel (sandeep-cs-dev)

sandeep-cs-dev / docker-compose.yml
Created March 24, 2023 18:17 — forked from onjin/docker-compose.yml
example docker compose for postgresql with db init script
postgres:
  image: postgres:9.4
  volumes:
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
sandeep-cs-dev / README.md
Created March 13, 2023 20:52 — forked from jasonk/README.md
MongoDB Update Pipeline Tricks

Starting with MongoDB 4.2, you can use [aggregation pipelines to update documents][$pipelines], which leads to some really cool stuff.

For example, prior to this you could easily add sub-documents to an array using [$addToSet][$addToSet], and you could remove documents from an array using [$pull][$pull], but you couldn't do both in the same operation; you had to send two separate update commands if you needed to remove some and add some.

With 4.2 you can, because you can format your update as a pipeline with multiple $set and $unset stages, which makes those things possible. However, since this is so new, I had a really hard time finding examples of many of the things I wanted to do, so I started to collect some here for my reference (and yours).
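As a hedged illustration (my addition, not from the original gist), a single updateOne in mongo-shell style whose update is a pipeline that both removes and adds array members in one round trip; the collection, field, and value names are made up:

// Hypothetical names: one command pulls "old" out of tags and adds "new",
// using $filter to remove and $setUnion to add without duplicates.
db.items.updateOne(
  { _id: someId },
  [
    { $set: { tags: { $filter: { input: "$tags", cond: { $ne: ["$$this", "old"] } } } } },
    { $set: { tags: { $setUnion: ["$tags", ["new"]] } } }
  ]
);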

See also:

--log_gc (Log heap samples on garbage collection for the hp2ps tool.)
type: bool default: false
--expose_gc (expose gc extension)
type: bool default: false
--max_new_space_size (max size of the new generation (in kBytes))
type: int default: 0
--max_old_space_size (max size of the old generation (in Mbytes))
type: int default: 0
--max_executable_size (max size of executable memory (in Mbytes))
type: int default: 0
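The flag listing above reads like the heap-related entries printed by node --v8-options. As a small aside (my addition), the limit that a flag such as --max-old-space-size actually produces can be inspected from inside Node with the core v8 module:

// Hedged sketch: run e.g. `node --max-old-space-size=4096 heap.js`
// and compare the reported limit; v8.getHeapStatistics() is a core API.
const v8 = require("v8");
const limitMiB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log("heap size limit: ~" + limitMiB.toFixed(0) + " MiB");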
sandeep-cs-dev / LearnXInYMinProtocolBuffer.proto
Created February 15, 2021 22:02 — forked from shankarshastri/LearnXInYMinProtocolBuffer.proto
Self-Explanatory Protocol Buffer Lang Guide (CheatSheet)
/*
* Self-Explanatory Protocol Buffer Lang Guide
*/
/*
* Why Protocol Buffers?
* Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler.
* You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.
* Protocol Buffers are Schema Of Messages. They are language agnostic.
sandeep-cs-dev / decrypt.js
Created March 12, 2020 14:36 — forked from fratuz610/decrypt.js
Encrypt from Java and decrypt on Node.js - aes 256 ecb
// the crypto core module is needed for hashing and decryption
const crypto = require("crypto");
// we determine the key string and the ciphertext produced by the Java side
const stringKey = "example";
const cipherText = ".........";
// we compute the sha256 of the key to get a 32-byte AES-256 key
const hash = crypto.createHash("sha256");
hash.update(stringKey, "utf8");
const sha256key = hash.digest();
// new Buffer() is deprecated; Buffer.from() copies the digest into the key buffer
const keyBuffer = Buffer.from(sha256key);
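The preview stops before the actual decryption; a minimal sketch of the remaining step (my addition), assuming the Java side produced base64-encoded AES-256-ECB ciphertext with PKCS padding:

// ECB has no IV, so null is passed for the IV argument;
// the base64 input encoding is an assumption about the Java sender.
const decipher = crypto.createDecipheriv("aes-256-ecb", keyBuffer, null);
let plainText = decipher.update(cipherText, "base64", "utf8");
plainText += decipher.final("utf8");
console.log(plainText);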
sandeep-cs-dev / sorter.js
Created March 3, 2020 15:24 — forked from liron-navon/sorter.js
Worker threads example
const { parentPort, workerData, isMainThread } = require("worker_threads");

// CPU consuming function (sorting a big array)
function sortBigArray(bigArray) {
  return bigArray.sort((a, b) => a - b);
}

// check that the sorter was called as a worker thread
if (!isMainThread) {
  // make sure we got an array of data, then post the sorted result back to the parent
  if (Array.isArray(workerData)) {
    parentPort.postMessage(sortBigArray(workerData));
  }
}
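For context, a minimal sketch (my addition, not part of the original gist) of the main-thread side that would spawn this file as a worker; the file name sorter.js matches the gist name, everything else is made up:

// main.js (hypothetical): spawn sorter.js as a worker and log the result
const { Worker } = require("worker_threads");
const bigArray = Array.from({ length: 1e6 }, () => Math.random());
const worker = new Worker("./sorter.js", { workerData: bigArray });
worker.on("message", (sorted) => console.log("first element:", sorted[0]));
worker.on("error", (err) => console.error(err));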
sandeep-cs-dev / output.md
Created February 23, 2020 22:35 — forked from brycebaril/output.md
process.nextTick vs setImmediate

@mafintosh asks: "Does anyone have a good code example of when to use setImmediate instead of nextTick?"

https://twitter.com/mafintosh/status/624590818125352960

The answer is "generally anywhere outside of core".

process.nextTick is barely asynchronous. Flow-wise it is asynchronous, but it will trigger before any other asynchronous events can (timers, I/O, etc.) and thus can starve the event loop.

In this script I show a starved event loop where I just synchronously block, use nextTick, and use setImmediate.
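The script itself is cut off in this preview; a minimal sketch of the idea (my addition, not the original script): scheduling a recursive job with process.nextTick keeps a zero-delay timer from firing until the recursion ends, while scheduling it with setImmediate lets the timer interleave with the work.

// Run as `node starve.js` (nextTick, starves the loop) or `node starve.js immediate`.
// The file name and CLI flag are made up for this sketch.
const useImmediate = process.argv[2] === "immediate";
let count = 0;

setTimeout(() => console.log("timer fired after", count, "iterations"), 0);

function spin() {
  if (++count >= 1e6) return;
  if (useImmediate) {
    setImmediate(spin);   // yields to the event loop each turn
  } else {
    process.nextTick(spin); // runs before timers and I/O, starving them
  }
}
spin();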

sandeep-cs-dev / nextTick.js
Created February 23, 2020 19:48 — forked from mmalecki/nextTick.js
process.nextTick vs setTimeout(fn, 0)
for (var i = 0; i < 1024 * 1024; i++) {
  process.nextTick(function () { Math.sqrt(i); });
}
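The preview only shows the nextTick half; the setTimeout(fn, 0) counterpart it is presumably compared against would look like this (my sketch):

// Same workload scheduled with setTimeout(fn, 0) instead of process.nextTick;
// timers go through the event loop, so this typically drains far more slowly.
for (var i = 0; i < 1024 * 1024; i++) {
  setTimeout(function () { Math.sqrt(i); }, 0);
}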
sandeep-cs-dev / nginx-tuning.md
Created December 26, 2019 18:18 — forked from denji/nginx-tuning.md
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use whatever web server you like; I decided to use nginx because it is the one I mostly work with.

Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at around 30% CPU load. Of course, that was on 2 x Intel Xeon CPUs with HyperThreading enabled, but it can work without problems on slower machines.

You must understand that this config is used in a testing environment and not in production, so you will need to find the best possible way to implement most of these features for your own servers.

sandeep-cs-dev / sampleREADME.md
Created December 23, 2019 21:26 — forked from FrancesCoronel/sampleREADME.md
A sample README for all your GitHub projects.

FVCproductions

INSERT GRAPHIC HERE (include hyperlink in image)

Repository Title Goes Here

Subtitle or Short Description Goes Here