Created February 25, 2017 20:31
Like Promise.all but executed in chunks, so you can have some async concurrency control.
// Split arr into subarrays of at most chunkSize elements.
// Note: splice mutates the input array.
const chunks = (arr, chunkSize) => {
  const results = [];
  while (arr.length) results.push(arr.splice(0, chunkSize));
  return results;
};

// Run f over xs, one chunk of `concurrency` items at a time.
module.exports = (xs, f, concurrency) => {
  if (xs.length === 0) return Promise.resolve();
  // Chain the chunks sequentially; each chunk's items run in parallel.
  // The reduce already yields a single promise, so it must not be wrapped
  // in Promise.all (a promise is not iterable, and Promise.all would
  // reject with a TypeError).
  return chunks(xs, concurrency).reduce(
    (acc, chunk) => acc.then(() => Promise.all(chunk.map(f))),
    Promise.resolve()
  );
};
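For illustration, here is a minimal self-contained sketch of the same chunked pattern (the names `chunks` and `chunkedAll` are illustrative); this variant also collects the results of every chunk and uses slice so the caller's array is not mutated:

```javascript
// Split arr into subarrays of at most `size` elements, without mutating arr.
const chunks = (arr, size) => {
  const out = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
};

// Run f over xs at most `concurrency` at a time, chunk by chunk,
// accumulating all results in order.
const chunkedAll = (xs, f, concurrency) =>
  chunks(xs, concurrency).reduce(
    (acc, chunk) =>
      acc.then(results =>
        Promise.all(chunk.map(f)).then(rs => results.concat(rs))
      ),
    Promise.resolve([])
  );

// Usage: square five numbers, two at a time.
chunkedAll([1, 2, 3, 4, 5], async n => n * n, 2).then(console.log);
// → [1, 4, 9, 16, 25]
```

Each chunk only starts once the previous chunk has fully settled, which is the concurrency ceiling (and the weakness) discussed further down the thread.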
Hi guys! I know this is an old thread, but I just ran into it. Amazing work! I'm learning a lot from the perf considerations you mention. My approach (noting the differences) was to set up SNS publishing, making use of the bluebird
library (maybe avoidable) together with ramda.
const Promise = require('bluebird');
const { dissoc, splitEvery } = require('ramda');

const PASSENGERS_SPLIT = 10;
async function notificationsSNSPublisher(snsPublish, payload) {
const { passengers } = payload;
const snsPayload = dissoc('passengers', payload);
return await Promise.map(splitEvery(PASSENGERS_SPLIT, passengers), async passengersChunk => {
const messagePayload = {
...snsPayload,
passengers: passengersChunk
};
const { MessageId } = await snsPublish(messagePayload, { messageAttributes: {} });
return MessageId;
});
}
It's possible to add a delay()
function, of course. I'm going to gather parts of the code you posted and improve this.
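A delay helper like the one mentioned is not in the snippet above; a common sketch (the name `delay` is illustrative) is a promise wrapper around setTimeout, awaited between chunks to space out publishes:

```javascript
// Hypothetical helper: resolves after ms milliseconds, so callers can
// await it between batches to respect rate limits.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Example: measure roughly 100 ms of waiting.
async function demo() {
  const t0 = Date.now();
  await delay(100);
  return Date.now() - t0; // roughly 100
}
```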
Here is my TypeScript version:

const throttledConcurrencyPromiseAll = async <T>(
  arr: Array<any>,
  f: (v: any) => Promise<T>,
  n = Infinity,
): Promise<T[]> => {
  const results = Array(arr.length);
  const entries = arr.entries();
  const worker = async () => {
    for (const [key, val] of entries) results[key] = await f(val);
  };
  await Promise.all(Array.from({ length: Math.min(arr.length, n) }, worker));
  return results;
};
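For reference, the same function in plain JavaScript (types erased, body otherwise unchanged) with a usage example:

```javascript
// Worker-pool variant: up to n workers pull tasks from one shared iterator,
// so a new task starts as soon as any worker finishes.
const throttledConcurrencyPromiseAll = async (arr, f, n = Infinity) => {
  const results = Array(arr.length);
  const entries = arr.entries(); // single iterator shared by all workers
  const worker = async () => {
    for (const [key, val] of entries) results[key] = await f(val);
  };
  await Promise.all(Array.from({ length: Math.min(arr.length, n) }, worker));
  return results;
};

// Usage: double five numbers with at most three in flight.
throttledConcurrencyPromiseAll([1, 2, 3, 4, 5], async n => n * 2, 3)
  .then(console.log); // → [2, 4, 6, 8, 10]
```

The shared-iterator trick is what gives smooth (non-chunked) concurrency: each `for...of` step pulls the next unclaimed index, and because JavaScript is single-threaded, two workers can never claim the same entry.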
@tomardern You're mutating/copying (slice, concat) and using recursion. Performance will be abysmal.
In addition, chunking is suboptimal since the longest running task can lower concurrency down to 1 -- in fact, chunking will always cause spikey concurrency. Most use cases desire a fixed concurrency with new tasks immediately replacing finished ones.
The code for parallelMap that I refined with @caub's help is as optimal as I can envision. It is smoothly concurrent and involves no allocations beyond what's necessary. If you find a way to improve it, that'd be cool. 🤡