Exhaustive list of SPDX (Software Package Data Exchange) licenses: https://spdx.org/licenses/
// Read the full explanation in my article "Straightforward Event Sourcing with TypeScript and NodeJS"
// https://event-driven.io/en/type_script_node_Js_event_sourcing/.
//
// Full configuration in: https://github.com/oskardudycz/EventSourcing.NodeJS/pull/21.
//
// Read also about the Decider pattern by Jérémie Chassaing: https://thinkbeforecoding.com/post/2021/12/17/functional-event-sourcing-decider
import express, {
  Application,
  Router,
} from 'express';
/// +--------------------------------------+
/// |                                      |
/// |               Decider                |
/// |                                      |
/// |          Jérémie Chassaing           |
/// |           @thinkb4coding             |
/// +--------------------------------------+
// A decider is a structure defined by 7 parameters:
- Application
- Request Handling
- Authorization
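The decider structure referenced above can be sketched in TypeScript. This is a minimal sketch following Chassaing's decide/evolve/initialState/isTerminal shape; the counter domain and all names below are illustrative, not code from the article:

```typescript
// A decider, per Jérémie Chassaing: three type parameters (Command, Event,
// State) plus four members (decide, evolve, initialState, isTerminal) — 7 in total.
type Decider<Command, Event, State> = {
  decide: (command: Command, state: State) => Event[];
  evolve: (state: State, event: Event) => State;
  initialState: State;
  isTerminal: (state: State) => boolean;
};

// Purely illustrative counter domain:
type Increment = { type: 'Increment'; by: number };
type Incremented = { type: 'Incremented'; by: number };
type Counter = { value: number };

const counterDecider: Decider<Increment, Incremented, Counter> = {
  decide: (command, _state) => [{ type: 'Incremented', by: command.by }],
  evolve: (state, event) => ({ value: state.value + event.by }),
  initialState: { value: 0 },
  isTerminal: (state) => state.value >= 10,
};

// Command handling: decide produces events, evolve folds them into state.
const events = counterDecider.decide(
  { type: 'Increment', by: 3 },
  counterDecider.initialState
);
const next = events.reduce(counterDecider.evolve, counterDecider.initialState);
```

Because decide and evolve are pure functions, the whole structure can be unit-tested without any infrastructure.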
var t1 = DoFooAsync(obj);
var t2 = DoBarAsync(obj);
var t = await WhenAnySuccessOrAllFail(t1, t2);

async Task<Task> WhenAnySuccessOrAllFail(params Task[] tasks)
{
    var remaining = new List<Task>(tasks);
    while (remaining.Count > 0)
    {
        var completed = await Task.WhenAny(remaining);
        if (completed.IsCompletedSuccessfully)
            return completed; // first success wins
        remaining.Remove(completed);
    }
    await Task.WhenAll(tasks); // all failed: propagate the exception(s)
    return tasks[0];
}
Partial<Type> is an extremely useful type, e.g. for rehydrating the aggregate state from events.
Check the sample below:

interface SeatReserved {
  eventType: 'SeatReserved';
  reservationId: string;
  movieId: string;
  seatId: string;
}
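A sketch of that use (the Reservation type and the evolve helper are hypothetical, not from the article): while replaying events, the state is only Partial<Reservation>, since before the first event is applied no field is guaranteed to be set yet.

```typescript
interface SeatReserved {
  eventType: 'SeatReserved';
  reservationId: string;
  movieId: string;
  seatId: string;
}

// Hypothetical aggregate state; all fields are required once fully built.
type Reservation = { id: string; movieId: string; seatIds: string[] };

// Rehydration works on Partial<Reservation>: fields are filled in
// as events are applied, so none has to exist up front.
const evolve = (
  state: Partial<Reservation>,
  { reservationId, movieId, seatId }: SeatReserved
): Partial<Reservation> => ({
  ...state,
  id: reservationId,
  movieId,
  seatIds: [...(state.seatIds ?? []), seatId],
});

const history: SeatReserved[] = [
  { eventType: 'SeatReserved', reservationId: 'r1', movieId: 'm1', seatId: 'A1' },
  { eventType: 'SeatReserved', reservationId: 'r1', movieId: 'm1', seatId: 'A2' },
];

const rehydrated = history.reduce(evolve, {} as Partial<Reservation>);
```

Once every required field is known to be present, the state can be validated and narrowed back to a full Reservation.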
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
So, many developers have started going straight t
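The contention described above is easiest to see with a toy model. The sketch below is a purely in-memory analogy (no real database; ToyJobQueue and tryLockNext are made-up names) of non-blocking, try-then-skip job claiming: a worker attempts to claim a candidate job and, on failure, immediately moves to the next one instead of queueing behind a row lock.

```typescript
type Job = { id: number; priority: number };

// In-memory stand-in for advisory-lock-style claiming: taking a lock is a
// cheap membership test, not a row UPDATE that must reach disk before
// other workers can proceed.
class ToyJobQueue {
  private locked = new Set<number>();
  constructor(private jobs: Job[]) {}

  // Scan candidates in priority order and skip jobs another worker already
  // holds — the try-then-skip move that avoids the SELECT FOR UPDATE
  // pile-up described above.
  tryLockNext(): Job | undefined {
    const candidates = [...this.jobs].sort((a, b) => a.priority - b.priority);
    for (const job of candidates) {
      if (!this.locked.has(job.id)) {
        this.locked.add(job.id);
        return job;
      }
    }
    return undefined;
  }

  unlock(id: number): void {
    this.locked.delete(id);
  }
}

const queue = new ToyJobQueue([
  { id: 1, priority: 1 },
  { id: 2, priority: 2 },
]);
const a = queue.tryLockNext(); // first worker claims the front job
const b = queue.tryLockNext(); // second worker skips it and claims the next
```

The key property: a contended claim costs a failed lookup and a retry on the next candidate, never a blocked wait on another worker's transaction.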
2019-09-19 20:36:53,961 ERROR Postgres|dbserver1|postgres-connector-task Producer failure [io.debezium.pipeline.ErrorHandler]
org.apache.kafka.connect.errors.ConnectException: Failed to start replication stream at LSN{0/16CB3C0}
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.startStreaming(PostgresReplicationConnection.java:253)
    at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:81)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:91)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
public class MartenContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(
        MemberInfo member,
        MemberSerialization memberSerialization)
    {
        var prop = base.CreateProperty(member, memberSerialization);

        if (!prop.Writable)
        {
            // Let Json.NET write through non-public setters, so documents
            // can be deserialized into otherwise immutable types.
            var property = member as PropertyInfo;
            prop.Writable = property?.GetSetMethod(true) != null;
        }

        return prop;
    }
}